Topics in Algorithmic Data Analysis SS'23



Course Information

Type Advanced Lecture (6 ECTS)
Lecturer Prof. Dr. Jilles Vreeken
Email vreeken (at) cispa.de
Lectures Thursdays, 10–12 o'clock (sharp) in 0.05 (CISPA, E9.1) and online via Zoom and YouTube
Registration Not necessary, see below
Summary In this advanced course we'll be investigating hot topics in data mining and machine learning that the lecturer thinks are cool. This course is for those of you who are interested in Machine Learning, Data Mining, Data Science, or, as the lecturer prefers to call it, Algorithmic Data Analysis. We'll be looking into how to determine causal relations from observational data, how to extract non-linear dependencies, how to discover significant and useful patterns from data, and how to gain insight into structured data.

Schedule

Month Day Topic Slides Assignment Req. Reading Opt. Reading
Apr 13 Introduction and Practicalities PDF 1st assignment out
20 Causality PDF [1] Ch 1, Ch 6 [8,9,10]
27 Causal Discovery PDF deadline 1st, 2nd out [1] Ch 7 [11]
May 4 Causal Inference PDF [1] Ch 2, Ch 4 [12,13,14,15,16]
11 Causal Insight PDF [2] [17,18,19,20]
18 yay holiday — no class
25* (E1.7) Subgroup Discovery PDF deadline 2nd, 3rd out [3] [21,22,23]
Jun 1 Useful Patterns PDF [4] [24,25,26]
8 yay holiday — no class
15 Insightful Patterns PDF [5] [27,28]
Jun 22 Sequential Patterns PDF deadline 3rd, 4th out [25] [29,30]
29 Graph Summarization PDF [6] [31,32,33,34]
Jul 6 Graph Epidemics PDF [7] [35,36,37,38]
13 Wrap-Up PDF deadline 4th
20 oral exams
Oct 5 oral re-exams

* Lecture will be held in Room 0.01 of E1.7 (MMCI)

All report deadlines are on the indicated day at 10:00.

Materials

All required and optional reading will be made available here. You will need a username and password that will be given out in the first lecture.

In case you do not have a strong enough background in data mining, machine learning, or statistics, these books [1,39,40] may help to get you on your way. The university library kindly keeps hard copies of these books available in a so-called Semesterapparat (course reserve collection).

Required Reading

[1] Peters, J., Janzing, D. & Schölkopf, B. Elements of Causal Inference. MIT Press, 2017.
[2] Kaltenpoth, D. & Vreeken, J. We Are Not Your Real Parents: Telling Causal from Confounded by MDL. In Proceedings of the 2019 SIAM International Conference on Data Mining (SDM), SIAM, 2019.
[3] Atzmueller, M. Subgroup Discovery. WIREs Data Mining and Knowledge Discovery, 5:35-49, Wiley, 2015.
[4] van Leeuwen, M. & Vreeken, J. Mining and Using Sets of Patterns through Compression. In Frequent Pattern Mining, Aggarwal, C. & Han, J., pages 165-198, Springer, 2014.
[5] Fischer, J. & Vreeken, J. Sets of Robust Rules, and How to Find Them. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Data (ECMLPKDD), Springer, 2019.
[6] Koutra, D., Kang, U., Vreeken, J. & Faloutsos, C. VoG: Summarizing Graphs using Rich Vocabularies. In Proceedings of the 14th SIAM International Conference on Data Mining (SDM), Philadelphia, PA, pages 91-99, SIAM, 2014.
[7] Prakash, B.A., Vreeken, J. & Faloutsos, C. Spotting Culprits in Epidemics: How Many and Which Ones? In Proceedings of the 12th IEEE International Conference on Data Mining (ICDM), Brussels, Belgium, IEEE, 2012.

Optional Reading

[8] Pearl, J. The do-calculus revisited. In Proceedings of Uncertainty in AI, 2012.
[9] Pearl, J. Causality. Cambridge University Press, 2009.
[10] Pearl, J. & Mackenzie, D. The Book of Why. Basic Books, 2018.
[11] Chickering, D.M. Optimal Structure Identification With Greedy Search. JMLR, 3:507-554, 2002.
[12] Janzing, D. & Schölkopf, B. Causal Inference Using the Algorithmic Markov Condition. IEEE Transactions on Information Theory, 56(10):5168-5194, 2010.
[13] Janzing, D., Mooij, J., Zhang, K., Lemeire, J., Zscheischler, J., Daniusis, P., Steudel, B. & Schölkopf, B. Information-geometric Approach to Inferring Causal Directions. Artificial Intelligence, 182-183:1-31, 2012.
[14] Budhathoki, K. & Vreeken, J. Accurate Causal Inference on Discrete Data. In Proceedings of the IEEE International Conference on Data Mining (ICDM'18), IEEE, 2018.
[15] Marx, A. & Vreeken, J. Identifiability of Cause and Effect using Regularized Regression. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), ACM, 2019.
[16] Tagasovska, N., Chavez-Demoulin, V. & Vatter, T. Distinguishing Cause from Effect Using Quantiles: Bivariate Quantile Causal Discovery. In Proceedings of the International Conference on Machine Learning (ICML), PMLR, 2020.
[17] Mian, O., Marx, A. & Vreeken, J. Discovering Fully Directed Causal Networks. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), AAAI, 2021.
[18] Maurage, P., Heeren, A. & Pesenti, M. Does chocolate consumption really boost Nobel Award chances? The peril of over-interpreting correlations in health studies. J Nutr., 143(6):931-3, 2013.
[19] Marx, A., Gretton, A. & Mooij, J. A Weaker Faithfulness Assumption based on Triple Interactions. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), AUAI, 2021.
[20] Mameche, S., Kaltenpoth, D. & Vreeken, J. Discovering Invariant and Changing Mechanisms from Data. In Proceedings of the ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD), ACM, 2022.
[21] Kalofolias, J., Boley, M. & Vreeken, J. Efficiently Discovering Locally Exceptional yet Globally Representative Subgroups. In Proceedings of the 17th IEEE International Conference on Data Mining (ICDM), New Orleans, LA, IEEE, 2017.
[22] Sutton, C., Boley, M., Ghiringhelli, L., Rupp, M., Vreeken, J. & Scheffler, M. Identifying Domains of Applicability of Machine Learning Models for Materials Science. Nature Communications, 11:1-9, Nature Research, 2020.
[23] Budhathoki, K., Boley, M. & Vreeken, J. Rule Discovery for Exploratory Causal Reasoning. In Proceedings of the SIAM Conference on Data Mining (SDM), SIAM, 2021.
[24] Vreeken, J., van Leeuwen, M. & Siebes, A. Krimp: Mining Itemsets that Compress. Data Mining and Knowledge Discovery, 23(1):169-214, Springer, 2011.
[25] Bhattacharyya, A. & Vreeken, J. Efficiently Summarising Event Sequences with Rich Interleaving Patterns. In Proceedings of the SIAM International Conference on Data Mining (SDM'17), SIAM, 2017.
[26] Cueppers, J. & Vreeken, J. Just Wait For It... Mining Sequential Patterns with Reliable Prediction Delays. In Proceedings of the IEEE International Conference on Data Mining (ICDM), IEEE, 2020.
[27] Fischer, J., Oláh, A. & Vreeken, J. What's in the Box? Exploring the Inner Life of Neural Networks with Robust Rules. In Proceedings of the International Conference on Machine Learning (ICML), PMLR, 2021.
[28] Fischer, J. & Vreeken, J. Differentiable Pattern Set Mining. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), ACM, 2021.
[29] Tatti, N. & Vreeken, J. The Long and the Short of It: Summarizing Event Sequences with Serial Episodes. In Proceedings of the 18th ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD), Beijing, China, ACM, 2012.
[30] Cueppers, J., Kalofolias, J. & Vreeken, J. Omen: Discovering Sequential Patterns with Reliable Prediction Delays. Knowledge and Information Systems, Springer, 2022.
[31] Chakrabarti, D., Papadimitriou, S., Modha, D.S. & Faloutsos, C. Fully automatic cross-associations. In Proceedings of the 10th ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD), Seattle, WA, pages 79-88, 2004.
[32] Kang, U. & Faloutsos, C. Beyond Caveman Communities: Hubs and Spokes for Graph Compression and Mining. In Proceedings of the 11th IEEE International Conference on Data Mining (ICDM), Vancouver, Canada, pages 300-309, IEEE, 2011.
[33] Goebl, S., Tonch, A., Böhm, C. & Plant, C. MeGS: Partitioning Meaningful Subgraph Structures Using Minimum Description Length. In Proceedings of the IEEE International Conference on Data Mining (ICDM), pages 889-894, IEEE, 2016.
[34] Coupette, C. & Vreeken, J. Graph Similarity Description. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), ACM, 2021.
[35] Lappas, T., Terzi, E., Gunopulos, D. & Mannila, H. Finding effectors in social networks. In Proceedings of the 16th ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD), Washington, DC, pages 1059-1068, ACM, 2010.
[36] Shah, D. & Zaman, T. Rumors in a Network: Who's the Culprit? IEEE Transactions on Information Theory, 57(8):5163-5181, 2011.
[37] Sundareisan, S., Vreeken, J. & Prakash, B.A. Hidden Hazards: Finding Missing Nodes in Large Graph Epidemics. In Proceedings of the SIAM International Conference on Data Mining (SDM'15), SIAM, 2015.
[38] Rozenshtein, P., Gionis, A., Prakash, B.A. & Vreeken, J. Reconstructing an Epidemic over Time. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'16), pages 1835-1844, ACM, 2016.
[39] Wasserman, L. All of Statistics. Springer, 2005.
[40] Aggarwal, C.C. Data Mining - The Textbook. Springer, 2015.

Registration

There is no need to register for the course with the lecturer. The credentials to the Zoom meetings, YouTube stream, and necessary materials will be shared in the first (publicly available) lecture.

As is usual, you will have to register for the exam via LSF. You can do so up to one week before the exam.

Prerequisites

Students should have basic working knowledge of machine learning, data mining, and/or statistics, e.g. by having successfully taken courses such as Machine Learning, Probabilistic Graphical Models, Probabilistic Machine Learning, Elements of Machine Learning, etc.

The skills you will benefit from most are reading comprehension and critical thought. We will practice these in both lectures and assignments.

Lectures

Each lecture will start at 10:06. (This weird time is a compromise between the slight majority preferring 10 sharp and the slight minority preferring 10:15.)

TADA will be a hybrid course. You can attend all lectures in person in the CISPA lecture hall (room 0.05 of E9.1), as well as online via Zoom. We will additionally stream every lecture to YouTube, and keep the recorded videos until the end of the semester. The Zoom meetings, YouTube streams, and edited videos will be linked from the schedule above.

The credentials to access the course materials will be shared during the first lecture.

Assignments

Students will individually do one assignment per topic – four in total. For every assignment, you will have to read one or more research papers and hand in a report that critically discusses this material and answers the assignment questions. Reports should summarise the key aspects but, more importantly, should include original and critical thought that shows you have acquired a meta-level understanding of the topic – plain summaries will not suffice. All sources you've drawn from should be referenced. The expected length of a report is 4 pages, but there is no limit.

The deadlines for the reports are on the day indicated in the schedule at 10:00 Saarbrücken standard-time. You are free to hand in earlier.

You will find some well-graded example reports here.

Grading and Exam

The assignments will be graded on a scale of Fail, Pass, Very Good, and Excellent. Any assignment not handed in by the deadline is automatically considered failed and cannot be re-done. You are allowed to re-do one Failed assignment: you have to hand in the improved assignment within two weeks. Two failures mean you are not eligible for the exam, and hence fail the course.

You can earn up to three bonus points by obtaining Excellent or Very Good grades for the assignments. An Excellent grade gives you one bonus point, as do every two Very Good grades, up to a maximum of three bonus points. Each bonus point improves your final grade by 1/3 assuming you pass the final exam. For example, if you have two bonus points and you receive 2.0 from the final exam, your final grade will be 1.3. You fail the course if you fail the final exam, irrespective of your possible bonus points. Failed assignments do not reduce your final grade, provided you are eligible to sit the final exam.
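The bonus-point rule above can be sketched in a few lines of Python. This is only an illustration, not an official grading tool: the helper names are hypothetical, and it assumes the standard German grade scale in which each bonus point improves the grade by one step (1/3).

```python
# Standard German passing grades, best to worst; 1/3 step between neighbours.
GRADES = [1.0, 1.3, 1.7, 2.0, 2.3, 2.7, 3.0, 3.3, 3.7, 4.0]

def bonus_points(excellent: int, very_good: int) -> int:
    """One point per Excellent, one per two Very Goods, capped at three."""
    return min(3, excellent + very_good // 2)

def final_grade(exam_grade: float, points: int) -> float:
    """Improve a passing exam grade by one step per bonus point.

    A failed exam is not improved: bonus points only apply on a pass.
    """
    if exam_grade not in GRADES:  # failed exam, e.g. 5.0
        return exam_grade
    idx = GRADES.index(exam_grade)
    return GRADES[max(0, idx - points)]

# Example from the text: two bonus points and a 2.0 exam grade give 1.3.
print(final_grade(2.0, bonus_points(excellent=1, very_good=2)))  # → 1.3
```

Note that the cap at 1.0 and the no-improvement-on-fail behaviour both follow directly from the rules stated above.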

The final exams will be oral, and will cover all the material discussed in the lectures and the topics on which you did your assignments. The main exam will be on July 20th. The re-exam will be on October 5th. The exact time slot for each student will be announced by email. Inform the lecturer of any potential clashes as soon as you know them.