Assignment 2: Patterns


The deadline is May 27th at 10:00 Saarbrücken standard time. You are free to hand in earlier. You will have to choose one topic from the list below, read the articles, and hand in a report that critically discusses this material and answers the assignment questions. Reports should summarise the key aspects, but more importantly, should include original and critical thought that shows you have acquired a meta-level understanding of the topic – plain summaries will not suffice. All sources you use should be appropriately referenced, and any text you quote should be clearly identified as such. The expected length of a report is between 3 and 5 pages, but there is no limit.

For the topic of your assignment, choose one of the following:

  1. Grab it

    To retrieve succinct sets of patterns, MDL-based approaches have proven quite successful. To extend beyond the simple conjunctive patterns that Krimp [1] and Slim [2] discover, Fischer and Vreeken recently proposed Grab [3] to discover interesting sets of rules. Your task is to read each of these papers critically, and to investigate their conceptual differences and similarities.

    All three employ an MDL-based score, yet the authors of Grab opted for binomial codes over canonical enumerations to encode the data, rather than the optimal prefix codes that Krimp and Slim use to define \(L(D|M)\). What are the differences between these two approaches, and what are the implications? In particular, how does each score behave when there are overlapping patterns in the set, and what does that mean for the interpretation of the pattern sets? Are there any algorithmic advantages or disadvantages to either approach?
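
    To fix intuitions, the following is a deliberately simplified contrast between the two encodings – see the papers for the exact definitions. Krimp and Slim assign each pattern \(X\) in the code table \(CT\) an optimal prefix code whose length depends on how often it is used in the cover of the data,

    \[ L(\mathit{code}(X) \mid CT) = -\log \frac{\mathit{usage}(X)}{\sum_{Y \in CT} \mathit{usage}(Y)} \;, \qquad L(D \mid CT) = \sum_{t \in D} \; \sum_{X \in \mathit{cover}(t)} L(\mathit{code}(X) \mid CT) \;, \]

    whereas a binomial code over a canonical enumeration encodes, per pattern, in which of the \(n\) rows it holds, by first transmitting the count \(k\), and then the index of that particular row set among all \(\binom{n}{k}\) equally likely possibilities,

    \[ L \approx \log(n+1) + \log \binom{n}{k} \;. \]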

    Another difference from Slim is that Grab models noise explicitly, using error matrices. For which types of patterns and what kind of data is such an encoding useful? Is it always preferable to use a model that explicitly models noise? If not, when would you prefer the one, and when would you prefer the other?
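
    Schematically, and again not following Grab's exact definitions, such a noise-aware score lets the model describe the data only approximately, and separately encodes where the model overstates resp. understates the data, e.g. via error matrices \(E^{+}\) and \(E^{-}\) whose cells can themselves be transmitted using binomial codes,

    \[ L(D \mid M) = L(E^{+}) + L(E^{-}) \;, \qquad L(E) \approx \log(nm+1) + \log \binom{nm}{|E|} \;, \]

    for an \(n\)-by-\(m\) binary data matrix with \(|E|\) erroneous cells.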

  2. Squeeze it

    Mining sets of patterns that together describe the data well has effectively solved the pattern explosion. The questions remain, however, how to score, and how to mine such sets. These are difficult questions. The first determines what the ideal solution looks like, whereas the second determines what we can realistically find. Both involve choices with far-reaching consequences that are not always easy to foresee.

    To summarise event sequences, Tatti and Vreeken [4], for example, proposed the SQS algorithm. They use MDL, the Minimum Description Length principle, to define their score, and propose algorithms both to score the data and to discover good pattern sets directly from data. A few years later, Fowkes and Sutton [5] took a related but slightly different probabilistic approach, ISM, that does not directly punish gaps, and does allow patterns to interleave. Recently, Bhattacharyya and Vreeken [6] presented Squish, which can discover interleaving and nested patterns, and considers a richer class of patterns than both SQS and ISM.
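
    As a reminder, the MDL-based methods here instantiate the usual two-part objective: among a class of models \(\mathcal{M}\), the best model is the one that minimises the total number of bits needed to describe both the model and the data given that model,

    \[ M^{*} = \arg\min_{M \in \mathcal{M}} \; L(M) + L(D \mid M) \;. \]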

    Your assignment, should you choose to accept it, is to read and analyse these papers critically, and to connect the dots. Basic questions that can help you on your way include the following. Are there any hidden, or obvious, biases and assumptions in the scores, in the cover, or in the search algorithms that may influence the results? If so, how? What are the advantages of the probabilistic score over the MDL-based one? How different are they really? What are the implications of not punishing gaps, in theory and in practice? What about the comparisons between the different methods: are these convincing and fair, or do they compare apples and oranges? Does ISM fare well at discovering the types of patterns it is after? How about interleaving? Why are the results as they are? (And, are the experiments presented fairly?)

    (Bonus) Squish discovers a model that is at least as good as that of SQS much faster, yet its convergence still takes a while. How could we speed this up, in a principled way? Also, the SQS-Search procedure requires many passes over the data; how could we reduce these and still (likely) obtain good models? Further, is it possible to include some of the ideas of ISM back into SQS or Squish? How?

  3. Group it (Hard)

    Loosely speaking, in subgroup discovery we are after discovering subpopulations of our data that are a) selectable with a simple, interpretable query, i.e. a pattern, and b) exhibit a different distribution over the target variable than the global one. Of course, there are many ways to define what makes a good subgroup, and over time many such definitions have been proposed. Commonly used scores include weighted relative accuracy for discrete targets, and mean shift for continuous-valued targets.
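
    For reference, for a subgroup \(s\) covering \(n_s\) of \(n\) rows, with \(p_s\) the fraction of positives within \(s\) and \(p_0\) that in the full data, weighted relative accuracy is

    \[ \mathrm{WRAcc}(s) = \frac{n_s}{n} \left( p_s - p_0 \right) \;, \]

    while for a continuous target with subgroup mean \(\mu_s\) and global mean \(\mu_0\), a common (coverage-weighted) parameterisation of mean shift is

    \[ q_{\alpha}(s) = \left( \frac{n_s}{n} \right)^{\alpha} \left( \mu_s - \mu_0 \right) \;, \qquad \alpha \in [0,1] \;, \]

    where \(\alpha\) trades off coverage against effect size.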

    Only surprisingly recently, however, did we realise that 'standing out' is not by itself a virtue. If we want to use subgroups to better predict the value of the target attribute, or want to use them to explain the target attribute in simple terms, standing out alone does not suffice.

    Read the following two papers, one by Song et al. [7] and one by Boley et al. [8]. Focus your discussion on what is common between the goals of these two (technically very different) approaches, on what problem they (try to) solve, and how. Discuss critically whether they achieve this goal, be it in general, or in the specific use case they consider. Last, go into detail on what is different between the two approaches, beyond the obvious. Whatever you do, try not to be distracted by the details of the search schemes, as that is all boilerplate.

    (Bonus) Read the recent paper by Kalofolias et al. [9], who propose a variant of subgroup discovery in which we additionally consider a control variable. Ignoring the technical solution they propose, and focusing instead on the general problem, where this control variable could be either continuous or discrete valued, could we use the ideas of Song et al. [7] and Boley et al. [8] to build a stronger method? Or do simple notions of (not) standing out suffice here?

  4. Bop it

    A long time ago, frequency was thought to be a good measure for the interestingness of a pattern – you want to know what has been sold often, after all. The patterns this resulted in, however, were terribly boring – the result sets were both incredibly large and incredibly redundant. A natural idea to alleviate this problem is to instead report only those patterns whose frequency is significant with regard to some background model. Unsurprisingly, this turned out to be much harder than expected.
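
    To make 'significant with regard to some background model' concrete, below is a minimal sketch – not taken from any of the three papers – of arguably the simplest instantiation: a background model that assumes the items are independent, combined with a one-sided binomial test on the observed support. All names are illustrative.

        import numpy as np
        from scipy.stats import binom

        def independence_pvalue(D, itemset):
            """D: binary 0/1 array, rows are transactions, columns are items.
            itemset: list of column indices. Returns the one-sided p-value of
            observing at least this support under the independence model."""
            n = D.shape[0]
            # expected probability of the itemset under item independence
            p0 = np.prod(D[:, itemset].mean(axis=0))
            # observed support: rows in which all items of the itemset occur
            support = int(D[:, itemset].all(axis=1).sum())
            # tail probability P(X >= support) for X ~ Binomial(n, p0)
            return binom.sf(support - 1, n, p0)

        # toy example: 100 random transactions over 3 items
        rng = np.random.default_rng(0)
        D = (rng.random((100, 3)) < 0.4).astype(int)
        print(independence_pvalue(D, [0, 1]))

    Already this toy version hints at the deeper issues: testing many patterns against one static null model yields a flood of 'significant' yet redundant results, which is precisely what iterative randomisation (Hanhijärvi et al.), self-sufficiency (Webb), and maximum entropy models (De Bie) each address in their own way.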

    Read Hanhijärvi et al. [10], Webb [11], and De Bie [12]. Each of these papers somehow addresses the above problem, each defines some way to determine which patterns are significant, and all take a quite different approach. Or, do they?

    Your task is to critically assess these proposals, connect the dots between them, and identify the core differences. Some questions to get you started include the following. What assumptions do they make, e.g. about the data, or about the user, and what are the implications of these? Do you expect (large) differences in which patterns they will return? Why? Do you expect any of these proposals to solve the main problem? Are the results of these methods guaranteed not to be boring? If so, why, and if not, why not? Ignoring computation, and the difficulty of implementation, which of these methods do you find most promising, and why?


Return the assignment by email to tada-staff@mpi-inf.mpg.de by May 27th, 10:00. The subject of the email must start with [TADA]. The assignment must be returned as a PDF, and it must contain your name, matriculation number, and e-mail address, together with the exact topic of the assignment.

Grading will take into account both the hardness of the questions and whether you answer the Bonus questions.



References

You will need a username and password to access the papers outside the MPI network. Contact the lecturer if you don't know the username or password.

[1] Vreeken, J., van Leeuwen, M. & Siebes, A. Krimp: Mining Itemsets that Compress. Data Mining and Knowledge Discovery, 23(1):169-214, Springer, 2011.
[2] Smets, K. & Vreeken, J. Slim: Directly Mining Descriptive Patterns. In Proceedings of the 12th SIAM International Conference on Data Mining (SDM), Anaheim, CA, pages 236-247, Society for Industrial and Applied Mathematics (SIAM), 2012.
[3] Fischer, J. & Vreeken, J. Sets of Robust Rules, and How to Find Them. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Data (ECMLPKDD), Springer, 2019.
[4] Tatti, N. & Vreeken, J. The Long and the Short of It: Summarizing Event Sequences with Serial Episodes. In Proceedings of the 18th ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD), Beijing, China, ACM, 2012.
[5] Fowkes, J. & Sutton, C. A Subsequence Interleaving Model for Sequential Pattern Mining. In Proceedings of the 22nd ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD), San Francisco, CA, ACM, 2016.
[6] Bhattacharyya, A. & Vreeken, J. Squish: Efficiently Summarising Event Sequences with Rich Interleaving Patterns. In Proceedings of the 17th SIAM International Conference on Data Mining (SDM), Houston, TX, SIAM, 2017.
[7] Song, H., Kull, M., Flach, P. & Kalogridis, G. Subgroup Discovery with Proper Scoring Rules. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Data (ECMLPKDD), Springer, 2016.
[8] Boley, M., Goldsmith, B.R., Ghiringhelli, L. & Vreeken, J. Identifying Consistent Statements about Numerical Data with Dispersion-Corrected Subgroup Discovery. Data Mining and Knowledge Discovery, 31(5):1391-1418, Springer, 2017.
[9] Kalofolias, J., Boley, M. & Vreeken, J. Efficiently Discovering Locally Exceptional yet Globally Representative Subgroups. In Proceedings of the 17th IEEE International Conference on Data Mining (ICDM), New Orleans, LA, IEEE, 2017.
[10] Hanhijärvi, S., Ojala, M., Vuokko, N., Puolamäki, K., Tatti, N. & Mannila, H. Tell me something I don't know: randomization strategies for iterative data mining. In Proceedings of the 15th ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD), Paris, France, pages 379-388, ACM, 2009.
[11] Webb, G.I. Self-sufficient itemsets: An approach to screening potentially interesting associations between items. ACM Transactions on Knowledge Discovery from Data, 4(1):1-20, ACM, 2010.
[12] De Bie, T. Maximum entropy models and subjective interestingness: an application to tiles in binary databases. Data Mining and Knowledge Discovery, 23(3):407-446, Springer, 2011.