Assignment 1: The Warm-Up

The deadline is May 2nd at 10:00 Saarbrücken standard time. You are free to hand in earlier. You will have to choose one topic from the list below, read the articles, and hand in a report that answers the assignment questions but, above all, critically discusses the material beyond these. Reports should summarise the key aspects, but more importantly, should include original and critical thought that shows you have acquired a meta-level understanding of the topic – plain summaries will not suffice. All sources you use should be appropriately referenced, and any text you quote should be clearly identified as such. The expected length of a report is between 3 and 5 pages, but there is no limit.

For the topic of your assignment, choose one of the following:

  1. The Scientific Discourse

    Reshef et al. [1] introduced a novel measure of correlation (or dependence) to detect associations in large data sets. They present their findings very confidently as the next big thing. Among others, Simon & Tibshirani [2] and Kinney & Atwal [3] think differently, and wrote rather strong rebuttals. Reshef et al. [4] did not quite agree with these rebuttals, leading to yet another answer by Kinney & Atwal [5], and ultimately to [6].

    Who is right? Are Reshef et al. [1] presenting false claims and over-selling their results? Are Kinney & Atwal [3] purposefully misinterpreting Reshef et al. [1] and ignoring their claims? Are Simon & Tibshirani [2] presenting sensible criticism, or are they pedantic and beside the point? What is your opinion: is the concept of equitability a useful one, and is MIC a useful measure? To what extent are MIC and 'equitability' useful concepts when we want to explore (mine) and learn from data?

    What does this exchange of letters and notes tell us about the process of doing science? Has the general public been misled by the tone and impact of these publications? Was Science wrong to publish [1] so soon? Should they have retracted it, were it not that it attracted attention and hence money? Was it scientifically acceptable for PNAS to publish [3], or might attention and money be part of the equation? What is the value of pre-print servers such as arXiv (where [2] and [6] are published)?

    Your report should address both the technical questions and the above questions about the process.
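    To ground the technical discussion, it may help to see why a linear correlation measure can miss a dependence that an information-theoretic one catches. The following is a minimal sketch only – it is not the MIC algorithm of Reshef et al. [1], and the `mutual_information` helper and its bin count are illustrative assumptions (a simple histogram plug-in estimate of mutual information).

```python
# A minimal sketch (NOT the actual MIC algorithm from [1]): contrast
# Pearson correlation with a histogram-based mutual information estimate
# on a noiseless but nonlinear relationship.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 5000)
y = x ** 2  # deterministic dependence, but not linear

# Pearson correlation is near zero: x and x^2 are uncorrelated
# when x is symmetric around zero.
r = np.corrcoef(x, y)[0, 1]

def mutual_information(a, b, bins=20):
    """Plug-in mutual information estimate (in nats) from a 2D histogram.

    Illustrative helper, not part of any cited paper.
    """
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = pxy / pxy.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of b
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

print(f"Pearson r = {r:.3f}, MI estimate = {mutual_information(x, y):.3f} nats")
```

    A strong dependence that Pearson correlation rates near zero is exactly the kind of association [1] sets out to detect – a useful starting point when weighing the equitability debate in [2]–[6].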

  2. Deep Learning and the Wall

    To do well on this assignment, you must have a good understanding of deep learning. Do not choose it just because omg so hype!

    The most talked-about machine-learning-related topic of recent years has definitely been deep learning. Many researchers have claimed impressive results with deep learning: it can classify images [7], beat humans at Go [8], achieve scientific breakthroughs [9], generate wonderful images from text [10], and do your homework for you (e.g. ChatGPT).

    Explain what is 'deep' about deep learning, and how these applications really use this 'deepness'. Do they all use the same "deep learning"? Are they all truly (only) about deep learning? For example, did AlphaGo do all feature selection automatically, end-to-end, or were there hand-crafted rules? Would you credit the deep net for the discovery of the new antibiotic, or did it get a bit of help?

    Are all of these resounding success stories? Read also [11] and [12]. Dall-E 2 [10] was announced mere days after [12]. Did it break through the 'wall' or is the wall still there? Or, as some argue, is there no wall to begin with?

    The overarching question is: can deep learning and knowledge discovery go together and support each other, or are they mostly mutually exclusive?

  3. Connecting the Dots

    Read the following three papers by Shahaf & Guestrin [13], Shahaf et al. [14], and Hope et al. [15]. Each considers a very different but also fascinating topic. Besides sharing an author, what are the similarities, differences, and non-trivial connections between these papers?

    To help you get started, consider the following example questions. To what extent do these papers, the methods, and the results convince you? Why? Are the goals clearly defined? Why? Are the choices principled or rather ad hoc? Why? Is the evaluation convincing? Why? Are the results convincing? Why? Are there any experiments you think would have been doable and necessary? Can you identify possible improvements to the algorithms? How would you approach these problems?

Return the assignment by email to vreeken (at) by 2 May, 1000 hours. The subject of the email must start with [TADA]. The assignment report must be returned as a PDF and it must contain your name, matriculation number, and e-mail address together with the exact topic of the assignment on the first page.


You will need a username and password to access the papers. The first lecture gives you the password.

[1] Reshef, D.N., Reshef, Y.A., Finucane, H.K., Grossman, S.R., McVean, G., Turnbaugh, P.J., Lander, E.S., Mitzenmacher, M. & Sabeti, P.C. Detecting Novel Associations in Large Data Sets. Science, 334(6062):1518-1524, 2011.
[2] Simon, N. & Tibshirani, R. Comment on "Detecting Novel Associations in Large Data Sets" by Reshef et al., Science Dec 16, 2011. arXiv:1401.7645, 2014.
[3] Kinney, J.B. & Atwal, G.S. Equitability, mutual information, and the maximal information coefficient. Proc. Natl. Acad. Sci. USA, 111(9):3354-3359, 2014.
[4] Reshef, D.N., Reshef, Y.A., Mitzenmacher, M. & Sabeti, P.C. Cleaning up the record on the maximal information coefficient and equitability. Proc. Natl. Acad. Sci. USA, 111(33):E3362-E3363, 2014.
[5] Kinney, J.B. & Atwal, G.S. Reply to Reshef et al.: Falsifiability or bust. Proc. Natl. Acad. Sci. USA, 111(33):E3364-E3364, 2014.
[6] Reshef, Y.A., Reshef, D.N., Sabeti, P.C. & Mitzenmacher, M.M. Equitability, interval estimation, and statistical power. arXiv, 2015.
[7] Krizhevsky, A., Sutskever, I. & Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS '12, pages 1097-1105, 2012.
[8] Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T. & Hassabis, D. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.
[9] Stokes, J.M., Yang, K., Swanson, K., Jin, W., Cubillos-Ruiz, A., Donghia, N.M., MacNair, C.R., French, S., Carfrae, L.A., Bloom-Ackermann, Z., Tran, V.M., Chiappino-Pepe, A., Badran, A.H., Andrews, I.W., Chory, E.J., Church, G.M., Brown, E.D., Jaakkola, T.S., Barzilay, R. & Collins, J.J. A Deep Learning Approach to Antibiotic Discovery. Cell, 180(4):688 - 702.e13, 2020.
[10] Ramesh, A., Dhariwal, P., Nichol, A., Chu, C. & Chen, M. Hierarchical Text-Conditional Image Generation with CLIP Latents. arXiv, 2022.
[11] Khurshudov, A. Suddenly, a leopard print sofa appears. Rock'n'Roll Nerd, 2015.
[12] Marcus, G. Deep Learning is Hitting a Wall. Nautilus, 2022.
[13] Shahaf, D. & Guestrin, C. Connecting the dots between news articles. In Proceedings of the 16th ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD), Washington, DC, pages 623-632, 2010.
[14] Shahaf, D., Horvitz, E. & Mankoff, R. Inside Jokes: Identifying Humorous Cartoon Captions. In Proceedings of the ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD), pages 1065-1074, ACM, 2015.
[15] Hope, T., Chan, J., Kittur, A. & Shahaf, D. Accelerating Innovation Through Analogy Mining. In Proceedings of the ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD), pages 235-243, ACM, 2017.