The Learning and Mining for Cybersecurity (LEMINCS) workshop aims to boost interest in security and privacy within data mining and machine learning, with specific focus on analyzing and detecting threats in the cyber security domain, determining the trustworthiness of data and results, catching fake news, and pushing the envelope in fair and accountable mining methods. In other words, we aim to increase the data science footprint in the cyber security domain.
13:00 | Welcome to LEMINCS'19
Adversarial Robustness of Machine Learning Models for Graphs
Stephan Günnemann is a Professor at the Department of Informatics, Technical University of Munich. He acquired his doctoral degree at RWTH Aachen University, Germany in the field of computer science. From 2012 to 2015 he was affiliated with Carnegie Mellon University, USA, initially as a postdoctoral fellow and later as a senior researcher. He was a visiting researcher at Simon Fraser University, Canada, and a research scientist at the Research & Technology Center of Siemens AG. His main research interests include the development of robust and scalable machine learning techniques for graphs and temporal data. His works on subspace clustering on graphs as well as his analysis of adversarial robustness of graph neural networks have received the best research paper awards at ECML-PKDD and KDD.
Graph neural networks and node embedding techniques have recently achieved impressive results in many graph learning tasks. Despite their proliferation, studies of their robustness properties are still very limited -- yet, in domains where graph learning methods are often used, e.g., the web, adversaries are common. In my talk, I will shed light on the aspect of adversarial robustness for state-of-the-art graph-based learning techniques. I will highlight the unique challenges and opportunities that come along with the graph setting, and I will introduce perturbation approaches showcasing these methods' vulnerabilities. I will conclude with a discussion of robustness certificates as well as learning principles for improving robustness.
Research Talks (time allocation: 15+5 each)
Dr. Neil Shah
Outlier Detection for Mining Social Misbehavior
Neil is a Research Scientist at Snap Inc., with research interests in computational user modeling and user understanding, especially in the contexts of misbehavior, abuse, and fraud on web platforms. His work has resulted in 30+ top conference and journal publications in venues such as KDD, ICDM, WWW, SDM, DSAA, PAKDD, TKDD, and more. He previously held research positions at Lawrence Livermore National Laboratory, Microsoft Research, and Twitch.tv. Neil earned a PhD in Computer Science in 2017 from Carnegie Mellon University's Computer Science Department, advised by Professor Christos Faloutsos. He likes watches that tick.
Outlier detection has historically been used in a variety of domains for identifying rare samples in datasets. In this talk, I will discuss the particular application of outlier detection for identifying misbehavior and malicious actors in online social platforms. I will overview several previous works which leverage outlier detection techniques for identifying abusive following behavior and fake viewership on online platforms, and discuss technical takeaways and meta-lessons learned from these projects and more. The purpose of this talk is to communicate the great value that outlier detection approaches add in misbehavior detection, but also to pose a call to action for prioritizing the discovery or identification of specific outlying behaviors as opposed to outliers in general.
Research Talks (time allocation: 15+5 each)
Over the last decades we have become more and more interconnected: our computers are constantly connected to the internet, we store our data in cloud services, and our ordinary household devices have become smarter and remotely accessible. An unfortunate by-product of these advances is a significant increase in information leaks, privacy breaches, and malicious behavior. This includes the growth and industrialization of malware, more sophisticated targeted attacks on companies and individuals, and malicious behavior over social and peer-to-peer networks. Moreover, as decision systems become more and more data-driven, it is vital to avoid algorithmic bias, as this may lead to undesired results, for example by making certain groups of people more vulnerable. While there have been great success stories in applying data mining techniques to the cyber security domain, such as spam detection, the consensus among cyber security experts is that more data science techniques are needed to detect, act upon, and prevent malicious behavior and algorithmic bias, and to preserve privacy.
The goal of LEMINCS is to increase the data science footprint in the cyber security domain. We are interested in novel methodology papers with strong applications in security and privacy, as well as successful applications of existing methodology. In addition to more traditional problem settings, such as malware analysis, we are also highly interested in developing topics such as adversarial machine learning, malicious behavior in social networks (e.g., spreading fake news), and assessing whether the developed algorithms are fair.
Workshop | Mon, August 5, 2019
Topics of interest for the workshop include, but are not limited to:
All papers will be peer reviewed, single-blind. We welcome many kinds of papers, such as (but not limited to):
Note that we especially encourage position papers as well as data set submissions. Both are extremely important, as the cyber security field is changing at a breakneck pace and there is a significant shortage of modern data.
Authors should clearly indicate in their abstracts the kinds of submissions that the papers belong to, to help reviewers better understand their contributions. Submissions must be in PDF, written in English, no more than 10 pages long — shorter papers are welcome — and formatted according to the standard double-column ACM Sigconf Proceedings Style.
The accepted papers will be posted on the workshop website and will not appear in the KDD proceedings.
For accepted papers, at least one author must attend the workshop to present the work.
For paper submission, proceed to the LEMINCS 2019 submission website.