Abstract. In many scientific applications, we observe a system under different conditions in which its components may change, rather than in isolation. In this work, we are interested in explaining the generating process of such a multi-context system using a finite mixture of causal mechanisms. Recent work shows that this causal model is identifiable from data, but only in settings where the sparse mechanism shift hypothesis holds and only a subset of the causal conditionals change. As this assumption is not easily verifiable in practice, we study the more general principle that mechanism shifts are independent, which we formalize using the algorithmic notion of independence. We introduce an approach for causal discovery beyond partially directed graphs using Gaussian Process models, and give conditions under which we provably identify the correct causal model. In our experiments, we show that our method performs well in a range of synthetic settings, on realistic gene expression simulations, as well as on real-world cell signaling data.
Learning Causal Models under Independent Changes. In: Proceedings of Neural Information Processing Systems (NeurIPS), PMLR, 2023. (26.1% acceptance rate)

Learning Independent Causal Mechanisms. In: Proceedings of the 2nd ICML Workshop on Spurious Correlations, Invariance and Stability (SCIS), 2023. 

Discovering Invariant and Changing Mechanisms from Data. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pp. 1242–1252, ACM, 2022. (15.0% acceptance rate)

Causal Inference from Different Contexts using Algorithmic Causal Models. M.Sc. Thesis, Saarland University, 2021. 