Identifiability of Cause and Effect using Regularized Regression

Abstract. We consider the problem of distinguishing cause from effect between two univariate, continuous-valued random variables $$X$$ and $$Y$$. In general, it is impossible to make definite statements about causality without making assumptions on the underlying model; one of the most important aspects of causal inference is hence to determine under which assumptions we are able to do so.

In this paper we show under which general conditions we can identify cause and effect by simply choosing the direction with the better regression score. We define a general framework of identifiable regression-based scoring functions, and show how to instantiate it in practice using regression splines. Existing methods either give strong guarantees but are hardly applicable in practice, or work well in practice but provide no guarantees; our instantiation combines the best of both worlds: it gives guarantees, while empirical evaluation on synthetic and real-world data shows that it performs at least as well as the state of the art.
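The core idea of regression-based causal inference can be sketched in a few lines: fit a regularized regression in both directions and infer the direction with the lower score. The sketch below is illustrative only, not the paper's method; it uses polynomial regression with a BIC-style penalty as a stand-in for the spline-based, MDL-like score, and all function names are hypothetical.

```python
import numpy as np

def regression_score(x, y, max_deg=5):
    """Regularized score for modeling y as a function of x.
    A BIC-style penalty (fit + model complexity) stands in for the
    paper's actual scoring function; polynomials stand in for splines."""
    n = len(x)
    best = np.inf
    for deg in range(1, max_deg + 1):
        coef = np.polyfit(x, y, deg)
        resid = y - np.polyval(coef, x)
        var = max(np.var(resid), 1e-12)           # guard against log(0)
        score = n * np.log(var) + (deg + 1) * np.log(n)
        best = min(best, score)
    return best

def infer_direction(x, y):
    """Return 'X->Y' if regressing Y on X yields the lower score,
    else 'Y->X'. Variables are standardized to make scores comparable."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return "X->Y" if regression_score(x, y) < regression_score(y, x) else "Y->X"

# Synthetic additive-noise example: X causes Y via a nonlinear function.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 1000)
y = x**3 + rng.normal(0, 0.3, 1000)
print(infer_direction(x, y))
```

In this additive-noise setting the forward model fits with small residuals, while the backward model must approximate a cube root, which a low-degree polynomial cannot do well, so the forward direction attains the lower score.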

## Implementation

The R source code (May 2019) by Alexander Marx.

## Related Publications

- Xu, S., Mian, O., Marx, A. & Vreeken, J. Inferring Cause and Effect in the Presence of Heteroscedastic Noise. In: Proceedings of the International Conference on Machine Learning (ICML), PMLR, 2022. (21.9% acceptance rate)
- Xu, S., Marx, A., Mian, O. & Vreeken, J. Causal Inference with Heteroscedastic Noise Models. In: Proceedings of the AAAI Workshop on Information Theoretic Causal Inference and Discovery (ITCI'22), 2022.
- Marx, A. Information-Theoretic Causal Discovery. Dissertation, Saarland University, 2021.
- Marx, A. & Vreeken, J. Identifiability of Cause and Effect using Regularized Regression. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), ACM, 2019. (oral presentation, 9.2% acceptance rate; 14.2% overall)