Julius von Kügelgen
Exploring the intersection of causal inference and machine learning
Image credit: MPI for Intelligent Systems / W. Scheible
I am a fifth-year PhD candidate in the Cambridge-Tübingen programme, co-supervised by Bernhard Schölkopf at the Max Planck Institute for Intelligent Systems in Tübingen and by Adrian Weller at the University of Cambridge.
My research interests lie at the intersection of causal inference and machine learning; see Research Interests and the publications below for more details.
News & Updates
11/2023 - I will give an invited talk at the Causal Representation Learning Workshop at NeurIPS (also, drop by our four posters)
09/2023 - With fantastic co-authors, 3 papers accepted at NeurIPS 2023:
05/2023 - Causal Effect Estimation from Observational and Interventional Data Through Matrix Weighted Linear Estimators accepted at UAI 2023
04/2023 - Provably Learning Object-Centric Representations accepted at ICML 2023 (oral)
04/2023 - We organised a Workshop on Causal Representation Learning in Tübingen.
04/2023 - Our work on Backtracking Counterfactuals received the Best Paper Award at CLeaR 2023.
03/2023 - Evaluating vaccine allocation strategies using simulation-assisted causal modelling accepted at Patterns (Cell Press)
01/2023 - DCI-ES: An Extended Disentanglement Framework with Connections to Identifiability accepted at ICLR 2023
01/2023 - With great collaborators, 2 papers accepted at CLeaR 2023:
01/2023 - Had a great time at the Bellairs Workshop on Causality: Inference and Representation Learning.
09/2022 - With great collaborators, 4 papers accepted at NeurIPS 2022:
09/2022 - I have been awarded a 2022 Google PhD Fellowship in Machine Learning; huge thanks to all my collaborators and mentors for their support!
Self-Supervised Disentanglement by Leveraging Structure in Data Augmentations.
Cian Eastwood, JvK, Linus Ericsson, Diane Bouchacourt, Pascal Vincent, Bernhard Schölkopf, Mark Ibrahim
Multi-View Causal Representation Learning with Partial Observability.
Dingling Yao, Danru Xu, Sébastien Lachapelle, Sara Magliacane, Perouz Taslakian, Georg Martius, JvK, Francesco Locatello
Nonparametric Identifiability of Causal Representations from Unknown Interventions.
JvK, Michel Besserve, Wendong Liang, Luigi Gresele, Armin Kekić, Elias Bareinboim, David M. Blei, Bernhard Schölkopf
On the Fairness of Causal Algorithmic Recourse.
AAAI 2022 (Oral)
(also at: ICML 2021 Workshop Algorithmic Recourse; NeurIPS 2020 Workshop Algorithmic Fairness through the Lens of Causality and Interpretability (AFCI))
Algorithmic Recourse Under Imperfect Causal Knowledge: A Probabilistic Approach
NeurIPS 2020 (Spotlight)
(also at: ICML 2020 Workshops
XXAI: Extending Explainable AI Beyond Deep Models and Classifiers (oral; 4/20 papers)
WHI: Workshop on Human Interpretability in Machine Learning (oral; 4/50 papers))
Other Publications & Preprints
Kernel-Based Independence Tests for Causal Structure Learning on Functional Data
Felix Laumann, JvK, Junhyung Park, Bernhard Schölkopf, Mauricio Barahona
Entropy, 2023 (Special Issue on Causality and Complex Systems)
Unsupervised Object Learning via Common Fate.
Matthias Tangemann, Steffen Schneider, JvK, Francesco Locatello, Peter Gehler, Thomas Brox, Matthias Kümmerer, Matthias Bethge, Bernhard Schölkopf.
DCI-ES: An Extended Disentanglement Framework with Connections to Identifiability.
Cian Eastwood*, Andrei Liviu Nicolicioiu*, JvK*, Armin Kekić, Frederik Träuble, Andrea Dittadi, Bernhard Schölkopf. (*equal contribution)
(Previously at: Workshop on Causal Representation Learning @ UAI 2022)
Embrace the Gap: VAEs Perform Independent Mechanism Analysis.
Patrik Reizinger, Luigi Gresele, Jack Brady, JvK, Dominik Zietlow, Bernhard Schölkopf, Georg Martius, Wieland Brendel, Michel Besserve.
(Previously at: 5th Workshop on Tractable Probabilistic Modeling @ UAI 2022)
Complex interlinkages, key objectives and nexuses amongst the Sustainable Development Goals and climate change: a network analysis
Felix Laumann, JvK, Thiago Hector Kanashiro Uehara, Mauricio Barahona
The Lancet Planetary Health, 2022
Visual Representation Learning Does Not Generalize Strongly Within the Same Domain.
Lukas Schott, JvK, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel.
(Previously at: ICLR 2021 Workshop Generalization beyond the training distribution in brains and machines)
Semi-supervised learning, causality and the conditional cluster assumption.
JvK, Alexander Mey, Marco Loog, Bernhard Schölkopf.
(also at: NeurIPS 2019 Workshop “Do the right thing”: machine learning and causal inference for improved decision making)
Optimal experimental design via Bayesian optimisation: active causal structure learning for Gaussian process networks.
JvK, Paul K Rubenstein, Bernhard Schölkopf, Adrian Weller.
NeurIPS 2019 Workshop “Do the right thing”: machine learning and causal inference for improved decision making