CAUSALI-T-AI
Seminars and reading group
Feel free to contact us if you want to be added to the causal-xai mailing list: causalxaiscai@gmail.com
2025
Oct 8, 4 p.m.-5 p.m. Talk by A. Machado (UQAM). Online only, via Teams. Slides
Title: Assessing Counterfactual Fairness via (Marginally) Optimal Transport
Abstract: Algorithmic fairness refers to the set of principles and techniques aimed at ensuring that the decisions produced by an algorithm are fair and non-discriminatory toward all users, regardless of personal characteristics such as sex, ethnicity, or other so-called sensitive attributes. Its assessment can be carried out at multiple levels: on the one hand, at the group level, by comparing a model’s predictions across different groups defined by sensitive variables; and on the other hand, at the individual level, by focusing on a specific individual from a minority group and asking counterfactual questions such as: “What would this woman’s salary be if she were a man?” To evaluate algorithmic fairness at the individual level, we adopt the notion of Counterfactual Fairness proposed by Kusner et al. (2017). This approach relies on the mutatis mutandis principle, in contrast to the ceteris paribus principle: rather than checking whether a model’s prediction for an individual remains unchanged when only the sensitive attribute is modified while keeping all other explanatory variables constant, we ask whether the prediction remains the same when only the variables not causally influenced by the sensitive attribute are held constant. The definition of Counterfactual Fairness relies on Pearl’s (2009) causal inference framework and involves computing an individual’s counterfactual in which the sensitive attribute is modified, assuming prior knowledge of a causal graph over the model’s explanatory variables. In this study, we link two existing approaches for deriving counterfactuals: intervention-based approaches on a causal graph with quantile preservation, as proposed by Plečko et al. (2020), and multivariate optimal transport introduced by Lara et al. (2024). We extend the concepts of “Knothe’s rearrangement” and “triangular transport” to probabilistic graphical models and establish the theoretical foundations of a counterfactual approach, called sequential transport, to discuss individual-level fairness.
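To make the counterfactual computation above concrete, here is a minimal sketch of Pearl's abduction-action-prediction recipe on a toy linear SCM; the coefficients, the predictor, and all variable names are illustrative assumptions, not material from the talk:

# Toy illustration of a counterfactual fairness check (assumed linear SCM).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy SCM: S -> X, with exogenous noise U (all hypothetical).
s = rng.integers(0, 2, n)              # sensitive attribute (0/1)
u = rng.normal(size=n)                 # exogenous noise of X
x = 2.0 * s + u                        # X is causally influenced by S

def model(s, x):                       # a deliberately unfair toy predictor
    return 1.5 * x + 0.5 * s

# Abduction-action-prediction: recover U, intervene on S,
# regenerate the descendant X, then compare predictions.
u_hat = x - 2.0 * s                    # abduction: U = X - 2S
s_cf = 1 - s                           # action: flip the sensitive attribute
x_cf = 2.0 * s_cf + u_hat              # prediction: counterfactual X

gap = model(s_cf, x_cf) - model(s, x)  # zero for a counterfactually fair model
print("mean |counterfactual gap|:", np.abs(gap).mean())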
Oct 7, 5 p.m.-6 p.m. Talk by C. Gourieroux (TSE & U. Toronto). Online only. Contact us for the Teams link
Title: Bubbles and non-causal models: anticipating the explosion, understanding the burst?
In this talk, the speaker will present:
— Basic ideas: mixed causal/non-causal autoregressive (MAR) models, dependence on future values, and their role in modeling speculative bubbles (see the toy simulation after this list).
— Recent developments: extreme dynamics (tail process), extensions to non-causal affine models (affine reverse time), and methods for real-time forecasting and detection.
— Concrete applications: empirical examples on digital bubbles (Bitcoin, futures, commodities) and forecasting strategies to identify the beginnings and ends of bubbles.
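To give a rough feel for the "dependence on future values" in the first point, here is a toy simulation of a purely non-causal AR(1); the parameters are illustrative assumptions, not from the talk:

# Toy non-causal AR(1): x_t = psi * x_{t+1} + eps_t (assumed parameters).
import numpy as np

rng = np.random.default_rng(1)
T, psi = 500, 0.7

eps = rng.standard_cauchy(T)       # heavy-tailed innovations fuel bubbles
x = np.zeros(T)
for t in range(T - 2, -1, -1):     # backward recursion: x_t depends on x_{t+1}
    x[t] = psi * x[t + 1] + eps[t]

# In calendar time, a large future innovation shows up as a roughly
# exponential run-up at rate 1/psi followed by an abrupt collapse,
# the bubble-like pattern these models are designed to capture.
print("largest absolute excursion:", np.abs(x).max())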
Sept 25, 11 a.m.-12 p.m. Talk by J. Peters (ETH Zürich).
The talk is part of the Aptikal team seminar series at LIG, Grenoble. It is also online; you can join via Zoom.
Title: Causality, Hidden Confounding, and Robustness
When learning predictive models on real-world data, we usually expect them to work well on test data that follow the same distribution as the training data. Often, however, such an assumption is unrealistic. Instead, we would like to apply the fitted model at different locations, at different times, or under different boundary conditions. How to learn models that perform as well as possible in such difficult settings is largely an unsolved problem. In this talk, we discuss a few ideas and draw connections to causality and hidden confounding. The talk hopefully serves as an inspiration to think about this interesting, general, and important problem.
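As a minimal illustration of the failure mode the abstract describes, the following toy example (an assumed data-generating process, not from the talk) fits a regression under hidden confounding and evaluates it after a distribution shift:

# Toy example: a model fitted under hidden confounding degrades under shift.
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Training environment: hidden confounder H drives both X and Y.
h = rng.normal(size=n)
x_tr = h + 0.1 * rng.normal(size=n)
y_tr = h + 0.1 * rng.normal(size=n)
beta = np.polyfit(x_tr, y_tr, 1)[0]          # OLS slope, close to 1 in-sample

# Test environment: X is set externally, so the X-H link is broken.
x_te = rng.normal(size=n)
h_te = rng.normal(size=n)
y_te = h_te + 0.1 * rng.normal(size=n)

mse_tr = np.mean((y_tr - beta * x_tr) ** 2)
mse_te = np.mean((y_te - beta * x_te) ** 2)
print(f"train MSE: {mse_tr:.3f}, test MSE under shift: {mse_te:.3f}")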
Sept 22, 1 p.m.-2 p.m. Talk by J. Wahl (DFKI, Saarbrücken). Slides
The talk is part of the SimulGroup@CRAN seminar series at Nancy. You can join via Teams.
Title: Are our DAGs correct? Recent developments in causal model evaluation
Causal graphical models are considered important tools for integrating expert knowledge into statistical data analysis. As a consequence, practitioners often face the challenge of evaluating the quality of their hypothesized causal models. This issue becomes particularly salient for causal graphical models obtained through a causal discovery method, in which case most or all of the available data has already been used in the discovery task. In this talk, we review recent developments regarding the evaluation of causal models and structure learning methods. In particular, we discuss to what degree assumption violations can be detected in causal discovery, and how method testing often introduces additional assumptions that need to be weighed carefully against the assumptions of the initial learning algorithm.
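As one concrete, generic instance of such an evaluation (a textbook conditional-independence check, not the specific methods from the talk), a hypothesized chain X -> Y -> Z implies X is independent of Z given Y, which can be probed with a partial-correlation test:

# Probing a hypothesized DAG via an implied conditional independence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 2_000

x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
z = 0.8 * y + rng.normal(size=n)       # data consistent with X -> Y -> Z

# Partial correlation of X and Z given Y, via residualization on Y.
rx = x - np.polyval(np.polyfit(y, x, 1), y)
rz = z - np.polyval(np.polyfit(y, z, 1), y)
r, _ = stats.pearsonr(rx, rz)

t = r * np.sqrt((n - 3) / (1 - r**2))  # t-statistic, n-3 degrees of freedom
p = 2 * stats.t.sf(abs(t), df=n - 3)
print(f"partial corr = {r:.3f}, p = {p:.3f}")  # large p: no evidence against the DAG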
2024
November 21st. Reading group on causality and SDEs
October 1st. Reading group on causal discovery with tensorial approaches
April 22nd. Reading group on causal inference and optimal transport
2023