Causality in
Medical Image Computing
An official MICCAI 2023 Satellite Event - October 12
About
This MICCAI 2023 half-day tutorial is a follow-up to a successful first tutorial at MICCAI 2020, introducing fundamental concepts of causality and their role in medical imaging. We will illustrate how causal reasoning provides a fresh perspective on key challenges in image-based predictive modelling, including generalization, data scarcity, confounding, robustness, reliability, and responsible reporting. Theoretical concepts will be introduced and related to real-world examples from medical imaging, such as image classification for computer-aided diagnosis.
The goal of the tutorial is to raise awareness of the importance of taking causal considerations into account when conducting machine learning research for medical imaging. We hope that this tutorial provides new input to the community and paves the way for exciting new research directions in medical image computing.
Videos
TOPICS
Introduction to causality
Causality in medical imaging and machine learning
Distribution shift and generalization
Fairness, biases, confounding
Shortcut learning and mitigation strategies
Image counterfactuals
SPEAKERS
Jessica Schrouff
Google DeepMind
Fabio De Sousa Ribeiro
Imperial College London
David Ouyang
Cedars-Sinai Medical Center
Maggie Makar
University of Michigan
SCHEDULE
08:00 - 10:00 PDT - Session 1
08:00 PDT - Welcome & Introduction by Ben Glocker, Kayhan Batmanghelich
08:30 PDT - A causal lens for fair medical AI by Jessica Schrouff (Google DeepMind)
09:30 PDT - Image counterfactuals by Fabio De Sousa Ribeiro (Imperial College London)
10:00 - 10:30 PDT - Coffee break
10:30 - 12:30 PDT - Session 2
10:30 PDT - The uncanny and unreasonable performance of AI in medical imaging by David Ouyang (Cedars-Sinai Medical Center)
11:30 PDT - Causally motivated shortcut removal by Maggie Makar (University of Michigan)
12:30 PDT - Closing remarks
ABOUT THE SPEAKERS
Jessica Schrouff
Abstract: Causally motivated fairness criteria and mitigation strategies have helped us in our path towards medical AI that benefits all of society. However, these techniques rely on strong assumptions that are typically not satisfied in real-world medical settings. In this talk, we will identify two cases that illustrate this complexity and use a graphical framework to identify potential solutions.
Bio: Jessica is a research scientist at Google DeepMind, working on responsible AI through a causal perspective. Previously, she was at Google Research, where she investigated responsible machine learning for healthcare. Before joining Alphabet in 2019, she was a Marie Curie post-doctoral fellow at University College London (UK) and Stanford University (USA), developing machine learning techniques for neuroscience discovery and clinical prediction. Throughout her career, Jessica's interests have lain not only in the technical advancement of machine learning methods, but also in critical aspects of their deployment, such as credibility, fairness, robustness, and interpretability. She is also involved in DEI initiatives, such as Women in Machine Learning (WiML), and founded the Women in Neuroscience Repository.
Fabio De Sousa Ribeiro
Abstract: The ability to generate plausible counterfactuals has wide scientific applicability and is particularly valuable in fields like medical imaging, wherein data are scarce and underrepresentation of subgroups is prevalent. Answering counterfactual queries such as 'why?' and 'what if?' in the language of causality could greatly benefit key research areas, including: (i) explainability; (ii) data augmentation; (iii) robustness to spurious correlations; and (iv) fairness in both observed and hypothetical outcomes.
Despite recent advancements, accurate estimation of interventional and counterfactual queries for high-dimensional structured variables like medical images remains an open problem. Evaluating inferred counterfactuals also poses inherent challenges, as they are unobservable by definition. In this talk, we will present a system and method for developing and evaluating deep causal mechanisms for structured variables. Leveraging insights from causal mediation analysis and advances in deep generative modeling, we will show how such deep causal mechanisms are capable of plausible counterfactual inference, as measured by the axiomatic soundness of counterfactuals.
Bio: Fabio is a postdoctoral research associate in the BioMedIA group at Imperial College London. His primary research interests lie at the intersection of causality and deep generative modelling for medical imaging and healthcare applications. His work bolsters the ongoing effort by the machine learning community to combine the central ideas behind causality and deep representation learning to help tackle several challenging research areas such as explainability, robustness and fairness.
David Ouyang
David is a cardiologist and researcher in the Department of Cardiology and Division of Artificial Intelligence in Medicine at Cedars-Sinai Medical Center. As a physician-scientist and statistician with a focus on cardiology and cardiovascular imaging, he works on applications of deep learning, computer vision, and the statistical analysis of large datasets within cardiovascular medicine. As an echocardiographer, he applies deep learning to precision phenotyping in cardiac ultrasound and works on the deployment and clinical trials of AI models. David majored in statistics at Rice University, obtained his MD at UCSF, and completed post-graduate medical training in internal medicine and cardiology, as well as a postdoc in computer science and biomedical data science, at Stanford University.
Maggie Makar
Abstract: Robustness to certain forms of distribution shift is a key concern in many ML applications. Often, robustness can be formulated as enforcing invariances to particular interventions on the data generating process. In this talk, I will discuss causally motivated approaches to enforcing such invariances, paying special attention to shortcut learning, where a predictor that could in principle achieve optimal i.i.d. generalization instead relies on spurious correlations, or shortcuts, in practice. I will discuss approaches that utilize auxiliary labels, typically available at training time, to enforce conditional independences between the latent factors that determine these labels. I will cover settings where the shortcuts are known a priori as well as settings where they are unknown, and highlight important theoretical properties of these causally motivated training techniques, which are commonly used in the causal inference, fairness, and disentanglement literature.
Bio: Maggie is an assistant professor of computer science and engineering at the University of Michigan, Ann Arbor. She completed her PhD in computer science at MIT and has held research positions at Google Brain and Microsoft. Dr. Makar's work is funded by the NSF and Schmidt Futures.
Organizers
Ben Glocker, Imperial College London
Kayhan Batmanghelich, Boston University