Workshop on Spurious Correlations, Invariance, and Stability
ICML Page: https://icml.cc/Conferences/2023/Schedule?showEvent=21493
Submission deadline: May 30 (Anywhere on Earth)
Please direct any queries to spurious.icml@gmail.com.
Overview
The workshop brings together domain experts and researchers to facilitate discussions and forge collaborations on problems involving spurious correlations and the instability of machine learning models. Models built without accounting for spurious correlations often break when deployed in the wild, despite excellent performance on benchmarks. In particular, models can learn to rely on apparently unnatural or irrelevant features. Such examples abound in the recent literature:
In detecting lung disease from chest X-rays, models rely on the type of scanner and on marks that technicians use in specific hospitals, instead of the physiological signals of the disease [1, 2].
In Natural Language Processing, when reasoning about whether a premise entails a hypothesis, models rely on the number of shared words rather than the subject's relationship with the object [3].
In precision medicine, polygenic risk scores for diseases like diabetes and breast cancer rely on genes prevalent mainly in people of European ancestry, and are not as accurate in other populations [4].
Extensive work on resolving problems akin to spurious correlations has sprung up in several communities. This includes work on invariance constraints and graph-based methods rooted in causality, methods to avoid discrimination against disadvantaged subgroups in algorithmic fairness, and stress-testing procedures to discover unexpected model dependencies in reliable ML. Yet there is little consensus on best practices, useful formal frameworks, rigorous evaluations of models, and fruitful avenues for the future.
We invite work addressing all aspects of ML in the presence of spurious correlations, from formalization to deployment.
Invited Speakers
Sanmi Koyejo
Stanford University
Sara Magliacane
University of Amsterdam, MIT-IBM Watson Lab
Ludwig Schmidt
University of Washington
Adarsh Subbaswamy
US Food and Drug Administration
Suchi Saria
Johns Hopkins University,
Bayesian Health
Francesco Locatello
Amazon AWS
Panel Discussion
The Future of Generalization: Scale, Safety and Beyond
Adam Gleave
FAR AI
Maggie Makar
University of Michigan
Samuel R. Bowman
New York University,
Anthropic
Zachary C. Lipton
Carnegie Mellon University
Organizers
Yoav Wald
Johns Hopkins University
Claudia Shi
Columbia University
Amir Feder
Columbia University, Google
Limor Gultchin
University of Oxford, The Alan Turing Institute
Mark Goldstein
New York University
Aahlad Puli
New York University
Maggie Makar
University of Michigan
Victor Veitch
University of Chicago, Google
Uri Shalit
Technion
Program Committee
Junye Wang
Shantanu Sharma
Limor Gultchin
Adriel Saporta
Yongqiang Chen
Kirtan Padh
Elan Rosenfeld
Wenlin Chen
Yihe Deng
Goutham Rajendran
Claudia Shi
Thomas Goerttler
Jiaxin Yuan
Shantanu Ghosh
Irina Cristali
Thibaud Godon
Vitória Barin-Pacela
Ziwei Jiang
Xiaoyu Liu
Jason Hartford
Kiho Park
David Brandfonbrener
Nitay Calderon
Lily Zhang
Inwoo Hwang
Yujia Bao
Kevin Bello
Amir Feder
Mark Goldstein
Nitish Joshi
Carolina Zheng
Alex Markham
Jiacheng Zou
Ido Greenberg
Kianté Brantley
Gina Wong
Andrew Jesson
Drew Prinster
Simon Zhang
Elliot Creager
Yoav Wald
Gal Yona
Martin Ferianc
Nokyung Park
Stefan Groha
Polina Kirichenko
Yaning Jia
David Reber
Felipe del Rio
Francesco Quinzan
Aayush Mishra
Katie Kang
Divyat Mahajan
Taro Makino
Bhavya Vasudeva
Simon Buchholz
Aahlad Manas Puli