
Title: Algorithmic Recourse: from Counterfactual Explanations to Interventions


Abstract: As machine learning is increasingly used to inform consequential decisions (e.g., pre-trial bail and loan approval), it becomes important to explain how a system arrived at its decision and to suggest actions that would achieve a favorable decision. Counterfactual explanations -- "how the world would have had to be different for a desirable outcome to occur" -- aim to satisfy these criteria. Existing work has primarily focused on designing algorithms to obtain counterfactual explanations in a wide range of settings. However, one of the main objectives -- "explanations as a means to help a data-subject act rather than merely understand" -- has been overlooked. In layman's terms, counterfactual explanations tell an individual where they need to get to, but not how to get there. In this work, we rely on causal reasoning to caution against using counterfactual explanations as a recommended set of actions for recourse. Instead, we propose a paradigm shift from recourse via nearest counterfactual explanations to recourse through minimal interventions, moving the focus from explanations to recommendations. Finally, we provide the reader with an extensive discussion of how to realistically achieve recourse beyond structural interventions.
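To make the distinction concrete, here is a minimal sketch of the two notions of recourse on a hypothetical toy model (the structural equation, classifier, and numbers below are illustrative assumptions, not taken from the talk). A nearest counterfactual explanation perturbs features independently, while a minimal intervention acts on one variable and lets its causal downstream effects do part of the work.

```python
# Hypothetical toy setup: a two-variable linear SCM where
#   x2 := 0.5 * x1 + u2,
# and a linear classifier that grants a favorable outcome iff x1 + x2 >= 4.
# Factual individual: x1 = 1, u2 = 0, hence x2 = 0.5 (unfavorable).

def h(x1, x2):
    """Favorable decision iff the feature sum crosses the threshold."""
    return x1 + x2 >= 4

# Nearest counterfactual explanation: perturb features independently,
# ignoring the causal link from x1 to x2. The cheapest L1 change moves
# the sum from 1.5 up to 4, i.e., a total perturbation of 2.5 (split
# arbitrarily across x1 and x2).
cfe_cost = 4 - (1 + 0.5)  # = 2.5

# Recourse via minimal intervention: act only on x1 with do(x1 = v);
# the SCM then updates x2 downstream to 0.5 * v. We need
#   v + 0.5 * v >= 4  =>  v >= 8/3,
# so the effort spent on x1 alone is 8/3 - 1, which is less than 2.5
# because the downstream effect on x2 comes "for free".
mi_cost = 8 / 3 - 1
```

The point of the sketch is that the two costs differ: treating the counterfactual explanation as a to-do list overstates the effort required, because it ignores that changing x1 also changes x2 through the structural equations.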


Bio: Having lived in Shiraz, Toronto, Waterloo, New York, San Francisco, and Tuebingen, Amir is driven by a desire to bring about change through engineering, elegant software, and robust AI.

Amir-Hossein Karimi has undertaken collaborative research at five internationally acclaimed research institutions: the University of Toronto, the University of Waterloo, Stanford University, Facebook AI Research (FAIR), and the Max Planck Institute for Intelligent Systems (MPI-IS). He is currently a PhD candidate at the Centre for Learning Systems, sponsored by ETH Zurich and MPI-IS. With multiple published academic papers, his work spans several research areas, primarily interpretable machine learning, causal and counterfactual learning, representation learning (random projections), and computer vision. Presently, Amir's PhD thesis focuses on causal methods for interpretable machine learning, specifically on generating counterfactual explanations (https://arxiv.org/abs/1905.11190) and minimal interventions (https://arxiv.org/abs/2002.06278, https://arxiv.org/abs/2006.06831) that inform individuals of the best actions to take to achieve algorithmic recourse. His focus thus lies at the intersection of machine learning interpretability, causal and probabilistic modelling, and social philosophy and psychology.

Amir has had a diverse and fruitful career path. He brings 8+ years of technical experience spanning startups (CitoPrint, Brizi, Aerialytic) and large multinational companies (BlackBerry, Facebook), with significant experience in large-scale software projects, product design & management, and rapid prototyping of research ideas. He has experience leading small teams with diverse backgrounds and seeks to cultivate a spirit of creativity and cross-pollination of ideas. Amir has also worked as an independent consultant assisting with AI strategy, literature review, and competitor analysis; he has consulted for 7+ startups and delivered 2 contracts in the field of AI.

Amir's contributions have been recognized with such prestigious awards as the Spirit of Engineering Science Award (UofT, 2015) for outstanding community contributions, Best Paper Runner-up (CVIS, 2017), the Alumni Gold Medal Award (UWaterloo, 2018) for highest standing across all Master's programs, and the NSERC Alexander Graham Bell CGS-D (2018-2022) award from the Canadian government to pursue a PhD.