Speakers


Elias Bareinboim

Causal Fairness Analysis

In this talk, I will discuss recent progress and ideas on how to perform fairness analysis through a causal lens.



Tobias Gerstenberg

Going beyond the here and now: Counterfactual simulation in human cognition

As humans, we spend much of our time going beyond the here and now. We dwell on the past, long for the future, and ponder how things could have turned out differently. In this talk, I will argue that people's knowledge of the world is organized around causally structured mental models, and that much of human thought can be understood as cognitive operations over these mental models. Specifically, I will highlight the pervasiveness of counterfactual thinking in human cognition. Counterfactuals are critical for how people make causal judgments, how they explain what happened, and how they hold others responsible for their actions. Based on these empirical insights, I will share some thoughts on the relationship between counterfactual thought and algorithmic recourse.



Been Kim

Decision makers, practitioners and researchers, we need to talk.


This talk presents oversimplified but practical concepts that practitioners and researchers must know when using and developing interpretability methods for algorithmic recourse. The concepts are WATSOP: (W)rongness, (A)ttack, and (T)esting for practitioners; (S)keptics, (O)bjectives, and (P)roper evaluations for researchers. While oversimplified, these are the core points that lead the field to success or failure. I'll provide concrete steps for each, along with related work, on how you may apply these concepts in your own work.



Berk Ustun

On Predictions without Recourse


One of the most significant findings that we can produce when evaluating recourse in machine learning is that a model has assigned a "prediction without recourse."

Predictions without recourse arise when the optimization problem that we solve to search for recourse actions is infeasible. In practice, the "infeasibility" of this problem means that a person cannot change their prediction through their actions -- i.e., that the model has fixed their prediction on the basis of input variables beyond their control.
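As a rough illustration of this idea (a minimal sketch, not material from the talk), the following Python snippet uses a hypothetical linear credit model in which only one feature is actionable; when no admissible change to that feature can flip the decision, the recourse search is infeasible and the applicant holds a prediction without recourse. All feature names, weights, and ranges below are invented for illustration.

```python
import numpy as np
from itertools import product

# Hypothetical linear credit model: score = w . x + b, approve if score >= 0.
# Features: [age, has_prior_default, income_bracket]; only income_bracket is actionable.
w = np.array([-0.1, -2.0, 0.5])
b = 0.2
x = np.array([30, 1, 2])           # denied applicant: w.x + b = -3.8 < 0

actionable = {2: range(0, 6)}      # income_bracket may move within 0..5; other features are fixed

def find_recourse(x, w, b, actionable):
    """Brute-force search over actionable feature values; return a flipping action or None."""
    keys = list(actionable)
    for values in product(*(actionable[k] for k in keys)):
        x_new = x.astype(float).copy()
        for k, v in zip(keys, values):
            x_new[k] = v
        if w @ x_new + b >= 0:     # prediction flips to "approve"
            return x_new
    return None                    # infeasible: a prediction without recourse

action = find_recourse(x, w, b, actionable)
print("recourse action:", action)  # None here: no admissible income value offsets the fixed features
```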

In this talk, I will discuss these issues and how we can address them by studying the "feasibility" of recourse. First, I will present reasons why we should ensure the feasibility of recourse, even in settings where we may not wish to provide recourse. Next, I will discuss the technical challenges that we must overcome to ensure recourse reliably.



Sandra Wachter

How AI weakens legal recourse and remedies

AI is increasingly used to make automated decisions about humans. These decisions include assessing creditworthiness, making hiring decisions, and sentencing criminals. Due to the inherent opacity of these systems and their potential discriminatory effects, policy and research efforts around the world are needed to make AI fairer, more transparent, and more explainable.

To tackle this issue, the EU Commission recently published the Artificial Intelligence Act – the world's first comprehensive framework to regulate AI. The new proposal has several provisions that require bias testing and monitoring, as well as transparency tools. But is Europe ready for this task?

In this session, I will examine several EU legal frameworks, including data protection and non-discrimination law, and demonstrate how AI weakens legal recourse mechanisms. I will also explain how current technical fixes such as bias tests, which are often developed in the US, are not only insufficient to protect marginalised groups but also clash with the legal requirements in Europe.

I will then introduce some of the solutions I have developed to test for bias, explain black-box decisions, and protect privacy, which have been implemented by tech companies such as Google, Amazon, Vodafone, and IBM and have fed into public policy recommendations and legal frameworks around the world.