Programme

Timetable

Time zone: CEST (GMT+2)

  • 08:50-09:00 Welcome

  • 09:00-09:50 Keynote talk

Kristian Kersting
Making Deep Neural Networks Right for the Right Scientific Reasons

  • 09:50-10:50 Presentations (3 × 20 min)

Karima Makhlouf, Sami Zhioua, Catuscia Palamidessi.
On the Applicability of ML Fairness Notions

Tim Draws, Nava Tintarev, Ujwal Gadiraju, Alessandro Bozzon, Benjamin Timmermans.
Assessing Viewpoint Diversity in Search Results Using Ranking Fairness Metrics

Pieter Delobelle, Paul Temple, Gilles Perrouin, Benoît Frénay, Patrick Heymans, Bettina Berendt.
Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning

  • 10:50-11:00 Break

  • 11:00-11:50 Keynote talk

Nikolaus Forgó
Can Law Steer the Development of AI and if so, How?

  • 11:50-12:30 Presentations (2 × 20 min)

Cora van Leeuwen, Annelien Smets, An Jacobs, Pieter Ballon.
Blind Spots in AI: the Role of Serendipity and Equity in Algorithm-Based Decision-Making

Jesse Russell.
The Limits of Computation in Solving Equity Trade-Offs in Machine Learning and Justice System Risk Assessment

  • 12:30-14:00 Lunch break

  • 14:00-14:50 Keynote talk

Dietmar Hübner
Two Kinds of Discrimination in AI-Based Penal Decision-Making

  • 14:50-15:30 Presentations (2 × 20 min)

Atoosa Kasirzadeh, Andrew Smart.
The use and misuse of counterfactuals in fair machine learning

Eduard Fosch Villaronga, Adam Poulsen, Roger A. Søraa, Bart H.M. Custers.
Don’t guess my gender, gurl: The inadvertent impact of gender inferences

  • 15:30-15:40 Break

  • 15:40-16:30 Keynote talk

Viktoriia Sharmanska
Discovering Fair Interpretable Representations in Visual Data

  • 16:30-17:30 Discussion

Keynote Speakers

Nikolaus Forgó

Law

University of Vienna, AT.

Can Law Steer the Development of AI and if so, How?



Dietmar Hübner

Philosophy

Leibniz University Hannover, DE.

Two Kinds of Discrimination in AI-Based Penal Decision-Making

The famous COMPAS case has demonstrated the difficulties in identifying and combatting bias and discrimination in AI-based penal decision-making. In this presentation, I will distinguish two kinds of discrimination that need to be addressed in this context. The first is related to the well-known problem of trade-offs between mutually incompatible accounts of statistical fairness, while the second concerns the specific demands of discursive fairness that apply when basing human decisions on empirical evidence. I will sketch the essential requirements for non-discrimination in each case. In the former, we must consider the relevant causes of observed correlations between race and recidivism, in order to judge the moral adequacy of alternative standards for statistical fairness. In the latter, we should analyse the specific reasons that are admissible in penal trials, in order to establish what types of information must be provided when justifying court decisions through AI evidence.


Kristian Kersting

Computer Science

Technical University Darmstadt, DE.

Making Deep Neural Networks Right for the Right Scientific Reasons

Deep neural networks have shown excellent performance in many real-world applications such as plant phenotyping. Unfortunately, they may show "Clever Hans"-like behaviour, exploiting confounding factors within datasets to achieve high prediction rates. Rather than discarding the trained models or the dataset, we show that interactions between the learning system and the human user can correct the model. Specifically, we revise the model's decision process by adding annotated masks during the learning loop and penalize decisions made for the wrong reasons. In this way, the machine's decision strategies can be steered towards relevant features without a considerable drop in predictive performance.

Based on joint work with Patrick Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Franziska Herbert, Xiaoting Shao, Hans-Georg Luigs, and Anne-Katrin Mahlein.
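
The abstract describes penalising decisions made for the wrong reasons via annotated masks. Purely as an illustration, the following minimal PyTorch-style sketch shows one common form of such a "right for the right reasons" penalty: a loss term that punishes input gradients falling on regions an annotator has marked as confounding. The function name, the `lam` weight, and the mask convention are assumptions of this sketch, not the speakers' implementation.

```python
import torch
import torch.nn.functional as F

def right_reasons_loss(model, x, y, irrelevant_mask, lam=10.0):
    """Cross-entropy plus a penalty on input gradients that fall on
    regions annotated as irrelevant (confounders).

    irrelevant_mask: same shape as x; 1 where the model should NOT
    base its decision, 0 elsewhere.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Input gradients of the log-predictions w.r.t. the input
    grads = torch.autograd.grad(
        F.log_softmax(logits, dim=1).sum(), x, create_graph=True
    )[0]

    # Penalise explanation mass inside the annotated irrelevant regions
    penalty = (irrelevant_mask * grads).pow(2).sum()
    return ce + lam * penalty
```

Minimising this combined loss during the interactive learning loop nudges the model to keep its explanations out of the masked confounding regions while still fitting the labels.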


Viktoriia Sharmanska

Computer Science

Imperial College London, UK.

Discovering Fair Interpretable Representations in Visual Data

Evidence that discrimination is an issue in computer vision systems has recently been reported in face recognition, emotion analysis, and pedestrian detection, to name a few. These are examples of human-centric computer vision, i.e. AI systems where automatic decisions are made about humans based on their appearance and behaviour from visual data such as images or videos. My talk will address the key question: how to enable human-centric computer vision to be fair (non-discriminative w.r.t. protected characteristics) and explainable (interpreting how the design intent of the AI application has been met). I will highlight some of my recent work on 1) how to achieve fairness by learning representations that remove the semantics of protected characteristics as a data-to-data translation, and 2) how to mitigate the so-called tyranny of the majority (where algorithms favour groups of individuals that are better represented in the training data) via GAN-generated contrastive examples. I will conclude with promising future directions.
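
The abstract mentions learning representations that strip the semantics of protected characteristics. As a generic illustration only (not the data-to-data translation approach of the talk), the sketch below shows one widely used way of removing protected-attribute information from a learned representation: an adversarial head tries to predict the protected attribute from the representation, while a gradient-reversal layer trains the encoder to defeat it. All module names and dimensions here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the
    backward pass, so the encoder learns to hide the protected
    attribute from the adversary."""
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


class FairRepresentationModel(nn.Module):
    def __init__(self, dim_in, dim_z, n_classes, n_protected):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, dim_z), nn.ReLU())
        self.task_head = nn.Linear(dim_z, n_classes)      # main prediction
        self.adversary = nn.Linear(dim_z, n_protected)    # tries to recover the protected attribute

    def forward(self, x):
        z = self.encoder(x)                               # learned representation
        y_logits = self.task_head(z)
        a_logits = self.adversary(GradReverse.apply(z))   # adversarial branch
        return y_logits, a_logits
```

Training minimises the task loss plus the adversary's loss; the reversed gradients push the encoder towards representations from which the protected characteristic can no longer be predicted.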