2ND WORKSHOP ON BIAS AND FAIRNESS IN AI

BIAS 2021

ECMLPKDD | September 13-17, 2021 | Online


News! Recorded presentations (YouTube playlist)

Springer joint workshop proceedings are accessible here.

Call for papers for the special issue on Bias and Fairness in AI in the Data Mining and Knowledge Discovery journal.

Bias and Fairness in Artificial Intelligence

Artificial Intelligence (AI) techniques based on big data and algorithmic processing are increasingly used to guide decisions in important societal spheres, including hiring, university admissions, loan granting, and crime prediction. However, there are growing concerns with regard to the epistemic and normative quality of AI evaluations and predictions. In particular, there is strong evidence that algorithms may sometimes amplify rather than eliminate existing bias and discrimination, and thereby have negative effects on social cohesion and on democratic institutions.

This Workshop

Despite the increased amount of work in this area in the last few years, we still lack a comprehensive understanding of how pertinent concepts of bias or discrimination should be interpreted in the context of AI and which socio-technical options to combat bias and discrimination are both realistically possible and normatively justified.

"How can standards of unbiased attitudes and non-discriminatory practices be met in (big) data analysis, AI and algorithm-based decision-making?"

Topics of Interest

The workshop solicits contributions including but not limited to the following topics in all areas of AI (supervised/unsupervised learning, information retrieval and recommender systems, HCI, constraint solving, complex systems and networks, etc.) and bridging interdisciplinary studies (law, social sciences).

Bias and Fairness by Design

  • Fairness measures

  • Counterfactual reasoning

  • Metric learning

  • Impossibility results

  • Multi-objective strategies for fairness, explainability, privacy, class imbalance, rare events, etc.

  • Federated learning

  • Resource allocation

  • Personalized interventions

  • Debiasing strategies for data, algorithms, and procedures

  • Human-in-the-loop approaches

Methods to Audit, Measure, and Evaluate Bias and Fairness

  • Auditing methods and tools

  • Benchmarks and case studies

  • Standards and best practices

  • Explainability, traceability, data and model lineage

  • Visual analytics and HCI for understanding/auditing bias and fairness

  • HCI for bias and fairness

  • Software engineering approaches

Submission and review process

Papers should be submitted in accordance with the ECMLPKDD formatting instructions. Submissions should be limited to 16 pages including references (again following the main conference guidelines). Papers must be written in English and submitted in PDF format online via the EasyChair submission interface.

Each submission will be evaluated on the basis of relevance, significance of contribution and quality by at least three members of the program committee. All accepted papers will appear online on the workshop website prior to the workshop date. At least one author of each accepted paper is required to attend the workshop and present the paper. For the accepted papers, we plan to have regular talks and additional poster presentations to foster further discussions, depending on local venue capabilities. Authors can also opt for their papers to be included in the ECMLPKDD 2021 Selected Workshops Papers proceedings published by Springer.

Important Dates

We will follow the suggestions for workshops from the ECMLPKDD website.

  • Workshop paper submission deadline: July 2, 2021

  • Workshop paper author notification: August 6, 2021

  • Workshop paper camera-ready deadline: August 27, 2021

  • Workshop program and papers available online: September 1, 2021

  • Workshop date: September 13, 2021

Schedule

The workshop takes place online on September 13th, 8:55 - 14:45. Registered participants will be provided with an access link.


08:55 - 09:00 Welcome Opening by the workshop chairs

09:00 - 09:50 Keynote What's fair about fair ML? (slides)
Linnet Taylor

09:50 - 10:10 Contributed paper Algorithmic Factors Influencing Bias in Machine Learning
William Blanzeisky and Padraig Cunningham

10:10 - 10:30 Contributed paper Desiderata for Explainable AI in statistical production systems of the European Central Bank
Carlos Mougan, Georgios Kanellos and Thomas Gottron

10:30 - 10:50 Coffee break

10:50 - 11:40 Keynote The Fairness-Accuracy tradeoff revisited (slides)
Toon Calders

11:40 - 12:00 Contributed paper Robustness of Fairness: An Experimental Analysis
Serafina Kamp, Andong Luis Li Zhao and Sindhu Kutty

12:00 - 12:20 Contributed paper Co-clustering for fair recommendation
Gabriel Frisch, Jean-Benoist Leger and Yves Grandvalet

12:20 - 13:00 Lunch break

13:00 - 13:50 Keynote Strengths and weaknesses of European legal protection against discriminatory AI (slides)
Frederik Zuiderveen Borgesius

13:50 - 14:10 Contributed paper Learning a Fair Distance Function for Situation Testing
Daphne Lenders and Toon Calders

14:10 - 14:30 Contributed paper Towards Fairness Through Time
Alessandro Castelnovo, Lorenzo Malandri, Fabio Mercorio, Mario Mezzanzanica and Andrea Cosentini

14:30 - 14:45 Closing discussion

Invited Speakers

Linnet Taylor
Tilburg Institute for Law, Technology, and Society, The Netherlands

Linnet Taylor is Associate Professor at the Tilburg Institute for Law, Technology, and Society (TILT), in the Netherlands. Her research focuses on digital data, representation and democracy, with particular attention to transnational governance issues. She leads the ERC Global Data Justice project, which aims to develop a social-justice-informed framework for governance of data technologies on the global level. The research is based on insights from technology users, providers and civil society organisations around the world.

Keynote title: What's fair about fair ML?

Abstract. The process of setting rules and norms for AI technologies by a spectrum of different actors, societal and legal/regulatory, often turns on tacit assumptions about what constitutes – and limits – the notion of fairness and, in connection with this, responsibility. What dimensions of fairness does the conventional field of ‘fair ML’ envisage, and how does this translate into possible paths for responsible AI? This talk will explore the limits of the current ideas of both fairness and responsibility with regard to ML, looking at the processes and politics involved in formalising concepts such as fairness, accountability and trust. I will argue that a deep mismatch currently exists between conceptualisations and practices of fair ML and responsible AI, and the claims to justice arising around the use of ML. Claims to justice often revolve around fundamental issues of recognition and representation such as decolonisation, feminism and geopolitical equity, whereas fairness and responsibility framings tend to actively deflect this type of claim. Starting from Fraser’s theory of abnormal justice, I will explore the argument for recognising and incorporating such claims into ML research and AI development practice, and the risks involved in not doing so.

Frederik Zuiderveen Borgesius
Radboud Universiteit, The Netherlands

Since 2019, Frederik Zuiderveen Borgesius has been Professor of ICT and private law at Radboud University, where he is affiliated with the Digital Security Group of the iCIS Institute for Computing and Information Sciences and with iHub, the Interdisciplinary Hub for Security, Privacy and Data Governance.

His research interests include privacy, data protection, discrimination, and freedom of expression, especially in the context of new technologies. He often enriches legal research with insights from other disciplines. He has co-operated with, for instance, economists, computer scientists, and communication scholars.

Keynote title: Strengths and weaknesses of European legal protection against discriminatory AI

Artificial intelligence (AI) can improve our society in many ways, but it can also lead to illegal discrimination or other types of unfair differentiation. This talk assesses to what extent European law can help to protect people against such unfair effects of AI. We focus on the two most relevant legal instruments: non-discrimination law and data protection law (the GDPR). The talk shows that both legal instruments can help to protect people against unfair AI. However, it also shows that both instruments have severe weaknesses in the context of AI. Lastly, we explore how the law could be improved.

Toon Calders
University of Antwerp, Belgium

Toon Calders is a professor in the computer science department of the University of Antwerp in Belgium. He is an active researcher in the area of data mining and machine learning. He is an editor of the Data Mining and Knowledge Discovery journal and has been program chair of a number of data mining and machine learning conferences, including ECML/PKDD 2014 and Discovery Science 2016. Toon Calders was one of the first researchers to study how to measure and avoid algorithmic bias in machine learning and is one of the editors of the book “Discrimination and Privacy in the Information Society - Data Mining and Profiling in Large Databases”, published by Springer in 2013. He is currently leading a group of six researchers studying theoretical aspects of fairness in machine learning, as well as looking into practical use cases in collaboration with Flemish tax authorities, public welfare organizations, and an insurance company.

Keynote title: The Fairness-Accuracy tradeoff revisited

Demographic parity, equality of opportunity, calibration, individual fairness, direct and indirect discrimination: these are just a few of the many measures for bias in data and algorithms. Although strong arguments can be found in favor of each of these measures, it has been shown that they cannot be combined in a meaningful way. Which measure is the right one is hence commonly accepted to be "situation-dependent" and in the eye of the beholder. Nevertheless, and unfortunately, surprisingly few guidelines for selecting the right measure are available to practitioners. A second issue in fairness-aware machine learning is the perception that we need to give up something in order to attain fair models: the so-called "fairness-accuracy trade-off". Arguably, this assumption is counter-intuitive in many situations, given that the goal of fair machine learning is to undo unfair bias. Thirdly, I believe that for many fairness-aware algorithms we do not properly understand, and subsequently ignore, *how* they satisfy the fairness constraints, which, as I will argue, may lead to even more unfair decision procedures. In this talk I will go deeper into these issues and end by proposing an alternative way of looking at fairness-aware machine learning as optimizing accuracy in a theoretical fair world.
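
As a rough illustration of the group fairness measures named in this abstract, the short Python sketch below computes the demographic parity difference and the equal opportunity difference of a hypothetical classifier, alongside its accuracy. It is not part of the talk; all array names and values are invented purely for illustration.

    # Minimal sketch: two common group-fairness measures on hypothetical data.
    import numpy as np

    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # hypothetical ground-truth labels
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])   # hypothetical classifier predictions
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # hypothetical protected attribute

    def demographic_parity_diff(y_pred, group):
        """Difference in positive-prediction rates between the two groups."""
        rate_0 = y_pred[group == 0].mean()
        rate_1 = y_pred[group == 1].mean()
        return abs(rate_0 - rate_1)

    def equal_opportunity_diff(y_true, y_pred, group):
        """Difference in true-positive rates between the two groups."""
        tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
        tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
        return abs(tpr_0 - tpr_1)

    accuracy = (y_true == y_pred).mean()
    print("accuracy:", accuracy)
    print("demographic parity difference:", demographic_parity_diff(y_pred, group))
    print("equal opportunity difference:", equal_opportunity_diff(y_true, y_pred, group))

A debiasing method that lowers one of these differences while also lowering accuracy on the observed (possibly biased) labels is exactly the kind of "fairness-accuracy trade-off" the talk revisits.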

Organizing Committee

Eirini Ntoutsi
Free University Berlin

Mykola Pechenizkiy
Eindhoven University of Technology

Bodo Rosenhahn
Leibniz University of Hanover

Program Committee Members

  • Bettina Berendt, KU Leuven (Belgium) and TU Berlin (Germany)

  • Toon Calders, University of Antwerp (Belgium)

  • Tim Draws, Delft University of Technology (The Netherlands)

  • Michael Ekstrand, Boise State University (USA)

  • Atoosa Kasirzadeh, University of Toronto (Canada)

  • Katharina Kinder-Kurlanda, University of Klagenfurt (Austria)

  • Masoud Mansoury, University of Amsterdam (The Netherlands)

  • Symeon Papadopoulos, CERTH-ITI (Greece)

  • Jürgen Pfeffer, Technische Universität München (Germany)

  • Evaggelia Pitoura, University of Ioannina (Greece)

  • Salvatore Ruggieri, Università di Pisa (Italy)

  • Jatinder Singh, University of Cambridge (United Kingdom)

  • Maryam Tavakol, Eindhoven University of Technology (The Netherlands)

  • Hilde Weerts, Eindhoven University of Technology (The Netherlands)

  • Frederik Zuiderveen Borgesius, Radboud Universiteit (The Netherlands)