Accepted Papers

In-depth track

Algorithmic Audit of Italian Car Insurance: Evidence of Unfairness in Access and Pricing
Alessandro Fabris, Alan Mishler, Stefano Gottardi, Mattia Carletti, Matteo Daicampi, Gian Antonio Susto and Gianmaria Silvello

We conduct an audit of pricing algorithms employed by companies in the Italian car insurance industry, primarily by gathering quotes through a popular comparison website. While acknowledging the complexity of the industry, we find evidence of several problematic practices. We show that birthplace and gender have a direct and sizeable impact on the prices quoted to drivers, despite national and international regulations against their use. Birthplace, in particular, is used quite frequently to the disadvantage of foreign-born drivers and drivers born in certain Italian cities. In extreme cases, a driver born in Laos may be charged 1,000€ more than a driver born in Milan, all else being equal. For a subset of our sample, we collect quotes directly on a company website, where the direct influence of gender and birthplace is confirmed. Finally, we find that drivers with riskier profiles tend to see fewer quotes in the aggregator result pages, substantiating concerns of differential treatment raised in the past by Italian insurance regulators.

rca_ewaf.pdf
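
The audit above rests on matched-pair comparisons: quotes gathered for driver profiles that are identical except for one protected attribute. As a minimal illustration only (the profiles, attribute values, and the `get_quote` function below are hypothetical stand-ins; the authors gathered real quotes from a comparison website), the pairing logic might look like this:

```python
# Illustrative matched-pair audit sketch: build driver profiles identical
# except for one attribute, then compare the quotes returned for each pair.
from itertools import product

BASE_PROFILE = {"age": 32, "vehicle": "Fiat Panda", "city_of_residence": "Milano",
                "years_licensed": 10, "claims_history": 0}

def matched_pairs(attribute, values):
    """Yield profile pairs that differ only in `attribute`."""
    for a, b in product(values, repeat=2):
        if a < b:  # each unordered pair once
            yield {**BASE_PROFILE, attribute: a}, {**BASE_PROFILE, attribute: b}

def quote_gap(get_quote, attribute, values):
    """Largest price difference attributable to `attribute` alone."""
    return max(abs(get_quote(p1) - get_quote(p2))
               for p1, p2 in matched_pairs(attribute, values))

# Usage, with a hypothetical quoting function:
# gap = quote_gap(get_quote, "birthplace", ["Milano", "Roma", "Laos"])
```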

Arbitrariness in Automated Decision-Making as a Moral Problem: A reply to Creel and Hellman
Conny Knieling

Automated decision-making systems are increasingly involved in or are replacing human decision-making. Just like human decision-making, their decisions can be arbitrary. I will look at arbitrary decisions from a moral point of view and will argue that even isolated arbitrary decisions can wrong the individual affected by them, and that this harm is constituted qua being arbitrary. This puts me in disagreement with contemporary positions, such as those offered by Creel and Hellman (2021). In addition to responding to Creel and Hellman, this paper develops a positive proposal about what "arbitrariness" might amount to in decision-making broadly, and in algorithmic decision-making in particular, that goes beyond what Creel and Hellman (2021), among others, have identified. I will concede that not all arbitrary decision-making needs to be of ethical concern and that the label of arbitrariness might not be applicable to many decisions. Nevertheless, when arbitrariness becomes a moral issue, it constitutes a harm for the affected individual qua the decision being arbitrary, and this harm persists even when the automated decision-making system can be considered "fair". I will show that in many cases individuals have a right to a non-arbitrary decision. I will also argue that the literature on biased algorithms and fair AI has overlooked the importance of procedural considerations in algorithmic decision-making, and that the political philosophy literature can help us address that question. Given all this, I will provide an ethical evaluation of how we can understand arbitrary decision-making from a moral point of view and what this means for automated decision-making.

Arbitrariness in Automated Decision-Making as a Moral Problem - Zürich conference - Knieling.pdf

Data-Centric Factors in Algorithmic Fairness
Nianyun Li, Naman Goel and Elliott Ash

Notwithstanding the widely held view that data generation and data curation processes are prominent sources of bias in machine learning algorithms, there is little empirical research seeking to document and understand the specific data dimensions that lead to algorithmic unfairness. In contrast to previous work, which has focused on modeling with simple, small-scale benchmark datasets, we hold the model constant and methodically intervene on relevant dimensions of a much larger, more diverse dataset. For this purpose, we introduce a new dataset on recidivism covering 1.5 million criminal cases from courts in the U.S. state of Wisconsin, 2000-2018. From this main dataset, we generate multiple auxiliary datasets to simulate different kinds of bias in the data. Focusing on algorithmic bias toward different race/ethnicity groups, we assess the relevance of training data size, base rate differences between groups, representation of groups in the training data, temporal aspects of data curation, the inclusion of race/ethnicity or neighborhood characteristics as features, and the training of separate classifiers by race/ethnicity or crime type. We find that these factors often influence fairness metrics while the classifier specification is held constant, without having a corresponding effect on accuracy metrics. These results provide a useful reference point for a data-centric approach to studying algorithmic fairness in recidivism prediction and beyond.

Data-Centric_Factors.pdf
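
To make the "hold the model constant, intervene on the data" design concrete, here is a minimal sketch, not the authors' code: the classifier specification stays fixed while the training data's group representation (one of the dimensions named in the abstract) is varied by subsampling, and a fairness metric is recomputed each time. All data below is synthetic.

```python
# Sketch: fixed classifier spec, varying group representation in training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                       # 0 = majority, 1 = minority
X = rng.normal(size=(n, 5)) + group[:, None] * 0.3  # group-correlated features
y = (X[:, 0] + 0.5 * group + rng.normal(size=n) > 0.8).astype(int)

def subsample(minority_share, k=5_000):
    """Indices of a training set with the requested minority share."""
    maj, mino = np.flatnonzero(group == 0), np.flatnonzero(group == 1)
    k_min = int(k * minority_share)
    return np.concatenate([rng.choice(maj, k - k_min, replace=False),
                           rng.choice(mino, k_min, replace=False)])

for share in (0.05, 0.20, 0.50):
    idx = subsample(share)
    clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])  # fixed spec
    pred = clf.predict(X)
    spd = pred[group == 1].mean() - pred[group == 0].mean()  # stat. parity diff.
    print(f"minority share {share:.0%}: SPD={spd:+.3f}, "
          f"accuracy={(pred == y).mean():.3f}")
```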

Definitions of Fairness are Biased: Inclusive Definitions of Fairness
Eva Yiwei Wu and Karl Reimer

Existing definitions of fairness applied in the development of AI are biased because they are derived exclusively from two strands of philosophical theory: deontology and consequentialism. This paper aims to broaden the Fair AI community's examination of ethical theories to an inclusive set of philosophical traditions, including Pragmatism, Empiricism, Indigenous theories and Daoism. In this paper, we focus on the explication of a definition of fairness based on pragmatism, which we dub pragmatic fairness. Inclusive definitions of fairness can guide the Fair AI community to design and develop artificial agents that are fairer and more ethical for diverse communities.

InclusiveDefinitionsFairness.pdf

Perceptions of Efficiency vs. Fairness Tradeoffs in Algorithm-based HR Selection: Insights from Two Online Experiments
Serhiy Kandul and Ulrich Leicht-Deobald

Organizations increasingly rely on algorithms to increase the efficiency of their HR processes. However, research shows that higher accuracy of an algorithm often conflicts with group fairness metrics. With two online experiments on Mechanical Turk (Study 1: N = 283; Study 2: N = 277), we address people's perception of the efficiency vs. fairness tradeoff in an HR context, where participants weigh the utilities of the decision maker against the utilities of the job candidates. In Study 1, we fix the degree of efficiency gain and vary the degree of violation of a fairness metric (with the 4/5 rule satisfied or not) compared to a nearly perfectly fair benchmark. We find that the higher the degree of the fairness violation (for the same gain in efficiency), the more likely people are to choose a fair algorithm. If the disparity produced by a more efficient algorithm is low, within the 4/5 rule, significantly more people prefer an efficient algorithm over a fair one. Furthermore, we find that people's preferences over algorithms are driven by their fairness perceptions; this effect holds for both statistical parity and equality of opportunity. Interestingly, participants who believe that females are under-represented in the industry tolerate disparities that favour female job candidates. In Study 2, we find participants are weakly more sensitive to efficiency vs. fairness tradeoffs when the baseline efficiency of the algorithm is low.

Efficiency_vs__Fairness_Tradeoff__AI_in_HR_selection_V2.pdf
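
For readers unfamiliar with the four-fifths (4/5) rule referenced in the abstract above, here is a minimal sketch of the computation: selections satisfy the rule when the lowest per-group selection rate is at least 80% of the highest. The candidate data below is made up for illustration.

```python
# Sketch of the four-fifths rule (disparate impact ratio) on toy hiring data.
def selection_rate(selected, group, g):
    members = [s for s, grp in zip(selected, group) if grp == g]
    return sum(members) / len(members)

def four_fifths_ratio(selected, group):
    """Min-over-max ratio of per-group selection rates."""
    rates = {g: selection_rate(selected, group, g) for g in set(group)}
    return min(rates.values()) / max(rates.values())

# 10 candidates: 1 = hired, 0 = rejected
selected = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group    = ["f", "f", "f", "f", "f", "m", "m", "m", "m", "m"]

ratio = four_fifths_ratio(selected, group)  # 0.4 / 0.6 ~= 0.67 here
print(f"selection-rate ratio = {ratio:.2f}",
      "(satisfies 4/5 rule)" if ratio >= 0.8 else "(violates 4/5 rule)")
```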

Predictions Are Not Decisions. Why Algorithmic Fairness Is Not Enough for Real-World Decision-Making
Maël Pégny and Julien Gossa

In this presentation, we argue that the criteria of algorithmic (group) fairness are not enough to account for the fairness of real-world decision-making. In a nutshell, the issue stems from the fact that those criteria all attempt to measure possible bias in the predictive performance of statistical models across groups. However, predictions do not necessarily translate directly into decisions, and we will argue that in some real-world use cases, they cannot. Some decision procedures are thus bound to be composite, in the sense that they combine a predictive subpart with a non-predictive one. For those realistic use cases, the current criteria of group fairness cannot in general assess whether the distribution of outcomes will be fair, because this distribution of outcomes is not, and cannot be, purely based on the predictions produced by the model. Consequently, we need to supplement the current reflection on predictive models with a reflection on decision-making under non-predictive criteria, in order to formulate genuine statistical criteria of fair decision-making.

Predictions Are Not Decisions.pdf
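
A minimal sketch, under assumptions of our own rather than the authors', of the composite decision procedures the abstract above describes: the final decision combines a model's prediction with a non-predictive rule (here, a fixed capacity constraint), so the distribution of outcomes is not determined by the predictions alone.

```python
# Sketch: a composite decision rule (prediction + capacity constraint)
# versus a purely predictive threshold rule, on synthetic scores.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.uniform(size=100)   # model's predicted suitability per applicant
capacity = 10                    # non-predictive constraint: only 10 slots

# Purely predictive rule: accept everyone above a threshold
# (may accept far more or fewer applicants than there are slots).
predictive_accept = scores > 0.5

# Composite rule: rank by score, then cut at capacity.
composite_accept = np.zeros_like(scores, dtype=bool)
composite_accept[np.argsort(scores)[::-1][:capacity]] = True

print("accepted (predictive rule):", predictive_accept.sum())
print("accepted (composite rule): ", composite_accept.sum())
# Applicants with near-identical scores can land on opposite sides of the
# capacity cutoff, so auditing the predictions alone cannot certify the
# fairness of the decisions.
```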

Lightning round track

A Process Model for fair AI development
Sarah Cepeda, Fabian Eberle, Lena Müller-Kress, Rania Wazir, Magdalene Kunze, Jakob Hirtenlehner, Andreas Rauber, Christiane Wendehorst and Gertraud Leimüller

A Representative Swiss Population Survey on Algorithmic Fairness Reveals Positive Attitudes towards AI but Bias in Gender Discrimination Perception
Markus Christen

Algorithmic Fairness and Secure Information Flow
Bernhard Beckert, Michael Kirsten and Michael Schefczyk

Algorithmic fairness as a path: start measuring your current process fairness
Jesus Salgado Criado

Algorithmic Fairness in Biomedical Research and Practice: A Use-Case Study on Alzheimer's Dementia Prediction
Derya Şahin, Frank Jessen and Joseph Kambeitz

An Impact Assessment Tool for Public Authorities
Anna Mätzener

Automatic Fairness Testing of Machine Learning Models
Arnab Sharma and Heike Wehrheim

Autonomous Vehicles, Business Ethics, and Risk Distribution in Hybrid Traffic
Brian Berkey

Can Algorithms be Decent?: Developing Value Frameworks for Artificial Intelligence
Darby Vickers

Corporate influence in public education through data extractive systems for student profiling
Velislava Hillman

Counterfactual reasoning for meaningful situation testing
Jose M. Alvarez and Salvatore Ruggieri

Demographic Parity's Not Dead
Nicolas Schreuder and Evgenii Chzhen

Exploitation and Algorithmic Pricing
Arianna Dini

Fairness-Aware Dimensionality Reduction
Amaya Nogales-Gómez, Luc Pronzato and Maria-João Rendas

Fairness metrics for visual privacy preservation
Sophie Noiret, Siddharth Ravi, Martin Kampel and Francisco Florez-Revuelta

Policy Fairness in Sequential Allocations under Bias Dynamics
Meirav Segal, Anne-Marie George and Christos Dimitrakakis

The Fairness in Algorithmic Fairness
Sune Holm

Towards a faithful assessment of fairness
In alphabetical order: Bilel Benbouzid, Ruta Binkyte-Sadauskiene, Karima Makhlouf, Catuscia Palamidessi, Carlos Pinzon and Sami Zhioua

Understanding and Explainable AI
Will Fleisher

Unfair World. Fair Decisions. Knowledge transport data pre-processing method for fair AI
Ruta Binkyte-Sadauskiene and Catuscia Palamidessi

What's Ideal about Fair Machine Learning?
Otto Sahlgren