Program

Schedule

09.00 - 09.10: Welcome Note
09.10 - 09.50: Keynote 1
Fariba Karimi, Network Inequalities and Fairness
09.50 - 11.00: Presentation Session 1

11.00 - 11.30: Coffee Break
11.30 - 12.20: Poster Presentations of Session 1
12.20 - 13.00: Keynote 2
Roberta Calegari, Towards Ethical Intelligence: Navigating Fairness and Bias in Artificial Intelligence
13.00 - 14.00: Lunch Break
14.00 - 15.10: Presentation Session 2  

15.10 - 16.00: Poster Presentations of Session 2
16.00 - 16.30: Coffee Break

Keynote Speakers




Fariba Karimi
Vienna University of Technology (TU Wien)

Keynote Title: Network Inequalities and Fairness

Abstract: Social networks are inherently complex, and the position of individuals within them greatly influences decision-making processes. Various mechanisms drive connections between individuals, resulting in diverse structures within social networks. These driving mechanisms play a crucial role in both social and algorithmic processes, which, albeit unintentionally, can contribute to the creation or amplification of existing inequalities and harm specific individuals or groups. To address these concerns, the scientific community has focused on measuring fairness for individuals and groups, yet often neglects the potential biases stemming from the interconnections among individuals and their network positions. In this talk, I will discuss inequalities that emerge in complex socio-technical networks and how we can mitigate them. 

Bio: Fariba Karimi is an assistant professor of “Complexity Science for Societal Good” at Vienna University of Technology (TU Wien) and leader of the Network Inequality Group at the Complexity Science Hub. She has made significant contributions to the fields of social complexity and digital humanism, particularly the study of inequalities in networks and algorithms. She has a background in computational social science, physics, and data science. In 2023, she received the prestigious Young Scientist Award from the German Physical Society for her work in the area of network science and social good.




Roberta Calegari
University of Bologna

Keynote Title: Towards Ethical Intelligence: Navigating Fairness and Bias in Artificial Intelligence 

Abstract: As artificial intelligence (AI) becomes increasingly integrated into various aspects of our lives, addressing issues of fairness and bias has emerged as a critical concern. This keynote provides a comprehensive overview of existing techniques, challenges, and open research directions in the realm of AI fairness and bias. It will delve into the fundamental concepts surrounding AI fairness, highlighting the importance of mitigating biases in decision-making systems. We will explore various dimensions of bias, including data bias, algorithmic bias, and societal bias, and their implications for domains such as employment and healthcare.

In the talk, we will discuss state-of-the-art techniques designed to detect, measure, and mitigate bias in AI systems. We will explore the complexities involved in defining and quantifying fairness, taking into account the ethical considerations and trade-offs associated with different fairness metrics. Moreover, we will shed light on the challenges that researchers and practitioners face when addressing bias and fairness in AI, such as the lack of diverse and representative datasets, the interpretability of AI systems, and the potential for unintended consequences when applying fairness interventions.

Bio: Roberta Calegari is a researcher and assistant professor at the Department of Computer Science and at the Alma Mater Research Institute for Human-Centered Artificial Intelligence at the University of Bologna. Her research concerns trustworthy and explainable systems, distributed intelligent systems, software engineering, multi-paradigm languages, and AI & law. She coordinates the Horizon Europe project (G.A. 101070363) on the assessment and engineering of equitable, unbiased, impartial and trustworthy AI systems, which aims to provide an experimentation playground for assessing and repairing bias in AI. She is part of the EU Horizon 2020 project “PrePAI” (G.A. 101083674), working on the definition of requirements and mechanisms ensuring that all resources published on the future AIonDemand platform can be labelled as trustworthy and compliant with the future AI regulatory framework. Her research interests lie within the broad area of knowledge representation and reasoning for trustworthy and explainable AI, with a particular focus on symbolic AI, including computational logic, logic programming, argumentation, logic-based multi-agent systems, and non-monotonic/defeasible reasoning. She is a member of the Editorial Board of ACM Computing Surveys for the area of Artificial Intelligence and the author of more than 80 papers in peer-reviewed international conferences and journals. She leads many European, Italian, and regional projects and is responsible for collaborations with industry.



Accepted papers

Presentation Session 1

(Local) Differential Privacy has NO Disparate Impact on Fairness
Karima Makhlouf, Héber H. Arcolezi & Catuscia Palamidessi

Automated discovery of trade-off between accuracy, privacy and fairness in machine learning models
Bogdan Ficiu, Neil D. Lawrence & Andrei Paleyes

Fairness Implications of Encoding Protected Categorical Attributes
Carlos Mougan, Jose M. Alvarez, Salvatore Ruggieri & Steffen Staab

Counterfactual Situation Testing: Uncovering Discrimination under Fairness given the Difference
Jose M. Alvarez & Salvatore Ruggieri

Using Explainable AI to Understand Bias
Sofie E. Goethals, Toon Calders & David Martens

Counterfactual Explanations for Recommendation Bias
Evaggelia Pitoura, Leonidas Zafeiriou & Panayiotis Tsaparas

Mitigating Discrimination in Insurance with Wasserstein Barycenters
Arthur Charpentier, François Hu & Philipp Ratz

Bias on Demand: A Modelling Framework That Generates Synthetic Data With Bias
Joachim Baumann, Alessandro Castelnovo, Riccardo Crupi, Nicole Inverardi & Daniele Regoli

How Different Is Stereotypical Bias Across Languages?
Ibrahim Tolga Öztürk, Rostislav Nedelchev, Christian Heumann, Esteban Garces Arias, Marius Roger, Bernd Bischl & Matthias Aßenmacher 

Presentation Session 2

Facial Analysis Systems and Down Syndrome
Marco Rondina, Fabiana Vinci, Antonio Vetrò & Juan Carlos De Martin

Sampling strategies for mitigating bias in face synthesis methods
Emmanouil Maragkoudakis, Iraklis Varlamis, Symeon Papadopoulos & Christos Diou

Towards Fair Face Verification: An In-depth Analysis of Demographic Biases
Ioannis Sarridis, Christos Koutlis, Symeon Papadopoulos & Christos Diou

Fairness Without Harm: A New Intersectional Fairness Definition
Gaurav Maheshwari, Aurélien Bellet, Pascal Denis & Mikaela Keller

Beliefs, Relationships, and Equality: An Alternative Source of Discrimination in a Symmetric Hiring Market via Threats
Serafina Kamp, Therese Nkeng, Vicente Riquelme & Benjamin Fish

Towards Inclusive Fairness Evaluation via Eliciting Disagreement Feedback from Non-Expert Stakeholders
Mukund Telukunta & Venkata Sriram Siddhardh Nadendla

De-Biasing Ethical AI - a Safeguarding Case Study
Anita Nandi, Joshua Hughes & Hayley Watson

A Fairness Assessment Framework
Emmanouil Krasanakis & Symeon Papadopoulos

Inherent Limitations of AI Fairness
Maarten Buyl & Tijl De Bie