09.00 - 09.10: Welcome note
09.10 - 10.15: Presentations of Papers
(see schedule below)
10.15 - 11.30: Coffee break & Poster session
11.30 - 12.30: Keynote
Prof. Dr. Jürgen Pfeffer - Algorithms, Bias, and the Societal Impact on Democracy
12.40 - 14.00: Lunch
14.00 - 14.45: Poster session
14.45 - 15.00: Introduction to the Interactive sessions
15.00 - 15.40: Interactive session Part I
15.40 - 16.00: Coffee break
16.00 - 16.40: Interactive session Part II
16.45 - 17.00: Closing
Abstract: In this talk, we explore the pressing issues of Bias and Fairness in AI by drawing connections between technological influence and the fundamental democratic rights at risk in today's algorithm-driven world. Through a socio-technical lens, we examine how AI systems, particularly those used in social media, impact decision-making, self-determination, and public deliberation. The talk focuses on the online-offline spillover, where AI-driven recommendations and content moderation shape behavior, reinforce biases, and challenge democratic participation. With examples from social media platforms and generative AI, we will explore how algorithms nudge users, often limiting freedom by prioritizing engagement and profit over fairness and transparency. By examining scenarios where AI systems affect key societal elements such as justice, equality, and fundamental rights, we highlight the risk of entrenched biases exacerbating social and economic inequalities. Through this perspective, we will discuss ethical, legal, and philosophical questions that challenge current AI technologies, advocating for a future where AI aligns with societal values, fairness, and justice.
Accepted papers:
Review of Data-Driven Bias: Analysis of Concepts for Fairness Audits in the Regulation of High-Risk AI Systems
Jan Grenzebach & Thea Radüntz
Interpretable Fair Distance Learning for Categorical Data
Alessio Famiani, Federico Peiretti & Ruggero G. Pensa
"Patriarchy Hurts Men Too." Does Your Model Agree? A Discussion on Fairness Assumptions
Marco Favier & Toon Calders
How to (Un)Link the Bias: Linked Data to Counteract Dataset Bias
Ana Cimitan, Ana Alves Pinto & Michaela Geierhos
Properties of Fairness Measures in the Context of Class Imbalance and Protected Group Ratios
Dariusz Brzezinski
Bias-aware synthetic data generation: a tailored use case-driven approach
Barbara Draghi, Allan Tucker & Puja Myles
Synthetic Tabular Data Generation for Class Imbalance and Fairness: A Comparative Study
Emmanouil Panagiotou, Eirini Ntoutsi & Arjun Roy
Quantifying group fairness with fuzzy-rough sets in pattern classification problems
Lisa Koutsoviti Koumeri, Koen Vanhoof & Gonzalo Nápoles
Adversarial Robustness of Variational Autoencoders across Intersectional Subgroups
Chethan Krishnamurthy Ramanaik, Arjun Roy & Eirini Ntoutsi
Everyone deserves their voice to be heard: Analyzing Predictive Gender Bias in ASR Models Applied to Dutch Speech Data
Rik Raes, Saskia E Lensink & Mykola Pechenizkiy
Quantifying the Trade-Offs between Dimensions of Trustworthy AI - An Empirical Study on Fairness, Explainability, Privacy, and Robustness
Nils Kemmerzell & Annika Schreiner