Workshop Schedule
Pre-Proceedings are available at https://seafile.rlp.net/d/bf0e58562dc94ae19f87/.
The password is available at the registration desk.
Monday, 1st of July
08:00 Registration & Badge Pickup
09:00 Welcome Address
09:30 Keynote Talk: Prof. Bettina Berendt, TU Berlin, Weizenbaum Institute, and KU Leuven
De-biased, diverse, divisive - On ethical perspectives regarding the de-biasing of GenAI and their actionability
AI tech companies cannot seem to get it right. After years of evidence-based criticism of biases in AI, in particular decision models, LLMs and other generative AI, after years of research and toolbox provision for de-biasing, many companies have implemented such safeguards into their services. However, ridicule and protests have recently erupted when users discovered generated images that were “(overly?) diversified” with respect to gender and ethnicity and answers to ethical questions that were “(overly?) balanced” with regard to moral stances. Is this seeming contradiction just a backlash, or does it point to deeper issues? In this talk, I will analyse instances of recent discourse on too little or too much “diversification” of (Gen)AI and relate this to methodological criticism of “de-biasing”. A second aim is to contribute to the broadening and deepening of answers that computer science and engineering can and should give to enhance fairness and justice.
10:30 Coffee & Posters
11:00 Lightning Round 1
Room: Linke Aula
Algorithmic Fairness Over Time: Advances & Prospects
Miriam Rateike
Can generative AI-based data balancing mitigate unfairness issues in Machine Learning?
Benoît Ronval, Siegfried Nijssen and Ludwig Bothmann
Governing online platforms after the Digital Services Act: an analysis of the Commission decision on initiating proceedings against X
Matteo Fabbri
Algorithmic Fairness in Geo-intelligence Workflows through Causality
Brian Masinde, Caroline Gevaert, Michael Nagenborg, Marc van den Homberg and Jaap Zevenbergen
Threshold Recourse for Dynamic Allocations
Meirav Segal, Anne-Marie George and Christos Dimitrakakis
Categorizing algorithmic recommendations: a matter of system, agent, patient
Matteo Fabbri and Jesus Salgado
Exploring Fusion Techniques in Multimodal AI-Based Recruitment: Insights from FairCVdb
Swati Swati, Arjun Roy and Eirini Ntoutsi
Social Assessment and Cultural Resistance: The Public Distribution System in Tamil Nadu, India
Sumathi Rajesh
12:00 Lunch
13:00 Keynote Talk: Prof. Dr. Virginia Dignum, Umeå University
Beyond the AI hype: Balancing Innovation and Social Responsibility
AI can extend human capabilities but requires addressing challenges in education, jobs, and biases. Taking a responsible approach involves understanding AI's nature, design choices, societal role, and ethical considerations. Recent AI developments, including foundational models, transformer models, generative models, and large language models (LLMs), raise questions about whether they are changing the paradigm of AI, and about the responsibility of those who are developing and deploying AI systems. In all these developments, it is vital to understand that AI is not an autonomous entity but rather dependent on human responsibility and decision-making.
In this talk, I will further discuss the need for a responsible approach to AI that emphasizes trust, cooperation, and the common good. Taking responsibility involves regulation, governance, and awareness. Ethics and dilemmas are ongoing considerations, but they require understanding that trade-offs must be made and that decision processes are always contextual. Taking responsibility requires designing AI systems with values in mind, and implementing regulations, governance, monitoring, agreements, and norms. Rather than viewing regulation as a constraint, it should be seen as a stepping stone for innovation, ensuring public acceptance, driving transformation, and promoting business differentiation. Responsible Artificial Intelligence (AI) is not an option but the only possible way forward in AI.
Virginia Dignum is Professor of Responsible Artificial Intelligence at Umeå University, Sweden, where she leads the AI Policy Lab. She is also senior advisor on AI policy to the Wallenberg Foundations. She holds a PhD in Artificial Intelligence from Utrecht University (2004), is a member of the Royal Swedish Academy of Engineering Sciences (IVA), and a Fellow of the European Artificial Intelligence Association (EURAI). She is a member of the United Nations Advisory Body on AI, the Global Partnership on AI (GPAI), UNESCO’s expert group on the implementation of AI recommendations, and OECD’s Expert Group on AI, founder of ALLAI, the Dutch AI Alliance, and co-chair of the WEF’s Global Future Council on AI. She was a member of the EU’s High Level Expert Group on Artificial Intelligence and leader of UNICEF's guidance for AI and children. Her new book “The AI Paradox” is planned for publication in late 2024.
14:00 Coffee & Posters
14:30 Lightning Round 2
Room: Linke Aula
Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness
Luca Deck, Jan-Laurin Müller, Conradin Braun, Domenique Zipperling and Niklas Kühl
Policy Advice and Best Practices on Bias and Fairness in Artificial Intelligence
Jose M. Alvarez, Alejandra Bringas-Colmenarejo, Alaa Elobaid, Simone Fabrizzi, Miriam Fahimi, Antonio Ferrara, Siamak Ghodsi, Carlos Mougan, Ioanna Papageorgiou, Paula Reyero, Mayra Russo, Kristen M. Scott, Laura State, Xuan Zhao and Salvatore Ruggieri
Watching the Watchers: A Comparative Fairness Audit of Cloud-based Content Moderation Services
David Hartmann, Amin Oueslati and Dimitri Staufer
Enhancing Fairness through Time-Aware Recourse: A Pathway to Realistic Algorithmic Recommendations
Isacco Beretta, Martina Cinquini and Isabel Valera
Beyond Silos: An Interdisciplinary Analysis of Intersectional Discrimination from an EU Perspective
Stephan Wolters
Risk Scores as Statistical Fatalism
Sebastian Zezulka and Konstantin Genin
Eliciting Discrimination Risks in Algorithmic Systems: Taxonomies and Recommendations
Marta Marchiori Manerba
Risky Complaints: Unpacking Recent Trends in Risk Assessment Across Global Supply Chains
Gabriel Grill
On Prediction-Modelers and Decision-Makers: Why Fairness Requires More Than a Fair Prediction Model
Teresa Scantamburlo, Joachim Baumann and Christoph Heitz
15:30 Coffee & Posters
16:30 In-Depth Session 1
Room: Linke Aula
Mapping Policymakers' and Laypeople's Perceptions of genAI and FPT in the Context of the EU AI Act
Chiara Ullstein, Michel Hohendanner and Jens Grossklags
Fair Balancing? Evaluating LLM-based Privacy Policy Ethics Assessments
Vincent Freiberger and Erik Buchmann
Room: Atrium Maximum
Unfairness in AI Anti-Corruption Tools: Main Drivers and Consequences
Fernanda Odilla
Mapping the Potential of Explainable Artificial Intelligence (XAI) for Fairness Along the AI Lifecycle
Luca Deck, Astrid Schoemäcker, Timo Speith, Jakob Schöffer, Lena Kästner and Niklas Kühl
17:30 Poster Session
Room: Atrium Maximum
Papers from Lightning Rounds 1&2, Papers from In-Depth Session 1
Various Authors
19:00 Social Event @Salon 3SEIN
Salon 3Sein, Große Bleiche 60-62, 55116 Mainz
https://maps.app.goo.gl/2KrkFGffTvzDFBgb8
Tuesday, 2nd of July
08:00 Registration & Badge Pickup
09:15 Keynote Talk: Prof. Seth Lazar, Australian National University (online)
10:30 Coffee & Posters
11:00 Lightning Round 3
Room: Linke Aula
A Fair Selective Classifier to Put Humans in the Loop
Daphne Lenders, Andrea Pugnana, Roberto Pellungrini, Toon Calders, Fosca Giannotti and Dino Pedreschi
Fairness Beyond Binary Decisions: A Case Study on German Credit
Deborah Dormah Kanubala, Isabel Valera and Kavya Gupta
Challenging "Subgroup Fairness": Towards Intersectional Algorithmic Fairness Based on Personas
Marie Decker, Laila Wegner and Carmen Leicht-Scholten
Deciding the Future of Refugees: Rolling the Dice or Algorithmic Location Assignment?
Clara Strasser Ceballos and Christoph Kern
How to be fair? A discussion and future perspectives
Marco Favier and Toon Calders
Enabling users’ control on recommender systems for short videos: a design proposal for the implementation of the requirements of the Digital Services Act
Matteo Fabbri, Jingyi Jia and Wolfgang Woerndl
Algorithmic Fairness in Clinical Natural Language Processing: Challenges and Opportunities
Daniel Anadria, Anastasia Giachanou, Jacqueline Kernahan, Roel Dobbe and Daniel Oberski
Trust in Fair Algorithms: Pilot Experiment
Mattia Cerrato, Marius Köppel, Kiara Stempel, Alesia Vallenas Coronel and Niklas Witzig
Identifying Gender Stereotypes and Biases in Automated Translation from English to Italian using Similarity Networks
Fatemeh Mohammadi, Marta Anamaria Tamborini, Paolo Ceravolo, Costanza Nardocci and Samira Maghool
12:00 Lunch
13:00 Interactive Sessions 1
Room: Linke Aula
Building Bridges from and Beyond the EU Artificial Intelligence Act. Regulating AI-based Discrimination in the European Scenario
Marilisa D'Amico, Ernesto Damiani, Costanza Nardocci, Paolo Ceravolo, Samira Maghool, Marta Annamaria Tamborini, Paolo Gambatesa and Fatemeh Mohammadi
Abstract:
The panel aims to discuss the implications of the approval of the EU Artificial Intelligence Act, also in light of additional initiatives ongoing in the European and global arena (e.g. Council of Europe, UNESCO, United Nations), from the perspective of their adequacy to ensure the fairness of algorithms and to tackle discrimination resulting from the widespread use of AI technologies. By bringing together constitutional law and computer science expertise, the discussion has a twofold aim. On the one hand, the panel aims to illustrate the criticisms that underlie the risks of AI systems from a constitutional and human rights perspective, supported by the analysis of computer science; on the other hand, it aims to explore innovative strategies to promote an inclusive use of AI technologies by designers and implementers in order to enhance their positive potential.
Room: Atrium Maximum [30 participants]
Moral Exercises for Human Oversight of AI Systems
Teresa Scantamburlo and Silvia Crafa
Abstract:
The interactive session addresses the challenge of ethical reflection in AI research and practice. Human judgement is crucial in balancing accuracy and fairness trade-offs and in overseeing AI system behaviour. To support ethical reflection and judgement, we propose the experience of moral exercises: structured activities aimed at engaging AI actors in realistic ethical problems involving the development or use of an AI system. Participants undergo guided exercises involving scenario analysis and individual and group work, highlighting consensus and divergences on the problem at stake. The initiative will promote the development of moral exercises and help refine the methodology for AI ethics education.
14:30 Coffee & Posters
15:30 Interactive Sessions 2
Room: Linke Aula
How the Digital Services Act can enable researcher access to data of the largest online platforms and search engines - interactive session with the European Commission
EC Joint Research Centre
The Digital Services Act (DSA) entered into full force in February 2024 and aims to create a safer and more trustworthy digital space, where the fundamental rights of all users are protected. As part of the DSA’s transparency obligations, Article 40 of the DSA establishes the obligation of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to provide researchers with access to data for the purposes of conducting research that contributes to the detection, understanding and mitigation of systemic risks in the European Union, such as discrimination and the spread of disinformation.
In particular, Article 40(12) of the DSA obliges providers of VLOPs and VLOSEs to give researchers access to data that is publicly available in their interfaces. In addition, Article 40(4) of the DSA establishes a data access mechanism through which researchers who undergo a vetting procedure can obtain access to non-public data for the study of systemic risks in the European Union.
In this workshop, the Commission will present the data access mechanism for vetted researchers and participants will have the opportunity to provide feedback on a detailed proposal for its procedural, technical and operational elements, which is currently being prepared in the form of a delegated act. Participants will get the chance to explore how data access could benefit their research, which challenges they foresee and how these may be overcome. Participants will also hear about how the DSA protects researcher access to publicly available data, and will be able to provide feedback on the data access tools and procedures made available by VLOPs and VLOSEs to this end so far.
Room: Atrium Maximum [30 participants]
Fairness, or not Fairness, That is the Question. Rethinking Virtual Assistants' Responses From an Ethical Perspective
Giulia Teverini, Joy Ciliani and Alessia Nicoletta Marino
Abstract: The workshop aims to delve into the intricate aspects of ethics applied to human-computer interaction. Participants will engage in practical activities in which they will analyze conversations with virtual assistants, distinguishing between appropriate and inappropriate interactions. In this context, the ethical principles proposed by the European Commission will be used as guidelines to ensure favorable conditions for the development of trustworthy AI-based systems. Immediately afterwards, participants will be asked to propose scenario-based solutions to tackle the ethical issues that emerged in the previous stage. At the end of the workshop, each group will present its results to the audience. The collaborative nature of the workshop aims at fostering critical thinking, practical understanding, and collective reasoning.
17:00 Poster Session
Room: Atrium Maximum
Papers from Lightning Round 3
Various Authors
What's Distributive Justice Got to Do with It? Rethinking Algorithmic Fairness from the Perspective of Approximate Justice
Corinna Hertweck, Christoph Heitz and Michele Loi
Reranking individuals: the effect of fair classification within-groups
Sofie Goethals and Toon Calders
More than Minorities and Majorities: Understanding Multilateral Bias in Language Generation
Jiaxu Zhao, Zijing Shi, Yitong Li, Yulong Pei, Ling Chen, Meng Fang and Mykola Pechenizkiy
An epistemic-based decision-making framework for modeling fairness in the spread of technologies for sustainability
Camilla Quaresmini, Eugenia Villa, Valentina Breschi, Viola Schiaffonati and Mara Tanelli
Scaling Up Causal Algorithmic Recourse with CARMA
Ayan Majumdar and Isabel Valera
What Is Fairness? On the Role of Protected Attributes and Fictitious Worlds
Ludwig Bothmann, Kristina Peters and Bernd Bischl
Can generative AI-based data balancing mitigate unfairness issues in Machine Learning?
Benoît Ronval, Siegfried Nijssen and Ludwig Bothmann
A Neuro-Symbolic Approach to Counterfactual Fairness
Xenia Heilmann, Chiara Manganini, Mattia Cerrato and Vaishak Belle
Towards Fair Co-clustering
Federico Peiretti and Ruggero G. Pensa
Integrating Global Justice into AI Fairness Criteria: A Novel Approach to Environmentally Sustainable and Equitable Fleet Management
Ahmet Bilal Aytekin, Batuhan Çınar and Ahu Ece Hartavi Karci
19:00 Social Event @Schlossbiergarten
Wednesday, 3rd of July
08:00 Registration & Badge Pickup
09:00 Keynote Talk: Prof. Isabel Valera, Saarland University
Society-centered AI: An Integrative Perspective on Algorithmic Fairness
Abstract: In this talk, I will share my never-ending learning journey on algorithmic fairness. I will give an overview of fairness in algorithmic decision making, reviewing the progress and the wrong assumptions made along the way, which have led to new and fascinating research questions. Most of these questions remain open to this day, and become even more challenging in the era of generative AI. Thus, this talk will provide only a few answers but many open challenges, to motivate the need for a paradigm shift from owner-centered to society-centered AI. With society-centered AI, I aim to bring the values, goals, and needs of all relevant stakeholders into AI development as first-class citizens, to ensure that these new technologies are at the service of society.
10:00 Coffee Break
10:30 Interactive Sessions 3
Room: Linke Aula
Evaluating Research Results on Fairness Issues of AI-based Social Assessment in Asylum Processes and Integration of Refugees – Science meets Practice
AI-FORA
Room: Atrium Maximum [30 participants]
Striving for Equity: Navigating Algorithmic Fairness for AI in the Workplace
Thea Radüntz, Martin Brenzke and Dominik Köhler
12:00 Lunch
13:00 Lightning Round 4
Room: Linke Aula
Building job seekers’ profiles: can LLMs level the playing field?
Susana Lavado and Leid Zejnilovic
FAIRRET: A flexible PyTorch library for fairness
Marybeth Defrance, Maarten Buyl and Tijl De Bie
Unveiling the Blindspots: Examining Availability and Usage of Protected Attributes in Fairness Datasets
Jan Simson, Alessandro Fabris and Christoph Kern
Beyond Distributions: A Systematic Review on Relational Algorithmic Justice
Laila Wegner, Marie Decker and Carmen Leicht-Scholten
Pricing Risk: Analysis of Irish Car Insurance Premiums
Adrian Byrne
20% Increase in Fairness for Black Applicants: A Critical Examination of Fairness Measurements Offered by Startups
Corinna Hertweck and Maya Guido
Integrating Fairness in AI Development: Technical Insights from the fAIr by design Framework
Mira Reisinger and Rania Wazir
Algorithmic Collective Action in Recommender Systems: Promoting Songs by Reordering Playlists
Joachim Baumann and Celestine Mendler-Dünner
Fairness in Graph-Theoretical Optimization Problems
Christopher Hojny, Frits Spieksma and Sten Wessel
It is not about Bias but Discrimination
Chaewon Yun, Claudia Wagner and Jan-Christoph Heilinger
Exploration of potential new benchmark for fairness evaluation in Europe
Magali Legast, Lisa Koutsoviti Koumeri, Yasaman Yousefi and Axel Legay
14:15 Coffee & Posters
15:00 In-Depth Session 2
Room: Linke Aula
Measurement Modeling of Predictors and Outcomes in Algorithmic Fairness
Elisabeth Kraus and Christoph Kern
Proxy Fairness under the GDPR and the AI Act: A Perspective of Sensitivity and Necessity
Ioanna Papageorgiou
Room: Atrium Maximum
When the Ideal Does Not Compute: Nonideal Theory and Fairness in Machine Learning
Otto Sahlgren
Inherent Limitations of AI Fairness
Maarten Buyl and Tijl De Bie
16:00 Poster Session
Room: Atrium Maximum
Papers from Lightning Round 4, Papers from In-Depth Session 2
Various Authors
Exploration of potential new benchmark for fairness evaluation in Europe
Magali Legast, Lisa Koutsoviti Koumeri, Yasaman Yousefi and Axel Legay
Benchmarking Audio DeepFake Detection
Maria-Isavella Manolaki-Sempagios and Mykola Pechenizkiy
Input-debias: A Post-Processing Technique for Bias Mitigation in Contextualized Models
Nick Wils, Ewoenam Kwaku Tokpo and Toon Calders
FairFlow: An Automated Approach to Model-based Counterfactual Data Augmentation For NLP
Ewoenam Kwaku Tokpo and Toon Calders
Enhancing Model Fairness and Accuracy with Similarity Networks: A Methodological Approach
Samira Maghool and Paolo Ceravolo
17:00 Closing