The workshop will take place on the morning of Monday the 15th of September
09.00 - 09.10: Welcome
09.10 - 09.50: Keynote - Prof. Tijl De Bie
Neutrality and bias in (generative) AI: conceptual, methodological, and regulatory challenges
09.50 - 10.30: Presentations round 1
(See schedule below)
10.30 - 11.00: Coffee break & Poster session 1
11.00 - 11.35: Presentations round 2
(See schedule below)
11.35 - 12.05: Poster session 2
12.05 - 12.55: Interactive Session by Anna Monreale
Bias in AI Hiring Systems
12.55 - 13.00: Closing
Abstract: Neutrality and bias in classical AI are elusive concepts that can be formalized in a multitude of ways. In generative AI, the conceptual and methodological challenges are greater still. In this talk, I will investigate the notions of 'neutrality' and 'bias' from the perspective of ideological diversity, content moderation, and censorship, and I will touch upon the implications for regulatory initiatives that aim to mitigate the harmful impact of generative AI on public discourse.
Bio: Prof. Tijl De Bie is a senior full professor at Ghent University. Before joining Ghent University in 2015, he studied or held research positions at KU Leuven, UC Berkeley, UC Davis, Southampton University, and the University of Bristol. He has worked on the foundations of machine learning and data science, as well as on applications of AI in fields ranging from bioinformatics and music informatics to sports analytics and social and mainstream media analysis. His current research interests include exploratory data science, trustworthiness of AI, the impact of AI on information integrity and democracy, and applications of AI to the labor market and human resources management. His work has been funded by several prestigious research grants, including an ERC Consolidator grant (FORSIED), an ERC Proof of Concept grant (FEAST), and an ongoing ERC Advanced grant (VIGILIA).
A Representation-Level Assessment of Bias Mitigation in Foundation Models
Svetoslav Nizhnichenkov, Rahul Nair, Elizabeth Daly & Brian Mac Namee
No LLM is Free From Bias: A Comprehensive Study of Bias Evaluation in Large Language Models
Vinayak Kumar Charaka, Ashok Urlana, Gopichand Kanumolu, Bala Mallikarjunarao Garlapati & Pruthwik Mishra
An Empirical Investigation of Gender Stereotype Representation in Large Language Models: The Italian Case
Gioele Giachino, Marco Rondina, Antonio Vetrò, Riccardo Coppola & Juan Carlos De Martin
Word Overuse and Alignment in Large Language Models: The Influence of Learning from Human Feedback
Tom Juzek & Zina B Ward
Assessing Trustworthiness of AI Training Dataset using Subjective Logic - A Use Case on Bias
Koffi Ouattara, Ioannis Krontiris, Theo Dimitrakos & Frank Kargl
Lightweight Fairness for LLM-Based Recommendations via Kernelized Projection and Gated Adapters
Nan Cui, Hui Wang & Yue Ning
How to Choose a Fairness Measure: A Decision-Making Workflow for Auditors
Federica Picogna
BiMi Sheets: Infosheets for bias mitigation methods
MaryBeth Defrance, Guillaume Bied, Maarten Buyl, Jefrey Lijffijt & Tijl De Bie