We look forward to meeting you for discussion and exchange on Trustworthy AI. In response to the large number of submissions, the workshop program has been extended to 1.5 days (October 25-26), held on-site as part of ECAI 2025.
ECAI registration opens at 07:30
9:00 - 9:15
9:15 - 10:30
Diverse and private synthetic datasets generation for RAG evaluation: A multi-agent framework. Ilias Driouich, Hongliu Cao and Eoin Thomas
Automated and augmented evaluation of bias in LLMs for high- and low-resource languages. Alessio Buscemi, Cédric Lothritz, Sergio Morales García, Marcos Gomez-Vazquez, Robert Clarisó Viladrosa, Jordi Cabot and German Castignani
Learning fairer representations with FairVIC. Charmaine Barker, Daniel Bethell and Dimitar Kazakov
Assessing the fairness of AI systems for education. Velislava Hillman, Katarzyna Barud, Ibrahim Sabra, Clara Saillant, Syed Zulkifil Haider Shah, Lukas Faymann, Edoardo Pareti, Leo Bianchi and Manuele Barbieri
FairEnsemble: An adaptive weighted framework for fairness-aware voting in ensemble models. Ibomoiye Domor Mienye, Theo G. Swart and George Obaido
Coffee break 10:30 - 11:00
11:00 - 11:30
Poster authors are available for presenting and discussing their work by their posters. This is a great opportunity for workshop participants to engage with the presented research.
Discriminator-guided unlearning: A framework for selective forgetting in conditional GANs. Byeongcheon Lee, Sangmin Kim, Sungwoo Park, Seungmin Rho and Mi Young Lee
A comparison of human and machine learning errors in face recognition. Marina Estévez-Almenzar, Ricardo Baeza-Yates and Carlos Castillo
Artificial conversations, real results: Fostering language detection with synthetic data. Fatemeh Mohammadi, Tommaso Romano, Samira Maghool and Paolo Ceravolo
Balancing accuracy and interpretability in multi-sensor fusion through dynamic Bayesian networks. Franca Corradini, Carlo Grigioni, Alessandro Antonucci, Jerome Guzzi and Francesco Flammini
Beyond single-model XAI: aggregating multi-model explanations for enhanced trustworthiness. Ilaria Vascotto, Alex Rodriguez, Alessandro Bonaita and Luca Bortolussi
Is the non-interpretability of AI systems an ethical barrier to their use in high-risk situations? Jiahua Liang
Monitoring Historical Cultural Hacking in Large Language Models. Fabio Celli and Astik Samal
Fictionalism about Agentic AI. Anthony R.J. Fisher
Human-centered risk governance for adaptive AI: Why educational requirements belong in trustworthy AI frameworks (position paper). Alexandru Mateescu
The right to distrust: Designing clinical AI for robust comparison (position paper). Cornelia Käsbohrer, Tim Barz-Cech and Lili Jiang
Trust as an outcome of trustworthy AI: A case for increasing research of trust of agentic AI tools (position paper). Elizabeth Darnell, Emma Murphy and Dympna O'Sullivan
Bridging the AI trustworthiness gap between functions and norms (position paper). Daan Di Scala, Sophie Lathouwers and Michael van Bekkum
11:30 - 12:45
Pollution with purpose: The role of data quality in trustworthy AI. Leonie Louisa Etzold, Tim Robin Kosack, Oscar Hernán Ramírez-Agudelo, Clemens Danda and Michael Karl
Cross-layer attention probing for fine-grained hallucination detection. Malavika Suresh, Rahaf Aljundi, Ikechukwu Nkisi-Orji and Nirmalie Wiratunga
Explaining concept drift via neuro-symbolic rules. Pietro Basci, Salvatore Greco, Francesco Manigrasso, Tania Cerquitelli and Lia Morra
Trustworthiness-as-reward: Improving LLM performance on text classification through reinforcement learning. Yiqing Zhao, Xiaohui Shen and Lanfeng Pan
Intersectional Fairness in Healthcare AI: A Pipeline-Wide Evaluation of Multi-Stage Mitigation Strategies. Shane Kennedy, Michael Farayola, Daniel Kelly, Irina Tal, Takfarinas Saber, Regina Connolly and Malika Bendechache - presented by Matias Duran.
Lunch 12:45 - 14:00
14:00 - 15:30
All participants engage in group discussions to identify and prioritize key research challenges for European and global research on trustworthy AI.
Coffee break 15:30 - 16:00
16:00 - 16:30
Poster authors are available for presenting and discussing their work by their posters. Workshop participants are highly encouraged to use this opportunity to engage with the presented research.
SP-Guard: Selective prompt-adaptive guidance for safe text-to-image generation. Sumin Yu and Taesup Moon
Transductive Model Selection under Prior Probability Shift. Lorenzo Volpi, Alejandro Moreo and Fabrizio Sebastiani
How to build trust in AI systems with misclassification detectors and local misclassification explorations. Pål Vegard Bun Johnsen, Milan De Cauwer, Joel Bjervig and Brian Elvesæter
Multi‑domain calibration framework for SAR‑XAI: A systematic approach to trustworthy explainable AI with transparency enhancements. Diego Argüello Ron, Christyan Cruz Ulloa, Kristina Livitckaia, Orfeas Menis Mastromichalakis, Oscar Garcia Perales and Pawel Andrzej Herman
Quantifying dataset trustworthiness from labeling bias using subjective logic. Koffi Ismael Ouattara and Ioannis Krontiris
Do foundation models learn fair representations? A critical evaluation of TabPFN on algorithmic fairness benchmarks. Sam Schiffman
Fair enough? A map of the current limitations of the requirements to have fair algorithms. Daniele Regoli, Alessandro Castelnovo, Nicole Inverardi, Gabriele Nanino and Ilaria Penco
EAIIM - Ethical AI Impact Matrix for high-stakes decision systems framework. Pedro Oliveira, Tomás Francisco and Manuel Rodrigues
Actionable trustworthy AI with a knowledge-based debugger (position paper). Priyabanta Sandulu, Andrea Šipka, Sergey Redyuk and Sebastian J. Vollmer
A risk index to guide responsible adoption of artificial intelligence (position paper). Mahboubehsadat Jazayeri, Paolo Ceravolo and Samira Maghool
Trustworthy-by-design: Building a generative AI chatbot for Italian public administration (position paper). Chandana Sree Mala, Gizem Gezici, Sezer Kutluk and Fosca Giannotti
Labelling the trustworthiness of medical AI (position paper). María Villalobos-Quesada
16:30 - 17:30
Trust in vision-language models: Insights from a participatory user workshop. Agnese Chiatti, Lara Piccolo, Sara Bernardini, Matteo Matteucci and Viola Schiaffonati
Towards trustworthy AI in STEM education: Challenges and strategies from the Trust-AI platform. Nikolaos Antonios Grammatikos, Evangelia Anagnostopoulou, Dimitris Apostolou and Gregoris Mentzas
Rethinking trust in responsible AI. Marina Tropmann-Frick, Michael Gille, Susanne Draheim, Philine Pommerencke, Maximilian Kiener and Jonas Bozenhard
Can I trust my trajectory prediction model? Franz Motzkus, Christian Schlauch, Sebastian Bernhard and Ute Schmid
Day closing 17:30
Social dinner (at own cost) 20:00 (TBD)
9:00 - 9:15
9:15 - 10:30
Coverage of LLM trustworthiness metrics in the current tool landscape. Lennard Helmer, Benny Stein, Tim Ufer, Elanton Fernandes, Hammam Abdelwahab, Abhinav Pareek and Joshua Woll
TAI Scan Tool: A RAG-based tool with minimalistic input for trustworthy AI self-assessment. Athanasios Davvetas, Xenia Ziouvelou, Ypatia Dami, Alexios Kaponis, Konstantina Giouvanopoulou and Michael Papademas
Towards the assessment of trustworthy AI: A catalog-based approach. Marco Anisetti, Claudio Agostino Ardagna, Nicola Bena and Aneela Nasim
Patterns for semi-automated trustworthiness risk assessment of AI systems in cyber-physical environments. Samuel Senior and Steve Taylor
An entropic metric for measuring calibration of machine learning models. Daniel James Sumler, Richard Lane, Lee Devlin and Simon Maskell
Coffee break 10:30 - 11:00
11:00 - 11:30
Each group presents its top three research challenges for trustworthy AI. Presentation time: 2-3 minutes per group.
11:30 - 12:30
XAI desiderata for trustworthy AI: Insights from the AI act. Martin Krutský, Jiří Němeček, Jakub Peleška, Paula Gürtler and Gustav Šír
Towards abductive latent explanations. Jules Soria, Zakaria Chihani, Julien Girard-Satabin, Alban Grastien, Romain Xu-Darme and Daniela Cancila
Topic modelling on the European AI Act. Ettore Carbone, Alex Giulio Berton, Purbasha Chowdhury, Teresa Scantamburlo and Paolo Falcarin
Lightweight AI Governance (LAIG) framework for SMEs. Aleksandra Wolniak and Aleksander Młodawski
12:30 - 12:45
Lunch until 14:00