4 November 2025
Toulouse, France
The explainability of intelligent systems has become a central issue for fostering informed adoption, greater transparency, and responsible use of AI technologies.
While many approaches have emerged, they too often focus on technical aspects or on narrow categories of users, neglecting the real and diverse needs of end users.
The XAI4U – Explainability of AI for End Users workshop, co-organized with the GT EXPLICON (GDR RADIA), aims to explore these challenges at the intersection of explainable AI (XAI), human–computer interaction (HCI), and user experience (UX).
We will discuss methods to:
characterize users’ expectations regarding explanations,
evaluate their impact on understanding, trust, or reliance,
design interactive, adaptive, and context-appropriate explanations.
The goal is to develop an interdisciplinary methodological framework that helps researchers, practitioners, and designers better design, integrate, and evaluate explanations produced in the context of XAI.
The workshop is open to everyone, including those who are not HCI specialists but are interested in explainability and interaction with AI. It is intended to be simple, open, and participatory.
This workshop will be held as part of the 36th Francophone Conference on Human–Computer Interaction (IHM 2025), which will take place from November 3 to 7, 2025, in Toulouse, France.
09:00 - WELCOME
Participant welcome and workshop introduction.
09:30 - Transparency with Boundaries: Designing Explainable AI for Usability and Privacy in Interactive Systems
Camille Fayollas, Moncef Garouani, Célia Martinie
(IRIT, UMR5505 CNRS, Université Toulouse Capitole)
10:00 - Formal Abductive Latent Explanations for Prototype-Based Networks
Jules Soria, Zakaria Chihani, Julien Girard-Satabin, Alban Grastien, Romain Xu-Darme, Daniela Cancila
(Université Paris-Saclay, CEA, List, F-91120, Palaiseau, France)
10:30 - BREAK
11:00 - A protocol to study the impact of XAI on AI-assisted decision making
Jules Leguy
(SyCoIA, IMT Mines Alès)
11:30 - Explaining Tournament Solutions with Minimal Supports
Clément Contet, Umberto Grandi, Jérôme Mengin
(IRIT, Université de Toulouse)
12:00 - Visual Programming as a Support for Explainability and Personalization in the AI-Assisted Architectural Design Process
Yann Blanchi
(Laboratoire MHA, ENSAG – Université Grenoble Alpes)
12:30 - LUNCH BREAK
14:00 - Reliance-Awareness Design and Evaluation in Explainable User Interfaces
José Cezar de Souza Filho, Rafik Belloum, Kathia Marçal de Oliveira
(Université Polytechnique Hauts-de-France, LAMIH, UMR CNRS 8201)
14:30 - eXplanations Improve Streaming Learning for Vision Transformers
Meghna P. Ayyar, Jenny Benois-Pineau, Akka Zemmari
(LaBRI, CNRS, Université de Bordeaux, UMR 5800, Talence)
15:00 - Toward Explainable Image Classifiers for Domain Experts
Arnaud Lewandowski, Grégory Bourguin
(LISIC – Université du Littoral Côte d'Opale)
15:30 - BREAK
16:00 - Interpreto: An Explainability Library for LLMs
Antonin Poché
(IRT Saint Exupéry, IRIT)
16:30 - Explainability and Acceptability of AI in a Military Operational Context
Luca Mourgaud, Marc-Eric Bobillier-Chaumon, Olivier Grisvard, Denis Lemaitre
(THALES ; CRTD – CNAM ; Thales Airborne Systems / Télécom Bretagne ; École Navale)
17:00 - A Visual Approach to Explainability for Argumentation
Sylvie Doutre, Théo Duchatelle, Marie-Christine Lagasquie-Schiex
(IRIT – Université Toulouse Capitole ; Akkodis ; IRIT – Université de Toulouse)
17:30 - OPEN DISCUSSION
A collective discussion to reflect on the presentations, share ideas or ongoing work, and connect around the challenges explored throughout the day — an opportunity to initiate informal exchanges and future collaborations.
18:00 - CLOSING
To participate in the workshop, you need to register on the conference website:
🟥 Submissions are now closed
We invite participants to propose a presentation on the theme of AI explainability for end users. This includes, for example, the visualization of AI-generated data, the design of understandable interfaces, the consideration of cognitive biases, or any other issue related to transparency, trust, or user appropriation of intelligent systems.
Contributions on explanatory systems not directly related to AI may also be accepted if they help enrich the design of explanations in the context of XAI.
Presentations may cover research work (ongoing or completed), tools, methods or prototypes, experience reports, or conceptual reflections.
Examples of possible topics
Designing explanations tailored to user profiles and usage contexts
Data visualization methods and immersive visualization for explainability
Evaluation of trust, transparency, or reliance
UX studies or fieldwork on explanatory systems
Effects of cognitive biases on the reception or interpretation of explanations
Non-textual forms of explanation (interactive visualization, graphic storytelling, spatial layouts, etc.)
Comparison of XAI methods with or without a focus on interaction
Proposals for new explanatory formats, metaphors, or interaction styles
Social or ethical issues related to explainability
Submission guidelines
Interested participants should submit an abstract of up to one page (including references) in the conference format.
Submission deadline: Friday, 19 September 2025
Notification to authors: Friday, 26 September 2025
Presentations: Tuesday, 04 November 2025
Each accepted presentation will last approximately 15 minutes, followed by a Q&A session with the audience. The main language of the workshop is French, but proposals and presentations in English are also welcome. If participants wish, a compilation of abstracts will be shared on arXiv.