4 November 2025
Toulouse, France
The explainability of intelligent systems has become a central issue for fostering informed adoption, greater transparency, and responsible use of AI technologies.
While many approaches have emerged, they often focus on technical aspects or on narrow categories of users, neglecting the real and diverse needs of end users.
The XAI4U – Explainability of AI for End Users workshop, co-organized with the GT EXPLICON (GDR RADIA), aims to explore these challenges at the intersection of explainable AI (XAI), human–computer interaction (HCI), and user experience (UX).
We will discuss methods to:
characterize users’ expectations regarding explanations,
evaluate their impact on understanding, trust, or reliance,
design interactive, adaptive, and context-appropriate explanations.
The goal is to develop an interdisciplinary methodological framework that helps researchers, practitioners, and designers design, integrate, and evaluate the explanations produced in the context of XAI.
The workshop is open to everyone, including those who are not HCI specialists but are interested in explainability and interaction with AI. It is intended to be simple, open, and participatory.
This workshop will be held as part of the 36th Francophone Conference on Human–Computer Interaction (IHM 2025), which will take place from November 3 to 7, 2025, in Toulouse, France.
09:30 – 10:00
Welcome of participants and workshop introduction
10:00 – 12:00
Session 1: Presentations of contributions (papers, experience reports, demonstrations). Presentations may cover ongoing work, field feedback, prototypes, or conceptual reflections.
→ Each presentation (15–20 min) is followed by a Q&A session with the audience.
12:00 – 13:30
Lunch break (on your own)
13:30 – 15:00
Session 2: Presentations (continued)
→ Each presentation is followed by a discussion with participants.
15:00 – 16:00
Open discussion, conclusion
→ Review of the day’s key ideas, informal exchanges on the topics covered, and an opportunity to make contacts for future collaborations.
To participate in the workshop, you need to register on the conference website.
We invite participants to propose a presentation on the theme of AI explainability for end users. This includes, for example, the visualization of AI-generated data, the design of understandable interfaces, consideration of cognitive biases, or any other issue related to transparency, trust, or user appropriation of intelligent systems.
Contributions on explanatory systems not directly related to AI may also be accepted if they help enrich the design of explanations in the context of XAI.
Presentations may cover research work (ongoing or completed), tools, methods or prototypes, experience reports, or conceptual reflections.
Examples of possible topics
Designing explanations tailored to user profiles and usage contexts
Data visualization methods and immersive visualization for explainability
Evaluation of trust, transparency, or reliance
UX studies or fieldwork on explanatory systems
Effects of cognitive biases on the reception or interpretation of explanations
Non-textual forms of explanation (interactive visualization, graphic storytelling, spatial layouts, etc.)
Comparison of XAI methods with or without a focus on interaction
Proposals for new explanatory formats, metaphors, or interaction styles
Social or ethical issues related to explainability
Submission guidelines
Interested participants should submit an abstract of up to one page (including references) in the conference format.
Submission deadline: Friday, 19 September 2025
Notification to authors: Friday, 26 September 2025
Final version due: Friday, 3 October 2025
Each accepted presentation will last approximately 15 minutes, followed by a Q&A session with the audience. The main language of the workshop is French, but proposals and presentations in English are also welcome. If participants wish, a compilation of abstracts will be shared on arXiv.