This workshop represents a distinctive perspective in the realm of XAI, continuing the series started last year with the first MAI-XAI@ECAI2024.
We not only delve into innovative techniques for providing insights into the explainability of data and models but also emphasize the pressing need to address nuanced challenges in real-world applications, with multi-modal affective interaction emerging as a crucial requirement. Beyond proposing novel XAI approaches, it is essential to generate open-source resources that can be shared and reused by the XAI community. It is also worthwhile to provide the community with corpora and benchmark datasets, not only for generating explanations but also for fine-tuning pre-existing models, for measuring the impact of explanations when employed in real-world decision-making processes, for ensuring their fidelity to the underlying model, and for evaluating their utility in fostering meaningful hybrid human-AI interactions. The goal is high-performance (but also human-friendly and environmentally friendly) AI models that reason in a human-expert-like manner.
We focus on the following three pillars toward creating more natural explanations:
Multimodal XAI: Multi-modality is demanded at the level of both data and models. It requires dealing properly with structured and unstructured heterogeneous data (e.g., tabular data, text, images, sound, video). Multi-modal explanations must be customizable and easy to adapt, not only to user preferences and needs but also to different communication channels, in the form of natural phenotropic multi-lingual human–machine interactions. Nonetheless, most existing resources are developed ad hoc for specific applications, usually considering only one or two modalities, and are hard to combine, reuse, and recycle in a human-centred and sustainable way.
Affective XAI: The extent to which XAI systems should be equipped with abilities to detect and express human emotions remains an open question. Some researchers have hypothesized that including an affective component might increase the predictability of systems and help users reason about the causality of systems and predictions. The technical challenges for systems developed within the affective computing spectrum relate to multimodal processing: for example, sentiment analysis tools that combine natural language processing and text analysis with emotion detection from signals and modalities including gestures, posture, facial information, heart rate, electrodermal activity, voice, speech rate, pitch, and intensity.
Interactive XAI: Rather than regarding an explainee as a passive receiver of an (adapted) explanation, previous research has proposed that explainees take a more active role, co-shaping the explanation in an interactive manner. However, there has been little emphasis so far on methods that adapt the explanation dynamically to a user's needs by evaluating whether the user has understood it. We therefore need novel methods both to better identify the information needs of a user and to measure the degree to which a user has understood the explanation, in order to adapt the explanation further and to determine whether it has been successful.
We aim to offer researchers and practitioners the opportunity to identify promising new research directions in XAI. Attendees are encouraged to present case studies of real-world applications where XAI has been successfully applied, emphasizing the practical benefits and challenges encountered. It is worth noting that the workshop is partly supported by the MSCA HYBRIDS project, so submissions related to explainable hybrid human-AI approaches for combating disinformation and abusive language are welcome. Moreover, the workshop is sponsored by the SAIL network and the XAI4SOC project (PID2021-123152OB-C21 funded by MCIN/AEI/ 10.13039/501100011033 and by “ESF Investing in your future”).
@ECAI 2025 - Workshop "Multimodal, Affective and Interactive eXplainable Artificial Intelligence" (MAI-XAI 25)