TOPICS
Multimodal XAI
XAI for multi-modal data retrieval, collection, augmentation, generation, and validation: From data explainability to understanding and mitigating data bias
XAI for Human-Computer Interaction (HCI): From explanatory user interfaces to interactive and interpretable machine learning with human-in-the-loop and machine-in-the-loop approaches
Augmented reality for multi-modal XAI
XAI approaches leveraging application-specific domain knowledge: From concepts to large knowledge repositories (ontologies) and corpora
Design and validation of multi-modal explainers: From endowing explainable models with multi-modal explanation interfaces to measuring model explainability and evaluating the quality of XAI systems
Quantifying XAI: From defining metrics and methodologies to assessing the effectiveness of explanations in enhancing user understanding, reliance, and trust
Large knowledge bases and graphs that can be used for multi-modal explanation generation
Large language models and their generative power for multi-modal XAI
Proof-of-concepts and demonstrators of how to integrate effective and efficient XAI into real-world human decision-making processes
Ethical, Legal, Socio-Economic and Cultural (ELSEC) considerations in XAI: Examining ethical implications surrounding the use of high-risk AI applications, including potential biases and the responsible deployment of sustainable “green” AI in sensitive domains
Affective XAI
Explainable affective computing in healthcare, psychology, physiology, education, entertainment, and gaming
Privacy, fairness, and ethical considerations in affective computing
Multimodal (textual, visual, vocal, physiological) emotion recognition systems
User environments for the design of systems to better detect and classify affect
Sentiment analysis and explainability
Social robots and explainability
Emotion-aware recommender systems
Accuracy and explainability in emotion recognition
Machine learning using biometric data to classify biosignals
Virtual reality in affective computing
Human–Computer Interaction (HCI) and Human in the Loop (HITL) approaches in affective computing
Interactive XAI
Dialogue-based approaches to XAI
Use of multiple modalities in XAI systems
Approaches to dynamically adapt explainability in interaction with a user
XAI approaches that use a model of the partner to adapt explanations
XAI approaches for collaborative decision-making between humans and AI models
Methods to measure and evaluate users' understanding of a model
Methods to measure and evaluate users' ability to apply models effectively in downstream tasks
Interactive methods by which a system and a user can negotiate what is to be explained
Modelling the social functions and aspects of an explanation
Methods to identify users’ information and explainability needs
@ECAI 2025 - Workshop "Multimodal, Affective and Interactive eXplainable Artificial Intelligence" (MAI-XAI 25)