Tracks and topics
The workshop is made up of three tracks:
MULTIMODAL XAI
Multimodal XAI is concerned with building and validating multi-modal resources that contribute to the generation and evaluation of effective multi-modal explanations. It also covers case studies of real-world applications where XAI has been applied, emphasizing its benefits and challenges.
AFFECTIVE XAI
Affective XAI concerns challenges, opportunities, and solutions for applying explainable machine learning algorithms in affective computing (also known as artificial emotional intelligence), the field of machine systems that sense and recognize emotions.
INTERACTIVE XAI
Interactive XAI places users at the center of the XAI process, asking how to achieve, improve, and measure their understanding and their ability to operate effectively, as a basis for dynamically and interactively adapting explanations to users’ needs and level of understanding.
The topics of interest include (but are not limited to):
MULTIMODAL
XAI for Multi-modal Data Retrieval, Collection, Augmentation, Generation and Validation: from data explainability to understanding and mitigating data bias.
XAI for Human-Computer Interaction (HCI): from Explanatory User Interfaces to interactive and interpretable machine learning approaches with human-in-the-loop.
Augmented Reality for Multi-modal XAI.
XAI approaches leveraging application-specific domain knowledge: from concepts to large knowledge repositories (ontologies) and corpora.
Design and Validation of Multi-modal explainers: from endowing explainable models with multi-modal explanation interfaces to measuring model explainability and evaluating quality of XAI systems.
Quantifying XAI: defining metrics and methodologies to assess the effectiveness of explanations in enhancing user understanding and trust.
Large knowledge bases and graphs that can be used for Multi-modal Explanation generation.
Large language models and their generative power for Multi-modal XAI.
Proof-of-concepts and demonstrators of how to integrate effective and efficient XAI into real-world human decision-making processes.
Ethical, Legal, Socio-Economic, and Cultural (ELSEC) considerations in XAI: examining the ethical implications of high-risk AI applications, including potential biases and the responsible deployment of sustainable “green” AI in sensitive domains.
AFFECTIVE
Explainable Affective Computing in Healthcare, Psychology and Physiology
Explainable Affective Computing in Education, Entertainment and Gaming
Privacy, Fairness and Ethical considerations in Affective Computing and Explainable AI applied in Affective Computing
Bias in Affective Computing and Explainable AI applied in Affective Computing
Multimodal (textual, visual, vocal, physiological) Emotion Recognition Systems
User environments for the design of systems to better detect and classify affect
Sentiment Analysis and Explainability
Social Robots and Explainability
Emotion Aware Recommender Systems
Accuracy in Emotion Recognition and Explainable AI applied in Affective Computing
Affective Design
Machine learning using biometric data to classify biosignals
Virtual Reality in Affective Computing
Human-computer interaction (HCI) and human in the loop (HITL) approaches in Affective Computing
INTERACTIVE
Dialogue-based approaches to XAI
Use of multiple modalities in XAI systems
Approaches to dynamically adapt explainability in interaction with a user
XAI approaches that use a model of the partner to adapt explanations
Methods to measure and evaluate users’ understanding of a model
Methods to measure and evaluate users’ ability to use models effectively in downstream tasks
Interactive methods by which a system and a user can negotiate what is to be explained
Modelling the social functions and aspects of an explanation
Methods to identify a user’s information and explainability needs