SICSA XAI Workshop 2021

Welcome to the SICSA Workshop on eXplainable Artificial Intelligence (XAI).

The SICSA XAI Workshop 2021 has now finished. Thanks to all of our authors and programme committee for helping to deliver an excellent workshop. Recorded paper presentations (where authors have given permission) are available here: Recorded Presentations.

The use of AI and ML systems is increasingly commonplace in everyday life. From recommender systems for media streaming services to machine vision for clinical decision support, intelligent systems support both the personal and professional spheres of our society. However, explaining the outcomes and decision-making of these systems remains a challenge. As the prevalence of AI in our society grows, so too do the complexity of autonomous models and the expectations surrounding their ability to explain their actions.

Regulations increasingly support users' rights to fair and transparent processing in automated decision-making systems. Meeting these requirements can be difficult when the latest data-driven ML systems, such as deep learning architectures, tend to be black boxes with opaque decision-making processes. Furthermore, the need for accountability means that pipeline, ensemble and multi-agent systems may require complex combinations of explanations before they are understandable to their target audience. Beyond the models themselves, designing explainer algorithms for users remains a challenge due to the highly subjective nature of explanation itself.

The SICSA XAI workshop will provide a forum to share exciting research on methods for explaining AI and ML systems. Our goal is to foster connections among SICSA researchers interested in Explainable AI by highlighting and documenting promising approaches, and encouraging further work. We expect to draw interest from AI researchers working in a number of related areas, including NLP, ML, reasoning systems, intelligent user interfaces, conversational AI and adaptive user interfaces, causal modelling, computational analogy, constraint reasoning, and cognitive theories of explanation and transparency.


This workshop is funded by the Scottish Informatics and Computer Science Alliance (SICSA).

It has been developed in collaboration between the Robert Gordon University (RGU) Artificial Intelligence and Reasoning (AIR) research group and the Norwegian University of Science and Technology (NTNU) Idun research group.

This collaboration has been partially supported by NFR 295920 IDUN.