Int-XAI 2023

Interactive Explanations of Neural Networks and Artificial Intelligence 

Welcome to the Workshop on Interactive Explanations of Neural Networks and Artificial Intelligence (Int-XAI).

Paper submission is open via EasyChair. Please see the Call for Papers for more details.


Deep learning architectures have become synonymous with state-of-the-art performance across a broad spectrum of domains. In everything from natural language processing and generation for conversation, to machine vision for clinical decision support, intelligent systems are supporting both the personal and professional spheres of our society. Explaining the outcomes and decision-making of these systems remains a challenge. As the prevalence of AI grows in our society, so too do the complexity of these models and the expectation that autonomous systems can explain their actions.


While there are significant and well-documented challenges in explaining different types of data-driven intelligent systems, a particularly interesting and often overlooked element is the need to provide different explanations for different stakeholders. eXplainable Artificial Intelligence (XAI) systems must deliver solutions that are personalised to stakeholder needs. Furthermore, these systems must be flexible to the changing explanation needs of end-users and must acknowledge that satisfying those needs is highly subjective. A clinician trying to interpret the outcome of an MRI scan, for example, requires a very different explanation from a patient who is wondering what impact the scan may have on their life, and both may request further clarification or elaboration following the initial explanation. While current state-of-the-art research into XAI tends to categorise users into stakeholder groups, there is also a growing body of work on personalising explanations for individual users. What is lacking, and what this workshop aims to address, is the view that explanation cannot be one-shot: it is an interactive process, which places importance on how interactive explanations are presented to, and absorbed by, different stakeholders.


An interesting avenue of research for this problem is the idea of interactive explanations. Interactive explanations allow users to refine the explanation provided to them, either through the provision of additional information (e.g. conversational interaction to understand a user’s explanation needs) or through direct manipulation of the explanation artefact itself (e.g. navigating an explanation presented in virtual reality). Interactivity presents a strong research avenue for improving the user experience of explanation across all data modalities, allowing users to personalise explanations to fit their individual needs.


The aim of the Int-XAI Workshop is to foster connections between research communities and to encourage further collaboration leading to the next generation of explanation algorithms. We envision a future in which these algorithms are flexible across multiple data modalities, while their interactive nature allows them to serve a broad range of stakeholders and explanation needs. The workshop will provide an opportunity to share exciting research on methods for interactive explanation of AI and NN systems, with a focus on deep neural networks.
