Submission: extended to 8 May, 2024
Late submission HCAI Track: 18 May, 2024 (only for papers rejected from the IJCAI Human-Centred AI Track)
Notification: 4 June, 2024
Camera-ready submission: 30 June, 2024
In-person workshop: 5 August, 2024
Virtual event: 15 August, 2024 (7-11am UTC, 3-7pm UTC)
19 July: The workshop chairs are delighted to announce that this year's keynote speaker at the XAI workshop will be Professor Elisabeth André from Universität Augsburg in Germany. Elisabeth is an outstanding researcher working at the intersection of computer science and design, with contributions to areas such as affective computing and social robotics. She exemplifies the contemporary researcher who combines application with theory. We look forward to her talk!
Explainable Artificial Intelligence (XAI) addresses the challenge of how to communicate to human users how AI systems work and how their specific decisions come about. The need for explainability and interpretability increases as AI systems are deployed in critical applications.
The need for interpretable AI models exists independently of how the models were acquired (i.e., perhaps they were hand-crafted, or interactively elicited without using ML techniques). This raises several questions, such as: How should explainable AIs be designed? What queries should AI systems be able to answer about their models, processes, and decisions? How should user interfaces communicate decision making? What types of user interactions should be supported? And how should explanation quality be assessed?
The IJCAI 2024 Workshop on Explainable Artificial Intelligence aims to provide a forum for discussing recent research on XAI methods, highlighting and documenting promising approaches, and encouraging further work, thereby fostering connections among researchers interested in AI, human-computer interaction, and cognitive theories of explanation and transparency. This topic is of particular importance to, but not limited to, machine learning, AI planning, and knowledge representation & reasoning.
In addition to encouraging descriptions of original or recent contributions to XAI (i.e., theory, simulation studies, subject studies, demonstrations, applications), we welcome contributions that survey related work, describe key issues requiring further research, or highlight relevant challenges of interest to the AI community and plans for addressing them.