IJCAI 2024
Workshop on Explainable
Artificial Intelligence (XAI)
Important dates (all times are 23:59 Anywhere on Earth, UTC-12)
Submission: extended to 8 May, 2024
Late submission HCAI Track: 18 May, 2024 (only for papers rejected from the IJCAI Human-Centred AI Track)
Notification: 4 June, 2024
Camera-ready submission: 30 June, 2024
In-person workshop: 5 August, 2024
Virtual event: 15 August, 2024 (two sessions: 7-11am UTC and 3-7pm UTC)
News
19 July: The workshop chairs are delighted to announce that the keynote speaker at the XAI workshop this year will be Professor Elisabeth André from Universität Augsburg in Germany. Elisabeth is an incredible researcher sitting at the intersection of computer science and design, who has worked on areas such as affective computing and social robotics. Elisabeth is a true example of a contemporary researcher combining application with theory. We look forward to her talk!
Workshop overview
Explainable Artificial Intelligence (XAI) addresses the challenge of how to communicate to human users how AI systems work and how their specific decisions come about. The need for explainability and interpretability increases as AI systems are deployed in critical applications.
The need for interpretable AI models exists independently of how the models were acquired (e.g., they may have been hand-crafted or interactively elicited rather than learned via ML techniques). This raises several questions, such as: How should explainable AIs be designed? What queries should AI systems be able to answer about their models, processes, and decisions? How should user interfaces communicate decision making? What types of user interactions should be supported? And how should explanation quality be assessed?
The IJCAI 2024 Workshop on Explainable Artificial Intelligence provides a forum for discussing recent research on XAI methods, highlighting and documenting promising approaches, and encouraging further work, thereby fostering connections among researchers interested in AI, human-computer interaction, and cognitive theories of explanation and transparency. This topic is of particular relevance to, but not limited to, machine learning, AI planning, and knowledge representation & reasoning.
In addition to encouraging descriptions of original or recent contributions to XAI (e.g., theory, simulation studies, subject studies, demonstrations, applications), we welcome contributions that survey related work, describe key issues that require further research, or highlight relevant challenges of interest to the AI community and plans for addressing them.
Day 1: In person, Jeju
- a keynote talk;
- presentations of papers accepted into the workshop;
- poster presentations; and
- a 'fishbowl' panel for participants to discuss key ideas, gaps, and future directions.
(Image: Poster session from XAI 2023 workshop in Macau).
Day 2: Virtual via Zoom
- virtual presentations of papers accepted into the workshop; and
- session discussions.
The virtual event allows people who cannot attend the in-person event to still present, contribute ideas, and join the discussions. We encourage those who attend IJCAI in person to also attend the virtual event.
(Image: Presentation from Mallika Mainali during the XAI 2023 virtual event)