Submission: 9 May 2025
Notification: 6 June 2025
Camera-ready submission: 31 July 2025
Workshop: 17 August 2025 (at IJCAI 2025)
Explainable Artificial Intelligence (XAI) addresses the challenge of how to communicate and explain the decision-making of AI systems. The need for explainability increases as AI systems are deployed in critical applications, raising questions such as: How should explainable AI systems be designed? What queries should AI systems be able to answer about their models and decisions? How should user interfaces communicate decision-making? What types of user interactions should be supported? And how should explanation quality be assessed?
The Explainable AI (XAI) workshop at IJCAI provides a forum for discussing recent research on XAI methods, highlighting and documenting promising approaches, and encouraging further work, thereby fostering connections among researchers interested in AI, human-computer interaction, and cognitive theories of explanation and transparency. This topic is of particular importance to, but not limited to, machine learning, AI planning, and knowledge representation & reasoning.
In addition to encouraging descriptions of original or recent contributions to XAI (e.g., theory, simulation studies, subject studies, demonstrations, applications), we welcome contributions that: survey related work; describe key issues that require further research; or highlight relevant challenges of interest to the AI community and plans for addressing them.
If you have any questions about the IJCAI Workshop on XAI, please contact the organizers at:
ijcai.xai.workshop at gmail.com