Paper submission: extended to 8 May, 2024
Late submission HCAI Track: 18 May, 2024 (only for papers rejected from the IJCAI Human-Centred AI Track)
Notification: 4 June, 2024
Camera-ready submission: 30 June, 2024
In-person workshop: 4-5 August, 2024
Virtual event: August, 2024 [Dates TBC]
XAI may interest researchers studying the topics listed below (among others). We are particularly interested in papers that draw out cross-disciplinary problems and solutions in explainability. The call for papers is divided into a general track and three special tracks: human-centered XAI; explainable sequential decision-making; and interpretable machine learning. We also have a special late-submission track for papers rejected from the IJCAI Human-Centred AI Track in the main conference.
Because the author notification for the IJCAI main conference Human-Centred AI Track falls on 16 May, after the workshop deadline, we have opened a special late-submission track for papers rejected from that IJCAI track. The deadline for this track is 18 May, 2024. To submit a paper to this track, please provide: (a) your paper; (b) the IJCAI reviews for the paper; and (c) optionally, a short description of any changes made in response to the reviewer comments.
XAI is distinctive in that its role is primarily to reveal the underlying workings of a system or to elaborate on its reasoning. It deals with two core problems: (a) how to extract/uncover that reasoning; and (b) how to communicate that reasoning to the user. The intended focus of this track is on (b): we are interested in how users interact with explanations, and in trust, automation bias, deception, and performance. We encourage papers with an emphasis on human-centered empirical evaluation and consideration for different stakeholders. Human-Centered XAI therefore encompasses topics such as:
Trust, reliance, and XAI
Ethical XAI
Reports on human behavioural experiments/studies
Impact on users
Effect on automation bias
User preferences
Communication approaches
Metrics of explainability evaluation
HCI for explainability
Psychological and philosophical foundations of explainability and interpretability
Social aspects of XAI
Interactive XAI
XAI and socially-, behaviourally-, and psychologically-oriented disciplines
Actionable recourse
Contestability of (semi-)automated decisions
Commonsense reasoning
Decision making and sensemaking
The intended focus of the track is on explainable autonomous agents: systems that operate in the context of an environment, typically through a goal-driven sequence of decisions. This stands in contrast to the substantial existing work on interpretable machine learning, which generally focuses on the single input-output mappings of "black box" models such as neural networks. While such ML models are an important tool, intelligent behavior extends over time and needs to be explained and understood as such. Explainable SDM encompasses topics such as:
Explainable/interpretable/intelligible reinforcement learning
Explainable (classical) planning
Explainable search
Explainability in Multi-Agent Systems
Sequential decision-making approaches as models of explanatory dialogue with users
Sequential decision-making approaches for and through negotiations or argumentation
Explanation-aware sequential decision-making
Integration of explainable agents and explainable deep learning, e.g. when DL models are guiding agent behaviors
User interfaces/visualizations for explaining agent behavior, learning or planning
Evaluation methods for explainable agents
Explainability for embodied systems/robotics
Other practical applications for explainability in sequential or goal-oriented tasks, e.g. in planning/scheduling, in pathfinding, etc.
Agent policy summarization
Formal foundations of explainable agency
Cognitive, social, and philosophical theories of explainable agency
This track focuses on enhancing the transparency and explainability of the inner workings of machine learning models. With the increasing complexity of machine learning models and their widespread use across domains, the need for interpretable machine learning has become more pressing than ever. This track welcomes submissions on intrinsically interpretable models (e.g., neuro-symbolic approaches) and sparse interpretable models, as well as post-hoc explainability methods for machine learning. A particular focus of this track is evaluating the fidelity of explainability methods for machine learning models, to ensure that they provide accurate and reliable insights into a model's decision-making processes. As such, the track encompasses topics such as:
Transparent and interpretable machine learning models
Sparse models
Approaches for simplifying and approximating complex models while preserving fidelity
Evaluation of post-hoc model fidelity
Post-hoc methods for visualizing and understanding deep neural networks
Theoretical foundations and metrics for interpretability in machine learning
Benchmarking interpretability methods
Causal inference in machine learning interpretability
Interpretable machine learning in natural language processing
Interpretable machine learning in computer vision
Human-in-the-loop systems for improving interpretability
Interpretability in the context of unsupervised and semi-supervised learning
The general track will focus on other areas of explainability. Topics in this track include:
Technical approaches to explainability
Approaches for computational evaluation
Applied case studies
New benchmarks
Industry applications of XAI
Industry insights on XAI
Historical perspectives on XAI and mature surveys – please note that surveys should provide new and interesting insights into the field or focus on specific sub-areas, rather than being 'just another set of descriptions of existing XAI techniques'.
Submission site: https://cmt3.research.microsoft.com/XAIIJCAI2024
Page limit: Authors may submit long papers (7 pages plus unlimited pages of references) or short papers (4 pages plus unlimited pages of references).
Format: A PDF file using the IJCAI LaTeX styles or Word template.
Author details: The review process is double-blind, so author details should be omitted from the submitted PDF file. Author information must be entered into the Microsoft CMT submission site.
Reviewing: All authors are expected to review 1-2 papers if called upon.
Supplementary material: Authors may submit supplementary material (e.g. appendices, data, source code) as an additional PDF file. Reviewers will not be expected to review this, so please make sure the main paper is self-contained.
Copyright: The workshop proceedings will be non-archival, so authors retain copyright and are free to submit the work to other venues after the review process (please do not submit to another venue simultaneously) or to upload it to pre-print servers such as arXiv.