Paper Submission: 11 July, 2025
Decision Notification: 1 August, 2025
Camera Ready: 7 August, 2025
In-Person Event: 26 October, 2025
All deadlines are at 23:59 Anywhere on Earth (UTC-12).
Call for Papers
While explainable artificial intelligence (XAI) has become highly popular and impactful in recent years, and is now an integral part of all major AI venues, progress in the field is, to some degree, still hindered by a lack of agreed-upon evaluation methods and metrics. Many articles present only anecdotal evidence, and the large variation in explanation techniques and application domains makes it challenging to define, quantify, and compare the relevant performance criteria for XAI. This leads to a lack of standardized baselines and an established state of the art, making the contributions of newly proposed XAI methods difficult to evaluate. The discussion on how to evaluate explainability and interpretability, whether through user studies or with computational proxy measures, is ongoing.
In recent years, there has also been growing interest in data- and AI-driven solutions for tackling complex decision-making problems in practice, including NP-hard combinatorial optimization problems such as nurse scheduling or blood matching, as well as sequential decision-making tasks such as optimizing energy use or warehouse order picking. Explaining solutions and decisions for such reinforcement learning, planning, or optimization problems introduces additional layers of complexity compared to the majority of work in XAI, which focuses on explaining the input-output mappings of "black box" models like neural classifiers. Solutions in these settings may, for example, feature a complex inner structure or a temporal dimension. Current AI approaches to complex decision-making problems still focus mainly on optimization performance, while comparatively little attention has been paid to explainability, and the gap in research on evaluation metrics and methods for explainability in this context is even larger.
This workshop brings together researchers interested in XAI in general and in AI planning, reinforcement learning, and data-driven optimization in particular, to discuss recent developments in XAI evaluation and collaboratively develop a roadmap to address this gap.
Workshop Themes
The following is a non-exhaustive list of topics that we would like to cover in the workshop. We welcome submissions on these or other topics, provided they offer a well-motivated justification for how they advance our understanding of the evaluation of XAI and complex decision-making:
Evaluation metrics for XAI (even if not yet applied to complex decision-making problems);
Benchmarks for XAI evaluation;
LLMs and XAI evaluation;
Agentic system evaluation;
Reports on evaluation with different stakeholder groups in practice;
Computational evaluation approaches;
Evaluating interactive systems/open-ended interactions;
Lessons from experimental studies in psychology and the social sciences;
Evaluation methods for explainable autonomous agents;
Evaluation of XAI in embodied systems/robotics;
Impact of users' preferences;
HCI for XAI evaluation;
Testing actionable algorithmic recourse;
Evaluating contestability of AI decisions;
Evaluation metrics for XAI in combinatorial optimization;
Evaluation of interpretable RL models;
Evaluation of explanations in AI planning, e.g. based on model reconciliation;
Performance trade-offs of interpretable methods for optimization.
Submission Details
Length: Authors may submit regular papers (up to 7 pages plus unlimited references).
Review transfer: We will also consider relevant papers that were rejected from the main conference, based on review transfer; to use this option, authors are invited to submit a request after rejection. We will then make a decision based on the reviews written for the main conference, which will be made available to us in anonymized form. Please note the relevant deadlines at the top of this page.
Format: All papers should be typeset in the ECAI 2025 style (ECAI LaTeX Template). Accepted papers will be made available on the workshop website.
Supplementary material may be included as an appendix at the end of the main PDF file; that is, submit a single PDF file containing the main body of the paper plus an appendix, which does not count toward the page limit. Reviewers are not required to read the supplementary material, so ensure the body of the paper is self-contained.
Reviewing is double-blind, so papers must not contain any identifying information. The reviewing criteria are the soundness of the scientific approach, the novelty of the work, and its fit with the scope of the workshop; we explicitly welcome preliminary results and work in progress.
Accepted papers will not be published in archival proceedings, so you may submit your paper to another venue after the workshop. However, we aim to edit a special issue on the topic of the workshop, giving selected papers an opportunity to be published in extended versions.
Submission link: EasyChair