co-located with EurIPS
Explainable AI (XAI) is now deployed across a wide range of settings, including high-stakes domains in which misleading explanations can cause real harm. For example, explanations are required by law to safeguard the fundamental rights of individuals subject to algorithmic decision-making, and they inform decisions in sensitive areas such as medicine. It is therefore essential that XAI methods reliably fulfill their intended purpose rather than offering persuasive but incorrect explanations. Achieving this reliability demands a theory of explainable AI: a framework that clarifies which methods answer which questions, and under what assumptions they deliver valid, provably correct answers. Equally important is a systematic account of common failure modes—well-constructed counterexamples that reveal pitfalls and help practitioners avoid them in practice.
With this ELLIS UnConference workshop, we aim to strengthen and connect the community working on the theory of XAI: advancing foundations, curating instructive counterexamples, and translating theoretical insights into practical guidance for responsible deployment. To foster exchange, we emphasize interaction over a crowded speaker lineup, reserving ample time for discussion.
We invite submissions to the workshop; accepted submissions will be presented as a talk or a poster. Contributions are submitted as extended abstracts (up to 2 pages) and may contain previously published work as well as new, unpublished work.
We welcome contributions that address the theoretical underpinnings of explainability methods. Submissions may, for example, consist of provable guarantees for explanation methods, identified limitations and impossibility results, illuminating examples and counterexamples, or formal conjectures and work in progress. We are agnostic to the explanation paradigm (feature importance, concept-based, causal, mechanistic interpretability) and modality (tabular data, images, text, and beyond).
Submission Guidelines
Format: NeurIPS style (use the linked official NeurIPS LaTeX template)
Length: Up to 2 pages (excluding references, which may extend beyond the limit). Since the entire submission is an abstract, there is no need to use the abstract environment.
Review: Submissions will be lightly reviewed for relevance and quality. Accepted abstracts will be selected for presentation as posters or short talks. You can indicate your preferred presentation format when submitting, and we will try to accommodate it.
Archival Policy: The workshop is non-archival. Authors are encouraged to submit work that is preliminary, in progress, or recently published elsewhere.
Important Dates
Submission Deadline: October 15, 2025, AoE
Accept/Reject Notifications: October 31, 2025, AoE
Workshop: December 2, 2025
You can submit your extended abstract via the following form: Submission Form.
The ELLIS UnConference workshop is co-located with EurIPS in Copenhagen and will take place on December 2, 2025.
Time Session
8:00 - 9:00 Registration
9:00 - 10:30 Keynotes / Contributed Talks
10:30 - 11:00 Coffee break
11:00 - 12:30 Keynotes / Contributed Talks
12:30 - 13:30 Lunch
13:30 - 15:00 Discussion / Interactive Session
15:00 - 20:00 ELLIS UnConference Program and Poster Session