Co-located with EurIPS
Explainable AI (XAI) is now deployed across a wide range of settings, including high-stakes domains in which misleading explanations can cause real harm. For example, explanations are required by law to safeguard the fundamental rights of individuals subject to algorithmic decision-making, and they inform decisions in sensitive areas such as medicine. It is therefore essential that XAI methods reliably fulfill their intended purpose rather than offering persuasive but incorrect explanations. Achieving this reliability demands a theory of explainable AI: a framework that clarifies which methods answer which questions, and under what assumptions they deliver valid, provably correct answers. Equally important is a systematic account of common failure modes—well-constructed counterexamples that reveal pitfalls and help practitioners avoid them in practice.
With this ELLIS UnConference workshop, we aim to strengthen and connect the community working on the theory of XAI, advancing foundations, curating instructive counterexamples, and translating theoretical insights into practical guidance for responsible deployment.
The ELLIS UnConference workshop is co-located with EurIPS in Copenhagen and will take place on December 2, 2025.
Time Session
8:00 - 9:00 Registration
9:00 - 10:30 Talks
10:30 - 11:00 Coffee break
11:00 - 12:30 Talks
12:30 - 13:30 Lunch
13:30 - 15:00 Talks
15:00 - 20:00 ELLIS UnConference Program
9:00 - 9:30
Jessica Hullman
Explanations are a means to an end
9:30 - 10:00
Shahaf Bassan
Explanation, Guaranteed! Provable Certificates for Machine Learning Explanations
10:00 - 10:30
Kiet Vo
Explanation Design in Strategic Learning: Sufficient Explanations that Induce Non-harmful Responses
Magamed Taimeskhanov
Feature Attribution from First Principles
11:00 - 11:30
Tiago Pimentel
TBD
11:30 - 12:00
Dolores Romero Morales
TBD
12:00 - 12:30
Amir-Hossein Karimi
Explainable AI is Causality in Disguise
Amin Parchami
FaCT: Faithful Concept Traces for Explaining Neural Network Decisions
13:30 - 14:00
Bernt Schiele
TBD
14:00 - 14:30
Mateja Jamnik
TBD
14:30 - 15:00
Abhijeet Mulgund
Theoretical Aspects of Deep-Learned Error-Correcting Codes
Amir Mehrpanah
Spectral Analysis as a Basis for a Theory of XAI
We invite submissions to the workshop. Accepted submissions will be presented as either a talk or a poster. Contributions are submitted as extended abstracts (up to 2 pages) and may contain previously published as well as new, unpublished work.
We welcome contributions that address the theoretical underpinnings of explainability methods. Submissions may, for example, consist of provable guarantees for explanation methods, identified limitations and impossibility results, illuminating examples and counterexamples, or formal conjectures and work in progress. We are agnostic to the explanation paradigm (feature importance, concept-based, causal, mechanistic interpretability) and modality (tabular data, images, text, and beyond).
Submission Guidelines
Format: NeurIPS style (use the linked official NeurIPS LaTeX template); a minimal skeleton is sketched after this list.
Length: Up to 2 pages (excluding references, which may extend beyond the limit). Since the entire submission is an extended abstract, there is no need to use the abstract environment.
Review: Submissions will be lightly reviewed for relevance and quality. Accepted abstracts will be selected for presentation as posters or short talks. You can specify your preferred presentation format along with the submission and we will try to accommodate your preference.
Archival Policy: The workshop is non-archival. Authors are encouraged to submit work that is preliminary, in progress, or recently published elsewhere.
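For orientation, here is a minimal sketch of what a submission skeleton could look like. It assumes the style file in the linked template is named neurips_2025.sty and that it loads natbib, as recent NeurIPS templates do; the file name, options, and placeholder content are illustrative, not prescribed by the workshop.

\documentclass{article}

% Style file from the official NeurIPS LaTeX template; the exact file
% name (here assumed to be neurips_2025.sty) and its options come with
% the linked template. The "final" option typesets the non-anonymous,
% camera-ready layout.
\usepackage[final]{neurips_2025}

\usepackage{amsmath}   % standard math support
\usepackage{graphicx}  % only needed if the abstract includes figures

\title{Title of Your Extended Abstract}

\author{%
  Author One \\
  Institution \\
  \texttt{author@example.org}  % placeholder contact details
}

\begin{document}

\maketitle

% Note: no \begin{abstract} environment. The whole 2-page
% submission is itself the extended abstract.

\section{Setting and question}
Main text, up to 2 pages.

\section{Result or conjecture}
Statements, proof sketches, examples, or counterexamples.

% References do not count toward the 2-page limit. This assumes a
% references.bib file and that the style file loads natbib.
\bibliographystyle{plainnat}
\bibliography{references}

\end{document}

Compiling with pdflatex (or on Overleaf, where the NeurIPS template is available) should produce a correctly formatted submission; the template's own README remains the authoritative reference for options.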
Important Dates
Submission Deadline: October 15, 2025, AoE
Accept/Reject Notifications: October 31, 2025, AoE
Workshop: December 2, 2025
Extended abstracts could be submitted via the following form: Submission Form.
The submission form is now closed.