The Sixth Workshop on Explainable Logic-Based Knowledge Representation
Sponsored by the Argumentation-based Deep Interactive EXplanations project under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 101020934) and the Transregional Collaborative Research Centre 248 “Foundations of Perspicuous Software Systems” (CPEC – TRR 248).
Sarath Sreedharan
Prof. Sreedharan is an Assistant Professor at Colorado State University. His core research interests include designing human-aware decision-making systems that generate behaviours aligned with human expectations. He completed his Ph.D. at Arizona State University, where his doctoral dissertation received a 2022 Dean's Dissertation Award from the Ira A. Fulton Schools of Engineering and an honourable mention for the ICAPS-23 Outstanding Dissertation Award. He is the lead author of a Morgan & Claypool monograph on explainable human-AI interaction and has given tutorials and invited talks on the topic at various venues. He was selected as a DARPA Riser Scholar for 2022, was featured in the AAAI-23 New Faculty Highlights, and was named one of IEEE Intelligent Systems' AI's 10 to Watch for 2024.
What can human-aware AI do for explanations?
In this talk, I would like to introduce the framework of human-aware AI and show how it provides a conceptual lens for understanding the role explanation plays in supporting effective human-AI interaction. We will use this framework to characterize different types of explanations and when each type might be most appropriate. The framework also highlights the centrality of knowledge representation: different use cases and different users may require different representations and vocabularies to create effective explanations. Finally, I would like to urge the community to start thinking about how our explanations will be used in practice. In this context, I will focus on two important explanation use cases: first, actionable explanations, i.e., explanations that empower users to influence the decision-making process, and second, trust-calibration explanations, which help users align their trust with the AI system's true capabilities and goals.
Christian Alrabbaa, Stefan Borgwardt, Philipp Herrmann and Markus Krötzsch
The Shape of EL Proofs: A Tale of Three Calculi (Extended Abstract)
Lars Bengel and Matthias Thimm
Sequence Explanations for Acceptance in Abstract Argumentation
Meghyn Bienvenu, Diego Figueira and Pierre Lafourcade
Shapley Revisited: Tractable Responsibility Measures for Query Answers (Extended Abstract)
Martin Demovič, Peter Švec, Martin Homola and Maurice Funk
On the Prospects of EL and ELU Concept Learners for Explainable Malware Detection
Tobias Geibinger and Zeynep G. Saribatur
What Can We Explain in Answer Set Programming?
Nicola Gigante, Francesco Leofante and Andrea Micheli
Counterfactual Scenarios for Automated Planning
Jakub Kloc, Janka Boborová, Martin Homola and Júlia Pukancová
CATS Solver: The Rise of Hybrid Abduction Algorithms (Extended Abstract)
Rafael Patronilo, Ricardo Gonçalves, Matthias Knorr and Ludwig Krippahl
Towards Embedding Concepts into Neural Networks for Explanations
XLoKR 2025 is the sixth workshop in the XLoKR series. It will be co-located with the 22nd International Conference on Principles of Knowledge Representation and Reasoning, which will take place in Melbourne, Australia. Previous editions of XLoKR took place in
2020 co-located with the 17th International Conference on Principles of Knowledge Representation and Reasoning,
2021 co-located with the 18th International Conference on Principles of Knowledge Representation and Reasoning,
2022 co-located with the 19th International Conference on Principles of Knowledge Representation and Reasoning,
2023 co-located with the 20th International Conference on Principles of Knowledge Representation and Reasoning and
2024 co-located with the 21st International Conference on Principles of Knowledge Representation and Reasoning.
Embedded or cyber-physical systems that interact autonomously with the real world, or with users they are supposed to support, must continuously make decisions based on sensor data, user input, knowledge acquired at runtime, and knowledge provided at design time. To make the behavior of such systems comprehensible, they need to be able to explain their decisions to the user or, after something has gone wrong, to an accident investigator.
While systems that use Machine Learning (ML) to interpret sensor data are very fast and usually quite accurate, their decisions are notoriously hard to explain, though huge efforts are currently being made to overcome this problem. In contrast, decisions made by reasoning about symbolically represented knowledge are in principle easy to explain. For example, if the knowledge is represented in (some fragment of) first-order logic and a decision is made based on the result of a first-order reasoning process, then one can in principle use a formal proof in an appropriate calculus to explain a positive reasoning result, and a counter-model to explain a negative one. In practice, however, things are not so easy in the symbolic KR setting either. For example, proofs and counter-models may be very large, and it may thus be hard to comprehend why they demonstrate a positive or negative reasoning result, in particular for users who are not experts in logic. Thus, to leverage explainability as an advantage of symbolic KR over ML-based approaches, one needs to ensure that explanations can really be given in a way that is comprehensible to different classes of users (from knowledge engineers to laypersons).
The problem of explaining why a consequence does or does not follow from a given set of axioms has been considered for full first-order theorem proving for at least 40 years, but usually with mathematicians as the intended users. In knowledge representation and reasoning, efforts in this direction are more recent and have usually been restricted to sub-areas of KR such as AI planning and description logics. The purpose of this workshop is to bring together researchers from different sub-areas of KR and automated deduction who are working on explainability in their respective fields, with the goal of exchanging experiences and approaches. A non-exhaustive list of areas covered by the workshop is the following:
AI planning
Answer set programming
Argumentation frameworks
Automated reasoning
Description logics
Non-monotonic reasoning
Probabilistic representation and reasoning
Paper Submission: July 31, 2025
Notification: August 21, 2025
Camera-ready papers: October 10, 2025
Workshop: November 13, 2025
We invite extended abstracts of 2–5 pages (excluding references) on topics related to explanation in logic-based KR. Reviewing will be single-blind. Papers should be formatted in Springer LNCS style and must be submitted via
Since the main purpose of the workshop is to exchange results, we welcome not only papers describing unpublished results, but also previous publications that fall within the scope of the workshop. To avoid conflicts with previous or future publications, there will be no formal proceedings; instead, the papers will be made available on the workshop website.
Xiang Yin, Imperial College London
Stefan Borgwardt, TU Dresden
Franz Baader, TU Dresden
Nico Potyka, Cardiff University
Franz Baader, TU Dresden
Bart Bogaerts, Vrije Universiteit Brussel
Jörg Hoffmann, Saarland University
Thomas Lukasiewicz, University of Oxford
Nico Potyka, Cardiff University
Francesca Toni, Imperial College London
Franz Baader, TU Dresden
Bart Bogaerts, KU Leuven
Stefan Borgwardt, TU Dresden
Gerhard Brewka, Leipzig University
Roberta Calegari, Università di Bologna
Deniz Gorur, Imperial College London
Jörg Hoffmann, Saarland University
Ruth Hoffmann, University of St Andrews
Francesco Leofante, Imperial College London
Thomas Lukasiewicz, Vienna University of Technology
Pierre Marquis, CRIL, U. Artois & CNRS
Cristian Molinaro, Università della Calabria
Cem Okulmus, Paderborn University
Rafael Peñaloza, University of Milano-Bicocca
Nico Potyka, Cardiff University
Antonio Rago, Imperial College London
Francesco Ricca, University of Calabria
Maryam Rostamigiv, University of Regina
Fabrizio Russo, Imperial College London
Zeynep G. Saribatur, TU Wien
Stefan Schlobach, Vrije Universiteit Amsterdam
Shakil Khan, University of Regina
Xiang Yin, Imperial College London
For organizational questions, please contact Xiang Yin and Stefan Borgwardt.