XLoKR 2022

The Third Workshop on Explainable Logic-Based Knowledge Representation

Sponsored by the Transregional Collaborative Research Centre 248 “Foundations of Perspicuous Software Systems” (CPEC – TRR 248) and the Argumentation-based Deep Interactive EXplanations project under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 101020934).

Program

The complete FLoC program can be found on the FLoC website.

News

  • January 3: Website and Submission Portal online

  • April 30: We are very happy that Fabio Cozman will give an invited talk at XLoKR. He is Professor at the University of São Paulo (USP) and Director of the Center for Artificial Intelligence at USP. His work is at the intersection of Knowledge Representation and Machine Learning and includes contributions to the understanding of Probabilistic Graphical Models, Probabilistic Logic Programming and Probabilistic Classifiers.

  • May 4: We are delighted that Esra Erdem will give an invited talk about Explanations in Answer Set Programming. She is Professor at Sabancı University and works on the theory and practice of Knowledge Representation methods. Her work has found applications in various domains, including Bioinformatics, Logistics, Robotics, and Economics.

About

XLoKR 2022 is the third workshop in the XLoKR series. It is co-located with the 19th International Conference on Principles of Knowledge Representation and Reasoning, which will take place in Haifa, Israel. Previous editions took place in

  • 2020, co-located with the 17th International Conference on Principles of Knowledge Representation and Reasoning, and

  • 2021, co-located with the 18th International Conference on Principles of Knowledge Representation and Reasoning.


Motivation and Topic

Embedded or cyber-physical systems that interact autonomously with the real world, or with users they are supposed to support, must continuously make decisions based on sensor data, user input, knowledge acquired at runtime, and knowledge provided at design time. To make the behavior of such systems comprehensible, they need to be able to explain their decisions to the user or, after something has gone wrong, to an accident investigator.


While systems that use Machine Learning (ML) to interpret sensor data are very fast and usually quite accurate, their decisions are notoriously hard to explain, though huge efforts are currently being made to overcome this problem. In contrast, decisions made by reasoning about symbolically represented knowledge are in principle easy to explain. For example, if the knowledge is represented in (some fragment of) first-order logic, and a decision is made based on the result of a first-order reasoning process, then one can in principle use a formal proof in an appropriate calculus to explain a positive reasoning result, and a counter-model to explain a negative one. In practice, however, things are not so easy even in the symbolic KR setting. For example, proofs and counter-models may be very large, and it may thus be hard to comprehend why they demonstrate a positive or negative reasoning result, in particular for users who are not experts in logic. Thus, to leverage explainability as an advantage of symbolic KR over ML-based approaches, one needs to ensure that explanations can really be given in a way that is comprehensible to different classes of users (from knowledge engineers to laypersons).
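
To make the proof/counter-model idea concrete, the following is a minimal sketch (not part of the workshop material) using the Python API of the Z3 solver; the propositional “Tweety” knowledge base and all names in it are invented purely for illustration. An unsatisfiable core plays the role of a proof-like explanation for a positive reasoning result, and a model plays the role of a counter-model for a negative one.

    # Minimal sketch, assuming the Z3 SMT solver (pip install z3-solver).
    # The "Tweety" knowledge base below is a hypothetical toy example.
    from z3 import Bool, Implies, Not, Solver, sat, unsat

    # Toy KB standing in for a logical theory:
    # Bird(tweety), and Bird(tweety) -> Flies(tweety).
    bird = Bool('Bird_tweety')
    flies = Bool('Flies_tweety')
    swims = Bool('Swims_tweety')

    # Positive result: the KB entails Flies(tweety) iff
    # KB + not Flies(tweety) is unsatisfiable; the unsat core names the
    # axioms used and plays the role of a proof-like explanation.
    s = Solver()
    s.set(unsat_core=True)
    s.assert_and_track(bird, 'ax_bird')
    s.assert_and_track(Implies(bird, flies), 'ax_birds_fly')
    s.assert_and_track(Not(flies), 'negated_goal')
    if s.check() == unsat:
        print('Entailed; explanation (unsat core):', s.unsat_core())

    # Negative result: the KB does not entail Swims(tweety); the
    # satisfying assignment is a counter-model witnessing this.
    s2 = Solver()
    s2.add(bird, Implies(bird, flies), Not(swims))
    if s2.check() == sat:
        print('Not entailed; counter-model:', s2.model())

Even in this tiny example, the explanation artifacts are solver-level objects (cores, models); turning them into explanations that non-experts can follow is precisely the challenge described above.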


The problem of explaining why a consequence does or does not follow from a given set of axioms has been considered in full first-order theorem proving for at least 40 years, though usually with mathematicians as the intended users. In knowledge representation and reasoning, efforts in this direction are more recent and have usually been restricted to sub-areas of KR such as AI planning and description logics. The purpose of this workshop is to bring together researchers from different sub-areas of KR and automated deduction who work on explainability in their respective fields, with the goal of exchanging experiences and approaches. A non-exhaustive list of areas to be covered by the workshop is the following:

  • AI planning

  • Answer set programming

  • Argumentation frameworks

  • Automated reasoning

  • Description logics

  • Non-monotonic reasoning

  • Probabilistic representation and reasoning


Important Dates

  • Paper Submission: May 10

  • Notification: June 15

  • Camera-ready papers: June 30

  • Workshop: July 31, 2022

Submission Guidelines

We invite extended abstracts of 2–5 pages (excluding references) on topics related to explanation in logic-based KR. Papers should be formatted in Springer LNCS style and must be submitted via EasyChair:

https://easychair.org/conferences/?conf=xlokr2022


Since the workshop will only have informal proceedings and its main purpose is the exchange of results, we welcome not only papers presenting unpublished results but also previously published work that falls within the scope of the workshop.



Organization


Organization Committee


Program Committee

  • Franz Baader

  • Sander Beckers

  • Bart Bogaerts

  • Annemarie Borg

  • Stefan Borgwardt

  • Gerhard Brewka

  • Sarah Alice Gaggl

  • Joerg Hoffmann

  • Ruth Hoffmann

  • Thomas Lukasiewicz

  • Pierre Marquis

  • Cristian Molinaro

  • Rafael Peñaloza

  • Nico Potyka

  • Antonio Rago

  • Silja Renooij

  • Francesco Ricca

  • Zeynep G. Saribatur

  • Stefan Schlobach

  • Mohan Sridharan

  • Matthias Thimm

  • Francesca Toni

  • Trung-Kien Tran

  • Markus Ulbricht


Contact

For organizational questions, please contact Nico Potyka.