Special Issue

The Role of Ontologies and Knowledge in Explainable AI

Aims and Scope

Explainable AI (XAI) has been identified as a key factor in developing trustworthy AI systems. The reasons for equipping intelligent systems with explanation capabilities are not limited to user rights and acceptance. Explainability is also needed for designers and developers to enhance system robustness and enable diagnostics to prevent bias, unfairness, and discrimination, as well as to increase all users' trust in why and how decisions are made.

The interpretability of AI systems has been studied since the mid-1980s, but only recently has it become an active research focus in the computer science community, driven by advances in big data and by data protection regulations affecting the development of AI systems, such as the GDPR. For example, according to the GDPR, citizens have the legal right to an explanation of decisions made by algorithms that may affect them (see, e.g., Article 22). This policy highlights the pressing importance of transparency and interpretability in algorithm design.

XAI focuses on developing new approaches to explaining black-box models, aiming to achieve good explainability without sacrificing system performance. One typical approach is the extraction of local and global post-hoc explanations. Other approaches are based on hybrid or neuro-symbolic systems, which advocate a tight integration of symbolic and non-symbolic knowledge, e.g., by combining symbolic and statistical methods of reasoning.

The construction of hybrid systems is widely seen as one of the grand challenges facing AI today. However, there is no consensus on how to achieve this, with techniques proposed in the literature ranging from knowledge extraction and tensor logic to inductive logic programming and other approaches. Knowledge representation, in its many incarnations, is a key asset for building hybrid systems, and it can pave the way towards the creation of transparent and human-understandable intelligent systems.

This special issue will feature contributions dedicated to the role played by knowledge bases, ontologies, and knowledge graphs in Explainable Artificial Intelligence (XAI), in particular with regard to building trustworthy and explainable decision support systems. Knowledge representation plays a key role in XAI: linking explanations to structured knowledge, for instance in the form of ontologies, brings multiple advantages. It not only enriches explanations (or the elements therein) with semantic information, thus facilitating evaluation and effective knowledge transmission to users, but also creates the potential to customise the levels of specificity and generality of explanations to specific user profiles or audiences. However, linking explanations, structured knowledge, and sub-symbolic/statistical approaches raises a multitude of technical challenges from the reasoning perspective, both in terms of scalability and in terms of incorporating non-classical reasoning approaches, such as defeasibility, methods from argumentation, or counterfactuals, to name just a few.

This special issue is open to contributions by researchers, from both academia and industry, working in the multidisciplinary field of XAI.

Topics

  • Cognitive computational systems integrating machine learning and automated reasoning

  • Knowledge representation and reasoning in machine learning and deep learning

  • Knowledge extraction and distillation from neural and statistical learning models

  • Representation and refinement of symbolic knowledge by artificial neural networks

  • Explanation formats exploiting domain knowledge

  • Visual exploratory tools for semantic explanations

  • Knowledge representation for human-centric explanations

  • Usability and acceptance of knowledge-enhanced semantic explanations

  • Evaluation of transparency and interpretability of AI systems

  • Applications of ontologies for explainability and trustworthiness in specific domains

  • Factual and counterfactual explanations

  • Causal thinking, reasoning and modeling

  • Cognitive science and XAI

  • Open source software for XAI

  • XAI applications in finance, medical and health sciences, etc.

Submission

Submissions shall be made through the Semantic Web Journal website. Prospective authors should carefully read the submission guidelines.

We welcome four main types of submissions: (i) full research papers, (ii) reports on tools and systems, (iii) application reports, and (iv) survey articles. While there is no upper limit, paper length must be justified by content.

Note that you need to request an account on the website in order to submit a paper. Please indicate in the cover letter that the submission is for the "The Role of Ontologies and Knowledge in Explainable AI" special issue. All manuscripts will be reviewed based on the SWJ open and transparent review policy and will be made available online during the review process.

Also note that the Semantic Web Journal is open access.

Submissions must also comply with the journal’s Open Science Data requirements.

Important Dates

  • Paper submission: February 15, 2022 (papers submitted earlier will be reviewed upon receipt)

  • Acceptance/rejection notification (tentative): June 30, 2022

  • Publication (estimated): October 2022

Editors

Roberto Confalonieri

Free University of Bozen-Bolzano, Faculty of Computer Science, Italy

Oliver Kutz

Free University of Bozen-Bolzano, Faculty of Computer Science, Italy

Diego Calvanese

Department of Computing Science, Umeå University, Sweden

Free University of Bozen-Bolzano, Faculty of Computer Science, Italy

José M. Alonso

Research Centre in Intelligent Technologies (CiTIUS), University of Santiago de Compostela (USC), Spain

Shang-Ming Zhou

University of Plymouth, Faculty of Health, UK

Contact

The guest editors can be reached at ontologies-knowledge-in-xai-swj [at] googlegroups [dot] com