DAO-XAI 2024
4th International Workshop on Data meets Applied Ontologies in Explainable AI
October 19-20, 2024
An ECAI 2024 Workshop
Welcome to the website of the International Workshop on Data meets Applied Ontologies in Explainable AI (DAO-XAI 2024), an event associated with the 27th European Conference on Artificial Intelligence (ECAI 2024).
Latest News
March 13:
Web site is online!
March 14:
Submission is open!
April 17:
Papers can be submitted until 30/05!
We will have Prof. Mehul Bhatt as invited speaker!
Aims and Scope
The introduction of Artificial Intelligence (AI) in domains that impact human life (agriculture, climate, forestry, health, etc.) has led to increased demand for Trustworthy AI. To reach this level of trustworthiness, we need to improve robustness and explainability in order to foster secure AI solutions.
Explainable Artificial Intelligence (XAI) focuses on developing new approaches for the explanation of black box models by achieving good explainability without sacrificing system performance. One typical approach is the extraction of local and global post-hoc explanations. Other approaches are based on hybrid or neuro-symbolic systems, advocating a tight integration between symbolic and non-symbolic knowledge, e.g., by combining symbolic and statistical methods of reasoning.
The construction of truly explainable systems is widely seen as one of the grand challenges facing AI today. However, there is no consensus regarding how to achieve this, with proposed techniques in the literature ranging from knowledge extraction and tensor logic to Markov logic networks and logical neural networks. Knowledge representation---in its many incarnations---is a key asset to enact hybrid systems, and it can pave the way towards the creation of transparent and human-centric explainable knowledge-enabled systems.
Building such systems also requires generating explanations that are human-understandable, e.g., by linking them to structured formal knowledge and by producing explanations that support common-sense reasoning and are expressed in natural language. This further requires putting `humans in the loop' to combine successful statistical learning approaches with human experience and contextual understanding, and to align AI with human values, ethical principles, and legal requirements that ensure privacy, security, and safety.
This workshop will feature contributions dedicated to the role played by ontologies in XAI, in particular with regard to building trustworthy and explainable human-centered AI systems and to explaining and refining the output of large language models (LLMs). The scope of this workshop is open to contributions by researchers, from both academia and industry, working in the multidisciplinary field of XAI.