XAI-KRKG@ECAI2025
Explainable AI, Knowledge Representation, and Knowledge Graphs
October 25 (Morning)
The integration of Explainable AI (XAI) with Knowledge Representation (KR) and Knowledge Graphs (KGs) has emerged as a critical field of study, addressing the growing need for AI systems that are transparent, explainable, interpretable, and trustworthy. Knowledge Representation offers methods for describing, organizing, encoding, and reasoning with domain-specific knowledge, providing a foundational layer for AI understanding. Knowledge Graphs, one of the seminal outcomes of KR, further enable the structuring of interconnected concepts and relationships, forming an intuitive framework for describing complex domains. KGs have proven to be a powerful tool for improving results across a wide range of tasks, including question answering, recommendation, medical decision support, fact-checking, semantic search, and image classification. Together, KR and KGs naturally complement XAI techniques, empowering AI models to produce meaningful explanations that align with real-world contexts and user expectations.
As AI systems become increasingly embedded in critical domains such as healthcare, finance, and law, the need for interpretable models that offer insights into their reasoning processes has never been more urgent. By combining XAI with KR and KGs, researchers and practitioners can develop systems that bridge the gap between technical outputs and human comprehension. This integration not only enhances the clarity and relevance of AI-generated explanations but also supports the development of fairer, more accountable systems by incorporating domain knowledge and logic into the reasoning process.
This workshop seeks to explore the rich opportunities and challenges at the intersection of XAI, KR, and KGs. Key themes include leveraging structured knowledge to enhance explainability and interpretability; using XAI methods to refine and validate KR and KG models and to increase the trustworthiness of machine learning models; and applying these combined approaches to real-world problems. We invite contributions on theoretical advances, innovative tools, and case studies that demonstrate how knowledge-driven AI can deliver explanations that are transparent, domain-aware, and user-centric. By fostering collaboration among researchers, industry practitioners, and domain experts, this workshop aims to drive forward the development of ethical and impactful AI systems that align with human values and societal needs.