Knowledge Graphs for Responsible AI in conjunction with ESWC25  





Responsible AI is built upon a set of principles that prioritize fairness, transparency, accountability, and inclusivity in AI development and deployment. As AI systems become increasingly sophisticated, not least with the explosion of Generative AI, there is a growing need to address the ethical considerations and potential societal impacts of their use. Knowledge Graphs (KGs), as structured representations of information, can enhance generative AI performance by providing context, explaining outputs, and reducing biases, thereby offering a powerful framework for addressing the challenges of Responsible AI. By leveraging semantic relationships and contextual understanding, Knowledge Graphs facilitate transparent decision-making processes, enabling stakeholders to trace and interpret the reasoning behind AI-driven outcomes. Moreover, they provide a means to capture and manage diverse knowledge sources, supporting the development of fair and unbiased AI models.

The workshop investigates the role of Knowledge Graphs in promoting Responsible AI principles and creates a cooperative space for researchers, practitioners, and policymakers to exchange insights, share ideas, and deepen their understanding of how KGs can contribute to Responsible AI solutions.

This is the 2nd edition of the KG-STAR Workshop. The first edition was successfully held in conjunction with CIKM 2024, where 40 participants engaged with keynotes, invited talks, and author presentations spanning both academia and industry.

Important Dates

Workshop date: June 1, 2025

Submission deadline: March 6, 2025

Notifications: April 3, 2025

Camera-ready version: April 17, 2025

Workshop Theme and Topics

We invite submissions of original research, case studies, and position papers on topics related to Knowledge Graphs and their applications in advancing Responsible AI. The workshop explores the intersection of Knowledge Graphs and ethical considerations in AI development. Submissions may include, but are not limited to, the following topics:


Knowledge Graphs for Bias Mitigation:

● Techniques and methodologies for using Knowledge Graphs to identify and mitigate biases in AI models.

● Case studies demonstrating the successful application of Knowledge Graphs in addressing bias challenges.

Interpretability and Explainability:

● Approaches to enhancing the interpretability and explainability of black-box AI models through the integration of Knowledge Graphs.

● Evaluating the effectiveness of Knowledge Graphs in making AI decision-making processes more transparent.


Privacy-Preserving Knowledge Graphs:

● Methods for constructing Knowledge Graphs that prioritize privacy and comply with data protection regulations.

● Applications of Knowledge Graphs in privacy-preserving AI systems.

Fairness in AI with Knowledge Graphs:

● How Knowledge Graphs contribute to ensuring fairness in AI applications.

● Techniques for using Knowledge Graphs and their embeddings to identify and rectify unfair biases in AI models.


Ethical Considerations in Knowledge Graph Construction:

● Ethical challenges in the creation and maintenance of Knowledge Graphs.

● Best practices for ensuring responsible and ethical Knowledge Graph development.

● Real-world applications of Knowledge Graphs in Responsible AI.


Integration of Large Language Models (LLMs) and Knowledge Graphs (KGs):

● Enhancing LLMs’ accuracy and consistency, reducing hallucinations and harmful content generation, and supporting fake news detection, fact checking, etc., with knowledge-grounded techniques, e.g., Graph RAG (graph-based retrieval-augmented generation) and KG RAG.

● Enhancing the interoperability of KG downstream tasks through LLMs’ natural language interfaces, transferability, and generalization capacity, e.g., GNN (graph neural network)-LLM alignment.
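To make the Graph RAG idea above concrete, the following is a minimal, self-contained sketch of graph-based retrieval-augmented generation: facts relevant to a question are retrieved from a toy knowledge graph of (subject, predicate, object) triples and prepended to the prompt so the LLM's answer is grounded in them. The triples, entity names, matching heuristic, and prompt format are all illustrative assumptions, not taken from any specific system.

```python
# Illustrative Graph RAG sketch: ground an LLM prompt in KG triples.
# The knowledge graph and entity-matching heuristic are toy examples.

TRIPLES = [
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "born_in", "Warsaw"),
    ("Nobel Prize in Physics", "awarded_by", "Royal Swedish Academy of Sciences"),
]

def retrieve_facts(question, triples=TRIPLES):
    """Return triples whose subject or object is mentioned in the question."""
    q = question.lower()
    return [t for t in triples if t[0].lower() in q or t[2].lower() in q]

def build_grounded_prompt(question):
    """Prepend the retrieved KG facts as context for the LLM."""
    facts = retrieve_facts(question)
    context = "\n".join(f"{s} {p} {o}" for s, p, o in facts)
    return (f"Facts:\n{context}\n\n"
            f"Question: {question}\n"
            f"Answer using only the facts above.")

prompt = build_grounded_prompt("Where was Marie Curie born?")
```

In a real system the string-matching retrieval would be replaced by entity linking plus subgraph retrieval (e.g., SPARQL over a triple store or graph-embedding similarity), and the prompt would be sent to an LLM; the point of the sketch is only the grounding step that Graph RAG and KG RAG share.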