KR 2025 Workshop on LLMs and KRR for Trustworthy AI
The emergence of large language models (LLMs) has created significant opportunities for developing scalable and generalisable AI applications with relative ease. Compared to knowledge representation and reasoning (KRR) methods, LLMs demonstrate a remarkable capability for encoding linguistic knowledge, enabling them to generate human-like text and generalise across diverse tasks with minimal domain-specific training. However, LLMs’ reliance on statistical patterns rather than explicit reasoning mechanisms raises concerns about factual consistency and logical coherence, as well as vulnerability to hallucinations, bias and misalignment with human values. This workshop focuses on an emerging research paradigm: the integration of LLMs with KRR techniques to enhance transparency, verifiability and robustness in AI systems. We explore approaches that incorporate structured knowledge (ontologies, knowledge graphs, symbolic logic, etc.), neuro-symbolic methods, formal reasoning frameworks and explainability techniques to improve the trustworthiness of LLM-driven decision-making.
The workshop will feature invited talks from leading experts, research paper presentations, and interactive discussions on bridging probabilistic learning with symbolic reasoning for trustworthy AI. By bringing together researchers from the KRR and deep learning communities, this workshop aims to foster new collaborations and technical insights to develop AI systems that are both powerful and trustworthy.
Topics of interest include but are not limited to:
Knowledge-grounded language models
Hybrid neuro-symbolic architectures
Reasoning-aware prompt engineering
Logical consistency checks in LLM outputs
Uncertainty and automated verification
Causality and reasoning
Explainability and controllability
Commonsense reasoning integrating LLMs and KRR
Reinforcement learning for ensuring safety and trustworthiness
Alignment and preference-guided LLMs
Multi-agent AI frameworks
Benchmarks, datasets and quantitative evaluation metrics
Evaluation and user studies in real-world applications
Invited speakers: TBD
Organisers:
Professor Maurice Pagnucco, University of New South Wales, Australia
Associate Professor Yang Song, University of New South Wales, Australia
Program committee:
Professor Tony Cohn, University of Leeds, UK
Dr Mingming Gong, University of Melbourne, Australia
Professor Gerhard Lakemeyer, RWTH Aachen, Germany
Professor Fangzhen Lin, HKUST, China
Professor Tim Miller, University of Queensland, Australia
Dr Nina Narodytska, VMware Research, USA
Associate Professor Abhaya Nayak, Macquarie University, Australia
Professor Ken Satoh, National Institute of Informatics, Japan
Professor Michael Thielscher, University of New South Wales, Australia
Professor Guy Van den Broeck, UCLA, USA
Paper submission: August 9, 2025 (extended from August 4, 2025)
Paper notification: August 31, 2025
Workshop date and time: half-day during November 11-13, 2025 (exact date and time TBD)
Contributions may be regular papers (up to 9 pages) or short/position papers (up to 5 pages), with page limits inclusive of all content. Submissions should follow the KR 2025 formatting guidelines and be submitted through the submission page linked below. Each submission will be reviewed by at least two program committee members. We also welcome submissions that have recently been accepted at top AI conferences. At least one author of each accepted paper will be required to attend the workshop to present the contribution.
Submission link: https://openreview.net/group?id=kr.org/KR/2025/Workshop/LMKR-TrustAI
Contact yang.song1@unsw.edu.au for more information.