Large Language Models (LLMs) have shown remarkable capabilities in question-answering systems, including those used in education. However, they often provide only a single, concise answer without any explanatory detail. In educational settings, this lack of transparency is problematic:
i. Quality of Answers: If an answer is incorrect, we cannot identify the flaw in the system's reasoning; if it is correct, we do not know its source or whether it has been appropriately verified.
ii. Complex Reasoning: Educational domains often involve rules, policies, and detailed logical steps, which can challenge purely LLM-based solutions.
A promising solution is Symbolic Reasoning, an emerging approach in Explainable AI (XAI). By using a symbolic engine, either standalone or integrated with an LLM, complex reasoning steps can be made explicit, improving both the accuracy and interpretability of educational question-answering systems.
Below are two sample queries that illustrate how an XAI-enhanced system is expected to provide clear and transparent explanations for student inquiries.
This semester, I scored 8 points on the final exam for the DSA course. However, I was absent for the lab exam. Can I still get a B in this course?
↳ No. Because you missed the lab exam, you received a score of 0 for lab work. According to Regulation #13 of X University, a student with 0 lab points cannot pass the course.
Dr. Tho earned his doctorate overseas. Is he allowed to teach the NLP course in the English-taught program?
↳ Yes. Dr. Tho holds a Ph.D. (meeting the requirement of at least a Master's-level qualification) and studied abroad, demonstrating English proficiency. According to Regulation #10 of X University, lecturers must have a minimum of a Master's degree and sufficient English skills to teach in the English-taught program. Therefore, Dr. Tho meets these criteria.
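The rule-based logic behind these two sample answers can be sketched as a small symbolic layer that returns both a verdict and the regulation that justifies it. The following Python snippet is a minimal illustration only; the function names and the encodings of Regulations #13 and #10 are hypothetical simplifications, not the actual rules of any university.

```python
# Minimal sketch of a symbolic rule layer for the two sample queries.
# The rule encodings below are hypothetical illustrations of the kind of
# explicit, explainable reasoning an XAI-enhanced QA system could produce.

def check_course_pass(final_score: float, lab_score: float) -> tuple[bool, str]:
    """Regulation #13 (hypothetical): 0 lab points means the course cannot be passed."""
    if lab_score == 0:
        return False, ("Regulation #13: a student with 0 lab points "
                       "cannot pass the course.")
    return True, "Lab requirement satisfied; the grade depends on the weighted average."

def check_teaching_eligibility(degree: str, studied_abroad: bool) -> tuple[bool, str]:
    """Regulation #10 (hypothetical): at least a Master's degree plus
    sufficient English skills are required for the English-taught program."""
    has_degree = degree in {"Master", "Ph.D."}
    has_english = studied_abroad  # study abroad taken as proof of proficiency
    if has_degree and has_english:
        return True, "Regulation #10: degree and English requirements are both met."
    return False, "Regulation #10: the degree or English requirement is not met."

# The two sample queries, answered with explicit explanations:
verdict, reason = check_course_pass(final_score=8, lab_score=0)
print(verdict, "-", reason)   # False - Regulation #13: ...
verdict, reason = check_teaching_eligibility(degree="Ph.D.", studied_abroad=True)
print(verdict, "-", reason)   # True - Regulation #10: ...
```

In a hybrid system, an LLM would extract the facts (scores, degrees) from the student's natural-language question, while a symbolic layer like this one applies the regulations and produces the traceable justification shown in the sample answers.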
This challenge aims to explore innovative and effective solutions for XAI in educational question-answering. We are particularly interested in systems that combine the power of LLMs with symbolic reasoning to handle complex, rule-based, and logic-driven questions.
Our objectives are:
Promote the development of interpretable and trustworthy QA systems for education
Encourage hybrid approaches that integrate symbolic reasoning with LLMs, without limiting participants to any specific methodology; creative and novel solutions are highly welcome
Showcase real-world use cases where explainability improves learning outcomes
Highlight the best solutions through publication and long-term visibility
The best solutions will be featured on the challenge website. Outstanding submissions will be selected for publication as regular papers in the workshop proceedings.
High school and university students, as well as researchers with an interest in XAI, are invited to participate.
Teams may consist of up to six (6) members.
Individual participants without a team may request to be assigned to one by the organizers.
Individuals and teams can register through this link: https://forms.gle/mJpQ6Bkfs9KGR8999
Registration: opens 02/03/2025; deadline 25/04/2025
Workshop & Dataset release: 13/04/2025
On-event Challenge: 14/04/2025 - 11/05/2025
API submission and testing:
Phase 1 result: 12/05/2025 - 13/05/2025
Model update period: 14/05/2025 - 15/05/2025
Phase 2 result: 16/05/2025 - 17/05/2025
Overall result: 18/05/2025
Public test day: 01/06/2025 (tentative)
Result announcement: 01/06/2025
Paper submission: TBD
The timeline may be adjusted depending on organizational needs. Participants registering after April 13th, 2025 should check their email within 1–2 days. If no confirmation is received, please contact the organizers to verify your registration.
Are you prepared to demonstrate your expertise in Explainable AI and compete alongside leading experts in the field? This competition offers a unique platform to contribute to the advancement of AI research while gaining prestigious recognition.
The competition will award the following prizes:
First Prize: Trophy, cash prize, and an opportunity to publish as a regular paper.
Second Prize: Trophy, cash prize, and an opportunity to publish as a regular paper.
Third Prize: Trophy, cash prize, and an opportunity to publish as a regular paper.
The top 3 teams will have the opportunity to publish their work in the proceedings of the International Workshop on Trustworthiness and Reliability in Neuro-symbolic AI (TRNS-AI 2025), held at the International Joint Conference on Neural Networks 2025 (IJCNN 2025). Additionally, the top 10 teams will have their papers published as challenge papers on the official challenge website.
All teams that successfully participate and submit their solutions will receive an official certificate issued by the workshop chairs.
All registered participants will gain access to specialized training sessions conducted throughout the competition, equipping them with valuable insights and practical experience in addressing challenges related to XAI.
For top-performing teams:
The teams with the most outstanding approaches, as evaluated by international experts, will be invited to present at the International Workshop on Trustworthiness and Reliability in Neuro-symbolic AI (TRNS-AI 2025), co-located with the International Joint Conference on Neural Networks (IJCNN 2025).
Selected high-quality solutions may be considered for publication in the workshop proceedings, providing participants with the opportunity to share their research with the global AI community.
Exceptional contributions may receive mentorship from experienced researchers and AI experts to further refine their methodologies.
Participants will have opportunities to connect with industry professionals and research institutions, fostering potential collaborations for future advancements.
Solutions demonstrating significant practical value may receive support for patent registration and commercialization, enabling participants to translate their innovations into real-world applications.
For any inquiries regarding the challenge, please contact us by email: ura.hcmut@gmail.com
Note: Once registration is finalized, the organizers will set up a communication platform (Discord) to facilitate discussions with the teams.
Prof. Quan Thanh Tho - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Prof. Nguyen Duc Anh - Department of IT and Economics, University of South Eastern Norway, Norway
Prof. Fabien Baldacci - Université de Bordeaux, France
Prof. Bui Hoai Thang - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Dr. Tran Thanh Tung - School of Computer Science and Engineering, Ho Chi Minh City International University, Vietnam
Nguyen Song Thien Long - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Vo Hoang Nhat Khang - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Nguyen Hoang Anh Thu - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Bui Cong Tuan - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Nguyen Quang Duc - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
We sincerely thank our sponsors for their generous support, with special appreciation to our industry partners and individual contributors who make our journey possible.
Pham Nhi Nguyen, alumnus of Cohort 96, Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam