Large Language Models (LLMs) have shown remarkable capabilities in question-answering systems, including those used in education. However, they often provide only a single, concise answer without any explanatory detail. In educational settings, this lack of transparency is problematic:
i. Quality of Answers: If an answer is incorrect, we cannot identify the flaw in reasoning; if it is correct, we do not know the source or whether it has been appropriately verified.
ii. Complex Reasoning: Educational domains often involve rules, policies, and detailed logical steps, which can challenge purely LLM-based solutions.
A promising solution is Symbolic Reasoning, a trending approach in Explainable AI (XAI). By using a symbolic engine, either standalone or integrated with an LLM, complex reasoning steps can be made explicit, improving both the accuracy and interpretability of educational question-answering systems.
Below are two sample queries that illustrate how an XAI-enhanced system is expected to provide clear and transparent explanations for student inquiries.
This semester, I scored 8 points on the final exam for the DSA course. However, I was absent for the lab exam. Can I still get a B in this course?
↳ No. Because you missed the lab exam, you received a score of 0 for lab work. According to Regulation #13 of X University, a student with 0 lab points cannot pass the course.
Dr. Tho earned his doctorate overseas. Is he allowed to teach the NLP course in the English-taught program?
↳ Yes. Dr. Tho holds a Ph.D. (meeting the requirement of at least a Master's-level qualification) and studied abroad, demonstrating English proficiency. According to Regulation #10 of X University, lecturers must have a minimum of a Master's degree and sufficient English skills to teach in the English-taught program. Therefore, Dr. Tho meets these criteria.
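The two sample answers above can be sketched as explicit symbolic checks. The following is a minimal, illustrative encoding in Python; the function names, rule encodings, and thresholds are assumptions for demonstration only, not the actual regulations of X University.

```python
# Minimal sketch of symbolic rule checks mirroring the two sample queries.
# Rule encodings are hypothetical illustrations, not real university policy.

def check_course_pass(final_score: float, lab_score: float):
    """Illustrative encoding of Regulation #13: a student with 0 lab
    points cannot pass the course, regardless of the final exam score."""
    if lab_score == 0:
        return False, ("Lab score is 0; under Regulation #13, a student "
                       "with 0 lab points cannot pass the course.")
    return True, "Lab requirement satisfied; the grade depends on overall scores."

def check_teaching_eligibility(degree: str, studied_abroad: bool):
    """Illustrative encoding of Regulation #10: lecturers need at least a
    Master's degree and sufficient English skills for English-taught programs.
    Studying abroad is used here as a proxy for English proficiency, as in
    the sample answer."""
    has_degree = degree in ("Master", "PhD")
    has_english = studied_abroad
    if has_degree and has_english:
        return True, "Meets the degree and English requirements of Regulation #10."
    return False, "Does not satisfy Regulation #10."

# First query: 8 points on the final exam, but absent from the lab exam.
ok, explanation = check_course_pass(final_score=8, lab_score=0)
print(ok, "-", explanation)

# Second query: Dr. Tho holds a Ph.D. earned overseas.
ok, explanation = check_teaching_eligibility(degree="PhD", studied_abroad=True)
print(ok, "-", explanation)
```

Because each rule is an explicit, inspectable predicate, the system can return not only the verdict but also the regulation that produced it, which is exactly the transparency the sample answers demonstrate.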
This challenge aims to explore innovative and effective solutions for XAI in educational question-answering. We are particularly interested in systems that combine the power of LLMs with symbolic reasoning to handle complex, rule-based, and logic-driven questions.
Our objectives are:
Promote the development of interpretable and trustworthy QA systems for education
Encourage hybrid approaches that integrate symbolic reasoning with LLMs, without limiting participants to any specific methodology; creative and novel solutions are highly welcome
Showcase real-world use cases where explainability improves learning outcomes
Highlight the best solutions through publication and long-term visibility
Outstanding submissions may receive the opportunity to develop and submit a regular paper, subject to the review process of the target venue.
High school and university students, as well as researchers with an interest in XAI, are invited to participate.
Teams may consist of up to six (6) members.
Individual participants without a team may request to be assigned to one by the organizers.
Individuals and teams can register through the following link: https://forms.gle/CFcwjnRfFVG32vTT7
Registration: open from 02/03/2025; deadline: 25/04/2025
Workshop & Dataset release: 13/04/2025
On-event Challenge: 14/04/2025 - 11/05/2025
API submission and testing:
Phase 1 result: 12/05/2025 - 13/05/2025
Model update period: 14/05/2025 - 15/05/2025
Phase 2 result: 16/05/2025 - 17/05/2025
Overall result: 18/05/2025
Public test day: 01/06/2025 (Tentatively)
Result announcement: 01/06/2025
Paper submission: TBD
The timeline may be adjusted depending on organizational needs. Participants registering after April 13th, 2025 should check their email within 1–2 days. If no confirmation is received, please contact the organizers to verify your registration.
To provide a formal academic record of the competition, the challenge regulations, benchmark design, participant solutions, and analytical findings were consolidated into a regular research paper presented at ITADATA 2025. The paper highlights how the competition bridges lightweight LLMs and symbolic reasoning for transparent educational QA systems.
Are you ready to showcase your expertise in Explainable AI and compete alongside leading researchers in the field? This challenge offers a valuable opportunity to contribute to trustworthy AI research while gaining academic visibility and recognition.
The competition will award the following prizes:
First Prize: $300 cash prize
Second Prize: $150 cash prize
Third Prize: $100 cash prize
Fourth Prize: $50 cash prize
Fifth Prize: $50 cash prize
The top 3 teams will be invited to present their solutions at the International Workshop on Trustworthiness and Reliability in Neurosymbolic AI, co-located with the International Joint Conference on Neural Networks 2025 (IJCNN25). In addition, outstanding teams may receive the opportunity to develop and submit a regular paper to ITADATA 2025, subject to the conference review process.
All teams that successfully participate and submit their solutions will receive an official certificate issued by the workshop chairs.
For any inquiries regarding the challenge, please contact us by email: ura.hcmut@gmail.com
Note: Once registration is finalized, the organizers will set up a communication platform (Discord) to facilitate discussions with the teams.
Prof. Quan Thanh Tho - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Prof. Nguyen Duc Anh - Department of IT and Economics, University of South Eastern Norway, Norway
Prof. Fabien Baldacci - Université de Bordeaux, France
Prof. Bui Hoai Thang - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Dr. Tran Thanh Tung - School of Computer Science and Engineering, Ho Chi Minh City International University, Vietnam
Prof. Nguyen Le Minh - Japan Advanced Institute of Science and Technology (JAIST), Japan
Nguyen Song Thien Long - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Vo Hoang Nhat Khang - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Nguyen Hoang Anh Thu - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Bui Cong Tuan - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
Nguyen Quang Duc - Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam
We sincerely thank our sponsors for their generous support, with special appreciation to our industry partners and individual contributors who make our journey possible.
Pham Nhi Nguyen, alumnus of Cohort 96, Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Vietnam