Brief description of the project (purpose and methods)
This research examines how students interact with generative AI systems and investigates the risks that AI hallucinations and misinformation pose to learning and knowledge formation in higher education.
Generative AI tools such as ChatGPT can produce responses that appear fluent and authoritative but may contain fabricated citations, incorrect claims, or misleading reasoning. These outputs, often referred to as AI hallucinations, raise important questions about students’ ability to critically evaluate machine-generated knowledge.
Across two related studies, we examine (1) students’ ability to detect hallucinations in AI-generated responses and (2) how exposure to AI-generated misinformation can influence students’ beliefs and confidence levels.
The research uses experimental tasks in which students evaluate AI-generated answers containing both accurate information and hallucinated content, combined with surveys capturing confidence levels, AI usage patterns, and reasoning processes.
Both working papers are under revision at leading journals.
Dang, C. and Nguyen, A. (2024). Distinguishing Fact from Fiction: Student Traits, Attitudes, and AI Hallucination Detection in Business School Assessment. https://arxiv.org/abs/2506.00050
Dang, C. and Nguyen, A. (2025). Student Attitudes and Skills in a World of AI Hallucinations. https://ceur-ws.org/Vol-4138/comm1.pdf
The research has been presented at several leading international venues, including the Cambridge Generative AI in Education Conference (2024, 2025), the American Economic Association CTREE Conference (Atlanta, 2024), the Learning, Teaching & Student Experience Conference (Nottingham, 2025), and a workshop on Generative AI and Education at the European Conference on Artificial Intelligence (Bologna, 2025), as well as various seminars and national conferences. A media piece on this work was published in Innovation News Network (2024, link). The project is funded by KBS’s Innovative Education Fund and KCL’s College Teaching Fund.
Key findings
The findings highlight several emerging risks:
Students often struggle to detect hallucinations in AI-generated responses, particularly fabricated citations or incorrect claims presented in authoritative language.
Frequent AI users may express high confidence in their ability to evaluate AI outputs, yet this confidence is not matched by greater detection accuracy.
Exposure to AI-generated misinformation can increase students’ belief in incorrect claims and strengthen their confidence in those beliefs.
Students often rely on surface-level cues such as writing style or structure rather than systematically verifying information.
Practical and policy implications
The results suggest that the key educational challenge posed by generative AI is not only academic integrity but also students’ ability to critically evaluate machine-generated information.
For educators and institutions, this highlights the need to embed critical AI literacy within curricula, including training students to verify AI outputs, recognise hallucinations, and understand the limitations of generative models.
At a policy level, the findings suggest that AI in education policies should prioritise AI literacy, educator training, and guidance on responsible AI use, ensuring that students develop the critical skills required to navigate AI-enabled information environments.