Brief description of the project (purpose and methods)
This project explores how AI-assisted knowledge tools can help educators produce AI-enhanced teaching materials that remain grounded in verified academic sources, while encouraging students to engage with generative AI more responsibly and transparently.
Generative AI tools are increasingly used by educators and students to produce summaries, explanations, and learning resources. However, concerns about hallucinations, inaccurate citations, and unreliable information have created barriers to adoption in higher education.
To address this challenge, the project experiments with NotebookLM to develop teaching materials anchored in curated academic sources. NotebookLM allows educators to upload materials such as journal articles, lecture notes, and reports, and to generate AI-assisted summaries and explanations that remain tied to those uploaded sources.
The project examines how this approach can:
support educators in creating AI-assisted teaching materials with traceable academic sources
reduce the risks associated with hallucinations and misinformation
build student confidence in engaging with AI-generated learning resources
The work involves practical experimentation in higher education teaching contexts, combined with feedback from students and educators on the usability, reliability, and pedagogical value of AI-supported materials.
The project has been presented at the Cambridge Generative AI in Education Conference (2025) and discussed at seminars and workshops at King's College London, the University of Warwick, and Royal Holloway, University of London. The project is generously funded by KBS's Innovative Education Fund.
Key findings
Early findings suggest several benefits of this approach:
AI tools can be used more safely when outputs are restricted to verified academic sources rather than open internet content.
Educators can significantly reduce preparation time while maintaining academic rigour in teaching materials.
Students report greater trust and confidence in AI-assisted learning materials when sources are clearly visible and verifiable.
AI-supported materials can enhance engagement by providing interactive explanations, summaries, and study aids grounded in course readings.
Practical and policy implications
For educators and institutions, the project demonstrates how AI tools can be integrated into teaching in a way that strengthens rather than undermines academic integrity.
This suggests that AI adoption in education should focus not only on access to tools, but also on pedagogical frameworks that prioritise transparency and source verification.
At the policy level, the findings highlight the importance of supporting responsible AI infrastructure in education, including tools and systems that prioritise traceability, source transparency, and academic verification. Such approaches can help mitigate risks associated with generative AI while enabling educators and students to benefit from its productivity and learning potential.