Brief description of the project (purpose and methods)
This project examines how generative AI tools may affect university curricula by assessing the vulnerability of economics courses and assessments to generative AI substitution. As tools such as ChatGPT become widely accessible to students, many traditional assessment formats, including essays, short-answer assignments, and problem explanations, can increasingly be completed or assisted by AI systems. This raises important questions about how universities should adapt curricula and assessment design in AI-enabled learning environments.
The project develops an AI Vulnerability Index to evaluate how susceptible different courses and assessment formats are to generative AI assistance. The analysis focuses on undergraduate economics courses and draws on:
· Course catalogues and module syllabi
· Learning outcomes and assessment formats
· Course structures (e.g., problem sets, exams, projects)
· Skills embedded in the curriculum
Using data scraping and systematic analysis of course documentation, the project identifies which assessments and skills are most easily substituted or augmented by generative AI, and which remain more resistant to automation.
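The index construction described above could be sketched as a simple weighted score. The format names, susceptibility weights, and aggregation rule below are illustrative assumptions, not the project's actual methodology:

```python
# Hypothetical sketch of an AI Vulnerability Index for a single course.
# Each assessment format gets an assumed 0-1 susceptibility weight;
# the course score is the grade-share-weighted average of those weights.
# All names and numbers here are placeholders for illustration.

SUSCEPTIBILITY = {
    "essay": 0.9,              # assumed: easily drafted by generative AI
    "short_answer": 0.8,
    "problem_set": 0.6,
    "invigilated_exam": 0.2,   # assumed: supervised, low AI access
    "research_project": 0.3,
}

def vulnerability_index(assessment_mix: dict[str, float]) -> float:
    """Weight each format's susceptibility by its share of the course grade."""
    total = sum(assessment_mix.values())
    return sum(SUSCEPTIBILITY[f] * w for f, w in assessment_mix.items()) / total

# Example: a course graded 60% essay, 40% invigilated exam
score = vulnerability_index({"essay": 0.6, "invigilated_exam": 0.4})
print(round(score, 2))  # 0.9*0.6 + 0.2*0.4 = 0.62
```

A scheme like this makes the trade-off explicit: shifting grade weight from essays toward supervised or research-based components lowers a course's index.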
The project was funded by the King's Undergraduate Research Fellowship and conducted in collaboration with an undergraduate research assistant. Findings were showcased at the KURF Festival, and an academic journal article is currently in preparation.
Key findings
Preliminary findings suggest:
· Many commonly used assessment formats, such as essays, structured written responses, and conceptual explanations, are highly susceptible to generative AI assistance.
· Courses that rely heavily on routine analytical explanations or formula-based problem solving may also be partially automated by AI tools.
· Skills involving critical reasoning, interpretation of empirical evidence, contextual judgement, and original research design are less easily replaced by AI.
· There is often limited alignment between existing assessment formats and emerging AI-enabled learning environments.
Practical and policy implications
For educators and universities, the findings highlight the need to rethink assessment design and curriculum structures in response to generative AI. This may include designing assessments that emphasise interpretation, judgement, and applied reasoning, as well as integrating AI use transparently into learning activities.
At the policy level, the results suggest that governments and institutions should support universities in updating curricula and developing AI-resilient skills, ensuring that graduates are prepared for labour markets where AI tools are increasingly embedded in knowledge work.