Audience: staff running seminars, workshops, labs, studios or online sessions.
Goal: give you ready-to-run, tool-agnostic activities so students learn your subject and practise critical AI use, not just “play with ChatGPT”.
Pick an activity that matches your session goal.
Tell students: “Use any chat-style AI that can read text” (don’t name brands).
Run the activity (10–30 mins).
Debrief: that’s where the learning is.
Point students to For students → Using AI in your assignments if you want them to keep using it, or to How to acknowledge GenAI if it was assessed.
We’ve included low-/no-access versions in case students can’t log in.
Good for: any subject, any level
Time: 15–20 mins
You need: a question/task from your subject
Students ask a chat AI:
“Answer this question as if you were a 2nd-year UK HE student: [YOUR QUESTION].”
Students highlight: what’s correct, what’s vague, what’s missing.
Students improve the answer using your slides/readings.
Whole-class debrief: “What did AI get wrong and why?”
Low-/no-access: generate one AI answer beforehand, display it on screen, and have students annotate it in pairs.
Why it’s good: shows limits/hallucination and reinforces “AI ≠ source”.
Good for: writing-heavy modules, language clarity
Time: 15 mins
Student A writes an AI prompt for your task.
Student B runs it and looks at the answer.
Student B improves the prompt to make it clearer/more specific and runs it again.
Pair compares the two outputs and notes what changed.
Debrief: “The better the prompt (context + role + constraints), the better the output.”
Link to Core guidance → Myth busting / Prompting.
Good for: large cohorts, revision, pre-exam
Time: 10–15 mins
Students tell AI:
“Create 5 practice questions on [TOPIC] for a UK HE level [4/5/6] student. Mix MCQ and short answer.”
Students answer them without AI.
Students check against notes/readings.
Optionally: swap with a partner and try theirs.
Low-/no-access: generate one set with AI beforehand and post it in the VLE.
Why it’s good: AI helps with drill; staff keep standards.
Good for: accessibility, health/education/social care, science comms
Time: 15–20 mins
Students paste a paragraph from the reading.
Students ask AI:
“Rewrite this for a non-specialist audience / for a 1st-year / for a patient.”
Students compare the AI version with how they would explain it.
Discuss: what did AI oversimplify or get wrong?
Low-/no-access: provide the AI rewrite yourself; students critique it.
Why it’s good: shows AI can support inclusive teaching, but students still need disciplinary judgement.
Good for: modules using Process-Driven Assessment
Time: 20–30 mins
Give students a small version of the assessed task.
Students run the exact 4 steps: brief → AI output → human improvement → reflection.
Students submit or share the 4 artefacts.
You show what “good” looks like.
Why it’s good: students learn the process before the high-stakes assignment.
Link to Process-Driven Assessment.
Good for: social sciences, business, health, education
Time: 15–20 mins
Students ask AI to respond to a scenario.
In groups, they look for biases, missing viewpoints, unsafe suggestions.
They re-prompt AI to fix these.
Debrief: “Who is responsible — the tool or you?”
Low-/no-access: print or display the AI answer; students annotate.
For each activity above, you can set it as a discussion-forum task:
“Post your AI answer AND your critique.”
“Reply to one other student with improvements.”
“Post your process pack for the mini-task.”
Tell students to acknowledge AI even in the forum.
Always say: “Use any AI you have access to.”
Offer a pair option: one device per pair.
If no one can log in, use the low-/no-access version.
If the task needs images / placement data, tell students to anonymise and point them to Core guidance → Data, privacy, copyright & accessibility.
Remind students: “This is how you can use AI for your assignment → For students → Using AI in your assignments.”
Remind staff: “If this is assessed, add wording from AI Use Statements & Acknowledgement.”