Audience: staff and students in UK HE
Purpose: clear up the points that cause the most confusion, so we can focus on assessment and learning.
Read next: Data, privacy, copyright & accessibility • Designing assessments in an AI world • Using AI in your assignments
Myth: Chat-based GenAI works like a search engine and “looks things up.”
Reality: Search engines index and rank web pages and return links; GenAI generates new text from patterns in training data and may not be connected to live sources. It can produce fluent but uncited or incorrect claims.
Do instead: Use GenAI to plan searches (keywords, synonyms, Boolean strings) and to clarify materials you’ve already found; use library databases/Google Scholar for evidence; in briefs, require verification and disclosure (see AI Use Statements; AI in Academic Research).
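For example (a hypothetical topic and output): asking a chat tool to “suggest Boolean search strings for research on retrieval practice and exam performance” might return something like ("retrieval practice" OR "test-enhanced learning") AND (exam* OR assessment) AND "higher education", which you then run and refine in a library database, not in the chat tool.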
Myth: GenAI “understands” like a human and can be trusted to reason reliably.
Reality: GenAI predicts the next token; it can simulate understanding and still contradict itself, miss context, or make reasoning errors. Fluency ≠ comprehension (the toy sketch after this myth shows what “predicting the next token” means).
Do instead: Treat GenAI as a drafting/explanation assistant; design assessments that foreground judgement, verification, and improvement (e.g., critique-and-revise an AI answer; submit a process pack showing prompts, AI output, student edits, and reflection).
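To make “predicts the next token” concrete, here is a deliberately toy sketch in Python. The model table below is hand-written and hypothetical; a real LLM learns billions of parameters from vast amounts of text. The generation loop has the same shape, though: score the possible next tokens, pick one, append it, repeat. Nothing in the loop checks whether the output is true.

```python
import random

# Toy "model": probability of each next word given only the previous word.
# Entirely hand-written for illustration; a real LLM learns such patterns
# from text rather than looking anything up.
model = {
    "the":    {"cat": 0.5, "dog": 0.3, "essay": 0.2},
    "cat":    {"sat": 0.7, "ran": 0.3},
    "dog":    {"ran": 0.6, "sat": 0.4},
    "essay":  {"argues": 1.0},
    "sat":    {"quietly.": 1.0},
    "ran":    {"quickly.": 1.0},
    "argues": {"persuasively.": 1.0},
}

def generate(start, max_steps=5):
    words = [start]
    for _ in range(max_steps):
        options = model.get(words[-1])
        if options is None:   # no pattern for this word: stop
            break
        tokens, probs = zip(*options.items())
        words.append(random.choices(tokens, weights=probs)[0])  # sample the next token
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly." (fluent, but pure pattern-matching)
```

The output reads fluently because the learned patterns are fluent; that is the entire mechanism, which is why fluency is no guarantee of sound reasoning.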
Myth: We can run student work through a detector and always tell if it’s AI.
Reality: Current detectors are unreliable; they can falsely flag genuine student work (especially by writers using English as an additional language) and miss work that AI has edited.
Do instead: Design tasks that collect process and disclosure and make AI use explicit (see Process-Driven Assessment; AI Use Statements).
Myth: Any AI involvement = cheating.
Reality: Many modules now allow AI for support (planning, outlining, improving clarity), provided students disclose what they did.
Do instead: Tell students, in the brief, which of these applies:
No AI
AI allowed with acknowledgement
AI-enabled/process-assessed (see Designing assessments in an AI world).
Myth: AI has access to every journal and database, right now.
Reality: AI can hallucinate sources, miss recent work, and make incorrect claims — especially in specialist subjects.
Do instead: Use AI to explain or plan reading, then verify in library databases or subject resources (see AI in Academic Research).
Myth: We can hand marking and feedback over to AI.
Reality: AI can help you phrase, structure, and vary feedback, but academic judgement, disciplinary nuance, and fairness remain human responsibilities.
Do instead: Use AI to draft feedback, edit it yourself, and show students how to turn feedback into an action plan (see Feedback with AI).
Myth: Students need ‘How to use X’ videos for every new AI tool.
Reality: Tools change too quickly; what lasts are prompting principles, assessment design, and habits of disclosure.
Do instead: Teach transferable capabilities (“any chat-style AI that can read text can do this”) rather than tool-by-tool instructions.
Myth: Students mainly use AI to cheat.
Reality: Many students use AI mainly for explanation, language support and getting started. If we don’t tell them how to acknowledge it, they guess.
Do instead: Teach transparent use, ask for process evidence, and point students to the templates (see For students → How to acknowledge GenAI).
Myth: Because there are privacy and bias issues, AI has no place in HE.
Reality: AI can support accessibility and inclusion (simplifying briefs, alternative explanations, drafting alt-text) if used safely.
Do instead: Follow UK GDPR and your institution’s data-protection guidance (see Data, privacy, copyright & accessibility), and tell students to anonymise placement, clinical, and school data before entering it into any AI tool.