Audience: staff who want a practical, defensible way to let students use GenAI without losing sight of their learning.
Why it matters: if AI can help students draft, we should assess the process they followed — not just the polished end product.
Instead of asking “did the student write every word alone?”, we ask:
What was the task? (brief, question, scenario)
What AI support did they use? (prompts, screenshots, exports)
What did they do with it? (edits, additions, sources, discipline-specific work)
What did they learn? (short reflection)
Students submit these as assessment artefacts alongside (or instead of) a single final document.
This is the approach we want The AI Forge to be known for.
Ask students to submit all four components (or clearly say which ones you want):
Task / context
The assignment brief, question or scenario they responded to
If they rephrased it using AI, include the rephrased version
AI interaction(s)
Prompts they used
AI outputs they received
Screenshots or exported chat history
(Tool-agnostic: “any chat-based AI able to draft text/ideas”)
Human improvement
The version the student actually wants marked
Marked-up/annotated version showing changes to the AI output
Extra sources / data / discipline-specific reasoning the AI didn’t provide
Reflection (150–200 words)
What did the AI do well / badly?
What did you change and why?
How did you check accuracy / sources / ethical issues?
If a tool changes how you export, only the screenshots/export step needs updating; the pedagogy stays the same.
Coursework essays and reports
Case studies / business plans
Lab reports with planning notes
Design / media briefs (students can show ideation prompts)
Dissertation / project planning (not raw participant data!)
It’s especially useful when you can’t or don’t want to ban AI, but still need to see the student’s thinking.
AI-enabled, process-driven task. For this assignment, you may use Generative AI to help you plan, outline and improve your work. You must submit:
The prompts or instructions you gave to the AI tool(s);
At least one AI output before you edited it;
Your edited/improved version;
A short reflection (150–200 words) explaining what the AI missed and what you did about it.
Your mark will focus on your understanding, judgement, alignment to the task and use of sources, not on the AI’s unedited text.
You can paste this into your VLE as-is.
Step 1 – Check disclosure (pass/fail)
Did they actually include prompts/output/reflection?
If not → return the submission or apply your late/missing-component rule.
Step 2 – Mark human contribution
Use these rubric ideas:
Judgement & improvement
Student identified problems in AI output (accuracy, relevance, tone, missing scholarship) and fixed them appropriately.
Alignment to task
Final work matches the original brief / learning outcomes, not what the AI suggested.
Transparency & integrity
Student disclosed AI use clearly and followed task rules (same as your “AI Use Statements”).
Disciplinary quality
Citations, method, analysis, presentation (as usual).
Step 3 – Moderate
Sample the process packs
Compare how colleagues are rewarding “improvement over AI”
Adjust if students are over-relying on AI for structure / language
Link to: For students → Using AI in your assignments and How to acknowledge GenAI.
Give them this quick student version (paste into the assignment page):
How to submit your AI work for this task
Keep your AI chat open while you work
Copy your main prompts into a document
Save one AI output before you edited it
Write 150–200 words: what did you change and why?
Upload all of this with your final version
Don’t upload confidential/placement/patient data — anonymise it first
In-class / time-limited
Students do the AI step in class (or you demo it)
They submit the reflection later
Group work
One AI interaction per group
Individual reflections (“what I did with the AI output”)
Low-/no-access
You provide a generic AI output and students annotate + improve that
They still write the reflection → same learning, no tool
If the task involves real people, placements or clinical settings → tell students to create a fictionalised version for AI.
Point to Core guidance → Data, privacy, copyright & accessibility (UK HE) for the red/amber/green data list.
If your university has an approved/enterprise AI, recommend that first.
Recently, David and Nigel, in collaboration with Dom Henri (University of Hull), wrote a blog article for Advance HE (Read the blog here) discussing how to embed GenAI in the writing and assessment process. To demonstrate the approach, GenAI was used to support the drafting of the article itself. You can read the process and prompts used below.