Pre-prompting is the practice of igniting student thinking before they use AI tools. Instead of banning AI outright, you can base part of a student’s grade on their pre-prompt work. This way, you’re assessing both:
Their thinking: the observations, questions, and hypotheses they generate.
The quality of the prompt itself: how well they set up the AI to give meaningful, usable answers.
This shifts assessment away from “Did you give a good answer?” toward “How well did you design your interaction with AI?”
👉 Pre-prompting positions AI as a partner in inquiry rather than a shortcut. By requiring students to surface observations, questions, and hypotheses first, you make their cognitive process visible and assessable before they ever type into a chatbot.
Prohibiting the use of AI does not prevent its presence; it only obscures it. When we ban AI, we lose access to the very traces of students’ thinking that could help us understand how they are navigating, negotiating, or even resisting these tools.
Rather than relying on control-based prohibitions, we are invited to design learning encounters that illuminate the cognitive, ethical, and relational dimensions of students’ engagements with AI.
This means creating assessments that do not just measure outcomes, but reveal process: how students are making decisions, what they are trusting, what shortcuts they are tempted by, and how they are co-shaping the tools that also shape them.
This shift is not just about improving learning outcomes, but about composting extractive habits of use and cultivating deeper, more reflective relationships with emergent intelligences.
When educators use pre-prompt assessments—asking students to share their reasoning, framing, or intentions before engaging with AI—they gain a window into the thought processes that shape how students interact with the tool. This approach foregrounds students’ judgment, creativity, and ethical discernment, allowing teachers to track not just what the AI produces, but how learners are orienting themselves in relation to it.
By contrast, when there is no pre-prompt assessment or, worse, when AI is banned altogether, students’ interactions with AI remain hidden, reducing opportunities for reflection and dialogue.
In such contexts, AI use tends to move underground, framed by secrecy rather than accountability, and educators are left evaluating polished outputs without insight into the relational and cognitive pathways that produced them. Pre-prompting, then, shifts assessment from policing to accompaniment: instead of controlling whether students use AI, it supports them in becoming more intentional and transparent about how they use it.
Students should first understand what the assignment is really asking.
Prompt from teacher: “What is this essay really asking you to think about? Circle key terms and explain them in your own words.”
Student notes might include:
“World War II → global conflict, 1939–1945.”
“Impact → long-term changes, not just immediate effects.”
“Place of women in society → roles, jobs, rights, expectations, family structures.”
Purpose: Ensures they don’t run to AI with a vague “Tell me about WWII.”
Students record what they already notice from class materials, primary sources, or their own knowledge.
Prompts for observation:
“What patterns or shifts in women’s roles during WWII stand out to you?”
“What anomalies or surprising details have you noticed?”
Examples of student responses:
“Women entered factory jobs (Rosie the Riveter) while men were at war.”
“After the war, some women were pushed back into domestic roles.”
“Nursing and auxiliary military roles expanded.”
Purpose: Surfaces prior knowledge and shows teacher the starting point.
Students brainstorm questions they want AI to help with.
Prompts for questioning:
“What don’t I know yet?”
“What do I need clarified to write a strong essay?”
Examples of student questions:
“How many women actually worked in factories?”
“Did WWII accelerate women’s rights, or was progress reversed afterward?”
“Were experiences different in the US, Britain, and Germany?”
Purpose: Prevents shallow “tell me about women in WWII” prompts.
Students predict or hypothesize possible answers before checking with AI.
Prompts for hypotheses:
“Based on what I know, what do I think the answer will be?”
Examples:
“I think women gained independence during the war but lost ground when men came back.”
“Maybe the war didn’t cause permanent change but did plant seeds for second-wave feminism.”
Purpose: Puts students in an active stance; they’re not just receiving answers but anticipating and testing them.
Students reflect on what AI is likely to do with their question.
Prompts for anticipation:
“If I ask AI my question, what kind of answer will it probably give?”
“What risks or limitations might there be?”
Examples:
“AI might give me a generic overview (factories, Rosie the Riveter, then postwar pushback).”
“AI might miss cultural nuance or oversimplify differences across countries.”
Purpose: Prepares students to read AI critically and not accept surface-level responses.
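If you collect pre-prompt work digitally, the artifacts each student produces (unpacked key terms, observations, questions, a hypothesis, and an anticipation) form a natural record to store and check for completeness. Here is a minimal Python sketch of such a record; the class and field names are illustrative assumptions, not part of any established tool.

```python
from dataclasses import dataclass, field

@dataclass
class PrePromptWorksheet:
    """One student's pre-prompt work for a single assignment.

    Field names are illustrative; adapt them to your own template.
    """
    key_terms: dict[str, str] = field(default_factory=dict)  # term -> student's own definition
    observations: list[str] = field(default_factory=list)
    questions: list[str] = field(default_factory=list)
    hypotheses: list[str] = field(default_factory=list)
    anticipations: list[str] = field(default_factory=list)   # predictions about AI's answer

    def missing_parts(self) -> list[str]:
        """Name any empty sections so the teacher can prompt for them."""
        sections = {
            "key terms": self.key_terms,
            "observations": self.observations,
            "questions": self.questions,
            "hypotheses": self.hypotheses,
            "anticipations": self.anticipations,
        }
        return [name for name, entries in sections.items() if not entries]

# Example, drawn from the WWII essay above:
worksheet = PrePromptWorksheet(
    key_terms={"impact": "long-term changes, not just immediate effects"},
    observations=["Women entered factory jobs (Rosie the Riveter) while men were at war."],
    questions=["Did WWII accelerate women's rights, or was progress reversed afterward?"],
    hypotheses=["Women gained independence during the war but lost ground when men came back."],
    anticipations=["AI might give a generic overview and miss differences across countries."],
)
print(worksheet.missing_parts())  # [] -> all five sections are filled in
```

A completeness check like this is only a gate; judging the quality of each entry is what the rubric below is for.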
When students engage in pre-prompt activities, much of the work is thinking, not a finished product. Without a clear framework, this thinking can remain invisible. The Cognitive Ignition Rubric makes those processes observable and assessable. It allows teachers to grade the quality of the student’s reasoning and the strength of the prompt they produce, rather than just the essay or the AI-generated content.
1. Assign the Pre-Prompt Task
Example: “Before you ask AI about WWII and women, write down three observations, two questions, one hypothesis, and a prediction about how AI might answer you.”
2. Collect Student Responses
These can be quick notes, bullet points, or short paragraphs. They don’t need to be polished essays.
3. Score with the Rubric
Observation: Did the student notice specific, relevant details?
Questioning: Were the questions thoughtful and original, or basic and predictable?
Hypothesis: Did the student form a logical, evidence-based prediction?
Metacognition: Did they reflect on how they approached the task or what they expect from AI?
4. Give Feedback
Highlight one strength: “Your observation of postwar pushback on women’s roles is excellent.”
Suggest one growth area: “Try to form a hypothesis that connects your observation to long-term social change.”
5. Close the Loop
Have students compare their pre-prompt thinking with what AI actually provided. This builds critical reading of AI output and shows whether their initial reasoning holds.
The rubric evaluates pre-prompting across four dimensions:
Observation & Noticing – How well does the student notice patterns, anomalies, or key details in the material?
Questioning & Curiosity – Do they generate meaningful, probing questions that go beyond the surface?
Hypothesis Formation – Can they make reasoned predictions or tentative explanations before consulting AI?
Metacognitive Awareness – Do they reflect on their own thinking process and explain how they approached the task?
Each dimension is scored on a 1–4 scale (Beginning → Developing → Proficient → Exemplary).
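If you track rubric scores in a gradebook script, a small helper can roll the four 1–4 dimension scores into one overall label. The sketch below uses a simple average rounded half up; the function name and the averaging rule are assumptions, since the rubric itself does not prescribe how to combine dimensions (you might report each dimension separately instead).

```python
LEVELS = {1: "Beginning", 2: "Developing", 3: "Proficient", 4: "Exemplary"}
DIMENSIONS = ("observation", "questioning", "hypothesis", "metacognition")

def overall_level(scores: dict[str, int]) -> str:
    """Combine four 1-4 dimension scores into one level label.

    Averaging is an assumption: you might weight dimensions
    differently or report each one on its own.
    """
    for dim in DIMENSIONS:
        if scores.get(dim) not in LEVELS:
            raise ValueError(f"'{dim}' must be scored 1-4")
    average = sum(scores[dim] for dim in DIMENSIONS) / len(DIMENSIONS)
    return LEVELS[int(average + 0.5)]  # round half up to the nearest level

# A student scoring mostly 3s lands at Proficient:
print(overall_level({"observation": 3, "questioning": 3,
                     "hypothesis": 2, "metacognition": 3}))  # Proficient
```

The three annotated samples that follow show what Beginning, Proficient, and Exemplary pre-prompt work can look like for the WWII essay.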
Observation: “Women worked while men fought.”
Question: “What did they do?”
Hypothesis: “I think they just did men’s jobs.”
Anticipation: “AI will tell me women worked in factories.”
Why this is Beginning:
Observations are vague (“worked”), questions are surface-level (“what did they do?”), hypothesis is generic (“did men’s jobs”), and anticipation is minimal.
Observation: “Lots of women started working in factories, like the posters of Rosie the Riveter. They also joined the army but not in combat.”
Question: “Did women want to stay in those jobs after the war, or were they forced out?”
Hypothesis: “Probably many were forced out when men returned, but some changes stayed, like more women wanting to work.”
Anticipation: “AI will probably say women helped in factories and then men took over again. It might not explain how that affected women’s rights after.”
Why this is Proficient:
Observation is concrete (factories, military service, posters). Question shows curiosity about long-term effects. Hypothesis is logical but not deeply layered. Anticipation shows some critical awareness of AI’s limits.
Observation: “Before WWII, many women were housewives or did domestic work. During the war, they worked in factories, farms, and even joined military support roles. After the war, there was pressure for them to return home, but not all did.”
Question: “Did WWII create permanent change in women’s roles, or was it a temporary shift whose lasting effects were delayed until the women’s movement of the 1960s?”
Hypothesis: “I think it was both: the war didn’t instantly change women’s rights, but it showed society that women could handle jobs outside the home, which later helped movements for equal pay and equality.”
Anticipation: “If I ask AI, it will probably give a balanced answer about short-term vs long-term changes. I’ll need to ask it to compare different countries, because otherwise it might just focus on the U.S.”
Why this is Exemplary:
Observation is nuanced (pre-war vs wartime vs postwar). Question is layered (temporary vs permanent change). Hypothesis is reasoned and forward-looking. Anticipation identifies AI’s likely strengths and gaps (a U.S.-centric focus).