Talking to Students About AI
Artificial intelligence has not simply advanced; it has crossed a threshold, transforming from a background tool into an active presence that raises significant questions for students and teachers alike.
Students who once saw “smart” technology only in the form of autocorrect or recommendation engines now face a classroom where chatbots can produce essays, generate code, and even simulate conversation with Abraham Lincoln. Teachers find themselves in the position of explaining a technology that is both ordinary and uncanny, a calculator that can also mimic a human voice.
The first challenge in talking to students about AI is to resist two extremes: panic and dismissal. To frame AI solely as a threat to academic integrity is to risk reducing the conversation to suspicion and surveillance. To present it as nothing more than a tool is to ignore the cultural and ethical questions it raises. Students deserve a vocabulary that acknowledges both the utility and the strangeness of these systems, and it is within that nuance that genuine intellectual growth can occur.
One approach is to begin with history. The adoption of earlier technologies in education—the printing press, the typewriter, the calculator—provoked anxieties that now seem almost quaint. Yet those debates remind us that tools alter not just what we produce, but how we think. When students recognize that the calculator was once banned in mathematics classrooms, they may be better prepared to see AI as part of a longer human story about learning and machines.

Still, the analogy only goes so far. A calculator automates arithmetic but leaves the problem itself intact; it lightens the cognitive load without pretending to understand the question. Artificial intelligence, by contrast, can generate not only the solution but also the framing of the problem, supplying both the steps and the prose that justifies them. In this sense, AI blurs the line between tool and collaborator. Like the calculator, it risks narrowing skills if overused, but unlike the calculator, it can also reshape the very definition of what counts as original thought. The similarities help us recognize familiar patterns of resistance and accommodation, while the differences force us to confront the possibility that education is adjusting not just to a new instrument, but to a new kind of intellectual presence.
Another approach is to talk directly about language. Many AI systems generate text, and for students, this raises questions about authorship and voice. Who is speaking when a chatbot produces a paragraph? What does it mean to put one’s name on a paper that was shaped, in part, by an algorithm? These are not questions with simple answers, but they invite students to consider what writing is for, and how writing continues to adapt to the needs of those who practice it. What did a field like digital humanities look like before the advent of large language models? What might it look like 20 years from now?
Educators might also ask students to reflect on their encounters with AI outside the classroom. Most have already interacted with it, whether through social media feeds, music recommendations, or automated customer service. Bringing those experiences into the open demystifies AI and grounds it in everyday life. A conversation about how TikTok curates a feed may be more illuminating for some students than a technical discussion about machine learning, but both conversations demand the same depth and nuance.
It is equally important to foreground the limits of AI. These systems do not understand in the way humans do. They predict words based on patterns in data. This distinction is crucial for students who may be dazzled by fluent sentences and overlook the absence of comprehension. Encouraging students to probe where AI goes wrong—where it invents sources, confuses concepts, or produces biased output—can sharpen their critical thinking skills. Exercises like these are well documented in the scholarly literature on teaching about emerging technologies in a rapidly changing landscape.
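For readers who want a concrete picture of what “predicting words based on patterns” means, the following toy sketch may help. It is a hypothetical bigram model of our own devising—vastly simpler than any real chatbot—that counts which word tends to follow which in a tiny text, then “predicts” the most common successor. It produces plausible-looking output with no comprehension involved:

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a small
# training text, then predict the most frequent successor. The program
# matches patterns; it understands nothing about cats, mats, or fish.
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — the most common word after "the"
print(predict_next("sat"))  # "on"
```

A real large language model uses a neural network trained on vastly more text, but the underlying task—predict the next word from statistical patterns—is the same in kind, which is why fluency and understanding can come apart.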
Above all, talking to students about AI is an opportunity to model intellectual curiosity. You don't need to be an expert in computer science; it's often enough to ask questions with genuine interest:
What might we gain from working with this technology?
What might we lose?
How should we define originality in a world where machines can generate drafts at will?
These are questions of philosophy and ethics as much as of pedagogy. Conversations about AI belong to a broader tradition of education, one that encourages students to question, to examine assumptions, to situate themselves within a rapidly changing world. And I've always believed that the goal is not to supply definitive answers, but to create a space where students can think alongside their teachers about new technologies, new vocabularies, and ultimately, new literacies.
In what follows, we offer a few practical teaching strategies to engage students in this conversation:
1. Start with Their Experiences
Ask students where they already encounter AI (TikTok recommendations, Snapchat filters, Spotify playlists, autocorrect).
Use these familiar examples to frame AI as an everyday technology worth exploring critically.
Strategy: Begin class with a quickwrite: “Name one way AI has influenced something you’ve read, watched, or listened to this week.”
2. Demystify the Technology
Clarify that most AI tools (like ChatGPT) don’t “think” or “know” in human terms; they generate predictions based on patterns in data.
Keep explanations simple and accessible without oversimplifying.
Strategy: Show side-by-side outputs (one from a student draft, one from AI) and analyze differences. What feels human? What feels machine-like?
3. Frame AI as Part of a Longer Story
Place AI in the lineage of classroom technologies once seen as disruptive: the calculator, the typewriter, even the internet.
This context helps reduce fear while showing how tools reshape thinking.
Strategy: Facilitate a class discussion comparing early reactions to the calculator or the internet with current reactions to AI. Which fears proved founded, and which faded?
4. Connect to Writing and Authorship
Encourage students to ask: Who is the author when AI is used?
Emphasize writing as a process of thinking, not just a product.
Strategy: Have students generate an AI draft of a paragraph, then rewrite it in their own voice. Discuss what changed.
5. Highlight Strengths and Limits
Make students aware of where AI excels (speed, fluency, brainstorming) and where it falters (accuracy, originality, bias).
Strategy: Assign a “fact-check the AI” exercise: students identify where an AI-generated text is misleading or incorrect.
6. Address Ethics Directly
Discuss plagiarism, authorship, intellectual honesty, and the environmental concerns associated with this technology openly rather than only punitively.
Invite students to reflect on when it feels appropriate (or not) to use AI.
Strategy: Pose scenarios and have students discuss the ethics of each.
7. Foster Critical Literacy
Treat AI outputs as texts that can and should be read critically.
This strengthens analytical reading skills and positions students as active, not passive, users.
Strategy: Analyze an AI-generated text together. What biases or assumptions are embedded? What is left unsaid?
8. Keep the Conversation Ongoing
Make AI part of continuous dialogue, not a single lesson.
Encourage curiosity, skepticism, and reflection rather than final answers.
Strategy: Incorporate an “AI check-in” once a month where students share new observations or questions about their encounters with AI.
Vol. 18 | September 19 & 26, 2025