By the time they graduate (or perhaps as early as their first summer), most law students will need to know how to competently use AI. That is true in the same way it is true that students will need to know how to interview clients, manage confidentiality, draft under deadline, and exercise professional judgment under uncertainty. But it does not follow that every professor should use AI all the time, nor that every assignment should be redesigned around it.
Law school has never treated professional competencies as “always on” from day one. We do not ask students to represent a real client in their first week because the stakes are high, the skills are layered, and novices need scaffolding. We build toward competence through sequencing: first observation, then controlled practice, then increasing responsibility and complexity. AI belongs in that same category.
There are sound pedagogical reasons to withhold AI in some contexts, at least temporarily. Sometimes the learning goal is fluency with foundational doctrine or methods: reading cases closely, building rules from authority, synthesizing competing lines of reasoning, or learning to write in a disciplined analytic structure. In those moments, AI can short-circuit the productive struggle that helps students internalize the moves they will later need to perform quickly and reliably. Withholding AI can also protect metacognitive development, because students cannot calibrate trust, spot subtle errors, or correct overconfident prose until they have a baseline sense of what “good” looks like.
In other settings, limiting AI is about professional identity rather than mechanics. Students need repeated opportunities to practice owning their reasoning, making choices, and defending those choices to an audience. If an AI system supplies the first draft of the analysis or the final turn of phrase too early, students can mistake output for understanding. That risk is not a moral failing. It is a predictable feature of tools that generate polished language.
A thoughtful approach, then, is not “AI everywhere” or “AI nowhere.” It is intentional design. Some activities should be AI-free so students can build core competencies and confidence. Some should be AI-supported so students can learn to prompt responsibly, verify outputs, and integrate AI into workflows that still require independent judgment. And some should be AI-restricted because the task is specifically about learning the human skill AI would otherwise replace, at least at the novice stage.
Competent AI use is a professional skill. Teaching it well means recognizing that timing matters, context matters, and learning objectives matter. The question is not whether AI belongs in legal education. The question is when, why, and under what constraints it best supports the kind of lawyer we are helping students become.
Introducing GenAI to your students
Many of us assume that students already know about GenAI, but that's not always true—especially when it comes to specialized tools like Lexis+ AI. While some students may arrive with a passing knowledge of ChatGPT, many have never tried AI-driven research or drafting software. Even those who have may not be skilled at assessing whether the outputs they receive are reliable, well-reasoned, or ethically sound. A growing number of recent cases serve as reminders that attorneys face real professional risks if they cite AI-generated material without confirming its accuracy.
The activities described below introduce AI drafting in a structured way, helping students see how generative AI can assist with straightforward tasks, like drafting a basic memo, and more complex challenges, like adapting legal analysis for client communication. By comparing their own writing to AI-produced texts, students learn to scrutinize each step: checking citations, verifying factual statements, and tailoring the tone for the intended audience. This process highlights the importance of combining the speed and convenience of AI with the professional judgment and critical evaluation skills that remain the hallmark of effective legal advocacy.
Activity One: Introducing AI Through a Simple Memo
In the first activity, students take their initial memo (for me, it is the “Simple Analysis Assignment” or SAA, which is a short e-memo they complete over a weekend), and have Lexis+ AI draft a version of that same memo. They then compare the AI-generated text with the memo they wrote themselves. The purpose is to give students a first-hand experience with the capabilities and limitations of AI-generated legal documents before they develop any preconceived notions. By examining discrepancies between the AI output and their own work—such as missing analysis, incorrect citations, or lack of real-world context—students learn to question the reliability of machine-generated memos and sharpen their critical thinking skills about AI tools.

A note of caution here: this used to be an effective assignment when AI was worse than a 1L, but be warned that the minor issues you'll catch will be completely invisible to many of your students.
Activity Two: Using AI to Transform a Memo into a Client Letter
In the second activity, students use AI to help translate a more complex memo into a client-facing letter. This step allows them to see how AI can suggest simpler wording or a different organizational structure that might be more comprehensible for non-lawyers. Students then reflect on how well the AI captures the client’s perspective, whether it maintains accuracy, and what adjustments they must make to preserve the appropriate legal tone. This exercise also highlights the need for careful oversight whenever AI is involved, giving students practice verifying each point of analysis and any references to real authority. It also helps them see that translating legalese into more accessible language takes genuine work.
This second activity helps students see an immediate application of some of the metaknowledge about writing they've been developing. I use these assignments to reinforce threshold concepts like the rhetorical situation.