Welcome to the Teaching & Learning Guide for Introducing Students to AI!
This guide includes tips and best practices designed to help you:
Incorporate AI into your teaching in ways that align with learning objectives
Introduce students to using AI to enhance critical thinking skills
Develop and refine AI prompts to generate meaningful results
Address academic integrity and responsible AI use in the classroom
Engage students in AI-based activities that benefit learning and skill development
There is ongoing debate about whether and when we should introduce AI tools to students; positions range from banning AI use entirely to assigning projects that require it. You know your students and the goals of your course best, so any AI you incorporate into your teaching should align with your learning objectives.
It’s important to acknowledge that AI use is ubiquitous and students are probably already curious about or have used some kind of AI. Preparing students to apply evolving technologies like AI in their field of study makes sense considering how much AI has already changed the way people work across fields. As instructors, you can help guide students on how to use AI critically and responsibly by linking it to your course goals.
No matter how much AI use you allow in your classroom, make sure that your students can still meet course outcomes. Instructors need to be clear about when, how, and where AI use is permissible. Providing guidance on where students can use AI will hopefully encourage them not to rely on AI for coursework they must complete on their own to meet learning objectives. Clarifying which types of AI students may use, and for which exact parts of a project, assignment, or assessment, can also encourage them to develop the human skills needed to demonstrate discipline-specific competency.
Some instructors might take a “stoplight” system approach to AI in their courses: A “red” light on an assignment means that no AI use is permitted; a “yellow” light means that some AI use might be allowed with instructor permission; and a “green” light means that students are allowed and/or encouraged to use AI on that particular assignment (Mormando, 2023).
You could also approach AI in your courses as a “menu,” particularly when providing options to students. The menu approach means that if an assignment is approved for AI use, students have a whole buffet of options to choose from.
In the table below, we’ve provided an example “menu” of different ways students could use AI (Liu, 2024). You can choose which parts of the assignment students can use AI for and communicate it to them clearly. For example, if your main learning outcome for the assignment is for students to be able to analyze data, then you might allow the use of AI for brainstorming ideas, or for suggesting topics for the analysis, or for providing some feedback based on the rubric after they have done the data analysis.
It’s important to explain to students that AI is not a substitute for human critical thinking. In fact, using AI effectively requires strong critical thinking skills due to its limitations.
AI tools, especially Large Language Models (LLMs), are great at looking for patterns in existing data sets and even synthesizing them to create new content. However, AI tools lack human judgment, leading to well-known limitations. These limitations include the potential for generating false information, producing biased outputs, and demonstrating gaps in reasoning abilities. Hallucinated content (fabricated or inaccurate output presented as fact) is relatively common in LLMs, and appears even in Google’s AI-powered search answers.
It’s important to explain to students that if they are using AI, they will still need to evaluate its output for inaccuracies, biases, and logical inconsistencies. Two key strategies for fostering critical thinking with AI, then, are prompt engineering and reviewing and reflecting on its output.
Prompt engineering is an important foundational skill for anyone who wants to use AI. One framework for prompt engineering, published in The Journal of Academic Librarianship, is the CLEAR Framework. According to its author, “By utilizing the CLEAR framework [...] students can learn to navigate and develop AI-generated content more effectively, thereby nurturing the critical thinking skills necessary for the ChatGPT era.” The framework uses five core principles:
Concise: Prompts should be brief and free of superfluous language so AI can focus on key components. For example, “Identify factors behind China’s recent economic growth” is a concise prompt, whereas “Please provide me with an extensive discussion on the factors that contributed to the economic growth of China during the last few decades” is unnecessarily wordy.
Logical: Prompts should be structured and coherent. For example, “Compare and contrast mitosis and meiosis, including at least two similarities and two differences.”
Explicit: Prompts should be precise about the output format, content, and scope. For example, “Explain the main themes of Frankenstein by Mary Shelley and provide two examples from the text to support each theme.”
Adaptive: Prompts should allow for refinement and customization after the initial prompt. For example, “Explain the Pythagorean theorem in simple terms, then show me how to apply it in a real-world example.”
Reflective: Users should continuously evaluate and improve their prompts for the most useful results. For example, “What are some potential biases in AI-generated content, and how can I check for them in my research?”
Another prompt-engineering formula adds further elements. Here are some suggestions from Julie Schell, PhD, of UT Austin:
Persona: Define your role or ask AI to adopt a persona. For example, “I am a junior in college” or “Act as my writing tutor.”
Context: Provide relevant background. For example, “I’m taking an introductory psychology course, and I need help understanding classical conditioning. Can you explain it using an example different from Pavlov’s dogs?”
Objective: Specify the task. For example: “Help me create a study plan to prepare for my biology exam in two weeks, focusing on genetics and evolution.”
Guardrails: Set limits. Guardrails provide additional details to refine the response. A guardrail might ask AI to keep its answer within certain parameters such as length, timeframe, scope, etc. For example, “Explain the causes of World War I in simple terms without using more than 100 words.” or “What did U.S. citizens accept as causes of World War I at the time of the conflict?”
Courtesy: AI is trained on human writing, and since humans respond better when people ask nicely, AI shows a similar trend. While AI doesn’t have feelings, polite and well-structured prompts and a “Thank you for your help!” often lead to clearer and more helpful responses. In addition, creating a habit of professional courtesy will benefit students in their interactions with real people.
Here’s an example using all elements:
Hello! I am a junior in college, working on a writing project for my psychology class. The goal of the project is to explore classical conditioning, with real-life examples from our own experiences. I’m having trouble coming up with examples besides Pavlov’s experiment. Can you help me understand classical conditioning better so I can find relevant examples in daily life and create a study guide to help me collect my thoughts? Thank you!
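For instructors teaching in programming-oriented courses, the five elements above can also be demonstrated programmatically. The short sketch below simply assembles the element text into one prompt string; the function name and structure are illustrative assumptions, not part of any AI tool’s API.

```python
# Sketch: composing a prompt from Schell's five elements (Persona, Context,
# Objective, Guardrails, Courtesy). Only the element names come from the
# framework; build_prompt() itself is a hypothetical helper for illustration.
def build_prompt(persona, context, objective, guardrails,
                 courtesy="Thank you for your help!"):
    """Join the five prompt elements into a single prompt string."""
    return " ".join([persona, context, objective, guardrails, courtesy])

prompt = build_prompt(
    persona="I am a junior in college.",
    context="I'm working on a writing project for my psychology class "
            "about classical conditioning.",
    objective="Help me find real-life examples of classical conditioning "
              "besides Pavlov's experiment.",
    guardrails="Keep your explanation under 150 words.",
)
print(prompt)
```

Students could modify one element at a time (say, tightening the guardrails) and compare the AI responses, which makes the effect of each element concrete.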
Students should be taught that if they plan to use AI, they should always review and reflect on the content. Reviewing and reflecting on AI responses with students can be a great opportunity to revisit course content as you check for accuracy. You can also show students how to use their critical thinking skills to refine their prompts, ask follow-up questions, and/or get more specific when prompting AI.
You can visit this ChatGPT discussion link to see an example of how it responded to the above prompt. An expert might notice that ChatGPT’s first answer is okay, but can be improved. This would be a chance to engage students in refining prompts and interacting with the GPT to get more accurate and useful information. Getting the best results requires iteration!
In the above-linked discussion example, ChatGPT also offers to write a few example paragraphs with in-text citations for students. This is an ideal time to discuss academic integrity and how AI use fits into your course policies. You might explain how you will handle situations where students are found to be using AI in a way that isn’t aligned with academic integrity standards.
Since AI is relatively new in academic contexts, students may not know what is appropriate. It may be more useful to be generous in our thinking about why students choose to use AI. For example, instead of assuming students use AI as a shortcut or to get out of doing work, it might be more helpful to assume that students are not sure where and when AI use is appropriate. Instructors can help guide students toward making good decisions about using AI in their coursework and beyond.
We’ve provided some tips below to help you start the responsible AI use conversation with your students.
Talk to students about AI disclosure and responsible use in academic settings. If you are worried about students using AI, you might consider creating an AI disclosure form for major assignments.
Newman University has a quick guide on AI disclosures.
Forms like this Microsoft Form for AI in Writing Disclosure (origin unknown, possibly Johns Hopkins) can also be used.
Establish clear guidelines for AI citation. An example could include citing the prompt and platform used. APA, MLA, and Chicago style guides provide formats for AI citation.
The Purdue AI LibGuide provides citation information for students.
Let students know how you plan to use AI in your teaching.
Set the expectation that no one’s work or personal information should ever be uploaded to AI generators without the author’s explicit permission. Everyone has a right to privacy, and because AI tools are still new, they likely have technological weaknesses that bad actors can exploit.
Be clear in your syllabus what types of AI are allowed, if any.
Show students how to prompt and use AI properly and within ethical guidelines.
Work with students to discover the capabilities and limits of AI tools. Become a co-learner with them and embrace a growth mindset.
Review TWU’s optional syllabus policy statement for AI and consider what would be appropriate to include in your course syllabus. This statement offers a great starting point for discussing AI with students.
There are many ways you can guide students through using AI in your courses. This guide provides a framework for how to approach introducing students to AI. Once you become more comfortable talking to students about AI and allowing, or even encouraging, them to use it for specific goals, there are numerous ways to expand both your own and your students’ AI use. We’ve provided some ideas below about how AI can be used in coursework.
Encourage students to use AI for brainstorming ideas and identifying connections. Or, as an activity, have students brainstorm ideas and connections first, and then have them compare and contrast their human output versus the AI output. This can be used to demonstrate the differences between real human intelligence and a Large Language Model’s algorithm.
Show students how to use AI to create study materials, and use the exercise to demonstrate that generative AI output is not always correct or complete; a human still needs to double-check it.
Input small sections of content from a textbook, article, or other course content and ask for language simplification, study questions, or examples. Keep copyright restrictions in mind; don’t input entire articles.
Have students interact with AI as if it were a historical figure, industry professional, or patient/client to explore different perspectives. Have two students ask the same questions to the same persona, and see how the answers differ.
Use AI to visualize data so students can more easily find patterns, trends, and anomalies.
Have AI create a short story or a math problem, and have students critique the output. Are the math problems accurate? Does the short story make sense? If AI and a real student write on the same prompt, you might have the class vote on which they think is AI, or which is their favorite, and have them explain what makes one feel more “human” or “artificial” than the other.
AVID. (n.d.). “AI and Collaboration.” https://avidopenaccess.org/resource/ai-and-collaboration/#1707936470872-4f2cae55-c18a
Carucci, R. (2024). “In The Age Of AI, Critical Thinking Is More Needed Than Ever.” Forbes. https://www.forbes.com/sites/roncarucci/2024/02/06/in-the-age-of-ai-critical-thinking-is-more-needed-than-ever/
Ghosh, U. (2024). “Use Artificial Intelligence to Get Your Students Thinking Critically.” Times Higher Education. https://www.timeshighereducation.com/campus/use-artificial-intelligence-get-your-students-thinking-critically
Liu, D. (2024). “Menus, not Traffic Lights: A Different Way to Think about AI and Assessments.” University of Sydney. https://educational-innovation.sydney.edu.au/teaching@sydney/menus-not-traffic-lights-a-different-way-to-think-about-ai-and-assessments/
Lo, L. S. (2023). “The CLEAR Path: A Framework for Enhancing Information Literacy through Prompt Engineering.” The Journal of Academic Librarianship, 49(4). https://doi.org/10.1016/j.acalib.2023.102720
MIT Horizon. (2024). Critical Thinking in the Age of AI. https://horizon.mit.edu/insights/critical-thinking-in-the-age-of-ai
Mormando, S. (2023). “A Stoplight Model for Guiding Student AI Usage.” Edutopia. https://www.edutopia.org/article/creating-ai-usage-guidelines-students/
Schell, J. (2024). AI-Forward, AI-Responsible Course Materials and Engagement Strategies: Insights into GenAI Practice [PowerPoint slides]. Texas Higher Education Coordinating Board. https://share.julieschell.com/JruZ1qJ4