Welcome to the Teaching & Learning Guide for Creating Assignments that Outsmart AI Software!
This guide includes best practices and tips designed to help you:
Write assignment prompts and policies that don't allow the use of AI software
Use varied methods to work with AI tools in your classroom
Adapt teaching strategies to changing technology
Request support from your Instructional Designer and the Technology Service Desk
Generative Artificial Intelligence (GenAI) tools like OpenAI’s ChatGPT and Google’s Gemini are built on foundational Large Language Models (LLMs) that use transformer architectures (OpenAI’s “GPT” stands for Generative Pre-trained Transformer) to generate new sentences, images, and ideas. An LLM ranks the probabilities of each possible next word or pixel based on the data and examples it was trained on and how its parameters were tuned. These tools can almost instantly scan linguistic information within a text prompt, computer file, or audio/video input, and they can generate human-like outputs: most often text, but also images, video, audio, and code, depending on the tool and the prompt.
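To make next-word prediction concrete, here is a minimal Python sketch of the sampling step. The prompt, the candidate tokens, and their probabilities are invented for illustration; real LLMs compute these distributions over vocabularies of tens of thousands of tokens using billions of learned parameters.

```python
import random

# Toy illustration of next-token sampling (not any real model's code).
# A trained LLM assigns a probability to every candidate next token
# given the text so far; generation repeatedly samples from that
# distribution and appends the chosen token to the prompt.

# Hypothetical probabilities a model might assign after this prompt:
prompt = "The capital of France is"
next_token_probs = {
    "Paris": 0.91,
    "a": 0.04,
    "located": 0.03,
    "Lyon": 0.02,
}

def sample_next_token(probs: dict) -> str:
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(prompt, sample_next_token(next_token_probs))
```

Most runs print “Paris,” but lower-probability tokens occasionally appear, which is why the same prompt can yield different outputs on different attempts, and why AI-generated text cannot be caught simply by matching it against a database of existing sources.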
Since the release of ChatGPT (then powered by GPT-3.5) in late 2022, higher education instructors have been concerned about students using these programs to complete their assessments and bypass the learning process. Instructor concern is compounded by the fact that newly generated essays or other outputs are not, traditionally speaking, considered plagiarism, even when produced by a machine rather than by the student. Instructors also struggle to tell the difference between human- and AI-generated text.
There have always been students who cheat or attempt plagiarism, and they admit that their top reason for doing so is that “there was an opportunity to do so” (Bowen & Watson, 2024, p. 108; Stanoyevitch, 2024). As of Fall 2023, 75% of students said they would continue to use AI even if their instructors or institution prohibited it (Bowen & Watson, 2024, p. 110). And in Spring 2025, 80-92% of students globally reported using AI in their studies, with about 25% using it daily and 54% weekly (Mulford, 2025).
The ease and availability of GenAI tools have created an opportunity for many students to practice a new form of cheating, “AI plagiarism”: “the process of using generative AI to produce content that students submit as their own work for assessment tasks” (Anthology, 2024). AI plagiarism is a serious threat to the academic integrity and viability of higher education. But instructors can help dissuade student AI use and, more importantly, encourage students to engage fully in the learning process using methods established before the AI boom.
There are many AI detection tools advertised to combat this new form of plagiarism, for example, GPTZero, Turnitin’s AI detection, DetectGPT, and ZeroGPT. But research shows that these tools are currently not accurate or reliable enough to combat AI plagiarism on their own.
Problems include varying rates of accuracy, inability to keep pace with rapidly improving GenAI tools, inability to detect modified AI outputs (like paraphrasing by students), and perhaps most importantly, incorrect classification of over 50% of non-native English speakers’ essays as AI-generated.
The consensus in higher education is that AI detection tools should only be used in conjunction with the best practices listed below in this guide, and in two specific ways:
Demonstrating the tool(s) as a deterrent during a lesson on Academic Integrity.
Initiating non-accusatory conversations with students suspected of unethical AI use about their personal goals, the value of meaningful learning, and the ethical use of AI tools.
Because of inevitable student use and the unreliability of AI detection tools, many education leaders and EdTech specialists suggest that now is the time for instructors to redesign their assessments with AI in mind. While it is concerning that traditional essay-writing may now contribute less to student learning, there are ways to design writing assignments and other assessments that deter AI plagiarism and make student learning deeper and longer-lasting by prioritizing what humans do best.
Research suggests that the best way to deter AI plagiarism is to implement assessment best practices that predate “the AI era” and have been advocated by experts for years.
In the next two sections, we’ll discuss best practices for designing assessments that also tend to “outsmart” AI. All of these recommendations focus on making student learning processes more transparent, which makes unethical AI use less relevant.
“Scaffolding” is an approach in which students complete a large project in smaller steps. Each step becomes a more focused, graded assignment that includes:
Clear instructions
Student submission of a learning artifact
Feedback from the instructor and/or peers
A detailed rubric that emphasizes process and critical thinking
Multiple attempts and opportunities for revision
An opportunity for reflection
Scaffolding could include asking students to turn in digital or paper copies of their annotated sources, drafts with track changes enabled and version history visible, transcripts of any AI tools used, or any other milestones toward their paper or project. Or it could mean asking students to fill out worksheet templates that guide them toward the final assignment. The key is to closely monitor student progress over time with multiple, frequent checkpoints.
Research shows that assignments requiring students to apply academic or theoretical concepts to real-world scenarios, especially scenarios relevant to their own lives and goals, motivate students and engage them in the learning process. Having students solve genuinely complex problems typical of the discipline or intended profession requires creativity and higher-order thinking that AI tools struggle to produce well, especially consistently across different output formats. Examples of authentic assessments include:
Case studies that require applying course theories or frameworks to analyze or solve a problem
Role-playing exercises (possibly even using AI)
Project-based learning (PBL) portfolios of multiple media products, for example, a written report, an infographic, a spreadsheet workbook, and a video, all addressing a single research problem or question
Design group projects and peer review activities that require social interaction, collective problem-solving, and shared accountability. Live oral exams, presentations, and conversational assessments require real-time demonstration of knowledge and the ability to respond to spontaneous questions, which GenAI cannot do. Examples include:
Group projects (carefully constructed with clearly defined individual and collective responsibilities)
Peer review processes
Individual oral exams or defense-style presentations
Use of collaborative annotation tools such as Google Docs or Perusall
Socratic seminars
Interviews or interview-style assignments
Integrate reflections that require students to connect course concepts to their personal experiences, analyze their own learning processes, evaluate their progress, and connect experiences to broader learning goals. For example, ask students to:
Set and track academic and professional goals
Reflect on their use of AI tools and how they critically engaged with AI-generated content
Complete self-evaluation forms or rubrics
Keep learning journals
Create assignments that draw heavily on class discussions, lectures, guest presentations, local field experiences, and course-specific materials that AI tools cannot access. LLMs struggle with indexicality: they cannot write a grounded response when the prompt depends on the context of your course, because that context never appeared in their training data. For example, you can ask students to summarize a discussion or reference a lecture. You might ask something like, “Based on our discussions from module two, what are the two main arguments in X theory?”
Studies show that students have varying understandings of appropriate AI use, making clear communication essential. Instructors can educate students about academic integrity and AI plagiarism. In combination with our recommended best practices, having meaningful discussions about academic integrity and why their learning in the course matters dissuades most students from resorting to AI plagiarism. Here are some specific tips:
Follow TWU’s institutional policy on AI in the Syllabus Template
Determine your course-level policy on AI that complies with institutional limits
Create explicit AI use policies for each assignment
Teach students how to properly attribute AI-generated content
Require students to document AI tool use and critically evaluate AI outputs
Monitor student work closely and watch for pattern or style anomalies
You may still need to assign exams in your course, and student AI use may make online exams a challenge. Although it is not possible to fully AI-proof online exams, some question styles make it harder for students to use AI; we’ve included examples below. Test-proctoring software may also help make exams more secure against AI use.
AI tools remain unreliable at close analytical comparison of specific texts, especially course texts they may not have access to. Asking students to do analytical comparisons therefore makes it difficult for them to rely on this software. These comparisons are more effective when you require students to use course texts. You could even shorten this type of prompt into a quiz question by asking students to compare the argument or point made in a certain part of one text with a specific section of another.
Asking students to find what’s missing from a list is a great way to help them develop problem-solving skills, and AI tools are far less reliable at identifying what is absent than at summarizing what is present. This type of prompt could take the form of giving students a list of formulas, data, or terms and asking them to identify the missing formula, data point, or term.
If you find or suspect that many students are submitting AI-generated essays, you can expand the assignment workflow. Asking students to analyze their essays or other work in light of course concepts ensures that even students who do use GenAI must still complete the analytical step themselves.
For more ideas related to assignments and assessment, or for help implementing these suggestions, contact your Instructional Design Partner.
Anthology Inc. (2024, October). AI, Academic Integrity, and Authentic Assessment: An Ethical Path Forward for Education. https://www.anthology.com/paper/ai-academic-integrity-and-authentic-assessment-an-ethical-path-forward-for-education
Bowen, J. A., & Watson, C. E. (2024). Teaching with AI: A Practical Guide to a New Era of Human Learning. Johns Hopkins University Press.
Desai, H. (2025, July 2). What’s worth measuring? The future of assessment in the AI age. UNESCO. https://www.unesco.org/en/articles/whats-worth-measuring-future-assessment-ai-age
Evangelista, E. D. L. (2025). Ensuring academic integrity in the age of ChatGPT: Rethinking exam design, assessment strategies, and ethical AI policies in higher education. Contemporary Educational Technology, 17(1), ep559. https://doi.org/10.30935/cedtech/15775
Fataar, A. (2025). AI, pedagogy, assessment: Shifting to a design-based pedagogy. University World News. https://www.universityworldnews.com/post.php?story=20250514190158384
Francis, N., Henri, D., & Smith, D. (2025, May 7). AI: Actual Intelligence—how embedded GenAI can promote the aims of higher education. Advance HE. https://www.advance-he.ac.uk/news-and-views/ai-actual-intelligence-how-embedded-genai-can-promote-aims-higher-education
Kovari, A. (2025, January 9). Ethical use of ChatGPT in education—Best practices to combat AI-induced plagiarism. Frontiers in Education, 9. https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2024.1465703/full
Kwapo, W. (2022, August 2). This Technology Can Write Student Essays: Is There Any Educational Benefit? Education Week. https://www.edweek.org/technology/this-technology-can-write-student-essays-is-there-any-educational-benefit/2022/08
Mulford, D. (2025, March 6). AI in Higher Education: A Meta Summary of Recent Surveys of Students and Faculty. Campbell [University] Academic Technology Services. https://sites.campbell.edu/academictechnology/2025/03/06/ai-in-higher-education-a-summary-of-recent-surveys-of-students-and-faculty/
Quality Matters. (2025, September 3). Research-Supported Recommendations for Strategic AI Integration. https://www.qualitymatters.org/qa-resources/resource-center/articles-resources/AI-integration-for-course-design-recommendations
Schatten, J. (2022, September 14). Will Artificial Intelligence Kill College Writing? The Chronicle of Higher Education. https://www.chronicle.com/article/will-artificial-intelligence-kill-college-writing
Sharples, M. (2022, May 17). New AI Tools that Can Write Student Essays Require Educators to Rethink Teaching and Assessment. London School of Economics Impact Blog. https://blogs.lse.ac.uk/impactofsocialsciences/2022/05/17/new-ai-tools-that-can-write-student-essays-require-educators-to-rethink-teaching-and-assessment/
Stanoyevitch, A. (2024, August 19). Online assessment in the age of artificial intelligence. Discover Education 3:126. https://doi.org/10.1007/s44217-024-00212-9
Stokel-Walker, C. (2022, October 28). Students Are Using AI Text Generators to Write Papers - Are They Cheating? The Information. https://www.theinformation.com/articles/students-are-using-ai-text-generators-to-write-papers-are-they-cheating
Team Pepper. (2022, April 5). Who Writes Better: College Students or GPT-3 Essay Writers? Pepper Content. https://www.peppercontent.io/blog/who-writes-better-essays-college-students-or-gpt-3/
Woodcock, C. (2022, October 14). Students Are Using AI to Write Their Papers, Because Of Course They Are. Vice. https://www.vice.com/en/article/m7g5yq/students-are-using-ai-to-write-their-papers-because-of-course-they-are
Instructional Designers in Faculty Success design and present learning solutions to continually enhance institutional and instructor performance. We collaborate closely with instructors to translate course objectives into meaningful, customized courses tailored to each instructor’s specific needs, leveraging an aptitude for design and development, along with excellent problem-solving and analytical skills.
Our technical expertise encompasses a range of programs and best practices, including Canvas, Quality Assurance, Universal Design, and more. Instructional Designers meet with academic components to answer questions about teaching and learning in one-on-one consultations, small group work, symposia, and workshops.
To request technical support, submit a Technology Service Desk email to start a ticket.