Generative Artificial Intelligence (GenAI) is a technology that generates material such as text, video, and images in response to prompts. Large Language Models (LLMs) are a kind of GenAI that produce written responses to prompts; they are trained on vast amounts of text (much of it gathered from the internet) and generate output by predicting the next word in a sequence. You’ve probably heard of LLMs, and you may have tried one yourself (we certainly have!). They have been all the rage since November 2022, when ChatGPT became available to the public.
Whatever your relationship to LLMs, we’re all still learning about their capabilities and limitations!
The purpose of this section is to explore the following:
What is the official college policy on using LLMs?
What are your instructor’s guidelines for using LLMs for classwork?
We recognize that LLMs are a changing technology. Even as we write this, what LLMs can and cannot do is changing. As such, it’s important to stay apprised of the changes and how they impact when, why, and how we may or may not use them when completing writing tasks. As with all writing, it is imperative that you consider the rhetorical context of your writing. Keep your audience, purpose, and context in mind as you make decisions about using LLMs.
At Macomb Community College, each individual instructor determines the degree to which students can or cannot use generative AI in their classroom. As such, there is no "official" college-wide generative AI policy. That said, the college has provided faculty with the following guidelines:
All faculty shall include generative AI use statements in their syllabi.
Unauthorized use of generative AI tools, with or without attribution, is considered plagiarism and a violation of the Academic Integrity Policy.
AI detection tools are not reliable and must not be the sole determinant of unacceptable AI use. Faculty are expected to engage students in a conversation in response to suspected, unauthorized AI use.
Because of MCC's guidelines, you can expect the following:
Your instructor will include their AI policy on the syllabus.
If you do not follow your instructor's generative AI policy, you may be in violation of MCC's Academic Integrity Policy.
If your instructor has questions about your writing process or the work you're submitting in the course, they may request a meeting to discuss their concerns.
Classroom policies regarding the use of generative AI are left up to each individual instructor. Your instructor's AI policy will include definitions of acceptable use (if, when, and how you can use it) and consequences for unacceptable use (what happens if you do not follow the AI policy).
Generally, AI policies fall into four categories: immersive, permissive, moderate, and restrictive. Check out your instructor's AI policy included on their syllabus. Into which category do you think your instructor's AI policy fits?
Immersive: Students are required to use AI and will receive feedback and grading on their use of AI.
Permissive: Students can freely utilize AI tools to assist in their assignments, such as generating ideas, proofreading, or organizing content.
Moderate: Students can use AI tools for specific parts of their assignments, such as brainstorming or initial research, but the core content and conclusions should be the student’s original work.
Restrictive: AI tools are prohibited for this assignment. All work must be the student's original creation.
Regardless of an instructor's AI policies, if you use generative AI during your writing process, you must acknowledge use of the tool. For information about how to cite generative AI use, ask your instructor and check out these sources:
Writing and Artificial Intelligence by Molli Spalter is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.