We find ourselves at a paradoxical moment in higher education: we must teach with and about AI as a new part of our digital literacy landscape, all while needing to focus more than ever on what it means to compose as humans, to communicate as humans, to be human.
Generative Artificial Intelligence (GenAI or GAI) is what powers software applications or features within applications so they can produce text, image, audio, and video. GenAI is already a feature of personal, professional, and academic activities for many people and is likely to become an increasingly common and accessible tool.
AI tools used to generate text are large language models (LLMs): neural networks trained on enormous collections of text using many different parameters. What they become very good at, based on that training, is predicting the next most likely word in a sequence of words. This allows generative AI programs to produce sentences, paragraphs, and entire essays of readable prose in response to user prompts.
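To make the idea of next-word prediction concrete, here is a toy sketch in Python. It uses simple bigram (word-pair) counts over a made-up corpus rather than a neural network, so it illustrates the prediction idea only, not how LLMs are actually implemented:

```python
from collections import Counter, defaultdict

# Toy illustration of predicting the "next most likely word."
# Real LLMs learn these probabilities with neural networks trained on
# billions of parameters; here we simply count word pairs (bigrams)
# in a tiny invented corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which words follow which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most likely to follow `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# Generate a short sequence by repeatedly predicting the next word.
word = "the"
sequence = [word]
for _ in range(4):
    word = predict_next(word)
    sequence.append(word)

print(" ".join(sequence))
```

GenAI tools perform the same kind of prediction, but over probabilities learned from vastly larger bodies of text, which is why their output reads so fluently.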
ChatGPT is one well-known GenAI tool. GPT stands for:
Generative (it can generate new text)
Pre-trained (it has learned from a large dataset based on certain parameters)
Transformer (the training occurs in a specific type of neural network that encodes every word in the dataset as a set of numerical values)
To give a sense of the scale involved in training something like ChatGPT version 3.5: the dataset (material from the web) was about 45 terabytes (that’s 45 followed by twelve zeros, in bytes), and the training involved 175 billion parameters.
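The idea of coding every word with numerical properties can be pictured with a toy sketch. These vectors are invented for illustration; real models learn embeddings with thousands of dimensions:

```python
# Toy sketch of the "numerical properties" idea: each word is mapped to
# a list of numbers (an "embedding"), and words used in similar contexts
# end up with similar numbers. These values are invented for illustration;
# real models learn vectors with thousands of dimensions.
embeddings = {
    "cat":   [0.9, 0.1, 0.2],
    "dog":   [0.8, 0.2, 0.1],
    "essay": [0.1, 0.9, 0.7],
}

def similarity(a, b):
    """Dot product: a simple measure of how alike two vectors are."""
    return sum(x * y for x, y in zip(a, b))

# "cat" is numerically closer to "dog" than to "essay".
print(round(similarity(embeddings["cat"], embeddings["dog"]), 2))    # higher
print(round(similarity(embeddings["cat"], embeddings["essay"]), 2))  # lower
```

Because similar words end up with similar numbers, patterns the model learns about one word partly transfer to related words.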
Because of this training at scale, ChatGPT, like other GenAI tools (e.g., Google Gemini and Microsoft Copilot), can produce very polished prose in very short order.
And given how fast advances in generative AI, and in AI generally, are occurring, the power and sophistication of what AI can do and produce will only increase.
If you haven’t tried a GenAI tool yet, we encourage you to do so: discovering firsthand what these tools can produce, and how, will probably be much more helpful than having the computer engineering behind them explained in very general terms.
Despite their power and sophistication, however, and the seeming ease with which tools like ChatGPT can generate prose, GenAI tools remain just that: tools. They are at our disposal to use, but we should never simply accept GenAI output as-is. GenAI users, faculty and students alike, should always read AI-produced text carefully and critically. Because GenAI is trained on language datasets that contain linguistic and cultural bias, misinformation, and disinformation, its output will inevitably repeat these biases.
So, when you use a tool like ChatGPT, don’t forget to “chat” with it: revise prompts to produce better output, tell GenAI when it has not provided the output you wanted, prompt it to check for bias, and flag it when it is clearly wrong. The more high-quality language we feed into the dataset the system learns from, the better that system will become.
Of note, however: if you would prefer that a GenAI tool not collect your prompt inputs, most have an “opt-out” process. In ChatGPT, for example (at least as of this writing), you can adjust the Data Controls in the Settings menu so that the system does not collect or share your data.
As new and disruptive as AI-powered text generation tools might seem, they point us back in many ways to the foundation of teaching writing. The following principles, recommended by Sarah Z. Johnson and Jason Snart in their presentation, “AI and Writing: A Return to our Future Roots,” align with COD’s Composition program’s core principles.
While generative AI excels at providing efficiencies and improving productivity, limits should be placed around use that impedes or bypasses development of foundational skills.
We also need to be cognizant of ethical considerations around generative AI usage, such as how generative AI creators use human-provided prompts and content; how these platforms use intellectual property is currently being litigated. Furthermore, GenAI tools are often trained on texts that reinforce racist and sexist stereotypes. You can learn more about this issue from Antonio Byrd’s presentation “Practicing Linguistic Justice with Large Language Models.” The script is available here.
In short, our job now involves teaching AI literacy. AI use will be (if it’s not already!) a valuable professional and academic skill required of our students. We can empower students by helping them develop literacy skills that emphasize “choice architecture,” meaning that the writer is always asking: Why am I making these composition choices? And why is generative AI a good or bad choice for my purposes?
Rather than relying solely on evaluating a finished product, consider designing scaffolded projects with multiple due dates that measure progressive steps toward student learning.
Also, be aware that there is no reliable generative AI detector available, so be wary of tools that claim otherwise.
We encourage faculty to be reflective about how generative AI can and should impact their own pedagogical practices and assignments. In order to do this, we recommend that faculty build their own generative AI literacy and revise assignments and activities to better support students’ ethical and intentional usage of generative AI.
While it can feel intimidating or overwhelming, building your own generative AI literacy is both doable and beneficial! We recommend starting by just experimenting with what it can do:
Play around with AI. We recommend trying ChatGPT, Latimer, Gemini, and Consensus. Ask it questions. Respond to it with further directions, critiques, and so on. Get more comfortable with the technology–both its affordances and constraints.
Text to text - ChatGPT, Copilot, Gemini, Latimer
Text to image - Dall-E, Midjourney, Craiyon, ImageFX, Firefly, (Copilot as well)
Research focused - Consensus, Perplexity, Elicit, Keenious, ChatPDF
Put one of your current writing assignments into one of the generative AI tools and see what it generates.
Ask it to write something related to your classes–an email to students, a set of discussion questions about a particular reading, etc. See what it generates. Don’t forget to write back to it with follow-up questions, critiques, or further directions.
Since students are increasingly being required to use generative AI in other settings, including the workplace, we should consider how these tools can be helpful to the writing process. Not all generative AI usage is necessarily plagiarism or academically dishonest. (See “It’s time to rethink ‘plagiarism’ and ‘cheating’” below.)
In order to design assignments that take generative AI’s possibilities and challenges seriously, ensure that your assignments and class activities are directly linked to a course objective. Be transparent with your students about not only what you’re asking them to do but why it is important.
The following are a few activities that you can try to think this through:
Try revising one of your writing assignments by incorporating AI as a writing tool at some point in the writing process. The tool could be used for invention, peer review, drafting, and/or revision.
Teach students how to use AI for their writing assignments. Model for them in class or through an instructional screencast for online courses.
Have students write reflections (without AI) about their experiences using the writing tool. Have them articulate the affordances and constraints. Have them consider what it means to write with a writing tool. Have them consider tone and style and what it means to sound like a human.
Have students write something from scratch that includes intentional, known errors; ask AI to evaluate the passage for accuracy: what kinds of errors does it catch, if any?
Because we are all using different combinations of assignments and process work, a single generative AI policy wouldn’t be appropriate for all COD Composition instructors. However, we can apply several guidelines, recommended by the MLA-CCCC Task Force on Generative AI and Policy Development in its second working paper, to create our own policies:
Context-Specific Parameters: On your syllabus, clearly state when it is and is not appropriate for students to use generative AI in their work. For example, “Generative AI may be used when brainstorming a project or for revision assistance.” It may be the case that generative AI is okay to use for some assignments (or a portion of them) and not for others–that is also acceptable and should be clearly communicated to students.
Transparency and Ethical Acknowledgement: Make it clear that students are responsible for reporting how they used generative AI in their writing, and how you would like them to do that. For instance, you might ask students to submit all of the prompts they used, either as screenshots or as text, along with the assignment. Teach students how to ethically acknowledge generative AI as a source; the COD Library offers instructions on how to cite generative AI here and here. You can also learn more about the ethical complications of using generative AI in the section of this document called Principle 1: Ethical use of generative AI in an education context places learning at the center of practice.
Reflection: Asking students to write about why they made particular rhetorical choices in any given composing process can help instructors understand student decision-making, and, perhaps of even greater value, can help students become more aware of their own choices, even when they think they are not consciously making them. Reflective practice is even more crucial when AI is involved. Opportunities to write reflectively about rhetorical decision-making offer students the chance to disclose, intentionally and safely, when they have used AI and, of course, why. This can help instructors see when and how GenAI is shaping student work. It can also lead students to consider whether they are using AI to circumvent the demonstration of course objectives and competencies: when used to disguise the fact that a student cannot do a writing, research, or literacy task we expect them to be able to do, AI is clearly a tool for cheating or gaming the system; when used as a collaborative tool at strategic points in a composing process, it becomes a means to help the student demonstrate a competency rather than disguise its absence.
The statement below reflects the principles and recommendations listed in this Principles and Recommendations document.
Generative AI has a lot of potential applications for writing, and we’ll be exploring some of them in this class. Know that it’s important to take ownership of your voice and to not sell yourself short. You came here to write, so write. You came here to learn, so learn. However, figuring out when and why generative AI can be helpful is an important part of being a prepared communicator in lots of different settings. Therefore, we are going to use the following generative AI guidelines:
Any time you use exact words or ideas that come from another source, the source must be cited–including if it comes from generative AI. You can learn more about how to cite generative AI here.
You may use generative AI to assist with brainstorming ideas or outlines for any major assignment in this class. If you do so, please submit screenshots of your generative AI conversations along with the final submission.
Some assignments will allow you to use generative AI for different purposes. Whenever that is the case, you will find information on what is permissible and how to cite that work on the assignment description.
Source: University of Tennessee - Chattanooga
All submitted coursework must be your own original work. Appropriate use of quotation marks and citations, using the accepted citation style for this course, is required per the UTC Honor Code. While the use and/or inclusion of any materials derived by a Generative AI tool is generally prohibited, you may be given written permission to use such a tool for one or more assignments. Any coursework for which the use of Generative AI is allowed will include explicit directions on when and how to use such tools, as well as how to properly cite the resource. Failure to follow any of the aforementioned guidelines constitutes a violation of the Honor Code and will result in a referral to the Office of Student Conduct.
Source: Sarah Z. Johnson, Madison Area Technical College: “My syllabus language is a mashup modeled around a blurb I got from Kansas U and the language I helped write for Madison College's Academic Integrity page.”
I encourage you to try out Generative AI tools in this class for some purposes. That may include generating ideas, outlining steps in a project, finding sources, getting feedback on your writing, and overcoming obstacles on papers and projects. Using those tools to generate all or most of an assignment, though, will be considered academic misconduct. When you're in doubt, talk to me!
All ethical use of GenAI in this course will follow these parameters, which we’ll discuss further in class:
Responsibility—you are the author of what you submit.
Transparency—if you use AI, you disclose how you used it. This may include citation, prompt history, or other acknowledgment.
Reflection—you include metacommentary (e.g. a reflection letter or something comparable) to explain how GenAI contributed to your learning and the final product you submitted.
Madison College's statement on ethical use of GenAI is based on three core principles, which inform our class policy:
Principle 1: Ethical use of Generative AI in an education context places learning at the center of practice. While Generative AI excels at providing efficiencies and improving productivity, limits should be placed around use that impedes or bypasses development of foundational skills.
Principle 2: Acquiring Generative AI critical literacy is an emerging essential skill that must be cultivated through exposure, practice, productive feedback, and reflection.
Principle 3: Assessment practices that rely solely on evaluation of a discrete finished product are most vulnerable to abuses of generative AI. Instead, programs and faculty are encouraged to design assessments that measure progressive steps toward learning and mastery.
“Syllabi Policies for AI Generative Tools” curated by Lance Eaton.
“Faculty Help: Generative AI Resource Guide: Sample Syllabus Policies” (Santa Fe Community College).
In Spring 2024, a group of English 1101 students worked together to draft their own Generative AI policy. They analyzed how ChatGPT responded to a prompt in terms of genre awareness and rhetorical situation, then analyzed examples of syllabi here at COD so they could figure out how to write their own. The students hoped that their work could be a resource for other COD students and faculty as they reflect on their own practices.
These are a few common AI-related terms you may encounter; ChatGPT was used to produce the brief definitions below, which were then human-reviewed and edited:
Deepfake - An "AI deepfake" refers to the use of artificial intelligence algorithms to create highly realistic fake images, videos, or audio recordings, often manipulating or replacing the likeness of individuals in a way that can be difficult to discern from reality.
Generative AI - Generative AI refers to artificial intelligence systems that create new content, such as text, images, or music, often mimicking human creativity by generating novel outputs based on patterns learned from existing data.
Hallucination - AI hallucination refers to instances where artificial intelligence generates content that appears realistic but is entirely fabricated, mimicking human-generated text, images, or other media. This term underscores the potential for AI to create convincing yet entirely invented material, raising concerns about misinformation and manipulation.
Large language model (LLM) - A large language model is an advanced artificial intelligence program capable of understanding and generating human-like text based on vast amounts of data it has been trained on.
Natural language model - A natural language model is a computer algorithm designed to understand, generate, or process human language in a way that mimics human communication patterns and structures.
OpenAI. (2024). ChatGPT 3.5 (May 1 version). https://chat.openai.com/
(You can find additional terms and definitions here.)