By: Michael McVey, Professor in the Teacher Education Department
In a record-breaking two months, ChatGPT, the chatbot driven by artificial intelligence and trained on a large language model drawn from across the Internet, reached 100 million users. It took TikTok about nine months to reach that many and Instagram roughly two and a half years.
The present trend in many articles written about ChatGPT is to begin with a tongue-in-cheek disclaimer that the article you are about to read was not generated by artificial intelligence.
I should not have to make such a claim since most readers of this blog post are faculty members with years of experience crafting the English language and bending it to their academic will. They will recognize in an instant that my left-branching sentence structure, my subtle literary allusions, and my ‘perplexity’ of word use will assure any mathematical algorithm that a human wrote these words.
To assure you that I am, indeed, a human being, at the end of this post I will run Edward Tian’s freely available test from GPTZero and share the resulting score with you. The test reviews structures characteristic of human writing, specifically perplexity, a measure of how unpredictable the text is, and “burstiness,” a measure of how much that unpredictability varies from sentence to sentence. Similar tests for humanity check for the presence of artifacts such as correct grammar, standard spelling, proper punctuation, and coherent writing around a theme, skills which would obviously indicate a piece was written by a computer and not an undergraduate.
I will pause to let that settle in.
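For the technically curious, here is a rough sense of what a detector like GPTZero measures. This is not Edward Tian’s actual code, just a minimal Python sketch, assuming the Hugging Face transformers library with GPT-2 as a stand-in scoring model: perplexity is how surprised the model is by a sentence, and burstiness is how much that surprise varies from sentence to sentence.

```python
# A minimal sketch of perplexity and "burstiness"; not GPTZero's actual method.
# Assumes: pip install torch transformers
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_perplexity(sentence: str) -> float:
    """How 'surprised' GPT-2 is by the sentence; flatter prose scores lower."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # The model's loss is the mean negative log-likelihood per token.
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

def burstiness(sentences: list[str]) -> float:
    """Standard deviation of per-sentence perplexity: the variation in 'surprise'."""
    scores = [sentence_perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return (sum((x - mean) ** 2 for x in scores) / len(scores)) ** 0.5

sentences = [
    "The cat sat on the mat.",                        # predictable, low perplexity
    "My syntax, like my coffee, branches leftward.",  # quirkier, higher perplexity
]
print([round(sentence_perplexity(s), 1) for s in sentences])
print(round(burstiness(sentences), 1))
```

Uniformly predictable sentences score low on both measures, which is the statistical signature these detectors associate with machine-generated text.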
As you well know by now, the ChatGPT site will craft a written response to almost any conceivable prompt. I say ‘almost’ since there are now a few guardrails that safeguard against violence and racism. One major downside of the tool is that those of us who teach students trained in the treasured and timeless five-paragraph essay are quickly realizing that those same students will discover ChatGPT can make quick work of an uninspired writing assignment.
At the Faculty Development Center’s most recent CONNECT Conference, I shared the floor in one session with colleagues Susan Bushinski (Nursing) and Ann Blakeslee (English) to offer suggestions for reimagining and reconsidering our writing assignments. Writing as a process, as Ann noted, is essential to learning and critical thinking.
Colleagues from all over the world have responded to the call to reconsider some of our heirloom assignments. If I could be so bold as to clump several of the better responses into one category, it would be this: develop and use academic tasks that are iterative in nature, building gradually upon a simple framework and elaborated upon throughout the semester. This is what I do with my major end-of-semester project. I take pride in watching my students build on a simple lesson structure, which they then infuse with the new tools they have learned and review in terms of sound pedagogical planning. ChatGPT can even play a minor role in fleshing out their ideas in the early stages, and I am not discouraging its use.
Lest any reader think I have been smitten by this tool, I should note that my aversion to quizzes has only been aggravated by ChatGPT. It is possible to input a large chunk of text, such as some pages from a class reading, and invite an AI tool (and there are many that can do this already) to generate a quiz for students. As time-saving as that may seem at first, quizzes are not necessarily designed to address misconceptions about a topic; they focus primarily on immediate recall and usually do not serve as tools for developing learning. I worry about my teacher candidates taking the easy route by using such tools.
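To make that worry concrete, generating such a quiz takes only a few lines of code. The following is an illustrative sketch using OpenAI’s Python library, not the method of any particular quiz tool; the model choice and prompt wording are my own assumptions.

```python
# Illustrative only: how little effort machine-generated quizzes require.
# Assumes: pip install openai, with an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def quiz_from_reading(reading_text: str, num_questions: int = 5) -> str:
    """Turn pages of a course reading into recall-style multiple-choice questions."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "You write multiple-choice quizzes for undergraduates."},
            {"role": "user",
             "content": f"Write {num_questions} multiple-choice questions, "
                        f"with an answer key, based on this reading:\n\n{reading_text}"},
        ],
    )
    return response.choices[0].message.content
```

Note what the prompt asks for, and what it does not: immediate recall of the passage, with no attention to misconceptions or to developing learning.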
Gloom and worry aside, there are some joyous and fresh approaches to teaching that center writing as a process, championed by colleagues who have leaned in to ChatGPT and incorporated it with a clear recognition of its inherent weaknesses. What follows are some of my favorites.
Prompt Dissection begins with the prompt a student provides a chatbot. After reviewing the inconsistencies or inaccuracies in the response, the student revises the prompt and tries again. Each successive response helps them figure out where the weaknesses in their own queries reside.
Find the Biases is an opportunity for learners to engage in discussions aimed at recognizing blind spots in the chatbot’s responses and then figuring out how to incorporate multiple viewpoints.
Spot the Human is a great opportunity for students to dive deep into a text and try to ascertain whether it was generated by artificial intelligence or by a human. They will have to focus on nuance, passion, and even fallibility for clues as they reach their conclusions.
The use of AI-driven tools derived from large language models to generate text is at once exciting and ripe with the potential to alter the landscape of teaching and learning.
But back to Edward Tian’s GPTZero tool. After reviewing this blog post, it scored the post’s perplexity (the general randomness of my writing) at 98.258 and its burstiness (the variation in my perplexity) at 69.562.
I am delighted to report that, according to the algorithm, the blog post you just read is “. . . likely to be written entirely by a human.”
Not sure I appreciate the term ‘likely’ but I’ll take it.
Michael McVey is a professor in the Teacher Education Department and specializes in educational technology. Beyond EMU, he serves as the elected President of the Saline Area Schools Board of Education. He also serves as a Director for the International Society for Technology in Education (ISTE).