Academic dishonesty policy statements regarding the use of AI for the International Conflict and Security (left) and the Contemporary Problems in Freedom of Speech and Press classes at the University of Minnesota Twin Cities campus on Aug. 9. PHOTO BY GABRIEL CASTILHO
In her office at the University of Minnesota, journalism professor Jane Kirtley works on her class's syllabus statement on the use of AI tools on Aug. 7, 2024. PHOTO BY GABRIEL CASTILHO
_____________________
By Gabriel Castilho / The Hubbard School
University of Minnesota political science professor Ron Krebs said that during the spring semester he received multiple student writing submissions that read as if they had been produced with some kind of AI assistance, such as ChatGPT, which his syllabus prohibited.
“Turns out those students hadn’t read the syllabus,” Krebs said. “They were in violation. They admitted when I confronted them.”
But this semester, university teaching specialist John Bartucz will teach a writing-intensive computer science course that uses artificial intelligence as an assistant to provide feedback based on a rubric.
“I think this is, to me, the most logical role for artificial intelligence,” Bartucz said.
As they prepare their syllabi for the fall, University of Minnesota instructors vary in how they handle AI in their teaching. Some are prohibiting any AI use, while others are fully embracing it and even incorporating it into their coursework.
Krebs' ban on ChatGPT for student submissions has led him to experiment with the tool himself while grading assignments, to see how students could use it.
He had to rethink how he delivered some reading questions in his International Conflict and Security class, he said, because asking a "regurgitation" question to get students to engage with a critical point in a reading had become pointless.
Krebs said he has "rejiggered" his assignments and raised expectations, focusing more on "thought" questions that require students to engage substantively with the reading. If they answer in a vague, ChatGPT-like way, they will get a poor grade.
“ChatGPT will be able to summarize a critical point in a text probably better than I can,” Krebs said. “The whole purpose of the assignment was to make sure the students were zeroing in on the important issue … If ChatGPT can provide it, there is zero value to it.”
Media law professor Jane Kirtley’s experience with students’ use of artificial intelligence in her midterms and final exams led to a major change in her delivery method, she said.
“When I used to give my exams in the undergraduate (mass communication) law course, the last time I had it as a Canvas-based exam there were several instances of students who had very clearly used AI,” Kirtley said. “That’s why I changed it to an in-person exam.”
Instead of delivering her exams through an online platform, Kirtley now gives blue book exams in her Media Law course.
Kirtley said there have already been multiple scandals in which practicing lawyers submitted AI-generated briefs in court.
Last year, a New York federal judge sanctioned lawyers who submitted a legal brief written by ChatGPT that cited nonexistent court cases, according to CNBC. The judge imposed a $5,000 fine.
“If you’re just starting out studying law for the first time, and you don’t really have an idea of what is going on, then you’re going to be much more vulnerable to what’s happened to those lawyers,” she said.
Linguistics professor Diti Bhadra, who is teaching the Human Mind and Human Languages course, said it is an educator's responsibility to be vigilant about student AI use. She does not believe the university has tools specifically designed to detect students' use of artificial intelligence.
The University of Minnesota website has a section dedicated to restricting ChatGPT use in classes. It warns, however, that AI detection tools such as OpenAI's classifier and ZeroGPT have high rates of false positives and false negatives and should be used only as a last resort.
The University of Iowa employs a similar approach but recommends instructors refrain from using AI detectors because they can unfairly penalize students.
“If you look at all these articles checking if a certain thing is written by AI or not, it’s very tricky,” Bhadra said. “Suppose you give it a prompt, the program tells you it’s 80% AI generated and 20% human generated. If you take out a period or a comma then it will tell you it’s 2% AI-generated.”
Kirtley said she and her teaching assistant do not have time to police their students' use of artificial intelligence.
“I don’t like the idea of basically creating the presumption that students are guilty until they are innocent,” Kirtley said. “It’s simply not feasible if you’ve got 65 to 70 students in a class and even more so for the big classes.”
Embracing AI
Bartucz, who has embraced AI in his computer science courses, said that for one of the most important essays in the class, students will receive feedback from him and the teaching assistants, then get feedback from ChatGPT.
“I want them to be thinking about how much help is too much help,” Bartucz said. “So, if they think of it as a TA, a friend, or a group partner, or something like that—if it is doing all the work, then that is not your work anymore.”
Additionally, building AI skills had been part of the university's business analytics coursework for years before ChatGPT appeared, Carlson School of Management professor De Liu said in an interview with the Star Tribune.
“Companies are seeing their employees leveraging AI as a productivity tool, whether they’re generating code, generating writing or generating ideas,” Liu said.
According to the university website, professors have three options for syllabus statements concerning ChatGPT: embrace its use, allow limited usage or prohibit its use.
Other Big Ten universities take similar approaches to AI use. Michigan State University does not have a university-wide policy on AI in the classroom; instead, it delegates the decision of whether to allow AI to individual instructors.
The University of Iowa, on the other hand, employs an approach closer to the University of Minnesota’s, with an extra option for professors who want to encourage AI use for certain assignments.
Bartucz said in an email that, unlike last year, the syllabus for his upcoming course will have a specific section dedicated to AI.
Kirtley's and Krebs' syllabus statements on the use of artificial intelligence tools prohibit students from submitting all or any part of an assignment to an online learning support tool, incorporating any part of an AI-generated response in an assignment, or asking AI to improve their work.
“There is so much hallucination and misinformation,” Bhadra said. “If the students are sending that in and considering that to be knowledge, no, that’s not knowledge. LLMs are Large Language Models—if you look at the math they literally predict what word will come next because that’s their training data—they are not knowledge models.”