Is using AI for schoolwork cheating?
artificial | intelligence | inclined | nonetheless | model
Zachariah, a young lawyer, was overwhelmed with work. He decided to use ChatGPT to help him write a motion for a case. Zachariah was thrilled with the time it saved him and sent the motion on to the court. Only then did he notice that ChatGPT had cited lawsuits in the motion that did not exist! When artificial intelligence presents made-up information as if it were fact, the error is called a “hallucination.” Zachariah was horrified when he realized he had submitted false information to the court. He had to explain and apologize to the judge handling the case. Zachariah was later fired.
A 2023 New York Times article about AI hallucinations reported:
“hallucination rates vary widely among the leading AI companies. OpenAI’s technologies had the lowest rate, around 3 percent. Systems from Meta, which owns Facebook and Instagram, hovered around 5 percent. The Claude 2 system offered by Anthropic ... topped 8 percent. A Google system, Palm chat, had the highest rate at 27 percent.”
At the time of the article, which of the following statements was true?
A. If you ask questions to OpenAI’s technologies, one in three answers is likely to contain an error.
B. Palm chat will produce hallucinations 9 times more often than OpenAI.
C. Systems from Meta will produce errors 3% more often than Claude 2.
D. Approximately 1 out of 6 responses from Palm chat will contain a hallucination.
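One way to check each choice is to turn the quoted percentages into simple arithmetic. The short Python sketch below is a worked check, not part of the article; the dictionary simply restates the rates from the quote.

```python
# A quick check of the statements above, using the rates quoted in the
# article (the dictionary below restates those numbers as decimals).
rates = {
    "OpenAI": 0.03,     # about 3 percent
    "Meta": 0.05,       # about 5 percent
    "Claude 2": 0.08,   # about 8 percent
    "Palm chat": 0.27,  # about 27 percent
}

# Statement A: on average, one error per how many OpenAI answers?
print(1 / rates["OpenAI"])
# Statement B: how many times Palm chat's rate exceeds OpenAI's
print(rates["Palm chat"] / rates["OpenAI"])
# Statement C: the gap between Meta's rate and Claude 2's (note which is larger)
print(rates["Claude 2"] - rates["Meta"])
# Statement D: on average, one hallucination per how many Palm chat responses?
print(1 / rates["Palm chat"])
```

Compare each printed value with the wording of statements A through D before choosing an answer.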
A teacher assigned an essay for homework. Half of the students used OpenAI’s program to generate their essays, one quarter used systems from Meta, and one quarter used Palm chat. Using the percentages from the article above, what percent of the essays are likely to include hallucinations?
A. A little less than 10% of the essays
B. Approximately 5% of the essays
C. Almost 13% of the essays
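One way to reason about this question is as a weighted average: each group’s share of the essays multiplied by that group’s hallucination rate, then added together. Here is a minimal sketch of that calculation, assuming each essay hallucinates at its system’s quoted rate.

```python
# Expected share of essays containing hallucinations, computed as a
# weighted average of the article's quoted rates.
shares = {"OpenAI": 0.50, "Meta": 0.25, "Palm chat": 0.25}
rates = {"OpenAI": 0.03, "Meta": 0.05, "Palm chat": 0.27}

expected = sum(shares[s] * rates[s] for s in shares)
print(f"{expected:.1%}")  # print the expected percentage; compare with the choices above
```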
Large language models like ChatGPT hallucinate because they were designed to generate text that follows the same patterns as other texts. They do not think the way humans do. If a model does not have a needed fact, it is inclined to make one up. Those who support AI say this is a technical problem that will be solved over time. Nonetheless, some educators worry that students will learn incorrect information and that AI will increase the spread of misinformation in the future. Do you think AI should be kept out of schools if it is not 100 percent reliable? Or is it the responsibility of students to check their facts?
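To make “following the same patterns as other texts” concrete, here is a toy sketch of our own, far simpler than any real large language model: it records which word tends to follow which in a tiny sample, then chains those pairs together. The result can sound fluent while having no built-in check on whether it is true.

```python
import random

# A toy "pattern follower": learn which word follows which in a small
# sample text, then generate new text by chaining those pairs. Real
# large language models are vastly more sophisticated, but they share
# the core idea: predict likely next words, with no guarantee of truth.
sample = ("the court reviewed the motion and the court denied the motion "
          "because the motion cited cases and the cases were real")

pairs = {}
words = sample.split()
for a, b in zip(words, words[1:]):
    pairs.setdefault(a, []).append(b)  # map each word to its observed successors

random.seed(1)  # fixed seed so the sketch is repeatable
word = "the"
output = [word]
for _ in range(12):
    word = random.choice(pairs.get(word, words))  # pick a plausible next word
    output.append(word)

print(" ".join(output))  # fluent-looking text, not guaranteed to be true
```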