A January 2023 survey from Study.com highlighted concerns teachers have with ChatGPT and its use in the classroom:
34% of educators believe ChatGPT should be banned from schools
72% of college educators who are aware of ChatGPT are concerned about its impact on cheating
89% of students stated they had used ChatGPT on a homework assignment
48% of students stated they had used ChatGPT on a take-home test/quiz
53% of students stated they had used ChatGPT to write an essay
ChatGPT's parent company, OpenAI, collects a wide range of data from users, including account information, user data, device information, content, and communications. The privacy policy states that this data can be shared with third-party vendors, law enforcement, affiliates, and other users.
Information provided by ChatGPT is not always accurate; these fabricated responses are often called "hallucinations." An April 2023 article entitled "Why ChatGPT and Bing Chat are so good at making things up" explains how this happens. It also argues that these situations should instead be called "confabulations": when someone's memory has a gap, the brain convincingly fills in the rest without intending to deceive others. (Edwards, 2023)
ChatGPT is also not a reliable source of factual information, because factual accuracy is not its intended purpose. An article entitled "Analysis: ChatGPT is great at what it’s designed to do. You’re just using it wrong" provides a clear explanation of how it works:
A language model like ChatGPT, which is more formally known as a “generative pretrained transformer” (that’s what the G, P and T stand for), takes in the current conversation, forms a probability for all of the words in its vocabulary given that conversation, and then chooses one of them as the likely next word. Then it does that again, and again, and again, until it stops. So it doesn’t have facts, per se. It just knows what word should come next. Put another way, ChatGPT doesn’t try to write sentences that are true. But it does try to write sentences that are plausible. (May, 2023)
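The loop May describes, form a probability distribution over the vocabulary, pick a likely next word, and repeat, can be sketched in a few lines. This toy example uses a hand-written bigram table standing in for the real model; ChatGPT conditions on the entire conversation and has a vocabulary of tens of thousands of tokens, but the sampling loop is the same idea.

```python
import random

# Toy stand-in for a language model: for each word, a probability
# distribution over possible next words. (Hand-written for
# illustration -- not a trained model.)
BIGRAM_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "<end>": 0.3},
    "dog": {"sat": 0.7, "<end>": 0.3},
    "sat": {"<end>": 1.0},
}

def sample_next(word, rng):
    """Sample the next word from the model's probability distribution."""
    dist = BIGRAM_PROBS[word]
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(rng=None):
    """Choose a plausible next word again and again until the model
    emits a stop token -- exactly the loop described above. Nothing
    here checks whether the output is true, only whether it is likely."""
    rng = rng or random.Random()
    word, out = "<start>", []
    while True:
        word = sample_next(word, rng)
        if word == "<end>":
            return " ".join(out)
        out.append(word)
```

Running `generate()` produces sentences like "the cat sat": plausible sequences of words, with no notion of fact anywhere in the process.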
GPTZero and other AI writing detectors are not reliable. These detectors are themselves computer models trying to identify patterns their algorithms associate with artificial intelligence. They will not solve the problem of plagiarism from platforms such as ChatGPT, and they can incorrectly label human-created writing as potentially AI-generated, as discussed in a posting from Dr. Gegg-Harrison of the University of Rochester. If that happens, how could a student prove they actually produced the material?
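To see why such detectors can misfire, consider a deliberately crude sketch (this is not GPTZero's actual method): detectors of this kind typically score a passage by how statistically predictable it is, then compare the score to a threshold. Any threshold-based classifier produces both false positives and false negatives, so plain, formulaic human writing can be flagged while lightly edited AI text slips through.

```python
# Crude proxy for "predictability": the fraction of very common words.
# Real detectors use model-based scores (e.g. perplexity), but the
# thresholding problem illustrated here is the same.
COMMON_WORDS = {"the", "a", "is", "of", "and", "to", "in", "it", "that", "was"}

def predictability_score(text):
    """Return the fraction of words in `text` that are very common."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in COMMON_WORDS for w in words) / len(words)

def looks_ai_generated(text, threshold=0.5):
    """Flag text whose predictability exceeds an arbitrary threshold.
    Because the cutoff is arbitrary, some human writing will score
    above it and be mislabeled -- the false-positive problem that
    leaves a student unable to prove authorship."""
    return predictability_score(text) > threshold
```

A simple sentence full of common words scores above the cutoff and gets flagged, regardless of who actually wrote it; distinctive technical prose scores low even if a machine produced it.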