ChatGPT
2022-Present
Sources and Suggested Readings
Bhaimiya, Sawdah. "A Professor Hired by OpenAI to Test GPT-4 Says There's 'Significant Risk' of People Using It to Do 'Dangerous Chemistry.'" Business Insider, 14 Apr. 2023, www.businessinsider.com/open-ai-gpt4-high-risk-used-for-dangerous-chemistry-expert-2023-4.
Davis, Kirsten. "The Rhetoric of ChatGPT: What ChatGPT Had to Say About Its Connection to Rhetoric and What We Can Learn from That Response." Appellate Advocacy Blog, 2 Mar. 2023, lawprofessors.typepad.com/appellate_advocacy/2023/03/the-rhetoric-of-chatgpt-what-chatgpt-had-to-say-about-its-connection-to-rhetoric-and-what-we-can-lea.html.
Ferreira, Gabby. "Ask an Expert: What Are the Ethical Implications of ChatGPT?" Cal Poly News, 28 Mar. 2023, www.calpoly.edu/news/ask-expert-what-are-ethical-implications-chatgpt.
Image credit: Gerd Altman
ChatGPT ("Chat Generative Pre-trained Transformer") is a type of artificial intelligence. Users interact with it by asking questions, and it produces human-sounding answers by drawing on billions of data inputs to predict what a "typical" answer might be. While other forms of computer-generated language, such as autocorrect and virtual assistants (Siri, Alexa, etc.), have been available for years, ChatGPT emerged in late 2022, and its noteworthy ability to approximate human language has caused a new round of social, educational, and ethical disruption.
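The idea of predicting a "typical" continuation from prior text can be illustrated with a toy sketch. The example below uses a simple word-pair (bigram) count over a tiny made-up corpus; the corpus, function names, and counting approach are all illustrative assumptions. ChatGPT itself uses a large neural network trained on vastly more data, not frequency counts.

```python
from collections import Counter, defaultdict

# Illustrative toy corpus (not real training data).
corpus = "the cat sat on the mat the cat ran on the grass".split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word that most often follows `word` in the corpus."""
    counts = followers[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Even this crude model "sounds right" for its tiny corpus while understanding nothing, which is the crux of the article's point: fluent prediction is not comprehension.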
ChatGPT's programmers designed it to treat language as an art: natural sounding yet also clear and readable. However, it is not objective. Through its algorithms, training data, and other design choices, ChatGPT brings a perspective to what it says, meaning it carries inherent biases and an incomplete understanding of complex topics. Interacting with it can nonetheless give users the impression that they are talking to a sentient being or a super-human technology.
However, processing power does not make ChatGPT infallible. Like the humans who programmed it and who wrote the texts it mines for content, it gets things wrong, even as it "speaks" with an air of authority. In other words, ChatGPT is a "rhetorician" because it processes information and performs as a speaker. Although the program is not human, its programmers are, meaning users must be aware that any output from the artificial intelligence is a rhetorical production.
Just a few months after its broader release in 2023, ChatGPT is being used to improve grammar and writing style, suggest genre-specific content, and summarize scientific research. In sum, it has potential as a kind of rhetorical business partner. However, human writers and speakers must take care not to be seduced by the authoritative voice it is designed to produce, because it has serious limitations and risks. For example, ChatGPT can only guess at who its audiences are, and it can produce damaging information, such as instructions for dangerous chemistry or crafted hate speech.
Scholars are considering the implications of artificial intelligence as it rapidly makes its way into our workplaces and educational institutions. At this point, the technology is developing faster than humans have time to think through what it will mean. Although industry leaders have called for a pause on further development, or for government regulation of it, the cat may already be out of the bag.
Contributed by Nancy Small, Spring 2023