As we all learned from The Good Place, the modern world is complicated, and our actions have unintended consequences.
A big unintended consequence of using automated image and text generation technology is that the data centers that process these requests waste huge amounts of energy and water.
Where we can, I think it is good for all of us to minimize our carbon footprint. (I even add "-ai" after my Google searches so that they don't automatically waste additional resources!)
ChatGPT Is Everywhere — Why Aren't We Talking About Its Environmental Costs? (Teen Vogue, May 2025)
A bottle of water per email: the hidden environmental costs of using AI chatbots (Washington Post, Sept 2024)
ChatGPT is having a really bad impact on the environment (Tech Radar, Sept 2023)
Tools for automatic media generation require large, well-annotated sets of training data. The companies that make these tools have stolen that data from authors and artists, collecting it without their consent and without compensating them. They then sell these tools as a way to replace those same authors and artists under the guise of efficiency, trying to automate the very work that people enjoy and make a living doing.
In addition, training these tools requires human workers who test and determine their limits, which involves prompting them to produce violent and disturbing content and then reviewing it. These are low-paying jobs, and the people who do them are not adequately recognized or supported for their work.
Hayao Miyazaki on the use of AI: “I am utterly disgusted” (Far Out Magazine, May 2023)
Why A.I. isn't going to make art (The New Yorker, Aug 2024)
The Exploited Labor Behind Artificial Intelligence (Noema, Oct 2022)
OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic (Time, Jan 2023)
A tool like ChatGPT is designed to make things up. It doesn't search for information, and it doesn't think. It has no understanding of the world. The only thing it does is use mathematical formulas to generate words based on probability. It might get a lot of things "right", but unless you are able to verify for yourself, you can't trust that it will get everything (or anything) right.
There are lots of examples of this. I have posted a particularly ironic one here: a suggested summer reading list that was created with automatic text generation and that named several books that don't exist. There are also examples of academic papers with fake citations, and of automatically generated recommendations that would cause injury if they were followed. In all of these cases, the tool is doing exactly what it is designed to do: making things up based only on probable word combinations. The results can't be trusted.
If you can put aside the environmental and ethical issues, it might be fun to think of an automatic text generation tool as something like a Magic 8 Ball, a tarot spread, or your horoscope: words and images for us to think about, but not grounded in truth or fact.
AI Search Has A Citation Problem (Columbia Journalism Review, Mar 2025)
ChatGPT is bullshit (Hicks, Humphries, and Slater 2024)
ChatGPT: these are not hallucinations – they’re fabrications and falsifications (Emsley 2023)
Americans’ use of ChatGPT is ticking up, but few trust its election information (Pew Research Center, Mar 2024)
When I was a student taking Statistics, we had to learn how to calculate things like a standard deviation by hand, even though we all had calculators that could do it for us. The reason was so that we would understand the logic behind a standard deviation, and so that we would know what the tool was doing for us.
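Here is a small illustration of that point in code (my own toy example, not anything from that course): computing a sample standard deviation step by step, and then letting a library function do the same thing. The answers match, but only the first version shows you what the tool is doing for you.

```python
import math
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

# "By hand": the same steps you would do on paper.
mean = sum(data) / len(data)
squared_deviations = [(x - mean) ** 2 for x in data]
by_hand = math.sqrt(sum(squared_deviations) / (len(data) - 1))  # sample standard deviation

# "The calculator": a library call that hides those steps.
print(by_hand, statistics.stdev(data))  # both print approximately 2.138
```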
I'm a professor of linguistics. I like reading and writing, and I am in awe of what humans can do with language. As an academic, I value knowing where information comes from. As somebody who lives in the US, I see that people are very easily manipulated when they cannot evaluate the sources of the information they receive.
All of these factors make me opposed to the use of automatic text generation in educational and academic contexts. Reading and writing are valuable skills that require practice. We cannot rely on automatic text generators when reading and writing are the skills we seek to develop.
Why synthetic text is incompatible with science blogging (Dingemanse 2025)
Letting AI Read for Us Can Undermine Our Thinking (Stanford University Press Blog, Jan 2024)
Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance (Fan et al. 2024)
Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students (Abbas et al. 2024)
A big reason to resist "AI" is that it is overhyped.
Much of my own understanding of "AI" and "AI Hype" comes from Emily Bender, particularly an excellent video that she made in 2023, and her co-authored 2025 book:
How should regulators think about "AI"? (Bender 2023, see also this transcript)
The AI Con: How to Fight Big Tech's Hype and Create the Future We Want (Bender and Hanna 2025)
Some of the specific key points that I have learned from these works are:
"AI" is a marketing term. A wide variety of disparate technologies are sold as "AI", and it is important to replace this term with more specific terms. One option is automation:
When we think about automation, we can ask the following questions:
What is being automated? How well does the technology fit the intended use?
Who benefits from the automation? Who is being harmed?
Who is accountable for the automated system? What existing ethical or legal regulations already apply?
We can also recognize that different types of tasks can be automated (for better or worse):
Decision making (for example, screening resumes)
Classification (for example, organizing digital photos)
Recommendation (for example, streaming movie suggestions)
Transcription/Translation (for example, captioning)
Text/Image Generation (for example, ChatGPT)
Automatic text generation is made possible by Large Language Models (LLMs):
LLMs model the probability of a word, given the words that come before it, similar to autocomplete.
LLMs are made using a large body of training data and a network of weighted mathematical functions that are "trained" to reproduce the word patterns in the training data. The trained model is then applied in other contexts.
LLMs capture which other words a specific word often co-occurs with. These co-occurrence patterns are stored as "embeddings" or "semantic vectors" and are used to model things like semantic similarity.
LLMs do not search for information and they do not think. They "extrude synthetic text" based only on what words often appear together in their training data. This is not a bug; it is how they are designed (a toy sketch of this appears after this list).
LLMs aren't people. They aren't even close (...and I can't believe this has to be said).
Language is a powerful cue to what other people are thinking. When we see language, we reflexively assume that there is a mind behind it.
But fabricated text from an LLM is not made by a mind; it is procedurally generated by weighted mathematical functions.
The only meaning that can be found in fabricated text is the meaning that we, the humans, impose on it.
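To make the "autocomplete" point above concrete, here is a minimal sketch (my own toy illustration in Python, not how any commercial system is actually built): a model that counts which words follow which other words in a tiny "training corpus" and then generates text by sampling a probable next word. Real LLMs use web-scale corpora and billions of weighted parameters instead of a simple count table, but the basic move is the same: pick a likely next word, with no model of truth or meaning behind it.

```python
import random
from collections import defaultdict, Counter

# A tiny "training corpus" (a stand-in for the web-scale text real LLMs are trained on).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Training": count how often each word follows each other word (bigram counts).
next_word_counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    next_word_counts[w1][w2] += 1

def generate(start, length=8):
    """Extrude text by repeatedly sampling a probable next word."""
    words = [start]
    for _ in range(length):
        counts = next_word_counts[words[-1]]
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# Possible output: "the cat sat on the rug . the dog"
# Fluent-looking, but nothing here "knows" whether any cat was ever on any rug.
```

The output can look like grammatical English because English word sequences are what was counted; whether the output is true never enters into it.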
Suggesting that machine automation of specific tasks is on par with human abilities like thinking, empathizing, or experiencing the world is inherently dehumanizing. Automating tasks may be useful, but we need to be careful how these technologies are sold to us and how we think about them.
My colleagues in Linguistics at Gallaudet posted a position statement about AI tools as they relate to ASL, specifically: ASL and AI tools
Mark Dingemanse always has excellent takes and resources, including this: Generative AI and research integrity
I have made a sample AI policy for graduate syllabi: AI Policy Example
I may use these examples in my next Cognitive Linguistics class (again, this is not a flaw, it is the system working as designed): 'You Can't Lick a Badger Twice': Google Failures Highlight a Fundamental AI Flaw
Seen on Bluesky: Sepehr Vakil's AI, Equity, and Public Education syllabus
A self-study about students using AI (take it with a grain of salt): Anthropic Education Report: How University Students Use Claude
Curated by Ryan Lepic, Last updated August 2025