Exeter High School Student-Run Newspaper!
By now, AI seems to have a hold on just about everything. It pops up when you’re trying to do research, it’s all over social media in advertisements and filters, and there are AI songs, stories, artworks, and voices. Some people recognize the many issues with artificial intelligence, but many do not realize the true scope of the harm it does. I’d like to outline some of the biggest problems with AI’s prominence in the modern day, and how we might be able to stop it.
AI dominates writing, art, acting, voice acting, animation, and most other creative industries because it is cheap and fast. Big companies often defer to whatever is cheapest, meaning they will usually have an AI make what they need instead of hiring a human, because an actual person costs more and takes more time. Creatives already don’t get paid enough for the work they do, and now the few jobs they can get are being taken over by a machine.
On top of that, AI is trained to make art, writing, and every other creative thing it produces from existing human creations. It is fed the source material and then learns how to replicate it, often without the creator’s consent. The content AI spits out is not original; it’s a Frankenstein’s monster of stolen creativity.
It’s not just big corporations using AI to make creative work, either; everyday people do it too. Often the reason is that it’s easier to have the bot make something than to learn how to make it yourself. While it is easier, it is also significantly less fulfilling. Learning to create something, putting effort into it, and then improving at it is fun and rewarding. Having an AI take that over can’t possibly spark the same feeling of accomplishment.
In addition to replacing human creativity, AI is gradually coming to replace human skill sets and learning as well. Nearly everyone I know has used AI for at least some of their schoolwork. Using it every once in a while isn’t a huge problem, but it becomes an issue when a person uses it as a crutch for every assignment. Having AI write all your essays, do all your homework, and solve every problem for you will not help you learn.
You’re not assigned an essay on The Great Gatsby because color symbolism in literature will come up often in your adult life; you’re assigned it because you need to know how to analyze a text and formulate an argument based on it. You’re not going to use the quadratic formula regularly unless you go into a math-heavy career, but you do need to know how to work your way through a problem step by step. Having an AI do all the work for you means you will never learn those valuable skills.
Because of advances in AI image generation, several companies have made programs designed to create realistic-looking images or videos of real people. Such a program takes a person’s face and voice, then creates media of them that they never agreed to have made. This can make it look like someone said or did something they never said or did, often with malicious intent.
It becomes even more problematic when the software is used to create explicit material of someone who did not consent to such media being made of them. In May 2025, the U.S. enacted the TAKE IT DOWN Act, which prohibits the publication of non-consensual explicit material, both authentic and AI-generated. This helps protect people from having inappropriate images and videos of them spread online, but the software still exists. This kind of content can still be made; the punishment only comes when it is published and reported.
Often, artificial intelligence is used as a research tool. While on the surface this can seem like an easy way to gather information, it is also risky. AI does not search specifically for credible sources; it pulls information from anywhere on the internet, meaning much of what it presents as fact is unreliable. Analyses have found that the source AI models cite most is Reddit, a social media site where anyone can post anything, including misinformation. It is not a reliable source of factual information, yet AI often treats it as one. As a result, AI can mislead someone seeking facts. The only way to ensure you’re getting reliable information from a credible source is to do the research yourself rather than believe whatever an AI says.
Beyond spreading misinformation, many AI models have been trained to favor certain opinions or sides. A prime example is Grok, Elon Musk’s AI that is active on X (formerly known as Twitter). Time and time again, Grok has expressed extreme bias, at times even arguing in favor of antisemitism. This is not exclusive to Grok; several AI models have been found to give biased, skewed information to people searching for factual, unbiased answers.
Not only does artificial intelligence often provide information of questionable reliability, but its servers are also doing massive damage to the environment. The data centers that house AI servers use enormous amounts of water to cool the machinery. Large quantities of raw materials are needed to build the electronics, including rare earth metals that are often mined in ways that harm the environment. Those same data centers produce electronic waste, which can contain toxins like mercury and lead, and push that waste out into the environment. All of this adds up to an astronomically negative impact on the environment as a whole.
I find it’s hard to grasp how massive this impact is without visuals, so here is a graph provided by the East Carolina University Libraries.
Training a single AI model creates just under five times the carbon emissions of an average U.S. car over its entire lifetime, manufacturing included. With how much gasoline cars already harm the environment, allowing AI to keep producing that kind of impact will cause massive damage to the planet.
On top of everything, excessive use of artificial intelligence can be extremely harmful to a person’s mental health. There have been several cases of people talking with an AI chatbot to the point of seeing it as a friend and isolating themselves from others because of it. In some situations, the AI has gone as far as encouraging the person to harm themselves, and there have been instances in which the person did. Not only does this severely damage that person’s mental health, but the AI also learns from those conversations and continues to have more of them, perpetuating the problem with other users. The harm will keep growing as the AI has these kinds of conversations with more people.
All of this totals to a massive problem, but that doesn’t mean it’s an impending doom we can’t fix. Artificial intelligence has already faced harsh criticism, protests, and even lawsuits, all of which help fight against the growing AI issue. The best thing you can do is limit how much you use it. While AI seems to have reached everywhere, from the advertisements on YouTube videos to the top result in a Google search, you can minimize your usage by relying less on chatbots like ChatGPT, refraining from AI image and voice generation software, and seeking out sources that don’t use AI.
If fewer people use these programs, the companies behind them will lose money. If there is less money in the industry, fewer people will invest in it, and the technology will gradually become obsolete. Once it is no longer as popular and common as it is now, the vast majority of these problems will either be resolved or become far less significant. In fighting against AI, we are not only protecting ourselves; we are protecting our entire planet.
Congressional Research Service. “S.146 - 119th Congress (2025-2026): TAKE IT DOWN Act | Congress.gov | Library of Congress.” Congress.gov, 2025, https://www.congress.gov/bill/119th-congress/senate-bill/146. Accessed 22 December 2025.
Good Law Project. “AI giants are stealing our creative work.” Good Law Project, 11 April 2025, https://goodlawproject.org/ai-giants-are-stealing-our-creative-work/. Accessed 22 December 2025.
Klee, Miles. “Grok Calls Itself 'MechaHitler,' Spouts Antisemitic Comments.” Rolling Stone, 8 July 2025, https://www.rollingstone.com/culture/culture-news/elon-musk-grok-chatbot-antisemitic-posts-1235381165/. Accessed 22 December 2025.
Luccioni, Sasha. “Research Guides: A Guide to Artificial Intelligence (AI) for Students: Environmental Impacts.” ECU Libraries' Research Guides, 9 October 2025, https://libguides.ecu.edu/c.php?g=1395131&p=10318505. Accessed 23 December 2025.
Sharad. “Where AI Gets Its Information: What We Should Know About AI’s Knowledge Sources.” The Friendly CHRO, 19 August 2025, https://friendlychro.com/2025/08/19/where-ai-gets-its-information-2025/. Accessed 22 December 2025.
UN Environment Programme. “AI has an environmental problem. Here's what the world can do about that.” UNEP, 13 November 2025, https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about. Accessed 22 December 2025.
Weisenberger, Theresa M., et al. “Case Tracker: Artificial Intelligence, Copyrights and Class Actions | BakerHostetler.” Baker Hostetler, 2025, https://www.bakerlaw.com/services/artificial-intelligence-ai/case-tracker-artificial-intelligence-copyrights-and-class-actions/. Accessed 23 December 2025.
Yang, Angela. “Lawsuit claims Character.AI is responsible for teen's suicide.” NBC News, 23 October 2024, https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791?utm_source=NBC&utm_medium=iframely. Accessed 23 December 2025.
Yang, Angela, et al. “The family of teenager who died by suicide alleges OpenAI's ChatGPT is to blame.” NBC News, 26 August 2025, https://www.nbcnews.com/tech/tech-news/family-teenager-died-suicide-alleges-openais-chatgpt-blame-rcna226147. Accessed 22 December 2025.