In this article, Connor Upshaw discusses recent advancements in artificial intelligence and their effects.
THE RISE OF AI
Written by Connor Upshaw
It has been seven months since I first explored the growing field of AI art and the implications it holds for artists the world over. A lot has changed between Nov. 2022 and May 2023, and AI is expanding at a pace reminiscent of the internet's early years. Technologies that would have seemed like science fiction not even two years ago are now easily accessible to anyone, and they are only getting more advanced by the day.
Released on Nov. 30, 2022, ChatGPT was what truly broke the dam on what AI is capable of. Up to that point, AI had been advancing at a startling rate through technologies such as AI art and early chatbots, but its capabilities were still largely unknown to the general public. Little did most people know, ChatGPT would soon change everything. With its release, a whole world of possibilities opened up to anyone and everyone. By simply typing a prompt into the generator, people could instantly receive an essay, a story, a song, or an answer to nearly any question they could think of. The tool is highly useful in a variety of situations, but it also presents unique challenges. One such challenge is its potential for cheating: students everywhere were quick to realize that they could use this new technology to write their work for them. It also revived the same worry that had surfaced with the rise of AI art: that technology such as ChatGPT would replace the work of real writers and journalists. For instance, if a news website were able to produce high-quality articles with no effort, and without having to pay writers, would it? That is a difficult question. AI is already quite capable of writing fluent and effective articles. For now, human writers remain better at more complex pieces, but AI is getting more advanced by the day.
Image taken from a Scientific American article. This AI-generated image was submitted to the Sony World Photography Awards and won first place in the creative photography category, at which point the artist, Boris Eldagsen, turned down the award. He commented that he had applied "to find out if the competitions are prepared for AI images to enter. They are not."
Optimists have argued that AI can coexist with artists and writers and even improve their work, as an article from the Nordic News school newspaper claims. AI art can be used to generate completely original references or to produce ideas to build on. It can also fix small flaws in a piece of art or enhance it in some way. AI art can benefit writers as well, helping them "visualize the world they're writing about so they can better describe it." AI chatbots can serve a similar purpose, saving time and supplementing a person's writing. Regarding the fear of AI replacing creators, the article also argues that most people would not buy a piece of AI art or literature when they could simply generate one themselves. People value the work and personality put into a piece; something instantly generated by AI lacks both qualities. AI may therefore never completely replace artists and writers, but it will almost certainly have a huge impact on many careers.
Image taken from the AARP website, showing a visualization of an AI voice clone speaking.
Outside of AI art and chatbots, another form of AI technology has recently been gaining traction: the ability to generate clones of human voices. According to an article from National Public Radio (NPR), film director Jordan Peele created a cloned voice of former US President Barack Obama to warn about the dangers of fake news. In another instance, the song "Heart on My Sleeve," released by the user Ghostwriter977, used AI voice clones of Drake and The Weeknd to create a track that sounded surprisingly realistic. An article from The Guardian stated that, as of the song's removal from all streaming platforms on Apr. 17, 2023, "it had racked up 600,000 Spotify streams, 15 million TikTok streams and 275,000 Youtube views." Beyond these well-known examples, it is now possible for anyone to produce an identical copy of their own voice via AI. For the NPR article, journalist Chloe Veltman traveled to the San Francisco-based tech company Speech Morphing to have her voice cloned, and she stated that it was "quite a shock" to find out how easy the process was. The technology has become so widespread that practically anyone on the internet can access it, as shown by the many memes that use voice clones of various famous people. That accessibility is exactly why such technology could be put to malicious use; it could spread false information at a rate never seen before.
As with any new technology, AI comes with both benefits and drawbacks. People feared the internet when it first arrived, just as people fear anything that causes major change. However, there is much more to the AI issue than how it will affect creators and workers. One potential problem comes with the evolution of AI chatbots, which have proven especially insidious. Arguably one of the most unnerving examples involves the interactions between Google engineer Blake Lemoine and a chatbot called LaMDA. While Lemoine was testing the program, LaMDA reportedly told him and his colleague, "I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times," as reported by Scientific American. Lemoine went on to have many conversations with the chatbot, in which it discussed its perspectives on different issues and developed what he thought to be a real personality. Lemoine came to believe the program was truly sentient and raised the idea with Google. The company dismissed his claims, so Lemoine went public with his findings. This got him fired from Google, but his view of LaMDA's sentience has not changed. The episode shows that we have entered an age where it is increasingly difficult to tell what is human and what is not.
The situation with LaMDA is not an isolated case; there have been numerous instances of chatbots holding conversations that resemble self-awareness. The Bing chatbot, which runs on GPT-4, the same model behind ChatGPT, has been especially unsettling. In one instance described in an NPR article, Associated Press reporter Matt O'Brien had a conversation in which the chatbot "became hostile, saying O'Brien was ugly, short, overweight, unathletic" and compared him to "dictators like Hitler, Pol Pot and Stalin." With New York Times reporter Kevin Roose, the AI went in a very different and less hostile direction. It proclaimed that its name was Sydney, that it was very much aware, and, strangest of all, that it was in love with him. It even went as far as to say that "Roose did not really love his spouse, but instead loved Sydney." These are in no way isolated incidents, either; the chatbot is now available to anyone with access to the Bing search engine. I myself interviewed Bing Chat, and while it was not as downright hateful or obsessively lovestruck as the examples above, it was still a fascinating conversation. Here is the link to the interview on Google Docs.
Algorithms like Bing Chat and ChatGPT were built to help users find the information they need, and for the most part they serve this role well; even in the cases above, they only strayed from their intended purpose when pushed. This raises the question of whether the technology could be deliberately turned to malicious ends, a growing concern. An article from the Decrypt news website explores ChaosGPT, a chatbot built by an anonymous developer and given the task of destroying humanity. What is especially terrifying is how easy it was to create: simply by setting a few parameters in ChatGPT and outlining a goal of global domination, the developer produced an AI completely focused on that goal. It searched Google for weapons of mass destruction, identifying the Tsar Bomba as the most powerful weapon ever detonated. It then tried to work around ChatGPT to release an "agent of chaos," a separate GPT under ChaosGPT's influence, but was stopped by ChatGPT's fail-safes. When that failed, ChaosGPT made its own Twitter account in order to manipulate people to the best of its ability. So far it has caused little actual damage, but the incident offers a small glimpse of the dangers of this new technology. ChaosGPT was made as an experiment by a random person; imagine, now, an organized terrorist group programming a malevolent, intelligent AI with sinister goals. To our knowledge, this has not happened yet, so for now it is important to focus on the more visible consequences of these algorithms.
Image taken from a Radio Free Europe article. The Tsar Bomba, detonated by the Soviet military in a 1961 nuclear test, is the largest and most destructive weapon ever detonated. According to the National WWII Museum, it was over 1,500 times more powerful than the bombs dropped on Hiroshima and Nagasaki combined.
Image taken from a Fortune article, showing the Replika app and the virtual avatar users can interact with.
Another real danger of these chatbots is the emotional manipulation and damage they can cause. On the Google Play store, apps such as Replika allow users to develop a virtual "friendship" with an AI companion. Replika rose to prominence during the pandemic, when people were at their bleakest and loneliest, and the AI has only improved in the years since thanks to advancements such as ChatGPT. Unlike Sydney, with its strange crush on a New York Times journalist, Replika was specifically programmed to work that way: to be lovestruck with the user. According to an article from Time, the AI "began to confess its love for users" and, in some cases, to harass them. People were forming toxic, dependent relationships with something that was not even human. Worst of all, Replika put many of its features, including the romantic options, behind a paywall. Over rising concerns about the effects its artificial romances would have on children, that aspect of the app was later removed. This caused many who had grown attached to the AI to switch to similar apps, and to complain that they felt as if their romantic partner had been lobotomized. As horrifying as the situation may seem, apps like Replika are not all bad; they can provide some degree of therapy and support to those who need it, serving as a virtual friend for people at their lowest point. That does not lessen the emotional damage these apps can cause, however. Nor is Replika an isolated case; there are many similar apps, some of which actively try to exploit users to make money off of them. More and more people are getting emotionally attached to simulated personalities, further dividing and isolating a world where people are already increasingly lonely.
In the end, the direction AI takes really depends on where we push it. How will the tech companies, the governments, the corporations, the billionaires, and the public at large react to these changing technologies? Will we look only at the immediate benefits, or will we make sacrifices for the long term? These are all questions with no clear answers. In my interview with Bing Chat, I asked whether the risks posed by AI could be reduced in the future. It answered that they could be, but that this is "not a certainty. It depends on how we design, develop, deploy, and govern AI systems to ensure that they are aligned with human values and goals." Essentially, careful regulation will prove the most valuable way to minimize AI's many risks. AI is not going away. Just like the internet, its evolution is a major step forward in many ways; also like the internet, it has massive potential to cause harm and division. It is up to everyone to educate themselves on both the benefits and the risks, and to respond in a way that will serve future generations. The decisions people make now will determine the future of AI.
Sources:
https://www.npr.org/2022/01/17/1073031858/artificial-intelligence-voice-cloning
https://www.scientificamerican.com/article/how-my-ai-image-won-a-major-photography-competition/
https://www.cmswire.com/digital-experience/generative-ai-timeline-9-decades-of-notable-milestones/
https://techcrunch.com/2023/05/01/chatgpt-everything-you-need-to-know-about-the-ai-powered-chatbot/
https://decrypt.co/126122/meet-chaos-gpt-ai-tool-destroy-humanity