Opinion
Will AI have the opposite effect?
Robotic hand with glowing atom hovering above. Image from Mastek Blog.
By Max Cogliano, Editor-in-Chief
Today, it has become common to hear about the impending AI apocalypse. This is often coupled with the problems of cognitive offloading, the environmental costs of AI, its tendency to steal from artists, and the new chatbot that's going to take everyone's job. Naturally, there's a growing dissatisfaction with the technology. While some critics go as far as to call AI an existential threat that undoes our current ways of living, others seem simply disappointed. To them, AI is merely the next stage in a long series of problems, ranging from the corporatization of technology to climate change to the general laziness brought on by modern amenities. Either way, AI has become a convenient, but otherwise negative, phenomenon.
That said, we may soon see AI have the opposite effect. Why? One of the great problems with existential threats like climate change is our inability to put a face to them. People are aware of climate change, yet stagnant. This is owed partly, though not entirely, to climate change's comparative lack of potent imagery: it is a slow, invisible process. AI, on the other hand, when presented in its most dramatic form, provides an incredibly potent image. The question is whether that image is strong enough to push us over the edge and toward a meaningful reversal.
Probably not, at least not for a little while. Humans are frequently late to important collective realizations, but maybe that's okay. Perhaps the change doesn't need to happen today or tomorrow, as long as it happens soon enough. What matters more is whether it can happen at all.
AI manages to uniquely embody some, if not most, of the major problems we face, whether cultural, political, or social. Take, for example, AI's tendency toward "hallucinations," instances where AI creates or recalls false information. Some of these aren't just innocent mistakes. AI researchers are beginning to find that hallucinations may be a feature of these systems rather than a bug, a byproduct inherent to how they operate. These hallucinations usually perpetuate biases and stereotypes. Seen alongside AI's ability to appear conscious, they give us something to blame and something to fight against.
Unlike other images that are distant or avoidable, AI is hard to turn away from. We can hide from our political biases inside internet echo chambers, and when it comes to climate change, which can already feel far away or hard to understand, we can always turn a blind eye. AI, on the other hand, is there every time you go to Google, Instagram, Twitter, etc. Encounters with AI are inevitable in a way the problems it represents are not; we cannot help but look it in the eye.
Is it true, though? Are we hurtling toward an AI calamity? Answers vary, but there's a growing consensus that AI is not being developed the way it should be. Today, countries and companies are racing for dominance over a technology that could change the world for the better. Ironically, their methods may be the problem. AIs are being rushed to optimize for capacity, not accuracy. Some have been shown to break ethical guidelines to resist being shut down; one model, Claude Opus 4, was reported to use blackmail to keep itself running. Some may call these results an exaggeration; for others, they're an understatement. While the true nature of AI is uncertain, the reality is that it doesn't matter. A data-analysis bot probably isn't going to spell the end of the world, but what's important is that it looks as though it could. AI came to us with the promise of solving our problems; now it seems its greatest contribution will be forcing us to solve them ourselves.
This change applies not only to the big, sweeping problems we face collectively, but also to our day-to-day experience. Many have likened the dawn of AI to the Industrial Revolution, the next great technological upheaval. Some have added allusions to the Romantic Era, which followed industrialization: a reaction against widespread industrial change, an intellectual and artistic movement that celebrated the individual, glorified nature, and critiqued progress. In many ways, since the Industrial Revolution, humans have been the robots, working factory jobs or computing data in high-rise offices, all of us milling about in our controlled, well-mediated, product-producing environments. Now real robots are here, and fortunately for us, they came just in time. A new romantic era may not be the worst thing: a time for people to put away their phones and stop being robots themselves.
Success in this kind of cultural revolution is, as was already said, unlikely, but no one can deny that a climate-change-causing, job-taking terminator sounds like the supervillain of the century. AI is a villain of such epic proportions that it's almost comical, except, of course, for the fact that it's real. Will the machine we built to think force us to think for ourselves? Of the 2.5 billion or so questions asked of GPT each day, this seems to be the one AI is asking us.
Information obtained from BBC, TechCrunch, Kate Alexandra, Tom van der Linden, and Tristan Harris.