The Dangers of Overstepping or The Dawn of Progression?
Photo by Nahrizul Kadri on Unsplash
Artificial intelligence (AI) is a frontier technology that allows computers to simulate human learning, problem-solving, decision-making, and creativity to perform a multitude of tasks, from ones as simple as sorting emails to ones as complex as assisting in surgery, discovering new medicines, and beating world champions at chess (Stryker and Kavlakoglu, IBM).
In 1818, Mary Shelley warned the world of a future shaped by unchecked ambition. Her novel Frankenstein told the tale of a brilliant scientist, Victor Frankenstein, who animated life only to abandon it in fear. His creation led to his downfall.
Over two centuries later, humanity is once again on the brink of creation: this time, not of flesh and bone, but of code and consciousness.
This simulation of human-like intelligence in machines holds the potential to advance civilization beyond imagination, yet it also threatens to spiral beyond our control, just like Frankenstein's creature.
AI is a modern monster. But who creates it… and who’s responsible if it goes wrong?
What is AI, and how does it work?
How does AI benefit humanity?
How does it harm or alter human society?
Is AI a net good, or does it pose risks if left unchecked?
Can AI achieve human-like consciousness—and is that likely?
What could the future of AI look like?
Given both the numerous benefits of artificial intelligence and its potential harms, it is essential that humanity as a whole regulates, oversees, and enforces safe standards for creating AI.
We must regulate the AI race to ensure greater accountability, ethical alignment, and a focus on the greater good of humanity.