Is AI Dangerous?
By: Aumkar Deochakke
Countless books and movies have described the same scene: a dystopian society run by sentient robots who exploit and terrorize humans for their own agenda. This fear has been projected onto nearly every major advancement in technology, and today it centers on Artificial Intelligence. These concerns naturally elicit a question: Is AI really as dangerous as we tend to portray it?
Before we address that question, it is useful to understand what exactly AI is. In the most basic terms, AI is a computer system that attempts to mimic human decision-making, often with patterns it finds in large amounts of provided data.
This definition helps make an important distinction. Although AI models, such as large language models, seem very human in their communication, they are incapable of actually understanding what they are doing or forming their own opinions about their actions. A model is simply optimizing for some objective. That is to say, it can only pursue the goal that a human gives it.
The fear of AI, however, rarely stems from a belief that AI has nefarious intentions; it is motivated more by AI's absence of humanity. Since AI is nothing more than a pattern-finding model, it is incapable of common sense or human emotional intelligence, regardless of how it presents itself. If AI were free to do whatever it liked to optimize for a solution, it could very well ignore human sensibilities and take more sinister actions. Still, to date, no AI has the power to go to such lengths. ChatGPT can only write its little essays in message boxes, and Stable Diffusion can only paint pictures.
In the future, AI may be presented with more complex problems that require a sense of ethics. Self-driving cars, for example, have the capability to cause real deaths. It is also possible that AI will be deployed in extremely high-stakes systems where even small errors could lead to dangerous outcomes. Consequently, an emerging field of AI ethics is taking root. If AI can emulate human decision-making, after all, why wouldn't it be able to emulate human ethics given enough information?
At this point, AI is far from the level at which it could cause real harm, but its future is difficult to forecast. As more responsibilities are delegated to AI, serious effort should go toward building systems that grant AI only the freedom appropriate for its task. AI should also be equipped with ethical safeguards and trained on curated data that guides it toward the most moral actions.