The Consequences of AI - Jeff Lo (5th Grade, Bella Vista Elementary)
As you all know, AI is getting more and more advanced by the second. Most people are hyped about all of this, but have you ever wondered about the consequences? For example, some people could lose their jobs, and AI could spread false information (Metz, 2023). Below, I'll be covering three different consequences of AI.
As previously stated, people might lose their jobs because of AI (Thomas, 2024). Employers have the right to fire their employees, so when AI gets really advanced, they could potentially fire all of their employees and buy robots to do all of the work. The ones that don't get fired might still get their pay docked (Thomas, 2024). But there's also a consequence for employers who fire all of their employees. For example, if many store owners fire their employees, it could lead to high levels of unemployment. That means not a lot of people would have money, which means they can't go to the store to buy items. As a result, the store owners also won't be making any money. According to a paper written by OpenAI researchers, about 80 percent of the U.S. workforce could have at least 10 percent of their work affected by L.L.M.s (large language models), and about 19 percent of workers might have 50 percent of their work affected (Metz, 2023). As stated, people will lose their jobs if AI becomes really advanced, so the people who make AI should be careful, because at some point in the future they might also get fired, and the AIs they made will probably take their jobs.
As technology advances, AI may also spread false information (Metz, 2023). A robot could hack into a computer and type false information, and some people might actually believe it. When AI becomes really advanced, computers might even be able to hack themselves. According to Subbarao Kambhampati, a professor of computer science at Arizona State University, "There is no guarantee that these systems will be correct on any task you give them" (Metz, 2023). If AI does the things stated above, we won't know what information is real, and that can really impact our education.
At some point, we might lose control of AI (Thomas, 2024). According to The New York Times, a group from the Future of Life Institute wrote a letter warning that as companies plug L.L.M.s into other internet services, the systems could gain a lot of power because L.L.M.s can write their own computer code (Metz, 2023). The letter also states that developers will create new risks if they allow AI systems to run their own code. AI is getting so intelligent that, according to a former Google engineer, an AI chatbot named LaMDA talked to him the way a human would (Thomas, 2024). If we don't want AI to take over the world, we should be more careful developing these systems so they won't become too powerful.
In conclusion, I think we should slow down on trying to make AI so insanely smart, so it won't take over the world. People should use AI less often and try to understand it more.
References
Metz, C. (2023). What exactly are the dangers posed by A.I.? The New York Times. Retrieved August 9, 2024, from https://www.nytimes.com/2023/05/01/technology/ai-problems-danger-chatgpt.html
Thomas, M. (2024, July 25). 14 risks and dangers of artificial intelligence (AI). Built In. https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence