AI: Useful Tool or Learning Hindrance?
By Savanna McCliggott, Club Writer (Class of 2028)
Artificial Intelligence (AI) is a controversial but well-known topic that is constantly under intense scrutiny for its ethical uses and its role as a learning deterrent. There are two kinds of AI: strong and weak. Strong AI is a hypothetical machine that would possess consciousness and be capable of human-like decisions and emotions. Weak AI is the version that exists right now; it has only narrow, task-specific "intelligence" and cannot experience thoughts, opinions, or feelings outside its programming. A few familiar examples include Siri, ChatGPT, Google Gemini, and self-driving cars, all of which use pre-programmed processes and algorithms to respond to users. These AI programs have become increasingly common in the media and a hot topic of conversation, which has brought further attention to AI and the innovation behind it.
People have begun to raise questions about AI's ethical implications as well as its actual abilities. AI cannot distinguish between real and fake information, which makes it unreliable: it can be influenced by any online content and may give users incorrect answers. There are also no specific laws or regulations in place governing where AI can source its information. Deepfakes, for example, are AI-generated images, videos, and audio clips that scammers can use, posing as family members, friends, or well-known figures, to trick people into handing over money or personal information.
Because AI is designed to sift information quickly and format it into paragraphs, it has rapidly become a popular homework shortcut among students, which is a large problem for teachers. Teachers are concerned that students are not getting the writing and analyzing practice needed for educational growth. Many researchers, scientists, and experts have also protested students' use of AI. According to Mitch Prinstein, chief of psychology strategy and integration at the American Psychological Association (APA), "Brain development across puberty creates a period of hyper sensitivity to positive social feedback while teens are still unable to stop themselves from staying online longer than they should" (Chatterjee). In other words, teens are becoming addicted to AI. This is partly because companies are embedding machine learning into children's and young adults' lives to create a reliance on it, which not only costs them their ability to synthesize and analyze information but is changing the way humans think. Students grow accustomed to AI from a young age, which increases their reliance on it in adulthood and greatly benefits the AI companies. However, AI is so new that there is little research on the long-term effects of using this tool.
Despite the backlash, AI has had some positive effects. Researchers are working to design AI systems that can give accurate medical diagnoses and complete delicate medical procedures more precisely than a human can. AI can also quickly process and analyze simple data and reduce simple human errors.
I personally believe that our government, as well as those of other countries, should pass legislation to limit the use of AI and its online presence. It is not fair for AI companies to profit from real people's work without permission. There should be some limit on, or approved list of, the sources AI can use; this would both protect people's own work and make AI more accurate, because it would draw only on reliably confirmed information. It would also make sense to place further restrictions on who can access AI and what it can create. If students can access AI easily enough in school to use it for every essay, then something clearly needs to change.
AI is all over the internet, figuratively and literally, but if it is regulated and used less as a failsafe and more as a useful information processor, it can be a great advantage to humans.