AI in Education

If you have paid any attention to technology over the last few years, you know that AI is on the rise and accessible to anyone with an email address. It is inevitable that students will use AI apps to make their work easier. As such, it is up to teachers to inform students about the dangers of AI and set a standard for acceptable ways to use it in our classrooms.

Before we get into the dangers, it is just as important to know the benefits. One of the most significant is the use of AI as a personalized learning system (1). Through these online systems, students can receive content in a way that fits how they learn. The system can also find gaps in a student's knowledge and teach them what they are missing. These systems are especially useful in remote learning.

Outside of these systems, students can use generative AI on their own as a personal tutor. If a student is having trouble with a math problem, they can give the AI the problem, and it will break it down and explain how to solve it step by step. If the student is still confused, they can follow up with prompts such as "Can you explain this part further?" or "Explain it a different way." One app, PhotoMath, lets students scan math problems with their phones and provides an answer with steps and an explanation in seconds, and it can even show multiple ways to solve the same problem (2).

These uses of AI support teachers by lessening the need for one-on-one academic help. Of course, teachers still need to help students individually, but this is an additional support that some students may prefer.


However, AI isn't all sunshine and rainbows. Just like any new technology, it comes with its downsides, and for AI, there are quite a few. 

Up front, schools have to worry about the security and privacy of their students (1). An uninformed student may mistakenly include identifying information about themselves or others in their prompts, and that information is collected as data. This data is stored in large databases that may be sold or used for purposes outside of our awareness. Schools have an obligation to protect their students' privacy, and using a system where that information can easily get out is a big risk.

Another problem students need to be aware of is bias and discrimination in AI's answers. Many see AI as an all-knowing being that only speaks the truth. However, AI is trained on human data from across the internet, so it comes with the same pitfalls humans fall into, including prejudice and bias. One example shows up when translating between languages (1). Google Translate has been observed to translate "they are a nurse" into the feminine form while translating "they are a doctor" into the masculine form. Cases like these further reinforce the discrimination and bias in our society if we are not aware of them.

Sometimes AI goes beyond bias and discrimination and provides information that is simply wrong (3). Generative AI systems work by repeatedly predicting which word or phrase is most likely to follow what was previously written. In some cases, they will spit out answers that look correct but are in fact made up. The same risk applies when teachers use AI for grading. It may save a teacher time, but if they don't double-check, a computer has just decided that a student gets a lower or higher score than they deserve, based on work the teacher never actually looked at.
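To make that idea concrete, here is a tiny sketch in Python. This is not how any real AI product is actually built; it is just a toy "pick a likely next word" loop, using a made-up word table, that shows how text can come out fluent and confident with no notion of whether it is true:

    import random

    # Made-up table of which words tend to follow which
    # (real systems learn billions of such patterns from internet text).
    next_words = {
        "the":       ["capital", "largest"],
        "capital":   ["of"],
        "largest":   ["city"],
        "city":      ["of"],
        "of":        ["australia"],
        "australia": ["is"],
        "is":        ["sydney", "canberra"],
    }

    def generate(start="the", max_words=8):
        words = [start]
        while len(words) < max_words and words[-1] in next_words:
            # Pick a word that plausibly follows the previous one; truth never enters into it.
            words.append(random.choice(next_words[words[-1]]))
        return " ".join(words)

    # May print "the capital of australia is canberra" (true) or
    # "the capital of australia is sydney" (false); both read equally confidently.
    print(generate())

Nothing in that loop checks facts. It only chains together words that look like they belong next to each other, which is the heart of why these systems can be so convincingly wrong.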

One final downside of AI is how easy it makes work. In order to learn, students need some amount of struggle, where they get stuck and have to think through what to do next. AI removes this struggle in favor of instant gratification (3). If a student doesn't know an answer, they ask AI and have it immediately; no critical thinking required. Critical thinking is an invaluable skill that won't develop when a young mind's first thought is "I'll just ChatGPT it." And who knows, maybe someday AI will be widespread and influential enough to change the fundamental skills it takes for a human being to succeed in modern society, but I don't think it's quite there yet.


With all of those pros and cons laid out, it is time to talk about acceptable use in the classroom. This will vary greatly depending on the subject, grade level, and teacher in charge. First and foremost, it is the teacher's duty to educate students on how to use AI appropriately.


I do want to note that most research on the ethics of AI has been done with generative text-based AI such as ChatGPT. As a student teacher, I found that the most common use in my classroom was math-solving AI. While the two have similarities, the difference lies in how work is submitted and graded.

In an English class, students respond to a prompt with a written response, anywhere from a paragraph to a page in length, which the teacher reads and grades. A student who uses AI still has to carefully plan out a prompt and provide additional details where necessary, and the teacher still reads the final product. If a student doesn't edit the output at all, it will be obvious they used AI, and every student will end up with a different final product even if they all used AI to write it.

In math classes, students face a series of problems and have to answer them. Since each problem has one correct answer, every student who gets it right submits the same thing, which makes it difficult to tell whether a student did the problem themselves or used AI. To counteract this, math teachers should require that students show all their work. Even if the AI gives them the work to copy down, they are at least processing the information as they write it, which leads to some amount of learning.