The use of Artificial Intelligence (AI) in education is increasing, which has sparked growing concern about the ethical implications of its use in intermediate and secondary classrooms. While AI can transform education by enhancing learning outcomes and improving efficiency, several ethical concerns must be considered. Broadly speaking, this is a new technology, and its possible uses and influences are not fully understood in their own right, let alone in context-specific domains such as medicine or education, and still less in specialized settings like the Intermediate/Secondary classroom. The paragraphs that follow outline some of the potential issues with AI in education.
One of the main concerns is the potential for AI algorithms to maintain existing biases and reinforce discriminatory practices in education. AI systems like ChatGPT rely on data to make decisions and recommendations, and in doing so they can reflect and amplify the biases and stereotypes present in the data they are trained on. This can result in AI systems that discriminate against certain groups of students, maintaining inequalities in access and outcomes (Mhlanga, 2023). Borenstein and Howard (2020) referenced an example of bias and discrimination in healthcare from Obermeyer et al. (2019), whose research found that when an AI system was tasked with making care recommendations, it discriminated against patients of color, referring white patients with similar symptoms more often. Throughout the literature review, bias in AI was a constant theme (Baidoo-Anu & Owusu Ansah, 2023; Borenstein, 2021; Dwivedi et al., 2023; Roll et al., 2021; van Dis et al., 2023; Zhai, 2022). One could imagine an instance where AI discriminates against students: for example, with scholarship and enrollment applications, it is feasible that an AI could discriminate against minorities or those who do not fit the majority profile.
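The mechanism by which a system "learns" bias from its data can be shown with a minimal sketch. The scenario below is entirely invented for illustration (the keywords, records, and admission rates are hypothetical, not drawn from any real system): a naive admissions model trained only on past decisions will faithfully reproduce whatever bias those decisions contained.

```python
from collections import Counter

# Hypothetical historical admissions records, invented for illustration:
# (extracurricular_keyword, admitted). Past decisions happened to favor
# activities more common among one group of applicants.
history = [
    ("rowing", True), ("rowing", True), ("debate", True),
    ("stepdance", False), ("stepdance", False), ("debate", True),
]

# "Training": tally admission counts and totals per keyword.
admits, totals = Counter(), Counter()
for keyword, admitted in history:
    totals[keyword] += 1
    admits[keyword] += int(admitted)

def score(keyword):
    """Predict admission probability purely from historical rates."""
    return admits[keyword] / totals[keyword]

# The model has learned nothing about merit, only the biased history:
print(score("rowing"))     # 1.0 - always admitted in the past
print(score("stepdance"))  # 0.0 - never admitted in the past
```

Nothing in the code is malicious; the discrimination arrives entirely through the training data, which is precisely the concern raised by Mhlanga (2023) and Obermeyer et al. (2019).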
AI in education also raises questions about the privacy and security of student data and the transparency and accountability of decision-making processes (Mhlanga, 2023). AI has been used in various forms that potentially put student privacy at risk. Borenstein (2021) shared the use of AI in day-to-day classroom management through face recognition for attendance, and Dimitriadou and Lanitis (2023) shared how AI is used for attendance and for detecting students' alertness. How this data is collected, stored, and shared poses a potential risk to students' privacy. AI and machine learning rely on copious amounts of data, and when data is being collected, we must be mindful of security and privacy regulations such as PIPEDA, FERPA, COPPA, and the GDPR. Moreover, there is also the concern that the use of AI in education could dehumanize the learning experience, with students being reduced to data points and algorithms. This could result in losing the human touch and empathy essential to effective teaching and learning (Zhai, 2022).
While AI has many data-related concerns such as bias, security, and the dehumanization of people, another area of concern arises regarding how students and other users interface with AI. In particular, there is the potential for students to rely too heavily on AI technology to complete their assignments and assessments. While AI tools such as auto-graders, writing assistants, and content generators can provide valuable support and feedback, there is a risk that students may become too reliant on these tools and not develop critical thinking, problem-solving, and other essential skills (Mhlanga, 2023). Epstein and Friesen (2012) address this concern about reliance on technology, noting that mental math skills have already declined as students rely on calculators, and that the decline is likely to continue as reliance on technology grows. This pattern has the potential to be amplified when it comes to reliance on AI, leaving students with limited critical thinking and creativity.
Furthermore, there is a risk that AI technology could enable cheating and academic dishonesty, as students could use AI tools to plagiarize or complete assignments on their behalf. This could undermine the educational system's integrity and devalue honest students' accomplishments (Sok & Heng, 2023). Therefore, it is important to consider the appropriate use of AI technology in the classroom and develop strategies to promote responsible use and prevent academic misconduct. Educators must also ensure that AI technology does not replace or diminish the importance of human interaction, feedback, and mentorship in the educational experience. It is imperative to address the ethical implications of AI in education to ensure that its potential benefits are realized without compromising the values and principles of education.
The issues related to AI have much to do with the public perception of AI and a lack of full understanding of its potential and uses. For example, below are two uses of AI. One uses the AI image generator DeepAI to create an image based on a text prompt - a very powerful tool. However, the prompt asks it to generate a tyrannical portrayal of AI, and drawing on its training data, it generates a scary monster. This reflects that data, and public opinion about what AI looks like. Contrasting this image is an accurate and constructive use of AI: using Google Translate's speech-to-text translator not just to convert audio to text, but to translate from French to English simultaneously. This is an incredibly powerful, useful, and beneficial application of AI that is commonly used and a more accurate portrayal of what AI looks like.
Using AI to forward false narratives of what AI looks like.
This image was generated using the website DeepAI image generator. The prompt was, "An AI powered monster that terrorizes humans, make it in the style of a cinematic movie poster." https://deepai.org/machine-learning-model/text2img
This is a speech-to-text translation from French to English recorded using Google Translate.
AI chatbots like ChatGPT simply return a response assembled from statistical patterns in the vast data they were trained on - in essence, an incredibly sophisticated search engine. The use of these tools can improve efficiency and expedite research, yet they come with many of the aforementioned issues related to bias and academic dishonesty. These systems can be tested by asking them moral and ethical questions about their own use and function, and the responses they generate are quite eye-opening. They help us see that the use of these tools is not black or white, but very much gray. There are indeed ethical and moral ways of engaging and harnessing these tools, but there are many pitfalls that need to be carefully navigated.
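The "statistical" character of such chatbots can be illustrated with a deliberately tiny sketch - this is not ChatGPT's actual architecture, only a toy bigram model on an invented corpus, showing the underlying idea that text is generated by repeatedly choosing a plausible next word based on what followed it in the training data.

```python
import random
from collections import defaultdict

# A toy corpus standing in for the vast training data of a real chatbot.
corpus = ("ai can help students learn . ai can also mislead students . "
          "students learn best with guidance .").split()

# "Training": record which words followed which (a simple bigram model).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start_word, length=6, seed=0):
    """Generate text by repeatedly sampling a word seen after the last one."""
    random.seed(seed)
    words = [start_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("ai"))
```

Because the model can only recombine what it has already seen, whatever biases and gaps exist in the corpus are reproduced in its output - a small-scale version of the bias concerns discussed earlier in this section.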
Group Design Challenge | ED 6620 | Brandon Collier, Michelle Bernard, Taylor Johnson | April 2023