A literature review on the use of Artificial Intelligence (AI) in the Intermediate/Secondary classroom revealed several proposed solutions to the problem of ethically implementing AI in education.
Investing in Social Emotional Learning
While AI has the potential to automate certain tasks and aspects of the workforce, research shows that it fails to mimic human conduct in the areas of social intelligence, resilience, and social and emotional learning (Patel, 2018). We must compensate for the potential of AI to siphon off portions of a teacher's traditional workflow, or to siphon off the need for students to learn certain skills. To do so, we need to introduce additional lessons not only in STEM fields, so that students have a stronger comprehension of why it may be acceptable to let AI handle some of their workload, but also lessons that further emphasize the importance of social-emotional learning and social intelligence (EQ). These are the areas in which AI struggles, so we must invest in them to ensure the place of the teacher remains in balance with AI. This solution, however, does not confront the ethics of AI head-on; it merely makes space for AI in the classroom.
Investing in Technology and Computer Science
As our use of AI increases, we (un)willingly relinquish control of our general actions. The TED-Ed video titled "How will AI change the world?" touches on this idea. The Pixar film WALL-E, also referenced in the video, serves as a fictional depiction of what this may look like in the distant future, as humans become over-reliant on technology for even the simplest of tasks. To mitigate this loss of control, which stems from the average person not understanding enough about technology and AI to control them, we need increased learning about both. This means ramping up K-12 classes related to computer science and technology, honing students' skills in deductive reasoning and critical analysis. AI automating routine tasks does not mean teachers will have less to teach; in fact, they may have more to teach, as they will need to instruct students about AI and computer science. This solution attempts to heighten understanding of AI, but that alone may not be enough to adequately inform an ethical implementation of it in the classroom.
Top-Down Regulation from Tech Providers
Others suggest that the solution to handling AI does not include educators at all, but is instead the responsibility of those who own the AI platforms (The Guardian, 2023, March 17). Large platforms like OpenAI should be responsible for regulating and maintaining the safety of these tools. The question is, what does this look like? This would be an incredibly top-down approach, and it may take a long time to formulate effectively. Educators may not have time to wait for top-down regulation as students find new, damaging ways to use AI in their learning. Is the expectation that every product and company will have effective AI regulation? That sounds idealistic; the fact of the matter is that these technologies are always nuanced and flawed, and their flaws are vulnerable to exposure. Expecting every company using AI to include a regulatory or security lead (or department) may be an expectation for the distant future. We are not there yet, but AI tools have already infiltrated the classroom.
Heightened Understanding and Research about AIED
Various stakeholders in education and technology have acknowledged that a proper understanding of artificial intelligence and its potential benefits and drawbacks has not yet been fully realized. At the University of Stockholm, a project is underway studying the ethical and legal challenges related to the emergence of AI-driven practices in higher education (Pelletier, 2021). Among learning institutions, there is a consensus that AI must be better understood in order to implement it properly in the learning ecosystem. A heightened understanding of AI must be attained by everyone in that ecosystem, from administrators to teachers, students, and parents. This solution, however, is a slow non-solution, essentially implying that we must wait and further understand the issue of ethical AI implementation before taking action. We do not have time to wait; we need to take action while continuing to research and study the issue.
Adopting Ethical Guidelines Towards AI Use in Education
Ethical guidelines offer a possible solution to the concerns surrounding the ethics of AI in education for several reasons. Holmes (2022) addressed ethical concerns related to the use of AI in education by summarizing the opinions of 17 leaders in AIED. Among the key findings is the realization that many researchers in AIED lack the necessary expertise to address the ethical dilemmas that arise. Guidelines can provide a framework that helps educators make decisions about using AI in a way that is ethical and responsible. With clear ethical guidelines in place, educators can avoid making haphazard or impulsive decisions that could have unintended consequences for students. Adopting ethical guidelines welcomes the use of AI into the classroom while providing structure and policy that any user (district, educator, student) can adopt to regulate and control its usage.
Guideline 1: Transparency
When creating guidelines for the use of AI in education, it is important to consider transparency. All stakeholders, including but not limited to teachers, students, and parents, should be aware of how AI is being used in the classroom. Educators should provide clear and understandable explanations of how AI works, which AI systems are being used, how they are being used, and what data is being collected and stored. This can help to ensure that everyone involved is aware not only of the benefits but also of the potential risks of using AI in education. It is essential to receive consent before using AI in education; however, it is also important that it be informed consent. Parents, students, teachers, and any other stakeholders should be aware of what they are consenting to. Mhlanga (2023) suggests transparency can be achieved by providing students with information about the algorithms and data sources used by AI technology, as well as its processing and response creation mechanisms. Employing open-source AI technology is another way to ensure transparency, as it provides users with access to the source code and data. Additionally, students must be made aware of any potential biases and limitations of AI technologies being used in the classroom (Mhlanga, 2023, p. 15).
Guideline 2: Privacy
Privacy is important to consider when discussing the ethical use of AI within schools. AI technologies collect and process large amounts of data, which can raise concerns about student privacy and the security of their data (Zhai, 2022). Teachers must make sure that any data collected by AI is kept private and secure. This is extremely important, as data collected through the education system often relates directly to minors and is strictly confidential. This could include data on student performance, behaviour, and personal information. Having policies in place that govern how this data is collected, stored, and used can help to ensure that all data is secure and kept private. Kasneci et al. (2023) suggest that schools develop and implement data privacy and security policies that "clearly outline the collection, storage, and use of student data in compliance with regulation and ethical standards" (p. 8). Before collecting personal data, it is crucial that teachers obtain explicit consent from students and parents. After receiving consent, schools should also conduct regular audits to identify and address any potential threats or areas for improvement (Kasneci et al., 2023).
Guideline 3: Bias
The third ethical guideline for using AI in education is understanding bias and limitations. AI algorithms can unintentionally perpetuate biases that exist in society because they are trained on existing data. These biases can lead to unequal and unjust outcomes for students, especially if the biases are related to race, gender, or socioeconomic status (Zhai, 2022). One potential use of AI technology is to grade student essays, but it is important to consider the implications of using it in this way. If the training data used to train that specific AI technology was biased towards a certain group, students from underrepresented groups could receive unjust grades, which would perpetuate existing educational gaps and further marginalize already disadvantaged populations (Mhlanga, 2023). Educators must ensure that AI is not used to discriminate against any particular group of students. This can be done by regularly monitoring the data and algorithms to identify any biases and taking corrective action to eliminate them. Schools should check that AI technologies are "conceived, developed, and used in a fair and non-discriminatory manner at every stage of the process" (Mhlanga, 2023, p. 12).
Guideline 4: Autonomy
AI should not be used to replace human judgment. Teachers should continue to make decisions in the classroom based on their professional judgment, expertise, and experience. As discussed in the perceived problem section, AI lacks the human empathy and experience needed to make judgment calls relevant to individual students and classrooms. AI can be used as a tool to aid decision-making, but it should never be the sole decision-maker. Students should also have the right to choose whether and how to incorporate AI into their learning. Students can grow in their autonomy if AI technologies are used appropriately. These technologies can provide instant feedback and individualized answers, making it convenient for students to get assistance whenever they need it, and can help students become self-directed learners. However, guidelines are required for students, as they "may use this tool as a means to create their work entirely without using their analytical thinking and decision-making skills" (Sok & Heng, 2023, p. 5).
Guideline 5: Responsibility
The fifth guideline for using AI ethically is responsibility. Teachers must be held accountable for the decisions they make while using AI, such as ChatGPT. This includes ensuring that the data being used is accurate and that the decisions made are ethical (Holmes, 2022). If something goes wrong, educators must take responsibility and take appropriate action to rectify the situation. There must be clear policies and procedures in place for addressing any issues or complaints that may arise from using AI in the classroom.
Conclusion
AI has the potential to change education positively. However, it is important to use it ethically. Teachers must be aware of the potential risks as well as the potential benefits of using AI, and create ethical guidelines to ensure that it is used in a way that is beneficial. Transparency, privacy, bias, autonomy, and responsibility are the key items to consider when incorporating ethical guidelines into education. By following these guidelines, teachers can use AI to enhance learning experiences while maintaining the trust and respect of their students and communities. Our research is a foray into gaining a heightened understanding of AI in the classroom and an effort to better understand its ethical use and place. If this type of Ethical Framework/Guideline were adopted generally at all levels of education and by all stakeholders, and modified to meet the specific needs at each stakeholder level, then an industry standard of ethical AIED use would begin to take shape. To help regulate adherence to this framework, Technology Teachers could receive additional training related to the framework, and professional development sessions could be conducted at regular intervals to ensure all educators are up to date with the latest improvements and issues in AIED ethics. Adopting an ethical framework lays the foundation upon which AI can be effectively implemented and integrated in the Intermediate/Secondary classroom.
Framework on the Integration of AI in the Classroom
Group Design Challenge | ED 6620 | Brandon Collier, Michelle Bernard, Taylor Johnson | April 2023