The use of AI tools and systems can enhance teaching, learning, and assessment, deliver better learning outcomes, and help schools operate more efficiently.
To maximise the benefits of AI in education, it is crucial to be aware of the risks and ethical concerns. Teachers and educators should know whether the AI systems they use are reliable, fair, safe, and trustworthy, and whether educational data is managed securely, in a way that protects the privacy of individuals and serves the common good.
The following sections outline some critical issues that may arise when you are using AI tools and systems, along with mitigation measures to address them.
AI and Data Privacy
Data is the lifeblood of AI. In education, most AI systems collect and analyse vast amounts of student data to tailor learning experiences, provide insights, and even predict future performance. While this data-driven approach has its advantages, it raises significant privacy concerns.
Students generate a digital footprint every time they interact with an AI system, whether by submitting assignments, participating in online discussions, or browsing educational resources. This data, often sensitive and personal, can be susceptible to misuse or breach if not properly managed and protected.
To mitigate the risks associated with feeding data into AI systems, schools and educators should adopt the following strategies:
● Develop Data Literacy: Ensure that educators have a proper understanding of data privacy, AI algorithms, and the implications of sharing seemingly unimportant data with AI models.
● Implement Data Collection Guidelines: Establish clear guidelines for data collection, specifying the types of data that can be fed into AI models and ensuring that only relevant and necessary information is used.
● Establish Clear Data Privacy Policies: Develop comprehensive data privacy policies that outline how personal information will be collected, stored, and used. Set out the circumstances in which AI will be used and if or when personal data will be used with these tools.
● Read privacy policies: Always read the privacy policies of any AI-powered tools or websites you use to understand how your (or your student’s) data is being collected, stored, and used.
● Avoid sharing personal information: Educators should avoid including personal information such as full names, phone numbers, or email addresses in prompts when using AI-powered tools, and should follow age restrictions.
● Obtain Informed Consent: Seek consent from students and their families before collecting personal information for use with AI technology. Consent should be informed, i.e. based on a clear explanation of the purpose and benefits of data collection when used in AI tools.
● Protect Data: Establish a robust system to safeguard against cyber-attacks and data breaches.
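As one concrete illustration of the "avoid sharing personal information" step above, here is a minimal sketch in Python of redacting common identifiers from a prompt before it is sent to an AI tool. The patterns and placeholder labels are illustrative assumptions, not a complete solution: real deployments would need locale-aware rules, broader identifier coverage, and human review.

```python
import re

# Illustrative patterns for two common personal identifiers;
# a real redaction layer would cover many more categories.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace personal identifiers with placeholder tags before
    the prompt leaves the school's systems."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

cleaned = redact("Contact Jane at jane.doe@example.com or +44 20 7946 0958.")
# cleaned no longer contains the email address or phone number
```

A simple pre-processing step like this can sit between educators and any third-party AI tool, so that prompts are checked before any data is shared.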
Reliability and Bias of AI
Algorithmic bias is another pressing ethical concern in AI-driven education. AI systems are trained on vast datasets, and they learn patterns and make decisions based on this data. If the training data is biased, the AI system's decisions can also be biased. For example, an AI programme that generates images may reproduce stereotypes based on factors like skin colour, gender, or age: it might consistently depict a "professor" as an older white man, reflecting historical biases where professors were mostly male.
Moreover, AI output is often accurate, but it is not always perfect, and students might receive inaccurate adaptive content. This is the case with AI hallucinations, where an AI model generates false, misleading, or illogical information but presents it as fact. Hallucinations are most commonly associated with AI text generators, but they can also occur in image recognition systems and AI image generators. As an education professional, it is important to assess the reliability of AI output to ensure its effectiveness and validity.
To address these risks, educators can adopt the following measures:
● Data Quality and Integrity: The accuracy of AI systems heavily relies on the quality and integrity of the data they are trained on. Educators can first evaluate and verify AI-generated outputs against trusted sources before accepting them as true (fact-checking). This helps ensure that AI algorithms produce reliable and unbiased results.
● Robustness to Different Contexts: AI systems should be tested for their robustness to different educational contexts, such as diverse student populations, varying learning environments, and subject domains. For example, an AI-driven language learning tool should be tested in classrooms with students from various linguistic backgrounds to ensure it effectively supports language acquisition for all learners.
● Artefact Development Evaluation: Instead of only assessing the final artefact, educators should also examine the process of artefact development. This involves thoroughly reviewing the methodology, data sources, and reasoning used to create the artefact to verify its credibility and authenticity.
● Long-Term Performance: Evaluating the long-term performance of AI systems is crucial to ensure their reliability over time. Monitoring how AI systems adapt and evolve with changing educational requirements and emerging challenges helps maintain their effectiveness and validity.
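The fact-checking step in the "Data Quality and Integrity" point above can be sketched as a simple cross-check. The sketch below is a hypothetical illustration: `TRUSTED_FACTS` stands in for whatever verified sources an educator actually consults, and `flag_unverified` simply flags AI-generated answers that either contradict a trusted source or cannot be checked at all.

```python
# Hypothetical verified reference answers an educator trusts.
TRUSTED_FACTS = {
    "capital of France": "Paris",
    "boiling point of water at sea level (C)": "100",
}

def flag_unverified(ai_answers: dict) -> list:
    """Return the questions whose AI-generated answer contradicts a
    trusted source, or for which no trusted source is available."""
    flagged = []
    for question, answer in ai_answers.items():
        trusted = TRUSTED_FACTS.get(question)
        if trusted is None or trusted.lower() != answer.strip().lower():
            flagged.append(question)
    return flagged

suspect = flag_unverified({
    "capital of France": "Paris",
    "boiling point of water at sea level (C)": "90",  # hallucinated value
    "author of Hamlet": "Shakespeare",                # no trusted source to check
})
```

The point of the sketch is the workflow, not the code: every AI-generated answer is either confirmed against a trusted source or explicitly treated as unverified, rather than being accepted by default.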
Dependency
As schools become increasingly reliant on AI-powered solutions, there is a risk that teachers and students become overly dependent on technology. This might erode their critical thinking and problem-solving skills and their ability to analyse, evaluate, and form independent thoughts. In the long run, this dependence may hamper their overall cognitive development and limit their capacity for independent thought.
To address this concern, educators can consider the following elements when implementing AI tools:
● Continuous Adaptation: Educators must continually adapt their approaches to maintain a sensible balance between AI usage and critical thinking. They can design and incorporate AI use with activities that promote independent inquiry and problem-solving.
● Promote Human-AI Collaboration: Educators must emphasise the importance of human-AI collaboration rather than relying solely on AI or human intelligence. Promote the idea that AI is a tool to augment human capabilities, rather than replace them.
● Encourage Collaboration: Educators can incorporate AI use with collaborative learning activities that require students to work together, discuss ideas, and solve problems as a team.
In conclusion, educators and schools are also encouraged to explore and familiarise themselves with the 'Ethical guidelines on the use of artificial intelligence and data in teaching and learning for educators' published by the EU Commission for further guidance on responsible and ethical AI integration in educational contexts. Full text is available here.