A key challenge in AI education is the lack of transparency in how models generate responses. Chatbots and other AI tools often function as black boxes, making it difficult for students and teachers to understand how they produce information. Valderrama et al. (2023) emphasize that transparency is necessary to build trust and ensure accountability. AI literacy should extend beyond technical explanations to discussions of how AI-generated content is shaped by data and modelling choices.
Educators play a critical role in fostering accountability by teaching students to evaluate AI outputs. Noble (2018) argues that AI systems encode human biases and reflect societal structures rather than being neutral. To ensure responsible AI use, teachers can:
Encourage fact-checking of AI-generated content, as search engines often prioritize commercial interests over accuracy, amplifying misinformation (Noble, 2018).
Discuss AI’s limitations, as algorithmic decision-making systems (ADMs) are often opaque, even to their developers (Valderrama et al., 2023).
Promote discussions on AI explainability to help students critically assess AI-generated content, ensuring they understand when and how AI should be used in learning contexts (Valderrama et al., 2023).
Another important ethical concern is fairness in AI. Buolamwini (2019) demonstrates that AI systems often exhibit lower accuracy when analyzing data from marginalized groups, leading to disparities in outcomes. These biases can reinforce existing inequalities, particularly in educational settings where AI tools are increasingly used.
Educators should encourage students to critically examine how AI systems may reflect societal biases and discuss ways to promote more equitable and ethical AI development.
This Recommendation addresses ethical issues related to the domain of Artificial Intelligence to the extent that they are within UNESCO’s mandate. It approaches AI ethics as a systematic normative reflection, based on a holistic, comprehensive, multicultural and evolving framework of interdependent values, principles and actions that can guide societies in dealing responsibly with the known and unknown impacts of AI technologies on human beings, societies and the environment and ecosystems, and offers them a basis to accept or reject AI technologies. It considers ethics as a dynamic basis for the normative evaluation and guidance of AI technologies, referring to human dignity, well-being and the prevention of harm as a compass and as rooted in the ethics of science and technology. (UNESCO, 2022, p. 10)
Read the following statements and think about where you would place yourself on the continuum.
Students should not be allowed to use any AI in the K-12 Classroom.
Students should only be allowed to use AI for certain activities, such as brainstorming, generating project or assignment ideas, or preparing for assessments through practice questions.
Students should have the ability to use AI in the classroom however they like, as long as they cite their use of AI.
After placing yourself on the continuum at the statement that best aligns with your educational beliefs or practice, take a moment to reflect on the following questions.
Why did you place yourself where you did? Is it because you feel you need to learn more about the use of Generative AI in the classroom?
What sorts of constraints are realistic to place on the use of Generative AI in the classroom?