Objective 2: Evaluate the ethical and equitable use of AI in classrooms, including considerations of student data privacy and responsible digital citizenship.
Bias in AI can influence how tools respond, make decisions, or deliver content -- sometimes in ways that are unfair or inaccurate. In this section, we will explore what bias means and why it matters in educational settings.
Bias refers to a systematic error or prejudice in decision-making processes -- whether by humans or algorithms -- that produces outcomes that unfairly disadvantage specific individuals or groups based on characteristics such as race, gender, or background, even when a tool appears neutral (Ferrara, 2023).
📽Video Run Time: 5:00 mins
📽Click to watch the full video, then answer the questions that follow.
5 Minutes with Ryan Baker on Bias in Artificial Intelligence
Video Overview:
As AI tools become more common in schools, it's essential to understand how bias in their design and use can lead to unequal outcomes. This episode of 5 Minutes with Ryan Baker focuses on bias in AI, examining how artificial intelligence may unintentionally disadvantage student groups and outlining the steps needed to ensure fairness and inclusivity in educational tools.
Biased predictions and limited training data can lead to unequal outcomes, especially for underrepresented groups, because many AI tools are built using narrow populations such as suburban, middle-class students.
Demographic-based modeling may improve accuracy but risks reinforcing group-based biases if not applied carefully.
Learners with disabilities are often overlooked due to limited research, lack of testing, and privacy-related data gaps.
📝 After watching the video, answer the following in your Guided Overview.
1. How might the use of AI tools trained on limited or non-diverse student populations unintentionally disadvantage learners in your classroom, particularly those with disabilities or from underrepresented backgrounds?
2. Given that student disability status is protected, how can we, as educators, advocate for and implement more inclusive AI practices while still respecting student privacy?
📽Video Run Time: 10:36 mins
📽Click to watch the video, then answer the questions that follow. You will need to pause the video at 4:00 and 8:50 to complete the questions in your Guided Overview.
Video Overview:
Bias in AI is unavoidable, arising from training data, cultural context, and system design, which can lead to unfair or misleading outcomes. Even efforts to eliminate bias -- such as those seen in tools like Google Gemini -- can unintentionally introduce new forms of bias.
While AI can reduce human bias in tasks like scoring, it may also remove important human perspectives and creativity.
Educators play a crucial role in modeling fairness by recognizing their own biases and using strategies like rubrics and blind grading.
Building AI literacy is essential; teaching students to recognize and question bias helps them develop critical thinking in an AI-driven world.
📝 After watching the video, answer the following in your Guided Overview.
3. At 4:00 minutes. How does the speaker challenge our understanding of bias in the age of AI? What role should educators and students play in responding to it?
4. At 8:46 minutes. According to the speaker, how can teachers recognize and mitigate bias in academic practices?
When integrating AI into instruction, it is important to recognize that these tools are not inherently objective or free of bias. As one expert explains, “We assume [AI] is neutral. We think that it’s not prone to [human] biases. … But AI tools are as biased as we are because they have been trained on us. These tools serve as a stark reflection of our prejudices.”
(Mishra, 2024, para. 8)
Once you have answered all of the questions in Section 2.1, click 'Next' to continue to Section 2.2.