This lesson highlights the importance of ethical decision-making in artificial intelligence. It begins with an introduction to AI's societal impact, using real-world examples such as facial recognition bias and privacy concerns in generative AI tools. Students work in small groups to analyze assigned AI scenarios, such as self-driving cars or predictive policing, by identifying benefits, drawbacks, and ethical concerns. Each group presents its findings and proposes solutions that emphasize fairness, transparency, and accountability.
You can download the lesson plan PDF here.
Grade Level: 10-12
Time Required: 90 minutes
By the end of this lesson, students will be able to:
Analyze real-world ethical dilemmas in AI systems.
Evaluate the societal impact of AI through case studies.
Develop solutions emphasizing fairness, transparency, and accountability.
Whiteboard/projector
Printed case studies (see scenarios below)
Access to research summaries (e.g., facial recognition bias, predictive policing)
Hook: Show a short video clip or news headline about AI bias (e.g., facial recognition misidentifying darker-skinned individuals).
Key Concepts:
Define AI ethics: Fairness, transparency, accountability, privacy, and bias.
Discuss societal impacts: How AI decisions affect healthcare, criminal justice, and employment.
Real-World Example:
Highlight the 2018 MIT/Stanford study, which found facial analysis error rates of up to 34.7% for dark-skinned women versus 0.8% for light-skinned men.
Divide students into small groups. Assign each group one AI ethics scenario (websites for each group are linked):
Group 1: Self-Driving Cars - How should algorithms prioritize safety in unavoidable accidents? Who is accountable?
Group 2: Predictive Policing - Can biased historical data lead to over-policing marginalized communities?
Group 3: Generative AI Privacy - Should companies restrict user inputs to prevent data leaks?
Group 4: Facial Recognition Bias - How can developers mitigate skin-type and gender biases in training data?
Group Tasks:
1. Identify benefits and drawbacks of the AI system.
2. List ethical concerns (e.g., bias, privacy, accountability).
3. Propose solutions using principles like algorithmic transparency or GDPR compliance.
Each group presents its findings.
Discussion Prompts:
Transparency: Should companies disclose how AI models make decisions?
Accountability: Who is responsible if a self-driving car causes harm—the programmer, company, or user?
Fairness: How can we audit AI systems for racial/gender bias?
Write a one-page policy memo for an AI company addressing one ethical challenge.
Prompt: “How would you redesign [system] to align with fairness and transparency standards?”
Example response for the MIT/Stanford case study - Problem: training data skewed toward light-skinned males. Solution: diversify datasets and adopt bias-testing protocols.
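For teachers who want to make the "bias-testing protocol" idea concrete, the audit can be demonstrated with a short script: collect a classifier's predictions alongside the true labels and a demographic group tag, then compare error rates across groups. This is a minimal sketch; the function name and the data below are illustrative, not from the study itself.

```python
def error_rate_by_group(records):
    """Return {group: error rate} from (group, prediction, truth) tuples."""
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / n for g, n in totals.items()}

# Hypothetical audit data: (demographic group, predicted label, true label)
records = [
    ("group_a", "female", "female"), ("group_a", "male", "female"),
    ("group_a", "female", "female"), ("group_a", "male", "male"),
    ("group_b", "male", "male"),     ("group_b", "male", "male"),
    ("group_b", "female", "female"), ("group_b", "male", "male"),
]

rates = error_rate_by_group(records)
print(rates)  # group_a's error rate (0.25) exceeds group_b's (0.0)
```

A large gap between groups, like the 34.7% vs. 0.8% figures in the study, is the signal that the training data or model needs revisiting.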
Participation: Engagement in group work and discussions.
Critical Thinking: Depth of ethical analysis in proposed solutions.