AI makes mistakes and can misrepresent information. Students using AI should verify information for accuracy and appropriateness.
Table of Contents:
Evaluate AI Technology (Output)
Defining AI Bias & How to Detect It
Misinformation & Fact Checking
Image GenAI Bias
AI: Training Data & Bias (video, 2:41)
Ethics & AI: Equal Access & Algorithms (video, 3:23)
AI is trained on real-world data that people give it, and if that data contains biases (or is incomplete), the AI can end up being biased too.
AI Bias:
AI bias occurs when an AI tool makes a skewed decision because it learned from training data that wasn't fair: data that didn't treat all people, places, and things the same.
AI bias impacts how reliable, fair, & trustworthy AI tools are.
It can have a direct impact on individuals or groups of people.
Negative Impacts of AI Bias:
Unfair treatment: If an AI tool is biased, it might make decisions that are unfair to certain groups of people.
Continuing stereotypes: If an AI tool learns from data that includes stereotypes (e.g., about race or gender), it might make decisions based on those prejudiced ideas.
Unequal opportunities: AI bias can also limit opportunities for some people by unfairly favoring another group.
Misinformation: If an AI tool learns from biased information, it can end up creating and spreading false or incomplete information.
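The idea above can be sketched in a small, entirely hypothetical example. Suppose an AI tool learns who to approve for something (say, a loan) only from past decisions, and those past decisions unfairly favored one group. The data below and the group names are invented for illustration; the point is that a model trained on skewed records simply reproduces the skew.

```python
from collections import defaultdict

# Hypothetical historical records: (group, qualified?, approved?)
# Group "A" applicants were approved even when unqualified;
# group "B" applicants were often denied even when qualified.
training_data = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

def learn_approval_rates(records):
    """'Train' by learning the fraction approved in each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, _qualified, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

rates = learn_approval_rates(training_data)
print(rates)  # group A ends up strongly favored over group B
```

Even though both groups contain similarly qualified applicants, the "model" learns a much higher approval rate for group A, because that is what the unfair training data showed it.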
The News Literacy Project (NLP), a nonpartisan education nonprofit, is building a national movement to advance the practice of news literacy throughout American society, creating better informed, more engaged and more empowered individuals — and ultimately a stronger democracy.
Is your information AUTHENTIC?
Is the SOURCE credible?
Is there EVIDENCE to support?
Is the CONTEXT accurate?
Is it based on solid REASONING?
Adobe Express (a graphic design tool) includes GenAI features, and our district license covers all users age 13 and older.
Please be aware of and read the Terms of Service.
(See screenshot below)
Discussion Questions
Considering the unique nature of GenAI, where outputs are generated rather than retrieved, how might you adapt or prioritize the different elements of the CRAAP test (Currency, Relevance, Authority, Accuracy, Purpose) when assessing the reliability and trustworthiness of AI-generated text or images? (This question encourages a nuanced application of a critical evaluation framework to the specific challenges posed by generative AI.)
Given the challenges of misinformation and the potential for AI to generate or amplify it, and considering the content filters implemented in tools like Adobe Express for image generation, what role do you believe technology developers, educators, and individual users each play in fostering a more informed and discerning consumption of AI-generated content and in effectively fact-checking information in the digital age? (This question explores the shared responsibility across different stakeholders in navigating the complexities of AI-generated information and promoting media literacy.)