Google Gemini. (2024). 'AI cognitive image' [Image]. Generated August 8, 2025.
AI makes mistakes and can misrepresent information. Students using AI should verify information for accuracy and appropriateness.
Table of Contents:
Evaluate AI Technology (Output)
Misinformation & Fact-Checking
Define AI Bias & How to Detect It
Image GenAI Bias
GenAI Is Not Always Correct
The CRAAP test is a method for thinking critically about content; humans are still necessary for evaluating the output of AI tools:
Currency: the information is timely or up to date.
Relevance: the output is focused and related to the prompt.
Authority: the information reflects real expertise on the topic of the prompt.
Accuracy: you have cross-checked the information against what reputable sources report on the topic.
Purpose: the output is informative, not biased.
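As an illustration for classes that code, the checklist can be written out in a few lines. This is a minimal sketch, assuming you record a yes/no judgment for each criterion while reviewing an AI response; the names CRAAP_CRITERIA and evaluate_output are invented for this example and are not part of any real tool.

```python
# A minimal sketch of the CRAAP test as a checklist.
# All names here are illustrative, not part of any real library.

CRAAP_CRITERIA = {
    "Currency": "Is the information timely or up to date?",
    "Relevance": "Is the output focused and related to the prompt?",
    "Authority": "Does the information reflect expertise on the topic?",
    "Accuracy": "Does it match what reputable sources report?",
    "Purpose": "Is the output informative rather than biased?",
}

def evaluate_output(judgments: dict[str, bool]) -> list[str]:
    """Return the criteria that failed, i.e., need human follow-up."""
    return [c for c in CRAAP_CRITERIA if not judgments.get(c, False)]

# Example: the output looked current and relevant, but you could not
# verify authority or accuracy, and the tone seemed slanted.
flags = evaluate_output({
    "Currency": True,
    "Relevance": True,
    "Authority": False,
    "Accuracy": False,
    "Purpose": False,
})
print("Re-check before trusting this output:", flags)
```

The point of the sketch is that every criterion starts out failed until a human has actually checked it; the AI output never verifies itself.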
How do we analyze AI output to check for misinformation, accuracy, and appropriateness?
Misinformation = false or inaccurate information
Disinformation = sharing false or inaccurate information on purpose
To find the KMHS Library website, go to kmsd.edu, open the Menu, select Library Services, and click the KMHS icon.
Here is a visual reminder to use the KM Research Tools to build understanding, think critically, and learn about the topic:
AI: Training Data & Bias (2:41)
AI is trained on real-world data that people give it, and if that data contains biases (or is incomplete), the AI can end up being biased too.
AI Bias:
When an AI tool makes a skewed decision because it learned from training data that wasn't fair, i.e., data that didn't treat all people, places, and things the same.
AI bias impacts how reliable, fair, & trustworthy AI tools are.
It can have a direct impact on individuals or groups of people.
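To see the mechanism in miniature, here is a toy sketch. It is not how real AI models work: the four "training" sentences are invented for illustration, and the "model" simply counts which word follows "the doctor said". Because the made-up data mentions "he" more often than "she", the prediction inherits that imbalance.

```python
from collections import Counter

# Invented training data: "he" follows "said" three times, "she" once.
training_sentences = [
    "the doctor said he would help",
    "the doctor said he was busy",
    "the doctor said he had results",
    "the doctor said she would help",
]

# "Train": count which word follows "said" in each sentence.
counts = Counter()
for sentence in training_sentences:
    words = sentence.split()
    for i in range(len(words) - 1):
        if words[i] == "said":
            counts[words[i + 1]] += 1

# "Predict": this toy model just picks the most frequent continuation.
prediction = counts.most_common(1)[0][0]
print(counts)      # Counter({'he': 3, 'she': 1})
print(prediction)  # 'he' -- the imbalance in the data becomes the answer
```

Real systems are vastly more complex, but the core issue is the same: skewed inputs produce skewed outputs.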
Negative Impacts of AI Bias:
Unfair treatment: If an AI tool is biased, it might make decisions that are unfair to certain groups of people.
Continuing stereotypes: If an AI tool learns from data that includes stereotypes (e.g., about race or gender), it might make decisions based on those prejudiced ideas.
Unequal opportunities: AI bias can also limit opportunities for some people by unfairly favoring another group.
Misinformation: If an AI tool learns from biased information, it can end up creating and spreading false or incomplete information.
Which face is real? (Press PLAY after you guess to get another set of images)
Share examples and discuss strategies for analyzing AI output. Has there been a time when an AI tool was wrong or completely misinterpreted your prompt?
Considering the unique nature of GenAI, where outputs are generated rather than retrieved, how might you adapt or prioritize the elements of the CRAAP test (Currency, Relevance, Authority, Accuracy, Purpose) when assessing the reliability and trustworthiness of AI-generated text or images? (This question invites a nuanced application of the framework to the specific challenges of generative AI.)