You should evaluate all information for credibility, no matter where you find it.
This is particularly true for generative AI content since:
AI can easily generate ("hallucinate") realistic-looking content, including fake quotes, names, facts, images, etc.
AI systems can have biases based on the data they were trained on.
AI systems don’t understand information the way humans do. They analyze data based on patterns and algorithms, and they can make mistakes.
Below are some general considerations when evaluating AI.
AI tools work differently from one another and change frequently.
Which AI tool and version are you using?
What data sets does the AI tool use, and how current are they?
Does the AI tool have access to the live web?
Does the tool provide hyperlinked citations and references?
The prompt you enter directly shapes the output you get.
Have you used a prompt framework such as the Five "S" Model?
What is the tone of language in your prompt?
Have you tried different versions of your prompt?
Have you tried conversing with the AI beyond your initial prompt?
Start with a general "feel" for the information that has been generated.
Are there inconsistencies? Is any of the information incoherent or contradictory?
Are there abrupt or illogical shifts in tone or topic?
Does the presentation of the information miss important nuances?
Are there claims that contradict established knowledge or common sense?
AI can create content that is factually accurate but biased or incomplete.
What bias might your prompt itself have introduced?
Does content seem to be promoting a particular agenda or point of view?
Is the language objective and neutral, or is it loaded with emotional appeals?
Are multiple perspectives considered?
Some, but not all, AI generators provide citations.
If citations and/or references are provided, do sources actually exist?
Click on any provided links and/or search Google for the sources. Can you find them?
If a source is real, is it a reliable and trustworthy source?
If a source is reliable and trustworthy, does the AI output match source information?
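For tools that do emit hyperlinked citations, the first of these checks can be partly automated. Below is a minimal sketch in Python (standard library only); the function names and placeholder URL are illustrative, not taken from any real tool, and a link that resolves still needs to be read and compared against the AI's claims by a person.

```python
# Sketch: spot-check whether hyperlinked citations from an AI answer
# actually resolve. Assumes Python 3 standard library only.
from urllib.parse import urlparse
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError


def looks_like_url(link: str) -> bool:
    """Basic sanity check: an http(s) scheme and a host are present."""
    parts = urlparse(link)
    return parts.scheme in ("http", "https") and bool(parts.netloc)


def link_resolves(link: str, timeout: float = 10.0) -> bool:
    """Return True if the link answers with a non-error HTTP status.

    True only means the page exists -- you still have to read the
    source and confirm it says what the AI output claims it says.
    """
    if not looks_like_url(link):
        return False
    try:
        req = Request(link, method="HEAD",
                      headers={"User-Agent": "citation-check/0.1"})
        with urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (HTTPError, URLError, TimeoutError):
        return False


# Placeholder citation list, not real output from any AI tool:
citations = ["https://example.com/article"]
for link in citations:
    print(link, "resolves" if link_resolves(link) else "NOT FOUND")
```

Even a script like this only answers the "does the source exist?" question; the reliability of the source and the match between source and AI output remain manual checks.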
Triangulation means cross-referencing information against multiple other sources.
Where else can you search to verify information?
How do other sources report on the same topic? What is the same and/or different?
Can you confirm all evidence for claims by searching other non-AI sources?
How reliable are the other sources you have used as cross-references?