AI "hallucinations" refer to the phenomenon where AI tools spit out information that is inaccurate or entirely fabricated.
One example: an AI-generated article cited a 57% statistic, but the actual source (not linked or properly cited at the end of the article) uses that figure for something completely different.
1. Gaps in the training data
AI models are trained on huge datasets, but not everything is covered. When asked about topics outside that data (like obscure facts or current events), the AI guesses based on loose patterns.
2. Doesn't know what it doesn't know
AI cannot determine whether it has the necessary knowledge to accurately answer a question in the first place; it generates a confident-sounding response regardless of certainty.
3. Misunderstanding your question
AI doesn’t truly “understand” language; it just predicts what words should come next. If your question is complex, ambiguous, or nuanced, the model may misinterpret your intent and give you a misleading answer. (The short sketch after this list shows the mechanics behind points 2 and 3.)
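To make points 2 and 3 concrete, here is a minimal, purely illustrative Python sketch (not any real chatbot): generation is repeated next-word prediction, and the softmax step always turns raw scores into a probability distribution, so the model always picks something. The vocabulary and scores below are invented for illustration.

```python
# A minimal sketch (not any real model) of why a language model always
# "answers": each step converts raw scores into probabilities that sum
# to 1, so there is no built-in "I don't know" signal.
import numpy as np

def softmax(logits):
    """Turn raw scores into a probability distribution (always sums to 1)."""
    exp = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return exp / exp.sum()

# Hypothetical next-word candidates after "The capital of Atlantis is",
# a question with no correct answer. Scores are made up for illustration.
vocab = ["Paris", "Atlantica", "unknown", "Poseidonis"]
logits = np.array([1.2, 2.9, 0.3, 2.5])

probs = softmax(logits)
for word, p in zip(vocab, probs):
    print(f"{word:>10}: {p:.2f}")

# The model simply picks the highest-probability word and moves on,
# sounding confident even though the premise (Atlantis) is fictional.
print("Model's 'answer':", vocab[int(np.argmax(probs))])
```

Even for a made-up place, the math always produces a confident-looking "answer": that is exactly the behavior behind hallucinations.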
From North Carolina State University's AI Guidance and Best Practices:
You’re the Expert, Not the AI
Don’t rely on the tool for subject-matter expertise in your area; plug in YOUR knowledge and let it help format or flesh it out
Carefully vet ALL responses from ChatGPT and other AI tools for accuracy, language/tone, redundancy, appropriateness, etc.
Be Cautious of Bias and Inaccuracies
Generative AI mimics humans’ online behavior, which is not always accurate, appropriate, etc.
These tools draw from enormous datasets that often include bias, which can be further skewed over time by user patterns or “algorithmic bias” (a feedback loop sketched after this list)
Again, review and vet all responses!
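To illustrate that feedback loop, here is a toy, purely hypothetical Python simulation (it does not model any real system): a slight skew in the data gets echoed by the model's outputs, those outputs flow back into the training pool, and the skew compounds round after round.

```python
# A toy simulation (purely illustrative, not any real system) of how
# "algorithmic bias" can grow over time: a model trained on slightly
# skewed data over-samples the majority view, its outputs are mixed
# back into the training pool, and the imbalance compounds.
import random

random.seed(0)  # fixed seed so the run is reproducible

share_a = 0.55  # viewpoint A starts with a slight majority in the data
rounds = 5

for r in range(1, rounds + 1):
    # The "model" generates 1,000 items, mirroring the current data mix
    # but over-sampling the majority view a little (the feedback effect).
    boosted = min(1.0, share_a * 1.1)
    outputs = [random.random() < boosted for _ in range(1000)]
    # Generated items are mixed back into the training pool 50/50.
    share_a = 0.5 * share_a + 0.5 * (sum(outputs) / len(outputs))
    print(f"Round {r}: viewpoint A now makes up {share_a:.0%} of the data")
```

The numbers are invented; the point is only that a small imbalance can snowball when generated content feeds back into training data, which is why reviewing and vetting responses matters.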
"On Monday, Google announced its AI chatbot Bard — a rival to OpenAI’s ChatGPT that’s due to become “more widely available to the public in the coming weeks.” But the bot isn’t off to a great start, with experts noting that Bard made a factual error in its very first demo."
"Google’s AI chatbot isn’t the only one to make factual errors during its first demo. Independent AI researcher Dmitri Brereton has discovered that Microsoft’s first Bing AI demos were full of financial data mistakes."
"It is important to note that AI can confidently generate responses without backing data much like a person under the influence of hallucinations can speak confidently without proper reasoning."
"There have already been a myriad of AI success stories, and other chatbots like Bard and Claude are used by tens of thousands of people too – but there have also been a lot of cases where harnessing artificial intelligence has gone horribly wrong."
"The high-profile incident in a federal case highlights the need for lawyers to verify the legal insights generated by AI-powered tools."
"Microsoft reportedly published — and retracted — an AI-generated article that recommended people visit a Canadian food bank as a tourist attraction."
"Texas A&M University–Commerce seniors who have already graduated were denied their diplomas because of an instructor who incorrectly used AI software to detect cheating."
“Add some glue,” Google answers. “Mix about 1/8 cup of Elmer’s glue in with the sauce. Non-toxic glue will work.”