AI tools like ChatGPT don’t think, plan, or fact-check the way humans do. While we form ideas and then choose words to express them, AI works in reverse: it generates responses one word at a time, predicting each word from probabilities learned from its training data. It doesn’t know what it’s going to say until it has already said it. This makes AI writing sound remarkably fluent, but fluent doesn’t mean accurate.
A hallucination occurs when an AI tool generates false, misleading, or entirely fabricated information.
1. Gaps in the training data
AI models are trained on huge datasets, but those datasets don’t include everything. When asked about topics outside that data (such as obscure facts or recent events), the AI guesses based on loose patterns.
2. Doesn't know what it doesn't know
Humans can pause mid-sentence and say, “Wait, that’s not right.” AI can’t do that. Once it starts generating a response, its only goal is to sound fluent, not to double-check whether the answer is accurate. It will confidently keep going, even if it’s just making things up.
3. Misunderstanding your question
AI doesn’t truly “understand” language; it just predicts which words should come next. If your question is complex, ambiguous, or nuanced, the model may misinterpret your intent and give you a misleading answer.
4. Bias or misinformation in the training data
AI learns from the internet, books, articles, and other massive sources of text, but not everything it learns is true or fair. If misinformation or biased perspectives appear in the training data, AI can reflect those inaccuracies and biases in its responses.
5. It's just a giant word guessing machine
At the end of the day, AI doesn’t plan its answers; it just generates them one word at a time, using probabilities to guess the most likely next word. Each word depends on all the words that came before it. That means if one word goes wrong, the whole answer can drift off track.
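To make the “word guessing machine” idea concrete, here is a minimal sketch in Python. It uses a tiny, made-up table of word-to-word probabilities (a toy bigram model, far simpler than a real AI model) to generate a sentence the same way: one word at a time, each pick based only on probability, with no planning and no fact-checking.

```python
import random

# Toy "language model": for each word, the possible next words and their
# probabilities. Illustrative made-up data, not from any real model.
bigram_probs = {
    "the": [("cat", 0.6), ("moon", 0.4)],
    "cat": [("sat", 0.7), ("landed", 0.3)],
    "moon": [("landed", 0.8), ("sat", 0.2)],
    "sat": [("down", 1.0)],
    "landed": [("softly", 1.0)],
}

def generate(start, max_words=5, seed=None):
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words and words[-1] in bigram_probs:
        choices, weights = zip(*bigram_probs[words[-1]])
        # Pick the next word by probability alone: the "model" never checks
        # whether the growing sentence is true, only what sounds likely.
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", seed=0))
print(generate("the", seed=1))
```

Notice that an early pick (“cat” vs. “moon”) changes every word that follows, which is exactly why one wrong word can send a whole answer off track.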
AI may sound confident while being completely wrong. Small details can be fabricated or distorted.
Don’t trust AI output without verification. Always click the links. Double-check the sources. Read what it wrote.
Faculty using AI in course prep or communication must fact-check every detail. Even seemingly basic information (like textbook titles or due dates) can be wrong.
Next: Detecting AI