Mechanism Minded 25-07-25 Wendy Wee: Can't help hallucinating: Baby, LLM was born this way
Hallucination is baked into LLMs (Large Language Models). It can't be eliminated; it's how they work. LLMs may hallucinate less than humans. But it's not about less or more. It's the different, and more dangerous, nature of their hallucinations. It's doubtful LLMs will replace human workers. Jobs require understanding context, problem-solving, and adaptability. LLMs can mimic these skills but can't perform them robustly. Their architecture makes that impossible. Relying on them for fact-based, high-stakes work is an insane gamble. Use LLMs where they shine: to create, not calculate; to communicate, not control; where flexibility matters more than precision. This may sound cliché, but they're here to augment, not replace. They're the kind of worker that depends heavily on you to lead.
Why do LLMs hallucinate?
Because of how they fundamentally work. LLMs create text (generative: they "synthesize new sequences, not just retrieve memorized responses") by predicting one word at a time, based on how words statistically tend to appear together (approximate), with some randomness built into each step (stochastic: sampling from a "random probability distribution").
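Here's a minimal sketch of that idea, not any real model's code: the model assigns a score to every candidate next word, turns the scores into probabilities, and samples one. The words and scores below are hypothetical, just to show why the output is approximate and stochastic rather than a lookup of a stored fact.

```python
import math
import random

def sample_next_word(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Turn per-word scores into probabilities (softmax) and sample one word."""
    # Temperature scales the scores: lower = more deterministic, higher = more random.
    scaled = {w: s / temperature for w, s in logits.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    exps = {w: math.exp(s - max_s) for w, s in scaled.items()}
    total = sum(exps.values())
    probs = {w: e / total for w, e in exps.items()}
    # random.choices picks according to the probability weights -- the built-in randomness.
    return random.choices(list(probs.keys()), weights=list(probs.values()), k=1)[0]

# Hypothetical scores a model might assign after "The capital of Australia is"
logits = {"Canberra": 2.1, "Sydney": 1.9, "Melbourne": 0.7}
print(sample_next_word(logits))
```

Most runs print "Canberra", but some print "Sydney": a fluent, plausible-sounding answer that happens to be wrong. That, in miniature, is a hallucination.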