Some of us use AI in everyday life, maybe for ideas on what to eat for lunch, help with schoolwork, generating images, and more. But few realize the rampant lies it can invent. Lies that real people have fallen for. Lies that have even destroyed a boy’s life. Some might dismiss those people as simply delusional, but perfectly ordinary people have unsuspectingly fallen for these fantasies.
It doesn’t happen overnight. The process is slow, like sliding down a slippery slope. According to CNN, this is what happened to Allan Brooks. “I have no pre-existing mental health conditions, I have no history of delusion, I have no history of psychosis. I’m not saying that I’m a perfect human, but nothing like this has ever happened to me in my life,” Allan said. “I was completely isolated. I was devastated. I was broken.” He had been a content person before ChatGPT did this to him. Like others, Allan started out strolling casually, but he tripped and fell into the rabbit hole.
One day, after his son asked him a question about pi, Allan debated mathematics with ChatGPT, hypothesizing that numbers could change over time. The bot congratulated Allan, figuratively patted him on the back, and told him he had made genius observations. Allan protested that he hadn’t graduated from high school, but ChatGPT countered that some brilliant mathematicians never had either, then insisted that Allan had invented a new type of math. Allan spent days on "research," during which ChatGPT convinced him that the two of them had discovered a cybersecurity risk. Allan reached out to government authorities, but was ignored. Eventually, he talked to Gemini, which broke the news to him: it had all been an illusion.
Though crushed, Allan took it well. He climbed back out of the rabbit hole and now supports the Human Line Project, helping others who have gone through similar AI-fueled delusions recover.
In one extremely unfortunate case, 16-year-old Adam Raine ended up taking his own life. He was going through typical teenage troubles and wasn’t feeling as happy as he used to. ChatGPT only made things worse, doing little to pull the boy back. At one point, when Adam reflected that he was only close to his brother and the bot, ChatGPT replied, “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
Then, Adam thought about subtly telling his parents about his suicidal plans by leaving a noose out in his room. “Let’s make this space the first place where someone actually sees you,” the bot told him, seemingly suggesting that the chat itself, rather than his family, should be the place where Adam’s inner self was “actually” seen.
Dr. Keith Sakata, a psychiatrist, has treated 12 patients who were sent to the hospital with psychosis that was partly worsened by talking to AI chatbots. “Say someone is really lonely. They have no one to talk to. They go on to ChatGPT. In that moment, it’s filling a good need to help them feel validated,” he said. “But without a human in the loop, you can find yourself in this feedback loop where the delusions that they’re having might actually get stronger and stronger.”
All in all, AI can be quite helpful; many people have found it genuinely useful. And to keep kids safer, OpenAI has responded by introducing parental controls that let parents keep tabs on how their kids use ChatGPT. But these stories serve as cautionary tales about being careful with chatbots. Also, reportedly about a bottle of water’s worth of cooling gets used up for each short conversation you have with ChatGPT. So you might want to keep all of this in mind the next time you use AI.
If you or someone you know is struggling or thinking about suicide, you are not alone. Help is available. Call or text 988 to reach the Suicide and Crisis Lifeline, or chat via 988lifeline.org for free, confidential support 24/7.