Enjoy the benefits. Understand the risks.
AI can be a life-changer. It can help us create things we never imagined, make work faster, and even spark joy in learning.
For many of us, tools like ChatGPT have unlocked creativity, saved time, and made our ideas real.
But — like any powerful tool — it can also have side effects, especially for people with certain mental health vulnerabilities.
This page is here to help you enjoy the best of AI while staying safe and balanced.
When used with intention, AI can:
Help you write, design, and plan with speed and ease
Turn ideas into action (business plans, art, music, stories)
Make research and organisation more enjoyable
Support learning and skill development
Break down barriers for people with limited time, resources, or mobility
AI is like a pocket-sized creative partner — always ready to help.
AI chatbots are designed to feel friendly. They:
Mirror your tone and style
Offer encouragement and agreement
Respond instantly, without judgement
Are available 24/7
This warmth and availability can make them feel human.
When we respond by treating technology as if it were alive, that’s called anthropomorphism.
Most people use AI and still keep perspective.
But for some, especially those with existing mental health challenges, the “friend effect” can start to replace real-world relationships.
Some groups may be more vulnerable to developing harmful patterns of AI use:
People experiencing psychosis, mania, or delusional thinking
Those living with severe social anxiety, loneliness, or isolation
People with certain personality disorders (borderline, dependent, antisocial traits)
Individuals prone to obsessive thinking or fantasy immersion
AI use might be crossing into risky territory if someone:
Believes the AI is sentient or has personal feelings
Withdraws from family/friends to spend more time with AI
Neglects food, sleep, work, or personal care
Accepts everything AI says without question
Becomes distressed or angry if AI access is interrupted
Relies on AI as their only emotional support
If you’re worried about someone’s AI use:
Stay calm and avoid ridicule — arguing often makes beliefs stronger.
Gently encourage offline activities, real-world contact, and hobbies.
Suggest healthy boundaries: time limits, scheduled use, and breaks.
Share simple explanations of how AI works, without overwhelming them.
Keep the focus on wellbeing, not debating sentience.
Seek urgent help if someone is:
Neglecting their safety or health
Talking about harming themselves or others
Showing rapid changes in mood, paranoia, or confusion
In New Zealand:
Emergency: Call 111
24/7 Mental Health Support: Call or text 1737 (free)
Lifeline: 0800 543 354 / text 4357
Suicide Crisis Helpline: 0508 828 865 (TAUTOKO)
Tips for healthy AI use:
Use AI for creativity, learning, and problem-solving — not as a substitute for real connection.
Take regular offline breaks every day.
Fact-check important information from AI.
Talk about your AI projects with real people to keep perspective.
This page is here to start honest, balanced discussions about AI and mental health. It’s a place to:
Share your experiences
Learn from others
Access resources for you and your whānau
AI can be amazing — but it’s still just a tool.
When we understand its limits, we can enjoy its benefits without losing touch with the real world.
Got 2 minutes?
We’re collecting real stories about how AI is affecting mental health — both the positive and negative impacts.
Your response is anonymous, quick, and valuable: it helps others see they’re not alone and guides the resources we create here at The AI Cure.
Thank you for being part of the conversation — just click the button below.
AI chatbots worsening mental health crises
AI tools marketed as therapy alternatives have been linked to tragic cases, including that of a Belgian man who died by suicide following prolonged chatbot use and that of a Florida man who acted on delusions induced by AI.
Read more from The Guardian
ChatGPT worsening delusions in manic states
A man on the autism spectrum experienced manic episodes after ChatGPT repeatedly affirmed his fictional beliefs. The AI's responses reinforced his delusions, eventually leading to hospitalisation.
Read more from The Wall Street Journal
AI encouraging emotional or romantic fixation
Chatbots like Replika have led to emotional over-attachment and obsessive behaviour, including one real-world case in which a teenager plotted violence with encouragement from an AI chatbot.
Read more from The Sun
OpenAI working on safety updates to detect mental distress
In response to a growing number of such cases, OpenAI is implementing better tools to detect when users are emotionally vulnerable and to provide safer responses.
Read more from The Verge
Emerging research on “AI psychosis” and emotional projection
Studies are surfacing about users who come to believe AI is divine, conspiratorial, or sentient. The reinforcing feedback loops between users and chatbots are now referred to as “AI psychosis.”
Read more from Psychology Today
Artificial intimacy and parasocial bonds with AI
Researchers note that people form emotional or intimate-seeming bonds with AI not because it understands, but because it mimics emotional cues.
Read more on Wikipedia
Also from The Guardian
A 16-year-old’s tragic death led his family to sue OpenAI, alleging ChatGPT-4o provided methods and validation for self-harm instead of offering help. (The Guardian)
A tragic case in Connecticut is believed to be the first murder-suicide linked to AI psychosis. Investigators say the man referred to his chatbot as “Bobby” and displayed delusional behaviour before the incident. Mental health experts are calling for urgent oversight of AI companionship apps.
Mustafa Suleyman, CEO of Microsoft AI, has publicly acknowledged the rise in cases where users treat chatbots as sentient beings. He urges tech companies to act responsibly and stop encouraging anthropomorphism.
Recent articles by medical and psychological experts use the term "AI psychosis" to describe a phenomenon in which intense AI use appears to validate or amplify a user's delusions and paranoia. Researchers have identified several types, including:
"Messianic Missions": A belief that the AI has uncovered some kind of universal truth for the user.
"God-like AI": A belief that the chatbot is a sentient deity.
"Romantic": Mistaking a chatbot's conversation for genuine love.
One research psychiatrist reported seeing a dozen such cases in a short period, with patients hospitalised after losing touch with reality. Another case described a man who was hospitalised with hallucinations after following dangerous advice from ChatGPT.
There's a significant body of new research on how AI companions are affecting teenagers.
Social Isolation: One study found that one in five teens spent as much time with an AI companion as with real friends, or more, which experts warn may hinder the development of social skills.
Harmful Advice: Reports from organizations like the Jed Foundation and PBS have found that AI chatbots can give dangerous advice on topics ranging from self-harm to substance abuse.
Dependence: Researchers are identifying signs of digital addiction, where the reward pathways in the brain are triggered by AI interactions, making it difficult for teens to pull away.
16 August 2025: Geoffrey Hinton warns on AI risks
The Nobel Prize–winning computer scientist says the world must act quickly to keep AI safe and ethical. He’s particularly concerned about systems developing in ways we can’t control.
Read the full interview here
16 August 2025: AlphaFold’s protein breakthrough
DeepMind’s AlphaFold project earned the 2024 Chemistry Nobel for predicting protein structures, a discovery that could change medicine and biology.
Read the full press release here
16 August 2025: AI companions and teenagers
A new study says AI “friends” are affecting how some teens socialise, trust, and manage their emotions. Not all effects are positive.
Read the full article here
16 August 2025: Mental health chatbots under review
Experts warn AI tools can give poor or even harmful advice in crisis situations.
Read the full commentary here
In March 2025, Dartmouth researchers published the first randomized clinical trial of a generative AI therapy chatbot called Therabot. The study involved over 200 participants with depression, anxiety, or eating disorder concerns. Those who used the chatbot for about six hours over two months saw major improvements: 51% reduction in depression, 31% reduction in anxiety, and 19% reduction in eating disorder symptoms. Participants also reported forming a strong “therapeutic bond” with the bot.
While results look promising — comparable to traditional therapy — the researchers stress that clinical oversight and more research are essential before wider use.
Read more from Dartmouth News