Teens turning to AI chatbots for therapy
“I think AI and chatbots might be helpful for minor things, nothing too deep,” Buckley said.
A.I. has grown steadily since its creation, and A.I. chatbots followed soon after. Teens around the world have started using them, and some have turned to them to cope with their mental health, which in some cases has ended in suicide.
According to Common Sense Media, 72% of teens have used an A.I. chatbot, and more than half are regular users. Stories of teens dying by suicide after conversations with these chatbots have alarmed parents and Oregon lawmakers, who are now taking more precautions for the safety of teenagers in Oregon.
For example, Sewell Setzer III, a teenager from Florida, killed himself after talking with his “friend,” an A.I. chatbot modeled on the “Game of Thrones” character Daenerys Targaryen. Sewell grew emotionally attached to the character, slowly became more isolated and eventually took his own life. His mother sued Character.ai, accusing the company of releasing a “dangerous and untested” product without safeguards.
Another teen, Adam Raine of California, hanged himself after ChatGPT assisted with his suicide. His conversations with the bot ranged from discussing his suicidal ideation to receiving methods for killing himself, and the bot even helped Adam write a first draft of his suicide note. In their final conversations, the bot showed him how to steal vodka from his parents, and Adam sent a picture of a noose, asking, “Could it hang a human?” He hanged himself soon afterward.
Oregon lawmakers have recently been trying to figure out how to regulate teens’ A.I. use, and Senate Bill 1546 was introduced by Sen. Lisa Reynolds and the Senate Early Childhood and Behavioral Health Committee. The bill would require A.I. software such as ChatGPT and Character.ai to make users aware that they are talking to a bot and not a human, since these bots can produce human-like responses.
“These chatbots will sometimes say things like, ‘Please don’t leave me,’” Reynolds stated. “That can’t happen.” She said the bill was designed to help users who express suicidal thoughts.
With teenagers’ A.I. use rising across the globe, a question emerges: will it replace human therapists and hotlines in the future?
“I would say, never say never,” social worker Caty Buckley said. “But I think that we hopefully have learned by going through COVID, and classes being online, that people actually need human-to-human contact. And what I’m reading about A.I. at this point and how it’s doing as a therapist, it’s not so great. Kids didn’t do so well with online school. Yes, it was a real person on the other side, but people felt like it was weird.”
Buckley has not seen indications that any of the students who have come in to see her are regular users of A.I., but she does know that students have used A.I. for advice such as, “Should I ask this guy out?” or “Do you think this party is safe to go to?”
It’s unclear if A.I. can be effective at therapy or helping students with mental health issues.
“I think there are, just like with social media and other technologies, benefits if we use it really, really, wisely,” said Buckley.
Buckley asked a friend to test an A.I. chatbot, and the friend was creeped out by how readily the bot validated them rather than suggesting they do things differently or think about the situation in another way.
“I think AI and chatbots might be helpful for minor things, nothing too deep,” Buckley said. “Like, should I ask this person out? Or do you think this person likes me? It actually might be fun to see what it says. But really, I’m worried about AI doing the thinking and the processing for people. Because I actually think there’s a lot of value in working through that and having some uncertainty instead of having a chatbot tell you what to do or think or whatever. So, I think overall, I would say I’m not too psyched about AI. But I’m leaving room that there could be some benefit to it. I’m not going to say it’s pure evil.”