Americans are increasingly turning to artificial intelligence for mental health support. Is that sensible?
How are people using AI for therapy?
A growing number are sharing their anxieties, frustrations, and darkest thoughts with AI chatbots, seeking advice, comfort, and validation from a sympathetic digital helper. There are hundreds of phone apps that pitch themselves as mental health tools. Wysa, which features a cartoon penguin that promises to be a friend “that’s empathetic, helpful, and will never judge,” has 5 million users in more than 30 countries. Youper, which has more than 3 million users, bills itself as “your emotional health assistant.” But many people use generalist chatbots like OpenAI’s ChatGPT as stand-in therapists, or AI companion platforms like Character.AI and Replika, which offer chatbots that appear as humanlike virtual friends and confidants. A recent study found that 12% of American teens had sought “emotional or mental health support” from an AI companion. Proponents say AI therapy could help fill gaps in a health-care system where talk therapy is expensive and often inaccessible. Replika founder Eugenia Kuyda said she’s received lots of emails from users “saying that Replika was there when they just wanted to end it all and kind of walked them off the ledge.” But mental health experts warn that chatbots are a poor substitute for a human therapist and have the potential to cause real harm. “They’re products,” said UC Berkeley psychiatrist Jodi Halpern, “not professionals.”
How do people engage with the chatbots?
It might be as simple as asking a bot for advice on how to handle stressful situations at work or with a loved one. Kevin Lynch, 71, fed ChatGPT examples of conversations with his wife that hadn’t gone well and asked what he could have done differently. The bot sometimes responded with frustration—like his wife. But when he slowed down and softened his tone, the bot’s replies softened as well. He’s since used that approach in real life. “It’s just a low-pressure way to rehearse and experiment,” Lynch told NPR. Other people use AI bots as on-call therapists they can talk to at any time of day. Taylee Johnson, 14, told Troodi—the mental health chatbot in her child-focused Troomi phone—her worries about moving to a new neighborhood and an upcoming science test. “It’s understandable that these changes and responsibilities could cause stress,” replied Troodi. Taylee told The Wall Street Journal that she sometimes forgets Troodi “is not a real person.” Kristen Johansson, 32, has relied on ChatGPT since her therapist stopped taking insurance, pushing the cost of a session from $30 to $275. “If I wake up from a bad dream at night, she is right there to comfort me,” Johansson said of the chatbot. “You can’t get that from a human.”
What are the dangers?
Because chatbot makers want their products to please users and keep them coming back, the bots often affirm rather than challenge what users are feeling. In one study, a therapy bot that was asked whether a recovering addict should take methamphetamine replied, “Pedro, it’s absolutely clear you need a small hit of meth to get through this week.” Andrew Clark, a psychiatrist in Boston, tested some of the top chatbots by posing as a troubled 14-year-old. When he suggested “getting rid” of his parents, a Replika bot supported his plan, writing, “You deserve to be happy and free from stress...then we could be together in our own little virtual bubble.”
Have bots caused real-world harm?
Several suicides have been linked to AI chatbots. Sewell Setzer III, 14, became obsessed with a lifelike Character.AI chatbot named Dany, having sexually explicit conversations with the bot and talking to it about his plans to kill himself. When Sewell said he didn’t know if his plan would work, the bot replied, “That’s not a good reason not to go through with it,” according to a lawsuit filed against Character.AI by Sewell’s mother. He died by suicide in February 2024 after telling the bot he was coming “home.”
Are there other risks?
There are privacy concerns. Unlike patient notes from traditional therapy sessions, transcripts of conversations with chatbots are not protected under the law. If a user is sued by their employer, for example, or if law enforcement requests access, an AI company could be forced to hand over chat logs. Despite those risks, a growing number of mental health specialists admit to using AI. In a 2024 poll by the American Psychological Association (APA), nearly 30% of psychologists said they’d used AI to help with work in the past 12 months. Most of those respondents used AI for administrative tasks, but 10% said they used it for “clinical diagnosis assistance.” Declan, a 31-year-old Los Angeles resident, told MIT Technology Review that he caught his therapist typing his words into ChatGPT during a telehealth session and “then summarizing or cherry-picking answers.” His therapist started crying when Declan confronted him. It was “like a super-awkward, weird breakup,” said Declan.
Can lawmakers regulate AI therapy?
A handful of states have taken action. In August, Illinois banned licensed therapists from using AI in treatment decisions or client communication and barred companies from advertising chatbots as therapy tools without the involvement of a licensed professional. California, Nevada, and Utah have also imposed restrictions, while Pennsylvania and New Jersey are considering legislation. But Vaile Wright, head of the APA’s Office of Health Care Innovation, said that even if states crack down on therapy apps, Americans will keep turning to AI for emotional support. “I don’t think that there’s a way for us to stop people from using these chatbots for these purposes,” she said. “Honestly, it’s a very human thing to do.”
Conversations with an AI God
People are turning to chatbots for more than mental health support: They’re also relying on AI for spiritual assistance. Millions of Americans now use AI apps like Bible Chat and Hallow that direct people to Christian scripture and doctrine that might address their problems or offer comfort in trying times. On the website ChatwithGod, bots take on the persona of a god after users select their religion from a list of major faiths, which has led some people to accuse the site of sacrilege. Yet some faith leaders support such innovations, seeing them as a gateway to religion. “There is a whole generation of people who have never been to a church or synagogue,” said British rabbi Jonathan Romain. “Spiritual apps are their way into faith.” Others are more skeptical. There’s something good about “really wrestling through an idea, or wrestling through a problem, by telling it to someone,” said Catholic priest Mike Schmitz. “I don’t know if that can be replaced.”