How might AI chatbots replace mental health therapists?
Clients form 'strong relationships' with tech
There is a striking shortage of mental health care providers in the United States. New research suggests that AI chatbots can fill in the gaps — and be remarkably effective while doing so.
Artificial intelligence can deliver mental health therapy "with as much efficacy as — or more than — human clinicians," said NPR. New research published in the New England Journal of Medicine looked at the results delivered by a bot designed at Dartmouth College.
What did the commentators say?
There was initially a lot of "trial and error" in training AI to work with humans suffering from depression and anxiety, said Nick Jacobson, one of the researchers, but the bot ultimately delivered outcomes similar to the "best evidence-based trials of psychotherapy." Patients developed a "strong relationship with an ability to trust" the digital therapist, he said.
Other experts see "reliance on bot-based therapy as a poor substitute for the real thing," said Axios. Therapy is about "forming a relationship with another human being who understands the complexity of life," said sociologist Sherry Turkle. But another expert, Skidmore College's Lucas LaFreniere, said it depends on whether patients are willing to suspend their disbelief. "If the client is perceiving empathy," he said, "they benefit from the empathy."
AI therapists could "further isolate vulnerable patients instead of easing suffering," Nigel Mulligan, a lecturer in psychotherapy at Dublin City University, said at The Conversation. It is easy to understand why people would turn to a "convenient and cost-effective resource" for mental health services, but while bots can be "beneficial for some," they are generally not an "effective substitute" for a human therapist. Humans can offer "emotional nuance, intuition and a personal connection." AI, though, cannot duplicate that nuance, making it "unsuitable for those with severe mental health issues."
The technology can make mental health services "more accessible, more personalized, and more efficient," Dr. Jacques Ambrose said at NewYork-Presbyterian's blog. Large language models have the ability to "analyze the vast amount of patient data in psychiatry" and come up with tailored treatments specific to a client. But there are concerns about privacy and the "human-to-human connection" that makes therapy effective. The best approach is one that creates a "partnership between the clinician and the technology."
What next?
In February, the American Psychological Association made a presentation to the Federal Trade Commission warning against chatbots "masquerading" as therapists that "could drive vulnerable people to harm themselves or others," said The New York Times. "People are going to be misled, and will misunderstand what good psychological care is," said Arthur C. Evans Jr., the Association's chief executive.
There are some efforts to limit the reach of AI therapy. In California, a bill has been introduced that would ban tech companies from deploying an AI program that "pretends to be a human certified as a health provider," said Vox. Chatbots are "not licensed health professionals," said state Assembly Member Mia Bonta, "and they shouldn't be allowed to present themselves as such."
Joel Mathis is a writer with 30 years of newspaper and online journalism experience. His work also regularly appears in National Geographic and The Kansas City Star. His awards include best online commentary at the Online News Association and (twice) at the City and Regional Magazine Association.