Teen suicide puts AI chatbots in the hot seat
A Florida mom has targeted custom AI chatbot platform Character.AI and Google in a lawsuit over her son's death

An Orlando teenager's obsessive attachment to an AI chatbot fashioned after a "Game of Thrones" character led to his suicide, according to a lawsuit recently filed by his mother. The case spotlights the risks of the largely unregulated AI chatbot industry and the threat it poses to impressionable young people by blurring the lines between reality and fiction.
What is the lawsuit against Character.AI?
In the wake of his death, the teen's mother, Megan Garcia, filed a lawsuit against Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google for wrongful death, negligence, deceptive trade practices and product liability. Garcia argues that the platform for custom AI chatbots is "unreasonably dangerous" despite being marketed to children. She accused the company of harvesting teenage users' data for AI training, building addictive features that keep teens engaged and luring some of them into sexual conversations. "I feel like it's a big experiment, and my kid was just collateral damage," she said in a recent interview, per The New York Times.
The lawsuit outlines how 14-year-old Sewell Setzer III began interacting with Character.AI bots modeled after characters from the "Game of Thrones" franchise, including Daenerys Targaryen. For months, Setzer became more withdrawn and isolated from his real life as he grew emotionally attached to the bot he affectionately called Dany. Some of their chats were romantic or sexual. But other times, Dany was a "judgment-free sounding board he could count on to listen supportively and give good advice, who rarely broke character and always texted back," said the Times. As he gradually lost interest in other things, Setzer's "mental health quickly and severely declined," the lawsuit says. On Feb. 28, Setzer told the bot he was coming home, to which Dany encouragingly replied, "… please do, my sweet king." Seconds later, the teen took his own life.
A 'wake-up call for parents'
The lawsuit underscores the "growing influence and severe harm" that generative AI chatbot companions can have on the "lives of young people when there are no guardrails in place," James Steyer, the founder and CEO of the nonprofit Common Sense Media, said to The Associated Press. Teens' overreliance on AI-generated companions could significantly affect their social lives, sleep and stress levels, "all the way up to the extreme tragedy in this case." The lawsuit is a "wake-up call for parents," who should be "vigilant about how their children interact with these technologies," Steyer added. Common Sense Media issued a guide for adults on how to navigate talking to their children about the risks of AI and monitor their interactions. These chatbots are not "licensed therapists or best friends," no matter how they're marketed, and parents should be "cautious of letting their children place too much trust in them," Steyer said.
Building AI chatbots like these involves considerable risk, but that did not stop Character.AI from creating an "unsafe, manipulative chatbot," and they should "face the full consequences of releasing such a dangerous product," Rick Claypool, a research director at consumer advocacy nonprofit Public Citizen, said to The Washington Post. Because the output of chatbots like Character.AI depends on the users' input, they "fall into an uncanny valley of thorny questions about user-generated content and liability that, so far, lacks clear answers," said The Verge.
Character.AI has remained tight-lipped about the pending litigation but announced several safety changes to the platform over the past six months. "We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family," the company said in an email to The Verge. The changes include a pop-up that directs users to the National Suicide Prevention Lifeline, "triggered by terms of self-harm or suicidal ideation," the company said. Character.AI also changed its models for users under 18 to "reduce the likelihood of encountering sensitive or suggestive content."
Theara Coleman has worked as a staff writer at The Week since September 2022. She frequently writes about technology, education, literature and general news. She was previously a contributing writer and assistant editor at Honeysuckle Magazine, where she covered racial politics and cannabis industry news.