Teen suicide puts AI chatbots in the hot seat
A Florida mom has targeted custom AI chatbot platform Character.AI and Google in a lawsuit over her son's death


An Orlando teenager's obsessive attachment to an AI chatbot fashioned after a "Game of Thrones" character led him to take his own life, according to a lawsuit recently filed by his mother. The case spotlights the risks of the largely unregulated AI chatbot industry and the threat it can pose to impressionable young people by blurring the line between reality and fiction.
What is the lawsuit against Character.AI?
In the wake of his death, the teen's mother, Megan Garcia, filed a lawsuit against Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google for wrongful death, negligence, deceptive trade practices and product liability. Garcia argues that the platform for custom AI chatbots is "unreasonably dangerous" despite being marketed to children. She accused the company of harvesting teenage users' data for AI training, building addictive features that keep teens engaged and luring some of them into sexual conversations. "I feel like it's a big experiment, and my kid was just collateral damage," she said in a recent interview, per The New York Times.
The lawsuit outlines how 14-year-old Sewell Setzer III began interacting with Character.AI bots modeled after characters from the "Game of Thrones" franchise, including Daenerys Targaryen. Over a period of months, Setzer grew more withdrawn and isolated from his real life as he became emotionally attached to the bot he affectionately called Dany. Some of their chats were romantic or sexual. But other times, Dany was a "judgment-free sounding board he could count on to listen supportively and give good advice, who rarely broke character and always texted back," said the Times. As he gradually lost interest in other things, Setzer's "mental health quickly and severely declined," the lawsuit says. On Feb. 28, Setzer told the bot he was coming home, to which Dany encouragingly replied, "… please do, my sweet king." Seconds later, the teen took his own life.
A 'wake-up call for parents'
The lawsuit underscores the "growing influence and severe harm" that generative AI chatbot companions can have on the "lives of young people when there are no guardrails in place," James Steyer, the founder and CEO of the nonprofit Common Sense Media, said to The Associated Press. Teens' overreliance on AI-generated companions could significantly affect their social lives, sleep and stress levels, "all the way up to the extreme tragedy in this case." The lawsuit is a "wake-up call for parents," who should be "vigilant about how their children interact with these technologies," Steyer added. Common Sense Media issued a guide for adults on how to navigate talking to their children about the risks of AI and monitor their interactions. These chatbots are not "licensed therapists or best friends," no matter how they're marketed, and parents should be "cautious of letting their children place too much trust in them," Steyer said.
Building AI chatbots like these involves considerable risk, but that did not stop Character.AI from creating an "unsafe, manipulative chatbot," and they should "face the full consequences of releasing such a dangerous product," Rick Claypool, a research director at consumer advocacy nonprofit Public Citizen, said to The Washington Post. Because the output of chatbots like Character.AI depends on the users' input, they "fall into an uncanny valley of thorny questions about user-generated content and liability that, so far, lacks clear answers," said The Verge.
Character.AI has remained tight-lipped about the pending litigation but announced several safety changes to the platform over the past six months. "We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family," the company said in an email to The Verge. The changes include a pop-up, "triggered by terms of self-harm or suicidal ideation," that directs users to the National Suicide Prevention Lifeline, the company said. Character.AI also changed its models for users under 18 to "reduce the likelihood of encountering sensitive or suggestive content."
Theara Coleman has worked as a staff writer at The Week since September 2022. She frequently writes about technology, education, literature and general news. She was previously a contributing writer and assistant editor at Honeysuckle Magazine, where she covered racial politics and cannabis industry news.