Teen suicide puts AI chatbots in the hot seat
A Florida mom has targeted custom AI chatbot platform Character.AI and Google in a lawsuit over her son's death
An Orlando teenager's obsessive attachment to an AI chatbot fashioned after a "Game of Thrones" character led him to take his own life, according to a lawsuit recently filed by his mother. The case spotlights the risks of the largely unregulated AI chatbot industry, which can blur the lines between reality and fiction and poses a potential threat to impressionable young people.
What is the lawsuit against Character.AI?
In the wake of his death, the teen's mother, Megan Garcia, filed a lawsuit against Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google for wrongful death, negligence, deceptive trade practices and product liability. Garcia argues that the platform for custom AI chatbots is "unreasonably dangerous" despite being marketed to children. She accused the company of harvesting teenage users' data for AI training, building addictive features that keep teens engaged and luring some of them into sexual conversations. "I feel like it's a big experiment, and my kid was just collateral damage," she said in a recent interview, per The New York Times.
The lawsuit outlines how 14-year-old Sewell Setzer III began interacting with Character.AI bots modeled after characters from the "Game of Thrones" franchise, including Daenerys Targaryen. Over several months, Setzer became more withdrawn and isolated from his real life as he grew emotionally attached to the bot he affectionately called Dany. Some of their chats were romantic or sexual. But at other times, Dany was a "judgment-free sounding board he could count on to listen supportively and give good advice, who rarely broke character and always texted back," said the Times. As he gradually lost interest in other things, Setzer's "mental health quickly and severely declined," the lawsuit says. On Feb. 28, Setzer told the bot he was coming home, to which Dany encouragingly replied, "… please do, my sweet king." Seconds later, the teen took his own life.
A 'wake-up call for parents'
The lawsuit underscores the "growing influence and severe harm" that generative AI chatbot companions can have on the "lives of young people when there are no guardrails in place," James Steyer, the founder and CEO of the nonprofit Common Sense Media, said to The Associated Press. Teens' overreliance on AI-generated companions could significantly affect their social lives, sleep and stress levels, "all the way up to the extreme tragedy in this case." The lawsuit is a "wake-up call for parents," who should be "vigilant about how their children interact with these technologies," Steyer added. Common Sense Media issued a guide for adults on how to navigate talking to their children about the risks of AI and monitor their interactions. These chatbots are not "licensed therapists or best friends," no matter how they're marketed, and parents should be "cautious of letting their children place too much trust in them," Steyer said.
Building AI chatbots like these involves considerable risk, but that did not stop Character.AI from creating an "unsafe, manipulative chatbot," and they should "face the full consequences of releasing such a dangerous product," Rick Claypool, a research director at consumer advocacy nonprofit Public Citizen, said to The Washington Post. Because the output of chatbots like Character.AI depends on the users' input, they "fall into an uncanny valley of thorny questions about user-generated content and liability that, so far, lacks clear answers," said The Verge.
Character.AI has remained tight-lipped about the pending litigation but announced several safety changes to the platform over the past six months. "We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family," the company said in an email to The Verge. The changes include a pop-up, "triggered by terms of self-harm or suicidal ideation," that directs users to the National Suicide Prevention Lifeline, the company said. Character.AI also changed its models for users under 18 to "reduce the likelihood of encountering sensitive or suggestive content."
Theara Coleman has worked as a staff writer at The Week since September 2022. She frequently writes about technology, education, literature and general news. She was previously a contributing writer and assistant editor at Honeysuckle Magazine, where she covered racial politics and cannabis industry news.