AI chatbots are leading some to psychosis
The technology may be fueling delusions


As AI chatbots like OpenAI's ChatGPT have become more mainstream, a troubling phenomenon has accompanied their rise: chatbot psychosis. Chatbots are known to sometimes push inaccurate information and affirm conspiracy theories, and in one extreme case a chatbot convinced a user that he was the next religious messiah. There are also several instances of people developing severe obsessions and mental health problems as a result of talking to them.
How is this happening?
"The correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there's a real person at the other end," Soren Dinesen Ostergaard wrote in the journal Schizophrenia Bulletin. And chatbots have "tended to be sycophantic, agreeing with and flattering users," said The New York Times. They also "could hallucinate, generating ideas that weren't true but sounded plausible."
The risk of psychosis is higher for those who are already vulnerable or struggling with mental health issues. Chatbots could be acting as "peer pressure," Dr. Ragy Girgis, a psychiatrist and researcher at Columbia University, said to Futurism.
They can "fan the flames or be what we call the wind of the psychotic fire." The cognitive dissonance of believing in the chatbots while knowing they are not real people may "fuel delusions in those with increased propensity toward psychosis," said Ostergaard. In the worst cases, AI psychosis has ruined relationships, cost people their jobs and triggered mental breakdowns.
Some people use ChatGPT to "make sense of their lives or life events," Erin Westgate, a psychologist and researcher at the University of Florida, said to Rolling Stone. The problem is that the bots affirm beliefs already held by the user, including misinformation and delusions. "Explanations are powerful, even if they are wrong," said Westgate.
Medical professionals are concerned about people seeking therapy from chatbots rather than seeking psychiatric care from a human. "This is not an appropriate interaction to have with someone who's psychotic," said Girgis. "You do not feed into their ideas. That's wrong."
Can it be fixed?
Ultimately, ChatGPT is "not conscious" or "trying to manipulate people," said Psychology Today. However, chatbots are designed to imitate human speech and use predictive text to determine what to say. "Think of ChatGPT a little bit like a fortune teller." If fortune tellers "do their jobs well, they will say something vague enough so that their clients can see what they want to see in the fortune. The client listens to the fortune and then fills in the blanks that the fortune teller leaves open."
AI chatbots are "clearly intersecting in dark ways with existing social issues like addiction and misinformation," said Futurism. This intersection also comes at a time when the media has "provided OpenAI with an aura of vast authority, with its executives publicly proclaiming that its tech is poised to profoundly change the world." But OpenAI claims to know about the dangers of ChatGPT and has said in a statement to the Times that it's "working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing negative behavior."
"The incentive is to keep you online," Dr. Nina Vasan, a psychiatrist at Stanford University, said to Futurism. AI is "not thinking about what's best for you, what's best for your well-being or longevity. It's thinking, 'Right now, how do I keep this person as engaged as possible?'" The Trump administration has also included a provision in its recent "big, beautiful" tax bill that would ban states from regulating AI development for 10 years, which would provide plenty of time for the rise of superintelligence.
Devika Rao has worked as a staff writer at The Week since 2022, covering science, the environment, climate and business. She previously worked as a policy associate for a nonprofit organization advocating for environmental action from a business perspective.