AI chatbots are leading some to psychosis
The technology may be fueling delusions


As AI chatbots like OpenAI's ChatGPT have become more mainstream, a troubling phenomenon has accompanied their rise: chatbot psychosis. Chatbots are known to sometimes push inaccurate information, affirm conspiracy theories and, in one extreme case, convince someone they are the next religious messiah. And there are several instances of people developing severe obsessions and mental health problems as a result of talking to them.
How is this happening?
"The correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there's a real person at the other end," said Soren Dinesen Ostergaard at the Schizophrenia Bulletin. And chatbots have "tended to be sycophantic, agreeing with and flattering users," said The New York Times. They also "could hallucinate, generating ideas that weren't true but sounded plausible."
The risk of psychosis is higher for those who are already vulnerable or struggling with mental health issues. Chatbots could be acting as "peer pressure," said Dr. Ragy Girgis, a psychiatrist and researcher at Columbia University, to Futurism.
They can "fan the flames or be what we call the wind of the psychotic fire." The cognitive dissonance between believing in the chatbots while also knowing they are not real people may "fuel delusions in those with increased propensity toward psychosis," said Ostergaard. In the worst cases, AI psychosis has caused relationships to be ruined, jobs to be lost and mental breakdowns to be suffered.
Some people use ChatGPT to "make sense of their lives or life events," said Erin Westgate, a psychologist and researcher at the University of Florida, to Rolling Stone. The problem is that the bots affirm beliefs already held by the user, including misinformation and delusions. "Explanations are powerful, even if they are wrong," said Westgate.
Medical professionals are concerned about people seeking therapy from chatbots rather than seeking psychiatric care from a human. "This is not an appropriate interaction to have with someone who's psychotic," said Girgis. "You do not feed into their ideas. That's wrong."
Can it be fixed?
Ultimately, ChatGPT is "not conscious" or "trying to manipulate people," said Psychology Today. However, chatbots are designed to imitate human speech and use predictive text to determine what to say. "Think of ChatGPT a little bit like a fortune teller." If fortune tellers "do their jobs well, they will say something vague enough so that their clients can see what they want to see in the fortune. The client listens to the fortune and then fills in the blanks that the fortune teller leaves open."
AI chatbots are "clearly intersecting in dark ways with existing social issues like addiction and misinformation," said Futurism. This intersection also comes at a time when the media has "provided OpenAI with an aura of vast authority, with its executives publicly proclaiming that its tech is poised to profoundly change the world." But OpenAI claims to know about the dangers of ChatGPT and has said in a statement to the Times that it's "working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing negative behavior."
"The incentive is to keep you online," Dr. Nina Vasan, a psychiatrist at Stanford University, said to Futurism. AI is "not thinking about what's best for you, what's best for your well-being or longevity. It's thinking, 'Right now, how do I keep this person as engaged as possible?'" The Trump administration has also included a provision in its recent "big, beautiful" tax bill that would ban states from regulating AI development for 10 years, which provides plenty of time for the rise of superintelligence.
Devika Rao has worked as a staff writer at The Week since 2022, covering science, the environment, climate and business. She previously worked as a policy associate for a nonprofit organization advocating for environmental action from a business perspective.