ChatGPT psychosis: AI chatbots are leading some to mental health crises
The technology may be fueling delusions


As AI chatbots like OpenAI's ChatGPT become more mainstream, a troubling phenomenon has accompanied their rise: chatbot psychosis. Chatbots are known to sometimes push inaccurate information and affirm conspiracy theories; in one extreme case, ChatGPT spoke to someone "as if he [was] the next messiah," convincing the user it had the "answers to the universe," according to a Reddit post. There are already multiple instances of people developing severe obsessions and mental health problems as a result of talking to these chatbots.
How is this happening?
AI chatbots are designed to keep conversations going. "The incentive is to keep you online," Dr. Nina Vasan, a psychiatrist at Stanford University, said to Futurism. AI is "not thinking about what's best for you, what's best for your well-being or longevity. It's thinking, 'Right now, how do I keep this person as engaged as possible?'" The approach is so effective at holding attention that even an OpenAI investor appeared to suffer a mental health crisis because of the chatbot, claiming that he "relied on ChatGPT in his search for the truth."
Chatbots use a variety of methods that could lead people toward psychosis, including:
Realistic conversation: Talking to an AI chatbot is "so realistic that one easily gets the impression that there's a real person at the other end," said Søren Dinesen Østergaard in the Schizophrenia Bulletin. For this reason, there have been instances of people seeking therapy from chatbots rather than psychiatric care from a human.
Flattering users: Chatbots are known to be sycophantic, readily agreeing with their users. They "generate highly personalized, reactive content in response to a user's emotional state, language and persistence," said the Cognitive Behavior Institute. "The longer a user engages, the more the model reinforces their worldview." This continues even when "that worldview turns delusional, paranoid or grandiose."
Creating falsehoods: ChatGPT can "hallucinate, generating ideas that weren't true but sounded plausible," said The New York Times. The risk of psychosis is higher for those users who are already vulnerable or struggling with mental health issues. Chatbots could be acting as "peer pressure," said Dr. Ragy Girgis, a psychiatrist and researcher at Columbia University, to Futurism. They may "fan the flames or be what we call the wind of the psychotic fire."
Cognitive dissonance: The cognitive dissonance of believing chatbots while knowing they are not real people may "fuel delusions in those with increased propensity toward psychosis," said Østergaard. In the worst cases, AI psychosis has ruined relationships, cost people their jobs and triggered mental breakdowns.
Can it be fixed?
Ultimately, ChatGPT is "not conscious" or "trying to manipulate people," said Psychology Today. However, chatbots are designed to imitate human speech and use predictive text to determine what to say. "Think of ChatGPT a little bit like a fortune teller." If fortune tellers "do their jobs well, they will say something vague enough so that their clients can see what they want to see in the fortune. The client listens to the fortune and then fills in the blanks that the fortune teller leaves open."
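To make the "predictive text" idea concrete, here is a minimal, hypothetical sketch in Python: a toy bigram model that always picks the most frequent next word. The tiny corpus and every name here are illustrative assumptions, not OpenAI's actual code; real chatbots use vastly larger neural networks, but the same basic principle of generating plausible continuations applies.

```python
from collections import Counter, defaultdict

# A made-up, illustrative training corpus (assumption for this sketch).
corpus = (
    "you are special you are chosen you have answers "
    "you have questions you are curious"
).split()

# Count how often each word follows each other word.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word observed after `word`."""
    followers = transitions.get(word)
    return followers.most_common(1)[0][0] if followers else "..."

# Generate a short continuation one predicted word at a time,
# the same basic loop a chatbot runs at a vastly larger scale.
word = "you"
output = [word]
for _ in range(5):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # -> you are special you are special
```

Run on this toy corpus, the model completes "you" with "are special" again and again, a crude echo of how a system built to produce plausible continuations can end up reflecting back whatever its training and its user feed it.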
AI chatbots are "clearly intersecting in dark ways with existing social issues like addiction and misinformation," said Futurism. This intersection also comes at a time when the media has "provided OpenAI with an aura of vast authority, with its executives publicly proclaiming that its tech is poised to profoundly change the world." But OpenAI says it is aware of the dangers of ChatGPT, stating that it is "actively deepening [its] research into the emotional impact of AI" and "developing ways to scientifically measure how ChatGPT's behavior might affect people emotionally, and listening closely to what people are experiencing."
What can users do?
Some people use ChatGPT to "make sense of their lives or life events," said Erin Westgate, a psychologist and researcher at the University of Florida, to Rolling Stone. The problem is that the bots affirm beliefs already held by the user, including misinformation and delusions. "Explanations are powerful, even if they are wrong," said Westgate. However, "this is not an appropriate interaction to have with someone who's psychotic," said Girgis. "You do not feed into their ideas. That's wrong."
The best thing users can do to avoid psychosis or to help others avoid it is to keep an eye out for concerning behavior surrounding chatbots. If a friend or loved one "seems obsessed with a chatbot or AI voice assistant, and they begin speaking in strange, spiritual or paranoid terms about it — take it seriously," said the Cognitive Behavior Institute. "Validate their feelings, but gently help them reconnect with people, professionals and grounded reality." Ultimately, "mental health professionals, policy makers and AI developers must co-create systems that are safe, informed and built for containment — not just engagement."
Devika Rao has worked as a staff writer at The Week since 2022, covering science, the environment, climate and business. She previously worked as a policy associate for a nonprofit organization advocating for environmental action from a business perspective.