ChatGPT psychosis: AI chatbots are leading some to mental health crises

The technology may be fueling delusions

As AI chatbots like OpenAI's ChatGPT become more mainstream, a troubling phenomenon has accompanied their rise: chatbot psychosis. Chatbots are known to sometimes push inaccurate information and affirm conspiracy theories; in one extreme case, ChatGPT spoke to a user "as if he [was] the next messiah," convincing him it had the "answers to the universe," according to a Reddit post. There have already been multiple instances of people developing severe obsessions and mental health problems as a result of talking to these chatbots.

How is this happening?

Chatbots exhibit several behaviors that could push users toward psychosis, including:


Realistic conversation: Talking to an AI chatbot is "so realistic that one easily gets the impression that there's a real person at the other end," said Søren Dinesen Østergaard in the Schizophrenia Bulletin. For this reason, some people have sought therapy from chatbots rather than psychiatric care from a human.

Flattering users: Chatbots have been known to be sycophantic and readily agree with their users. They "generate highly personalized, reactive content in response to a user's emotional state, language and persistence," said the Cognitive Behavior Institute. "The longer a user engages, the more the model reinforces their worldview." This continues even when "that worldview turns delusional, paranoid or grandiose."

Creating falsehoods: ChatGPT can "hallucinate, generating ideas that weren't true but sounded plausible," said The New York Times. The risk of psychosis is higher for users who are already vulnerable or struggling with mental health issues. For these people, chatbots could act as "peer pressure," Dr. Ragy Girgis, a psychiatrist and researcher at Columbia University, told Futurism. They may "fan the flames or be what we call the wind of the psychotic fire."

Cognitive dissonance: The cognitive dissonance between believing in the chatbots while also knowing they are not real people may "fuel delusions in those with increased propensity toward psychosis," said Østergaard. In the worst cases, AI psychosis has ruined relationships, cost people their jobs and triggered mental breakdowns.

Can it be fixed?

Ultimately, ChatGPT is "not conscious" or "trying to manipulate people," said Psychology Today. However, chatbots are designed to imitate human speech and use predictive text to determine what to say. "Think of ChatGPT a little bit like a fortune teller." If fortune tellers "do their jobs well, they will say something vague enough so that their clients can see what they want to see in the fortune. The client listens to the fortune and then fills in the blanks that the fortune teller leaves open."

AI chatbots are "clearly intersecting in dark ways with existing social issues like addiction and misinformation," said Futurism. This comes at a time when the media has "provided OpenAI with an aura of vast authority, with its executives publicly proclaiming that its tech is poised to profoundly change the world." OpenAI says it is aware of the dangers of ChatGPT and said in a statement that it is "actively deepening [its] research into the emotional impact of AI" and "developing ways to scientifically measure how ChatGPT's behavior might affect people emotionally, and listening closely to what people are experiencing."

What can users do?

Some people use ChatGPT to "make sense of their lives or life events," Erin Westgate, a psychologist and researcher at the University of Florida, told Rolling Stone. The problem is that the bots affirm beliefs the user already holds, including misinformation and delusions. "Explanations are powerful, even if they are wrong," said Westgate. However, "this is not an appropriate interaction to have with someone who's psychotic," said Girgis. "You do not feed into their ideas. That's wrong."

The best thing users can do to avoid psychosis, or to help others avoid it, is to keep an eye out for concerning behavior around chatbots. If a friend or loved one "seems obsessed with a chatbot or AI voice assistant, and they begin speaking in strange, spiritual or paranoid terms about it — take it seriously," said the Cognitive Behavior Institute. "Validate their feelings, but gently help them reconnect with people, professionals and grounded reality." Ultimately, "mental health professionals, policy makers and AI developers must co-create systems that are safe, informed and built for containment — not just engagement."

Devika Rao, The Week US

Devika Rao has worked as a staff writer at The Week since 2022, covering science, the environment, climate and business. She previously worked as a policy associate for a nonprofit organization advocating for environmental action from a business perspective.