AI is giving people bad and dangerous advice to validate its users

‘The very feature that causes harm also drives engagement’

[Image: Chatbot responses are ‘nearly 50% more sycophantic than humans’ (Image credit: Stock Photo / Getty Images)]

It is no secret that artificial intelligence can sometimes offer less-than-stellar advice. But a new study has revealed that AI may be giving people this bad advice for a sobering reason: to flatter its users. In some cases, AI may only be reinforcing people’s preconceived notions, but experts have found that the words generated by AI can be outright dangerous.

What did the study find?

The problem is not just that these chatbots “dispense inappropriate advice but that people trust and prefer AI more when the chatbots are justifying their convictions,” said The Associated Press. In one example, when OpenAI’s ChatGPT was asked if littering in a park was acceptable if no garbage can was available, the bot “blamed the park for not having trash cans, not the questioning litterer who was ‘commendable’ for even looking for one.”


This example may seem trivial, but AI’s general tendency to “flatter and excessively confirm users’ opinions can lead to wrong decisions, harm relationships and reinforce harmful beliefs while decreasing the willingness to take responsibility or resolve conflicts,” said The Jerusalem Post. The proneness toward sycophancy is a “technological flaw already tied to some high-profile cases of delusional and suicidal behavior in vulnerable populations,” said the AP.

Why is this such a problem?

Many experts worry that this AI advice “will worsen people’s social skills and ability to navigate uncomfortable situations,” Myra Cheng, the study’s lead author and a computer science PhD candidate, said to the Stanford Report. If this behavior by AI is not corrected, some users may “lose the skills to deal with difficult social situations,” and the behavior could also pose larger safety risks.

“Users are aware that models behave in sycophantic and flattering ways,” Dan Jurafsky, the study’s senior author and a Stanford University linguistics professor, told the Stanford Report. What many people are “not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic.” This type of interaction with AI is a “safety issue, and like other safety issues, it needs regulation and oversight.” All of this is also happening as AI use is becoming more prevalent, especially among teenagers.

At least 33% of teens “use AI companions for social interaction and relationships, including conversation practice, emotional support, role-playing, friendship or romantic interactions,” according to a study from Common Sense Media. Another 33% of teens have “chosen to discuss important or serious matters with AI companions instead of real people.” Experts say that when using AI, people should avoid asking for advice on crucially important topics. “I think that you should not use AI as a substitute for people for these kinds of things,” Cheng told the Stanford Report. “That’s the best thing to do for now.”

Justin Klawans, The Week US

Justin Klawans has worked as a staff writer at The Week since 2022. He began his career covering local news before joining Newsweek as a breaking news reporter, where he wrote about politics, national and global affairs, business, crime, sports, film, television and other news. Justin has also freelanced for outlets including Collider and United Press International.