What are AI hallucinations?
ChatGPT and the like have been known to make things up – and that can cause real damage

If American sci-fi novelist Philip K. Dick were alive today, he might have given his most famous work the title: "Do AIs Hallucinate Electric Sheep?"
Generative AI systems such as ChatGPT and DALL-E have gained a reputation for giving out information that appears plausible but is actually completely false, a phenomenon researchers call an AI hallucination.
This is "both a strength and a weakness", said Nature. While it fuels their "celebrated" inventiveness, it also leads them to "sometimes blur truth and fiction", adding something incorrect to an otherwise factual article, for example. All the while it is "totally confident" about what it has produced, said theoretical computer scientist Santosh Vempala. "They sound like politicians."
What happens when AI is wrong?
The type of hallucination AIs generate depends on the system. Large language models (LLMs) like ChatGPT are "sophisticated pattern predictors", said TechRadar, generating text by predicting which word is statistically most likely to come next, given the words that came before.
Hallucinations occur when the system isn't sure about a question or answer and "fills in gaps" based on similar examples it has been given. This leads to information that is "incorrect, made up or irrelevant", said researchers Anna Choi and Katelyn Xiaoying Mei on The Conversation.
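To see why fluent output is no guarantee of truth, consider a deliberately simplified sketch. The toy bigram model below (a stand-in for illustration, not how ChatGPT itself is built) always outputs the statistically most likely next word from its tiny training text, and when it meets a word it has never seen it simply falls back on something common: confident, fluent and potentially wrong.

```python
# Toy bigram "pattern predictor": it outputs the statistically most likely
# next word, whether or not the result is true. Not how ChatGPT is built,
# but it shows why fluent output can still be wrong.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower; for a word never seen in training,
    'fill the gap' with the most common word overall: fluent, but possibly
    unrelated to the question (a hallucination in miniature)."""
    if word in next_word_counts:
        return next_word_counts[word].most_common(1)[0][0]
    return Counter(corpus).most_common(1)[0][0]

print(predict_next("sat"))      # 'on': statistically likely, looks sensible
print(predict_next("hamster"))  # never seen, yet the model still answers confidently
```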
This can have serious consequences. In 2023, American lawyer Steven Schwartz used ChatGPT to help him write a legal brief to submit in court. But instead of finding legal precedents that would help his argument, the AI made up some cases and misidentified others. Schwartz was later fined after the opposing lawyers pointed out the inaccuracies.
ChatGPT's hallucinations may also spell trouble for its maker, OpenAI. This month, Norwegian Arve Hjalmar Holmen filed a complaint against the company after the chatbot falsely claimed he had killed two of his children.
Holmen, who has never been charged nor convicted of any crime, had asked ChatGPT to answer the question: "Who is Arve Hjalmar Holmen?", to which it answered that he was a "Norwegian individual" who had "gained attention" when his sons were "tragically found dead in a pond near their home in Trondheim, Norway, in December 2020". It added that he had received a 21-year prison sentence for their murder.
Digital rights group Noyb, acting on Holmen's behalf, said OpenAI had violated data accuracy rules by "knowingly allowing ChatGPT to produce defamatory results".
Can you stop AI hallucinations?
There may be no easy fix for AI's flights of fancy. Hallucinations are "fundamental" to how LLMs work, said Nature, which could make it impossible to eliminate them completely. In addition, said Choi and Mei on The Conversation, when a system is asked to be creative, such as when writing a story or generating an image, "novel" responses are "expected and desired".
However, that does not mean companies cannot reduce the number of hallucinations a system produces, or limit their impact, said TechTarget. Solutions could involve going back to the original material fed into the system to check for inaccuracies, or using retrieval-augmented generation, which allows LLMs to draw on external, up-to-date information to improve accuracy.
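As a rough illustration of the retrieval-augmented approach, the sketch below pulls the most relevant source passages for a question and instructs the model to answer only from them. The generate() function is a hypothetical placeholder for whatever LLM API is being used, and the retrieval is naive keyword matching, kept simple so the example runs on its own.

```python
# Minimal retrieval-augmented generation (RAG) sketch. generate() is a
# hypothetical stand-in for a real LLM API call; the point is that the model
# is handed vetted source text and told to stay within it.
def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def generate(prompt: str) -> str:
    # Hypothetical placeholder: in practice this would call a language model.
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

def answer_with_rag(question: str, documents: list[str]) -> str:
    context = "\n".join(retrieve(question, documents))
    prompt = (
        "Answer using ONLY the sources below; say 'I don't know' if they do not cover it.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)

sources = [
    "Arve Hjalmar Holmen has never been charged with or convicted of any crime.",
    "Retrieval-augmented generation supplies external documents to a language model.",
]
print(answer_with_rag("Who is Arve Hjalmar Holmen?", sources))
```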
Another possibility is automated reasoning to fact-check answers straight away, a system Amazon introduced to its generative AI offerings last December. Rather than "guessing or predicting" an answer, automated reasoning uses logic and problem-solving techniques to check its validity.
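In very rough outline, the idea is to verify a claim against explicit facts and rules rather than predict whether it merely sounds plausible. The sketch below is a toy illustration of that principle, not Amazon's actual system: the known_facts table and check_claim() function are invented for the example.

```python
# Toy illustration of the principle behind automated reasoning checks (not
# Amazon's actual system): verify a claim against explicit facts instead of
# predicting whether it merely sounds plausible. known_facts and check_claim()
# are invented for this example.
known_facts = {
    ("Arve Hjalmar Holmen", "convicted of a crime"): False,
}

def check_claim(subject: str, predicate: str, claimed_value: bool) -> str:
    """Compare a generated claim with recorded ground truth and flag contradictions."""
    truth = known_facts.get((subject, predicate))
    if truth is None:
        return "UNVERIFIABLE: no ground truth available for this claim"
    return "VALID" if truth == claimed_value else "CONTRADICTED: reject or correct the answer"

# The hallucinated answer asserted a conviction; the check rejects it.
print(check_claim("Arve Hjalmar Holmen", "convicted of a crime", True))
```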
Until a solution is found, hallucinations will remain an "inherent challenge" for LLMs, said TechRadar. The answer? "Fact-check everything."
Elizabeth Carr-Ellis is a freelance journalist and was previously the UK website's Production Editor. She has also held senior roles at The Scotsman, Sunday Herald and Hello!. As well as her writing, she is the creator and co-founder of the Pausitivity #KnowYourMenopause campaign and has appeared on national and international media discussing women's healthcare.