What makes OpenAI’s text robot ‘malicious’?
Elon Musk-backed firm warns that artificial intelligence programme could be used to spread fake news

A new artificial intelligence (AI) programme that can generate plausible-sounding text has been deemed too dangerous for public consumption.
The Elon Musk-backed OpenAI, a non-profit research organisation, says its new GPT2 software is so good at writing human-style prose that it could be put to malicious use, such as spreading fake news.
Indeed, fears over the “breakthrough” are so great that the company is “breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the AI system”, The Guardian reports.
According to the limited and strictly vetted research data that has been released, the AI taught itself to “write” by analysing millions of short stories and news articles - a process known as machine learning, says the BBC.
In tests, researchers fed the system a human-written text that read: “A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.”
From the reference material, the AI was capable of writing a “convincing seven-paragraph news story” that included “quotes from government officials”, reports Bloomberg.
However, the story and quotes were entirely fabricated.
Why is that dangerous?
Although GPT2’s current creations are generally “easily identifiable as non-human”, the system’s ability to complete writing tasks and translate texts from one language to another is unlike any other programme, says The Verge.
And “in a world where information warfare is increasingly prevalent”, the emergence of AI systems that “spout unceasing but cogent nonsense is unsettling”, the site adds.
David Luan, vice president of engineering at OpenAI, told Wired that “someone who has malicious intent” could use the system to “generate high-quality fake news”.
On a reassuring note, OpenAI’s policy director, Jack Clark, says the firm is “not sounding the alarm” just yet.
But that may change “if we have two or three more years of progress” in AI development, Clark added.