What makes OpenAI’s text robot ‘malicious’?
Elon Musk-backed firm warns that artificial intelligence programme could be used to spread fake news
A new artificial intelligence (AI) programme that can generate plausible-sounding text has been deemed too dangerous for public consumption.
The Elon Musk-backed OpenAI, a non-profit research organisation, says its new GPT2 software is so good at writing human-style prose that it could be put to malicious use, such as spreading fake news.
Indeed, fears over the “breakthrough” are so great that the company is “breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the AI system”, The Guardian reports.
According to the limited and strictly vetted research data that has been released, the AI taught itself to “write” by analysing millions of short stories and news articles - a process known as machine learning, says the BBC.
In tests, researchers fed the system a human-written text that read: “A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.”
From that prompt, the AI produced a “convincing seven-paragraph news story” that included “quotes from government officials”, reports Bloomberg.
However, the story and quotes were entirely fabricated.
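For readers curious what that prompt-and-continue workflow looks like in practice, here is a minimal sketch using the GPT2 weights OpenAI later released publicly, accessed through the Hugging Face transformers library. It is an illustration of the general technique, not the researchers’ own code.

```python
# A minimal sketch of prompt-based text generation with the publicly
# released GPT-2 weights, via the Hugging Face "transformers" library.
# This mirrors the workflow described above: give the model a
# human-written opening and let it invent a continuation.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "A train carriage containing controlled nuclear materials was stolen "
    "in Cincinnati today. Its whereabouts are unknown."
)

# Everything the model writes beyond the prompt is fabricated, which is
# why such output can read like plausible but entirely fake news.
result = generator(prompt, max_new_tokens=100, num_return_sequences=1)
print(result[0]["generated_text"])
```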
Why is that dangerous?
Although GPT2’s current creations are generally “easily identifiable as non-human”, the system’s ability to complete writing tasks and translate texts from one language to another is unlike any other programme, says The Verge.
And “in a world where information warfare is increasingly prevalent”, the emergence of AI systems that “spout unceasing but cogent nonsense is unsettling”, the site adds.
David Luan, vice president of engineering at OpenAI, told Wired that “someone who has malicious intent” could use the system to “generate high-quality fake news”.
On a reassuring note, OpenAI’s policy director, Jack Clark, says the firm is “not sounding the alarm” just yet.
But that may change “if we have two or three more years of progress” in AI development, Clark added.