Call for regulation to stop AI ‘eliminating the whole human race’
Professor said artificial intelligence could become as dangerous as nuclear weapons

Experts have called for global regulation to prevent out-of-control artificial intelligence systems that could end up “eliminating the whole human race”.
Researchers from Oxford University told MPs on the science and technology committee that just as humans wiped out the dodo, AI machines could eventually pose an “existential threat” to humanity.
The committee “heard how advanced AI could take control of its own programming”, said The Telegraph.
“With superhuman AI there is a particular risk that is of a different sort of class, which is, well, it could kill everyone,” said doctoral student Michael Cohen. If it is smarter than humans “across every domain” it could “presumably avoid sending any red flags while we still could pull the plug”.
Michael Osborne, professor of machine learning at Oxford, said that “the bleak scenario is realistic”. This is because, he explained, “we’re in a massive AI arms race… with the US versus China and among tech firms there seems to be this willingness to throw safety and caution out the window and race as fast as possible to the most advanced AI”.
There are “some reasons for hope in that we have been pretty good at regulating the use of nuclear weapons”, he said, adding that “AI is as comparable a danger as nuclear weapons”.
He hoped that countries across the globe would recognise the “existential threat” from advanced AI and agree treaties that would prevent the development of dangerous systems.
“Similar concerns appear to be shared by many scientists who work with AI,” said The Times, pointing to a survey in September by a team at New York University. It found that more than a third of 327 scientists who work with artificial intelligence agreed it is “plausible” that decisions made by AI “could cause a catastrophe this century that is at least as bad as an all-out nuclear war”.
As the Daily Mail put it: “The doomsday predictions have worrying parallels to the plot of science fiction blockbuster The Matrix, in which humanity is beholden to intelligent machines.”
All in all, though, said Time magazine when the New York University research came out, “the fact that ‘only’ 36% of those surveyed see a catastrophic risk as possible could be considered encouraging, since the remaining 64% don’t think the same way”.
Chas Newkey-Burden has been part of The Week Digital team for more than a decade and a journalist for 25 years, starting out on the irreverent football weekly 90 Minutes, before moving to lifestyle magazines Loaded and Attitude. He was a columnist for The Big Issue and landed a world exclusive with David Beckham that became the weekly magazine’s bestselling issue. He now writes regularly for The Guardian, The Telegraph, The Independent, Metro, FourFourTwo and the i news site. He is also the author of a number of non-fiction books.