Call for regulation to stop AI ‘eliminating the whole human race’
Professor said artificial intelligence could become as dangerous as nuclear weapons

Experts have called for global regulation to prevent out-of-control artificial intelligence systems that could end up “eliminating the whole human race”.
Researchers from Oxford University told MPs on the science and technology committee that just as humans wiped out the dodo, AI machines could eventually pose an “existential threat” to humanity.
The committee “heard how advanced AI could take control of its own programming”, said The Telegraph.
“With superhuman AI there is a particular risk that is of a different sort of class, which is, well, it could kill everyone,” said doctoral student Michael Cohen. If it is smarter than humans “across every domain” it could “presumably avoid sending any red flags while we still could pull the plug”.
Michael Osborne, professor of machine learning at Oxford, said that “the bleak scenario is realistic”. This is because, he explained, “we’re in a massive AI arms race… with the US versus China and among tech firms there seems to be this willingness to throw safety and caution out the window and race as fast as possible to the most advanced AI”.
There are “some reasons for hope in that we have been pretty good at regulating the use of nuclear weapons”, he said, adding that “AI is as comparable a danger as nuclear weapons”.
He hoped that countries across the globe would recognise the “existential threat” from advanced AI and agree treaties that would prevent the development of dangerous systems.
“Similar concerns appear to be shared by many scientists who work with AI,” said The Times, pointing to a survey in September by a team at New York University. It found that more than a third of 327 scientists who work with artificial intelligence agreed it is “plausible” that decisions made by AI “could cause a catastrophe this century that is at least as bad as an all-out nuclear war”.
As the Daily Mail put it: “The doomsday predictions have worrying parallels to the plot of science fiction blockbuster The Matrix, in which humanity is beholden to intelligent machines.”
All in all, though, said Time magazine when the New York University research came out, “the fact that ‘only’ 36% of those surveyed see a catastrophic risk as possible could be considered encouraging, since the remaining 64% don’t think the same way”.
Chas Newkey-Burden has been part of The Week Digital team for more than a decade and a journalist for 25 years, starting out on the irreverent football weekly 90 Minutes before moving to lifestyle magazines Loaded and Attitude. He was a columnist for The Big Issue and landed a world exclusive with David Beckham that became the weekly magazine’s bestselling issue. He now writes regularly for The Guardian, The Telegraph, The Independent, Metro, FourFourTwo and the i news site. He is also the author of a number of non-fiction books.