Call for regulation to stop AI ‘eliminating the whole human race’

Professor said artificial intelligence could become as dangerous as nuclear weapons

Experts warn that the world is already in a ‘massive AI arms race’

Experts have called for global regulation to prevent out-of-control artificial intelligence systems that could end up “eliminating the whole human race”.

A parliamentary committee “heard how advanced AI could take control of its own programming”, said The Telegraph.

“With superhuman AI there is a particular risk that is of a different sort of class, which is, well, it could kill everyone,” said doctoral student Michael Cohen. If it is smarter than humans “across every domain” it could “presumably avoid sending any red flags while we still could pull the plug”.

Michael Osborne, professor of machine learning at Oxford, said that “the bleak scenario is realistic”. This is because, he explained, “we’re in a massive AI arms race… with the US versus China and among tech firms there seems to be this willingness to throw safety and caution out the window and race as fast as possible to the most advanced AI”.

There are “some reasons for hope in that we have been pretty good at regulating the use of nuclear weapons”, he said, adding that “AI is as comparable a danger as nuclear weapons”.

He hoped that countries across the globe would recognise the “existential threat” from advanced AI and agree treaties that would prevent the development of dangerous systems.

“Similar concerns appear to be shared by many scientists who work with AI,” said The Times, pointing to a survey in September by a team at New York University. It found that more than a third of 327 scientists who work with artificial intelligence agreed it is “plausible” that decisions made by AI “could cause a catastrophe this century that is at least as bad as an all-out nuclear war”.

As the Daily Mail put it: “The doomsday predictions have worrying parallels to the plot of science fiction blockbuster The Matrix, in which humanity is beholden to intelligent machines.”

All in all though, said Time magazine when the New York University research came out, “the fact that ‘only’ 36% of those surveyed see a catastrophic risk as possible could be considered encouraging, since the remaining 64% don’t think the same way”.

 
Chas Newkey-Burden has been part of The Week Digital team for more than a decade and a journalist for 25 years, starting out on the irreverent football weekly 90 Minutes before moving to the lifestyle magazines Loaded and Attitude. He was a columnist for The Big Issue and landed a world exclusive with David Beckham that became the weekly magazine’s bestselling issue. He now writes regularly for The Guardian, The Telegraph, The Independent, Metro, FourFourTwo and the i news site. He is also the author of a number of non-fiction books.