Stephen Hawking: humanity could be destroyed by AI
Developers and lawmakers must focus on ‘maximising’ the technology’s benefits to society

Stephen Hawking has warned that artificial intelligence (AI) could destroy mankind unless we take action to avoid the risks it poses.
Speaking at this year’s Web Summit in Portugal, the physicist said that along with benefits, the technology also brings “dangers like powerful autonomous weapons, or new ways for the few to oppress the many”.
In quotes reported by Forbes, he continued: “Success in creating effective AI could be the biggest event in the history of our civilisation. Or the worst. We just don’t know.”
Hawking proposed that humanity could prevent AI from threatening our existence by regulating its development.
“Perhaps we should all stop for a moment and focus not only on making our AI better and more successful, but also on the benefit of humanity,” he added.
His comments come less than three months after Elon Musk, founder of Tesla and SpaceX, said AI was “vastly more risky” than the threat of a nuclear attack from North Korea.
Musk previously told a panel of US state politicians that “until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal”.
Hawking praised moves in Europe to regulate new technologies, reports CNBC, particularly proposals put forward by lawmakers earlier this year to establish “new rules around AI and robotics”.
Elon Musk says artificial intelligence is more dangerous than war with North Korea
15 August
Tesla and SpaceX founder Elon Musk says that artificial intelligence (AI) is "vastly more risky" than the threat of an attack from North Korea.
In the wake of growing tensions between North Korea and the US, Musk said in a tweet: "If you're not concerned about AI safety, you should be."
This isn't the first time the South Africa-born inventor has expressed his concerns about AI technology.
Speaking to an audience of US state politicians in Rhode Island last month, Musk said: "Until people see robots going down the street killing people, they don't know how to react because it seems so ethereal."
To avoid AI becoming a threat to humanity, he says the US government should "learn as much as possible" and "gain insight" into how the technology works. It should also bring in regulations to ensure companies develop AI safely.
While the inventor concedes that "nobody likes being regulated", he says that everything else that's a "danger to the public is regulated" and that AI "should be too."
Musk's remarks come at the same time as artificial intelligence, developed by his OpenAI company, successfully defeated some of the world's top players on the computer game Dota 2, reports The Guardian.
The system "managed to win all its 1-v-1 games at the International Dota 2 championships against many of the world's best players competing for a $24.8m (£19m) prize fund."
Elon Musk calls for AI regulation before 'it's too late'
18 July
SpaceX and Tesla founder Elon Musk has called for the development of artificial intelligence (AI) to be regulated before "it's too late".
Speaking at a meeting of US state politicians in Rhode Island last weekend, the South African-born inventor said: "AI is a rare case where I think we need to be pro-active in regulation instead of re-active.
"Until people see robots going down the street killing people, they don't know how to react because it seems so ethereal."
He added: "AI is a fundamental risk to the existence of human civilisation."
He also said AI would have a substantial impact on jobs as "robots will be able to do everything better than us", adding that the transport sector, which he said accounted for 12 per cent of jobs in the US, would be "one of the first things to go fully autonomous".
Musk also talked of his "desire to establish interplanetary colonies on Mars" to act as safe havens if robots were to take over Earth, CleanTechnica reports.
To avoid that happening in the first place, he called on the government to "learn as much as possible" and "gain insight" into how AI can be safely developed.
However, critics say Musk's remarks could be "distracting from more pressing concerns", writes Tim Simonite on Wired.
Ryan Calo, a cyber law expert at the University of Washington, told the website: "Artificial intelligence is something policy makers should pay attention to.
"But focusing on the existential threat is doubly distracting from its potential for good and the real-world problems it’s creating today and in the near term."