Stephen Hawking: humanity could be destroyed by AI
Developers and lawmakers must focus on ‘maximising’ the technology’s benefits to society

Stephen Hawking has warned that artificial intelligence (AI) could destroy mankind unless we take action to avoid the risks it poses.
Speaking at this year’s Web Summit in Portugal, the physicist said that along with benefits, the technology also brings “dangers like powerful autonomous weapons, or new ways for the few to oppress the many”.
In quotes reported by Forbes, he continued: “Success in creating effective AI could be the biggest event in the history of our civilisation. Or the worst. We just don’t know.”
Hawking proposed that humanity could prevent AI from threatening our existence by regulating its development.
“Perhaps we should all stop for a moment and focus not only on making our AI better and more successful, but also on the benefit of humanity,” he added.
His comments come less than three months after Elon Musk, founder of Tesla and SpaceX, said AI was “vastly more risky” than the threat of a nuclear attack from North Korea.
Musk previously told a panel of US state politicians that “until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal”.
Hawking praised moves in Europe to regulate new technologies, reports CNBC, particularly proposals put forward by lawmakers earlier this year to establish “new rules around AI and robotics”.
Elon Musk says artificial intelligence is more dangerous than war with North Korea
15 August
Tesla and SpaceX founder Elon Musk says that artificial intelligence (AI) is "vastly more risky" than the threat of an attack from North Korea.
In the wake of growing tensions between North Korea and the US, Musk said in a tweet: "If you're not concerned about AI safety, you should be."
This isn't the first time the South Africa-born inventor has expressed his concerns about AI technology.
Speaking to an audience of US state politicians in Rhode Island last month, Musk said: "Until people see robots going down the street killing people, they don't know how to react because it seems so ethereal."
To avoid AI becoming a threat to humanity, he says the US government should "learn as much as possible" and "gain insight" into how the technology works. It should also bring in regulations to ensure companies develop AI safely.
While the inventor concedes that "nobody likes being regulated", he says that everything else that's a "danger to the public is regulated" and that AI "should be too."
Musk's remarks came as an artificial intelligence developed by OpenAI, the research company he co-founded, defeated some of the world's top players at the computer game Dota 2, reports The Guardian.
The system "managed to win all its 1-v-1 games at the International Dota 2 championships against many of the world's best players competing for a $24.8m (£19m) prize fund."
Elon Musk calls for AI regulation before 'it's too late'
18 July
SpaceX and Tesla founder Elon Musk has called for the development of artificial intelligence (AI) to be regulated before "it's too late".
Speaking at a meeting of US state politicians in Rhode Island last weekend, the South African-born inventor said: "AI is a rare case where I think we need to be pro-active in regulation instead of re-active.
"Until people see robots going down the street killing people, they don't know how to react because it seems so ethereal."
He added: "AI is a fundamental risk to the existence of human civilisation."
He also said AI would have a substantial impact on jobs as "robots will be able to do everything better than us", adding that the transport sector, which he said accounted for 12 per cent of jobs in the US, would be "one of the first things to go fully autonomous".
Musk also talked of his "desire to establish interplanetary colonies on Mars" to act as safe havens if robots were to take over Earth, CleanTechnica reports.
To avoid that happening in the first place, he called on the government to "learn as much as possible" and "gain insight" into how AI can be safely developed.
However, critics say Musk's remarks could be "distracting from more pressing concerns", writes Tom Simonite on Wired.
Ryan Calo, a cyber law expert at the University of Washington, told the website: "Artificial intelligence is something policy makers should pay attention to.
"But focusing on the existential threat is doubly distracting from its potential for good and the real-world problems it’s creating today and in the near term."