Experts call for AI pause over risk to humanity
Open letter says powerful new systems should only be developed once it is known they are safe

Tech leaders and experts including Elon Musk, Apple co-founder Steve Wozniak and engineers from Google, Amazon and Microsoft have called for a six-month pause in the development of artificial intelligence systems to allow time to make sure they are safe.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” said the open letter titled Pause Giant AI Experiments.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” it said.
“We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4,” it added.
The letter also said that in recent months AI labs have been “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.
“The warning comes after the release earlier this month of GPT-4… an AI program developed by OpenAI with backing from Microsoft,” said Deutsche Welle (DW). The latest iteration from the makers of ChatGPT has “wowed users by engaging them in human-like conversation, composing songs and summarising lengthy documents”, added Reuters.
The open letter has been signed by “major AI players”, according to The Guardian, including Musk, who co-founded OpenAI, Emad Mostaque, who founded London-based Stability AI, and Wozniak.
Engineers from Amazon, DeepMind, Google, Meta and Microsoft also signed it, but among those who have not yet put their names to it are OpenAI CEO Sam Altman, Alphabet CEO Sundar Pichai and Microsoft CEO Satya Nadella.
The letter “feels like the next step of sorts” from a 2022 survey of more than 700 machine learning researchers, said Engadget. That survey found that “nearly half of participants stated there’s a 10 percent chance of an ‘extremely bad outcome’ from AI, including human extinction”.
But the letter has also attracted criticism. Johanna Björklund, an AI researcher and associate professor at Umeå University in Sweden, told DW: “I don’t think there’s a need to pull the handbrake.” She called for more transparency rather than a pause.