Experts call for AI pause over risk to humanity
Open letter says powerful new systems should only be developed once it is known they are safe
Tech leaders and experts including Elon Musk, Apple co-founder Steve Wozniak and engineers from Google, Amazon and Microsoft have called for a six-month pause in the development of artificial intelligence systems to allow time to make sure they are safe.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” said the open letter titled Pause Giant AI Experiments.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” it said.
“We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4,” it added.
The letter also said that in recent months AI labs have been “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.
“The warning comes after the release earlier this month of GPT-4… an AI program developed by OpenAI with backing from Microsoft,” said Deutsche Welle (DW). The latest iteration from the makers of ChatGPT has “wowed users by engaging them in human-like conversation, composing songs and summarising lengthy documents”, added Reuters.
The open letter has been signed by “major AI players”, according to The Guardian, including Musk, who co-founded OpenAI, Emad Mostaque, who founded London-based Stability AI, and Wozniak.
Engineers from Amazon, DeepMind, Google, Meta and Microsoft also signed it, but among those who have not yet put their names to it are OpenAI CEO Sam Altman, Alphabet CEO Sundar Pichai and Microsoft CEO Satya Nadella.
The letter “feels like the next step of sorts”, said Engadget, from a 2022 survey of over 700 machine learning researchers. It found that “nearly half of participants stated there’s a 10 percent chance of an ‘extremely bad outcome’ from AI, including human extinction”.
But the letter has also attracted criticism. Johanna Björklund, an AI researcher and associate professor at Umeå University in Sweden, told DW: “I don’t think there’s a need to pull the handbrake.” She called for more transparency rather than a pause.
Jamie Timson is the UK news editor, curating The Week UK's daily morning newsletter and setting the agenda for the day's news output. He was first a member of the team from 2015 to 2019, progressing from intern to senior staff writer, and then rejoined in September 2022. As a founding panellist on “The Week Unwrapped” podcast, he has discussed politics, foreign affairs and conspiracy theories, sometimes separately, sometimes all at once. In between working at The Week, Jamie was a senior press officer at the Department for Transport, with a penchant for crisis communications, working on Brexit, the response to Covid-19 and HS2, among others.