Do we need to pause the 'out-of-control' AI arms race, for humanity's sake?
Recent breakthroughs are impressive, but they have some arguing we should slow our roll with such consequential tech
Artificially intelligent chatbots and image generators broke into the popular imagination with the release of OpenAI's ChatGPT, built on GPT-3.5. The recently released GPT-4 is even better at mimicking humanity and answering its questions.
But while many people see this new generation of AI as a fun way to generate poetry or images of Pope Francis in a fashionable papal puffer jacket, Microsoft founder Bill Gates recently called it the first shot in a new revolution "as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone," one that "will change the way people work, learn, travel, get health care, and communicate with each other."
A group of AI luminaries, computer scientists, and tech industry notables warned in a March 29 open letter that maybe we're moving too fast with artificial intelligence, with too few guardrails. The thousand or so motley signatories of the letter, organized by the nonprofit Future of Life Institute, include OpenAI co-founder Elon Musk, Apple co-founder Steve Wozniak, AI pioneer Yoshua Bengio, Stability AI CEO Emad Mostaque, AI researchers Stuart Russell and Gary Marcus, and Rachel Bronson, president of the Doomsday Clock-setting Bulletin of the Atomic Scientists.
"Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control," the letter cautions. AI systems with "human-competitive intelligence can pose profound risks to society and humanity," and if AI labs won't agree to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," then "governments should step in and institute a moratorium."
AI leaders warning that they may be creating an existential Frankenstein's monster is jarring, as is the fact that they want governments to stop them. Is the largely unregulated AI juggernaut really putting humanity at risk?
What are the commentators saying?
"I think it's very reasonable for people to be worrying about these issues now, even though it's not going to happen in the next year or two," Geoffrey Hinton, known as the "godfather of artificial intelligence," told CBS News. The advances in AI are comparable to "the Industrial Revolution, or electricity ... or maybe the wheel." But when asked how likely it is AI will wipe out humanity, he said, "It's not inconceivable, that's all I'll say."
"It's not often that Hinton, Musk, and I are even in partial agreement," Gary Marcus writes at his Road to AI We Can Trust substack. "Is AI going to kill us all? I don't know, and you don't either." But "my scenarios for doom are perhaps not the same as Hinton's or Musk's," which focus on the dangers of sentient artificial "superintelligence" in which "computers rapidly and radically self-improve themselves," he said. The bigger near-term concern is the "mediocre AI" we already have, widely accessible and just smart enough to cause mass death and chaos in the hands of "criminals, including terrorists" — after all, "a single idiot with the nuclear codes could destroy the world."
Just to be "alarmist for a second," what if these profound AI-generated changes are "like when they were building the atomic bomb or biological weapons or something else super dangerous," with no "kind of regulatory framework" to steer or limit it, Emily Bazelon said on Slate's Political Gabfest. "There's a survey going on of people who work in labs developing AI, and the median answer they gave for how likely it is that this could drive humans, I don't know, out of life entirely was 10 percent. That's not zero. And I don't see the world preparing for it."
"A pause is a good idea, but the letter is vague and doesn't take the regulatory problems seriously," James Grimmelmann, a Cornell University professor of digital and information law, tells The Associated Press. "It is also deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against accountability for the defective AI in its self-driving cars."
What's next for AI and humanity?
The AI letter signatories, including Musk, say they want a moratorium until industry and government can develop "robust AI governance systems" with "new and capable regulatory authorities dedicated to AI," at a minimum.
"A number of governments are already working to regulate high-risk AI tools," AP reports. The approach outlined by the United Kingdom on Wednesday said it "will avoid heavy-handed legislation which could stifle innovation," focusing instead on "practical guidance" and broad principles. The European Union is working to pass more detailed rules, with a liability cudgel, but the EU's 2021 proposal "still has not been adopted into law," The Washington Post notes. "Some U.S. lawmakers have called for new regulations on AI and its development, but no substantive proposals have advanced through the legislature."
The public isn't overly concerned about AI, which "certainly pales into insignificance compared to the other things that they are worried about like the prospect of armed conflict, or even the impact of climate change," James Frayne at British pollster Public First tells Politico. "One assumes that at some point there will be an event which shocks them, and shakes them, and makes them think very differently about AI," he added. And "at that point there will be great demands for the government to make sure that they're all over this in terms of regulation. They will expect the government to not only move very quickly, but to have made significant progress already."
"At the dawn of every profound development in technology, you can count on the profoundly stupid, too," Shira Ovide writes at the Post. But consumers and AI labs aren't powerless, — we can learn that "just because you can apply AI to some task, it doesn't mean that you should." It's important to remember that AI and its affiliated technologies "are tools and not magic," she adds. "They're not appropriate for everything. Hammers are great, too, but we don't use them to cook lasagna."