How could AI cause human extinction?

A look at three scenarios that have AI industry insiders very worried

An illustrated image of the evolution of humans, marching toward a city on fire
Experts are worried AI technology could threaten humanity's existence
(Image credit: Illustrated / Getty Images)

Along with all the praise for the rapid advancement of artificial intelligence comes an ominous warning from some of the industry's top leaders about the potential for the technology to backfire on humanity. Some warn "AI could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down," The New York Times said, "though researchers sometimes stop short of explaining how that would happen."

A group of industry experts recently warned AI technology could threaten humanity's very existence. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war," reads the one-line open letter released by the Center for AI Safety, a nonprofit organization. The statement was signed by more than 350 executives, researchers, and engineers from the AI sector, including Sam Altman, chief executive of OpenAI, and Geoffrey Hinton and Yoshua Bengio, two of the Turing award-winning researchers considered the "godfathers of AI." The message is foreboding, but also vague, failing to provide any details about how an AI apocalypse could come about.

What are the commentators saying?

One plausible scenario is that AI falls into the hands of "malicious actors" who use it to create "novel bioweapons more lethal than natural pandemics," Dan Hendrycks, the director of the Center for AI Safety, wrote in an email to CBS MoneyWatch. Or these entities could "intentionally release rogue AI that actively attempt to harm humanity." If the rogue AI was "intelligent or capable enough," Hendrycks added, "it may pose significant risk to society as a whole."

Subscribe to The Week

Escape your echo chamber. Get the facts behind the news, plus analysis from multiple perspectives.

SUBSCRIBE & SAVE
https://cdn.mos.cms.futurecdn.net/flexiimages/jacafc5zvs1692883516.jpg

Sign up for The Week's Free Newsletters

From our morning news briefing to a weekly Good News Newsletter, get the best of The Week delivered directly to your inbox.

From our morning news briefing to a weekly Good News Newsletter, get the best of The Week delivered directly to your inbox.

Sign up

Or AI could be used to help hackers. "There are scenarios not today, but reasonably soon, where these systems will be able to find zero-day exploits in cyber issues, or discover new kinds of biology," former Google CEO Eric Schmidt said while speaking at The Wall Street Journal's CEO Council. Zero-day exploits are security vulnerabilities hackers find in software and systems. While it might seem far-fetched today, Schmidt said AI's rapid advancement makes it more likely. "And when that happens," he added, "we want to be ready to know how to make sure these things are not misused by evil people."

Another worry is that AI could go rogue on its own, interpreting the task for which it was originally designed in a new and nefarious way. For example, AI built to rid the world of cancer could decide to unleash nuclear missiles to eliminate all cancer cells by blowing everyone up, science journalist and author Tom Chivers said in an interview for Ian Leslie's substack, The Ruffian. "So the fear is not that the AI becomes malicious; it's that it becomes competent," Chivers deduced. AI tools are only "maximizing the number that you put in its reward function." Unfortunately, "it turns out that what we desire as humans is hard to pin down to a reward function, which means things can go terribly wrong."

What's next?

The debate now turns to ways in which AI technology should be regulated or contained. In an open letter published in late March, a separate group of AI experts, tech industry executives, and scientists suggested slowing down the "out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control." AI labs would have to pause training the AI systems for "at least six months," the letter suggested. If not, then "governments should step in and institute a moratorium."

When Altman met with a Senate Judiciary subcommittee in early May, he agreed that a government agency should be in charge of policing AI projects that operate "above a certain scale of capabilities." In a post for OpenAi, he suggested the industry leaders needed to coordinate "to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society."

The U.S. government has been "publicly weighing the possibilities and perils of artificial intelligence," The Associated Press wrote. The Biden administration has held talks with top tech CEOs "to emphasize the importance of ethical and responsible AI development," CNN said. In May the White House unveiled a "roadmap" for federal investments in research and development that "promotes responsible American innovation, serves the public good, protects people's rights and safety, and upholds democratic values." The administration has also hinted at the possibility of regulating the AI industry in the future.

To continue reading this article...
Continue reading this article and get limited website access each month.
Get unlimited website access, exclusive newsletters plus much more.
Cancel or pause at any time.
Already a subscriber to The Week?
Not sure which email you used for your subscription? Contact us
Theara Coleman, The Week US

Theara Coleman has worked as a staff writer at The Week since September 2022. She frequently writes about technology, education, literature and general news. She was previously a contributing writer and assistant editor at Honeysuckle Magazine, where she covered racial politics and cannabis industry news.