Artificial intelligence is reshaping military strategy and weaponry. Could autonomous killing machines be next?
How are militaries deploying AI?
Some countries are employing the technology to speed up decision-making, giving commanders an edge in combat. In its war in Gaza, Israel has relied on a machine learning system called The Gospel that is thought to crunch vast quantities of data from satellites, radars, drones, and other sources to identify Hamas targets, from rocket launchers to command posts to individual fighters. "Basically, Gospel imitates what a group of intelligence officers used to do in the past," said Tal Mimran, an Israeli cybersecurity expert. Gospel can suggest 200 targets in about 10 days, a rate 50 times faster than that of its human counterparts. AI also has a growing physical presence on the battlefield, with autonomous combat devices — basically fighting robots — starting to proliferate across air, land, and sea. For now, human controllers make the final decision about what and when to strike, but some experts believe it's only a matter of time before robots are making deadly battlefield decisions, or AI command systems are giving troops direct firing orders. The advantage in war will go "to those who no longer see the world like humans," Army research officials Thom Hawkins and Alexander Kott wrote in 2022. "We can now be targeted by something relentless, a thing that does not sleep."
Is the U.S. using the technology?
The Pentagon, which requested $3 billion for AI-related projects in its most recent budget submission, is rushing to develop autonomous weaponry and AI-powered battlefield-management systems. It has already deployed miniature surveillance drones with a degree of autonomy, and a Gospel-like system called Project Maven that has been used to locate Houthi rocket launchers in Yemen and to narrow targets for strikes on Iran-backed militias in Iraq and Syria. That's just the beginning of the Pentagon's AI ambitions. Its Replicator program aims to build thousands of relatively inexpensive self-guided combat devices — aircraft, ground vehicles, and sea vessels and submarines — that could identify and swarm targets autonomously. The program is intended to offset China's numerical advantage in warships, aircraft, and troops should Beijing decide to invade Taiwan, a U.S. ally. "We'll counter the [Chinese military's] mass with mass of our own," U.S. Deputy Secretary of Defense Kathleen Hicks said last year, "but ours will be harder to plan for, harder to hit, harder to beat." Replicator is still in its early stages, but other efforts, such as the Air Force's plan to deploy 1,000 so-called collaborative combat aircraft, are farther along.
How will those aircraft be used?
They're AI-powered drones that fly alongside piloted aircraft. These robotic wingmen cost as little as $3 million apiece — a single F-35 jet costs $80 million — and could zoom ahead to conduct high-risk surveillance or attack enemy air defenses in missions considered too dangerous for humans. Fighter pilots are currently making test runs alongside an experimental aircraft, the XQ-58A Valkyrie, at Eglin Air Force Base in Florida. "It's a very strange feeling," said Maj. Ross Elder. "I'm flying off the wing of something that's making its own decisions. And it's not a human brain." Similar machines are appearing in the seas. Next year, the Australian navy will take delivery of three unmanned, AI-powered submarines called Ghost Sharks. Those school bus-size craft cost $15 million apiece — less than 0.1 percent of the price tag of the human-operated nuclear subs Australia has on order — and can be built quickly, descend much deeper, and linger underwater longer than traditional manned subs. But the use of such technology in war carries "a huge amount of legal, ethical, and moral implications," said U.S. Army Gen. Mark Milley, former chair of the Joint Chiefs of Staff.
What are the concerns?
Some experts worry that AI's ability to analyze massive amounts of information — surveillance data, social media posts, even shopping habits — could result in these weapons being used to target specific ethnicities or other population groups. Drone swarms, which are being developed by China as well as the U.S., "could wipe out, say, all males between 12 and 60 in a city," said Stuart Russell, a computer scientist at the University of California, Berkeley. "Unlike nuclear weapons, they leave no radioactive crater, and they keep all the valuable physical assets intact." There's also the fear that autonomous weapons will eventually be allowed to make decisions on when and what to attack — a power over life and death that, until now, has rested with humans. And while AI is smart, it's not perfect. AI chatbots frequently make up or "hallucinate" nonexistent facts, books, and people, leading some scientists to worry that AI weapons or command systems might hallucinate "legitimate targets," with fatal consequences.
Will AI be allowed to make kill decisions?
The U.S. says its AI-driven weapon systems will not be allowed to take lethal action independently. "It's not Terminator," said 18th Airborne Corps Col. Joseph O'Callaghan, who is overseeing the testing of Project Maven. But in a possible sign of what's next, the U.S., Russia, Israel, and others have argued at the United Nations that human control of autonomous weapons is not required by international law. Meanwhile, China has called for narrow legal limits on autonomous weapons that arms-control advocates say are effectively meaningless. In an AI arms race that could decide the global balance of power, no nation is willing to commit to binding agreements that might cede advantage to a less constrained foe. "Individual decisions versus not doing individual decisions is the difference between winning and losing," U.S. Air Force Secretary Frank Kendall said, "and you're not going to lose."
Ukraine's frontline AI laboratory
Faced with a larger and better-equipped enemy, Ukraine has turned to AI to help neutralize Russia's battlefield advantage. With the support of U.S. tech firms such as Palantir and Clearview AI, Kyiv has used AI-powered systems to analyze satellite images and drone footage of Russian positions, sift and interpret Russian radio communications, and detect land mines. It has also sent AI-enhanced drones into the air, most of which rely on a human operator to make attack decisions. But electronic jamming by Russian forces can sever the link between human and drone, rendering the vehicle useless. So Ukrainian drone firm Twist Robotics has developed an autonomous quadcopter that can stay locked on its target and complete a strike even if the command signal is blocked and the target moves. It's not known whether these smart drones have yet been used to kill Russian troops. "The peak of this type of warfare isn't even here yet," a Ukrainian drone commander told The Times (U.K.). "It lies in the future."