Should military drones be allowed to decide who to kill?
One day, humans may confront technology that allows robot armies to kill without a green light from human operators
America's drone program is widely seen as the weapon of choice for the Obama administration. In the president's first term alone, drones were used an estimated four times as often as during the entire Bush administration. Clearly, drones will continue to play a key role in the future of combat.
Currently, there's a human operator behind every machine, ensuring that someone can be held accountable for any misfires or civilian casualties. But what happens when technology advances to the point that humans are removed from the equation?
That's the question being posed in a new Human Rights Watch report that calls for an international ban on autonomous drones before they can be added to military arsenals worldwide. Realistically, the sophisticated software required to program a self-reliant "killer robot" that chooses its own targets is still 20 to 30 years away, but the advocacy group would rather not take chances.
"Giving machines the power to decide who lives and dies on the battlefield would take technology too far," said Steve Goose, the Arms Division director at Human Rights Watch. "Human control of robotic warfare is essential to minimizing civilian deaths and injuries."
But is human involvement really such a good thing? "History has shown that human soldiers are capable of committing the world's worst atrocities despite their supposed humanity," says Tech News Daily. Georgia Tech robotics researcher Ronald Arkin goes so far as to argue that a robot wouldn't fall victim to fatigue, and thus would be less susceptible to making boneheaded decisions or getting angry and sadistically abusing its power.
We should all fear the possibility of autonomous death machines, says Tom Malinowski at The Washington Post. Imagine Syria's Bashar Assad commanding robots "programmed to track and kill protest leaders or to fire automatically on any group of more than five people congregating below." He'd possess a weapon no dictator in history has had access to: "an army that will never refuse an order, no matter how immoral." Clearly, whether machines should be allowed to serve as both jury and executioner is a question that will have to be confronted, preferably sooner rather than later.