How killer robots will save us all
Autonomous robots might sound scary, but in the future, they could actually prevent a lot of unnecessary deaths
If there's one thing science fiction has taught us, it's that advancements in robotics lead to one inevitable end: The robots become self-aware, then decide to destroy their human creators. But this scenario is no longer completely fictional; we've gotten to the point where the idea of killer robots is something we need to confront, so we can decide how to handle it.
And I am here, my friends, to defend killer robots. Indeed, I believe they can make the world a safer and even more humane place.
Just to clarify, when I say "killer robots," I'm referring to robotic military systems that can make their own decisions to fire weapons. Our military and, increasingly, the militaries of other countries use lots of different kinds of robots, from drones in the sky to bomb-disposal bots on the ground. While some of these carry weapons, only a human being can fire them. But the prospect of turning over life-and-death decisions to autonomous robotic systems has some people very worried. Indeed, the case against killer robots would seem almost self-evident. After all, machines can't feel empathy, or grapple with moral quandaries, or exhibit caring and concern, or have a conscience, or be held accountable for their decisions, or carry into battle any of the emotional qualities we expect from soldiers who temper their work with their humanity.
For those reasons, a growing number of people and nations are encouraging the development of some kind of international regime to restrain killer robots, similar to the ones restricting the use of chemical weapons and landmines. Four years ago, Human Rights Watch issued a comprehensive report making the case against autonomous weapons systems, and this week released a new memo calling for international policies prohibiting "the development, production, and use of fully autonomous weapons."
On the surface, such calls seem like a good idea. The problem, however, is that when we think of weapon-wielding robots, we're probably thinking either of technology as it exists today or of the apocalyptic sci-fi vision we get from The Terminator. Neither accurately represents where we might find ourselves in 20 or 30 years, when artificial intelligence has progressed far beyond its current capabilities.
The justification for allowing robots to wield weapons is simple: Humans are terrible at making decisions. Anything we fear killer robots might be capable of doing (short of the actual robot apocalypse) is something we humans already do to one another. Human soldiers become tired, inattentive, confused, uncertain, angry — all of which can lead to mistakes, including the killing of innocent civilians. AI may still pale in comparison to humans when it comes to distinguishing real threats from imagined ones, but it's just a matter of time before it catches up.
But what about gut instincts? What about those emotional hunches that help flesh-and-blood humans make calculated decisions? They're little more than the product of being able to integrate our knowledge and prior experiences with all the data our senses are delivering to us at a given moment. When a soldier sees someone walking down the street with a lump under his jacket, he's processing all the available information to make a judgment about whether or not the man is a threat. In the not-so-distant future, robots will be able to process all that information far more effectively than humans in the vast majority of situations. Plus, they won't get tired or scared or angry. When robots reach that point, not employing autonomous weapons systems will mean sacrificing the lives of civilians (and soldiers).
Consider the analogy with self-driving cars. Many of us are still uncomfortable with letting a computer drive us around town. But as the technology grows more sophisticated, and is incrementally deployed in commercially available cars through features such as self-parking systems and adaptive cruise control, people are beginning to warm to the idea of handing over the wheel. We're more aware of the central problem self-driving cars are meant to solve, which is our own dangerous driving habits. While traffic fatalities have declined in recent years, the numbers are still staggering: In 2014, more than 32,000 Americans died in car crashes. Once we have a full fleet of self-driving cars, traffic deaths will plummet dramatically, and could even be virtually eliminated.
But there's no denying there will probably be a few isolated cases in which a robotic system mishandles a situation or can't find its way to the decision that could avert a tragedy. Some people might even die as a result. These rare cases are the stumbling blocks for those who oppose self-driving cars. Think how awful it would be to sit in an autonomous vehicle as it plunges you off a cliff. And it's these same rare cases that frighten those who oppose autonomous weapons. Think how awful it would be if a killer robot shot someone you cared about because its programming went haywire.
Those are indeed scary thoughts. But they ignore the equally scary alternative, which is all the people killed on the roads, and all the innocent civilians killed by the military, all due to poor human decision-making. Perhaps we'll decide to accept a greater death toll to avoid even a single death by robot, because doing so makes us feel less afraid and more in control.
It's a very human response. But that doesn't make it the right one.
Paul Waldman is a senior writer with The American Prospect magazine and a blogger for The Washington Post. His writing has appeared in dozens of newspapers, magazines, and web sites, and he is the author or co-author of four books on media and politics.