Why we should welcome our new robot overlords

If you think robots are scary, check out this thing called a human

There's nothing to fear. (Image credit: Colin Anderson/Blend Images/Corbis)

Are you frightened of robots? In the movies, when a robot or artificial intelligence shows up, there's a fair chance that it's going to go rogue and start killing people. Those stories work with (and reinforce) our fears about machines threatening us, but what may be most frightening about the robots in our near future is that they'll highlight our own limitations and inadequacies.

That's already happening in the workplace, where robots are replacing more and more workers, particularly in manufacturing. Though we've had industrial robots for quite a while now (the first one, called Unimate, was put to work in a General Motors factory in 1961), the number of tasks they can do is rapidly increasing, and there are whole classes of human occupations that could disappear in the next decade or two. For instance, being a warehouse "picker" for a company like Amazon may be a dreadful job, but it's something that thousands of Americans do; before long that work will almost certainly be done by robots, with fewer and fewer humans involved. Robots are even writing news stories (though I think opinion writing is safe, at least for a while).

Whenever a worker gets displaced by a robot, it's a human tragedy, and we haven't yet figured out how we're going to deal with the millions of people who are unable to find work when their jobs have been mechanized. But the combination of technology and market forces makes it inevitable: As soon as a robot can do the same work just as well as a human can, it's only a question of when the robot becomes cheaper. And it shows that while the human may have done a perfectly fine job assembling widgets, he wasn't good at it so much as he was good at it for a human — and now there's something superior.


Which brings us to driverless cars. Automobiles have always been an obvious candidate for robotic progress, not only because of the inefficiency of a system in which most people's cars just sit around waiting to be driven most of the time, but also because human beings are absolutely terrible drivers. Not you, of course — if you're like most people, you think you're more skilled than the average person behind the wheel. The problem, however, is that the average skill is so low. The toll in automotive carnage is staggering: Over 30,000 Americans are killed on the roads every year, and the number of auto fatalities worldwide is well over a million. The vast majority of those accidents are caused not by falling cranes or lightning strikes but by simple human error.

There's still technological progress to be made before the entire auto fleet can go driverless, and there are ethical quandaries we need to settle before we can write the answers into the programming. The one everyone mentions is whether you'd want your car to make the decision to plunge off a cliff and kill you if it meant saving the lives of the four people in the other car with which you're about to collide. But it's almost certain that a driverless car fleet would dramatically reduce the lives lost on the road, and thus decrease the awful sum of human suffering that auto travel causes.

While Google and others are working on cars that can operate completely on their own, progress is happening incrementally elsewhere. We went from anti-lock brakes (where the car overrides your braking to keep the wheels from locking up), to cars that can parallel park on their own, to adaptive cruise control, in which the car adjusts your speed to maintain a safe distance from the car ahead. The new cars that come out in the next five or 10 years will take on more and more of the tasks that you now do, because they'll be better at them than you are. By the time you're ready to ditch your car and sign on with Driverless Uber (or whatever it ends up being), it won't seem like much of a leap, because you will have already handed over more and more of the driving to the computer in your car.

Speaking of violent deaths averted, you can't talk about this subject without mentioning the possibility of "killer robots," which has aroused the concern of groups like Human Rights Watch. The military is increasingly using robots for a variety of tasks, most of them non-lethal, like bomb disposal and surveillance. Even the drones we use are controlled by human pilots, and for now there seems to be agreement both inside and outside the military that any decision to fire a weapon should remain in human hands, or at least human-supervised hands (though the Department of Defense is leaving the door open to killer robots in the future).

This seems sensible — what if a robot saw a civilian carrying something and thought it might be a weapon, then decided to fire? That would be awful. The problem with leaving those decisions only in human hands is that humans, even those with special training, aren't much better at protecting civilian lives than we are at driving.

If we look at just the American military, we kill far fewer civilians in our contemporary wars than we did when we were fire-bombing Dresden and Tokyo, but we still kill civilians all the time, by the hundreds or even the thousands. Stories of robots going berserk and killing innocent people are common in fiction, but in human-waged war, civilian massacres are a regular occurrence. Soldiers go on revenge sprees after watching comrades die; officials launch a missile strike at what turns out to be a wedding party. It may not happen every day, but it happens whenever we go to war.

As frightening as it may be to contemplate real-life Terminators walking down your street, is there any reason at all to believe that they'd be more likely to kill civilians than actual soldiers are? Not so long as we increase their autonomy only in proportion to their ability to take in, process, and act on information with precision. Unlike human soldiers, robot soldiers wouldn't make mistakes out of inattention, fatigue, anger, or fear. We could quickly program them with the language skills they'd need to communicate with a local population, something that's all but impossible to do with thousands of humans.

Military robots may be dramatic and frightening for some to contemplate, but robots are going to be increasingly incorporated into our daily lives, to take on practical tasks like caring for the increasing number of elderly people. And like it or not, robots will provide companionship, because we have an almost limitless capacity to assign human characteristics to non-humans and become emotionally attached to them. Ask any dog or cat owner about their deep connection to their pet — who, let's be honest, has a limited range of emotions and behaviors, and only the most rudimentary ability to communicate with us. Before long, we'll be forging far more profound relationships with robots, whether they're "alive" or not. I'd even argue that some among us are going to fall in love with them.

So what will be left for us humans? If robots can drive better than us, fight better than us, and work better than us, perhaps it's the tasks that require creativity that they won't be able to touch. But I wouldn't bet on that, either.

We tend to think of creativity as existing in some ineffable realm with no connection to the mundane material world — ideas float down to us on sunbeams or are delivered by a muse, and no piece of software could experience a moment of epiphany. But the truth is that most of the time, creativity consists mostly of rearranging the same old things in a slightly different way. It involves taking what's known and tweaking what's been done. There's no theoretical reason why software couldn't accomplish that, particularly as the software gets more and more sophisticated. An A.I. might not yet be able to design a beautiful and original building or create a new style of painting, but I'm pretty sure that eventually it will.

It's obviously less certain that robots and A.I.'s will create great art than that they can be great at assembling products or driving. But again and again, we're going to find that we want them to take over a task for the simple reason that they're better at it than we are. It's profoundly unsettling, because it shows us our own limits. We've always known that even Usain Bolt can't outrun a cheetah, and John Henry couldn't outdo the steam drill (at least not without working himself to death). But when it turns out we aren't so great at even the complicated stuff, where does that leave us? It's a question we're going to need to answer.

Paul Waldman

Paul Waldman is a senior writer with The American Prospect magazine and a blogger for The Washington Post. His writing has appeared in dozens of newspapers, magazines, and web sites, and he is the author or co-author of four books on media and politics.