Can computer programmers account for morality?

As Google's self-driving cars proliferate and the U.S. drone program grows, software engineers face the daunting task of anticipating — and coding for — moral quandaries

A cyclist rides past one of Google's self-driving cars outside the company's headquarters in California.
(Image credit: Justin Sullivan/Getty Images)

Within the next two or three decades, Google's self-driving cars will likely be commonplace, and our roads will be safer for it. Self-driving cars don't drink. They don't get tired after a long day at the office. And they'll have better reflexes, along with the ability to instantly communicate with one another to minimize accidents, leaving you free to text or tweet to your heart's content. Three states — California, Florida, and Nevada — have already deemed the vehicles legal, and it's only a matter of time before many of the other 47 follow suit.

But what happens when a self-driving car has to make the morally ambiguous, split-second decisions that will inevitably arise on the road? It's a question Gary Marcus, a professor of psychology at NYU, tackles in a forward-looking New Yorker essay.
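
Strip that question down and it becomes a software-design problem: someone has to decide, in code, how competing harms get weighed against one another. Below is a minimal Python sketch of what such a tradeoff might look like as an explicit cost comparison. The outcome fields, weights, and example numbers are all hypothetical illustrations, not anything Google has disclosed about how its cars actually decide.

```python
# Hypothetical sketch: encoding a split-second tradeoff as an explicit
# cost comparison. Every class, weight, and threshold here is an
# illustrative assumption, not a description of any real system.
from dataclasses import dataclass


@dataclass
class Outcome:
    """Predicted result of one possible maneuver."""
    maneuver: str
    occupant_injury_risk: float   # 0.0 to 1.0
    bystander_injury_risk: float  # 0.0 to 1.0
    property_damage_risk: float   # 0.0 to 1.0


def expected_harm(o: Outcome,
                  occupant_weight: float = 1.0,
                  bystander_weight: float = 1.0,
                  property_weight: float = 0.1) -> float:
    """Collapse a predicted outcome into a single harm score.

    The weights are the morally loaded part: deciding whether a
    bystander's risk counts for more or less than the occupant's is
    itself an ethical choice made long before the car leaves the lot.
    """
    return (occupant_weight * o.occupant_injury_risk
            + bystander_weight * o.bystander_injury_risk
            + property_weight * o.property_damage_risk)


def choose_maneuver(options: list[Outcome]) -> Outcome:
    """Pick the maneuver with the lowest expected harm."""
    return min(options, key=expected_harm)


if __name__ == "__main__":
    # A contrived emergency: swerve into a barrier (risking the occupant)
    # or brake hard in the lane (risking a trailing cyclist).
    options = [
        Outcome("swerve_into_barrier", occupant_injury_risk=0.4,
                bystander_injury_risk=0.0, property_damage_risk=0.9),
        Outcome("brake_in_lane", occupant_injury_risk=0.1,
                bystander_injury_risk=0.3, property_damage_risk=0.2),
    ]
    print("Chosen maneuver:", choose_maneuver(options).maneuver)
```

Even in a toy example like this, the uncomfortable part is plain: the weights are the ethics, and an engineer has to pick them well before any emergency occurs.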

Chris Gayomali is the science and technology editor for TheWeek.com. Previously, he was a tech reporter at TIME. His work has also appeared in Men's Journal, Esquire, and The Atlantic, among other places. Follow him on Twitter and Facebook.