Human-like AI is dangerous for society
Software that mimics humans is a marketing stunt with dangerous potential
The voice on the other end of the phone sounded just a little too human.
In May, Google shocked the world with a demo of Duplex, its AI robocall assistant for accomplishing real-world tasks. The system can do things that you, as a busy person, might have little time or patience for, like booking a hair appointment or a restaurant reservation. But with its authentic-sounding "hmms" and "uhs," the system raised some serious concerns, because the humans who answered the phone calls did not seem to realize they were talking to a piece of software. And indeed, this should worry us. Convincing human-like AI could be deployed for dubious reasons with disastrous consequences.
As more and more people come in contact with autonomous systems like Duplex, the danger is not that these systems will suddenly wake up and take over the world, despite hysterical portrayals in the media and pop culture. The real danger is that humans will become mere passive data points in the design of those systems, to disastrous ends.
Artificial intelligence is meant to be a tool for humans, to make our lives easier and find solutions to everyday problems. It is not meant to replace us. And yet, we design it to replicate human-ness with eerie fidelity. We don't do this with other tools — hammers look like hammers, not people — so why do we do this with AI?
The answer is simple: because it makes great marketing.
When machines accommodate and gesture toward the nuances of our own behavior, we are much more willing to integrate them into our lives. Things that look and sound like us trigger our admirable human capacity for empathy. In the case of Duplex, the closer a voice sounds to human, the more reluctant the receiver of a robocall might be to hang up. But the human-ness of artificial intelligence could easily mask a dubious attempt to sell you something. Indeed, it could become all too easy to commoditize our trust. For example, we might be prone to read friendly intent into a bank chatbot that makes warm and witty banter, even if its purpose is to push students toward taking out unnecessary loans.
There are other concerning examples of AI being anthropomorphized and used as a marketing ploy. Last October, Saudi Arabia made headlines by "granting citizenship" to a talking robot named Sophia. This was a marketing stunt meant to signal the country's focus on technological innovation. But if we look more closely, this move should be considered especially cruel in a country that only allowed real human women to drive last year, and where women still require a male guardian to make financial and legal decisions. A robot, it seems, can breezily be granted more rights than half of the population of that country, all for a short-term spot in the news cycle.
Perhaps this seems like an overreaction. But I assure you, it is not. Talk of AI and personhood at the level of nations, even if it starts as a marketing stunt, can have far-reaching repercussions. A proposal now before the European Union would grant robots a distinct status of "electronic persons," a move that attempts to answer the question of who is to blame when AI behavior has lethal consequences. "The more autonomous robots are, the less they can be considered simple tools in the hands of other actors (such as the manufacturer, the owner, the user, etc.)," the proposal reads. "This, in turn, makes the ordinary rules on liability insufficient and calls for new rules which focus on how a machine can be held — partly or entirely — responsible for its acts or omissions."
This kind of language, if enacted, would be a disastrous misappropriation of human responsibility. When we are unwilling to hold humans accountable for the systems they create and release into the world, and instead allow an "electronic person" to bear the blame, we are no better than children who point the finger at an invisible "friend" for the bowl they themselves have broken. The resulting laws warp people's notion of AI in ways that make society less free, and less able to benefit from the cutting-edge technologies that are within our reach.
"Pull back the curtain of AI and there are millions of exploited people," writes the technology pioneer Jaron Lanier in his book, Dawn of the New Everything: "The only way to reduce the harm is to stop believing in AI as a new type of creature, but instead evaluate algorithms as tools to be used by people."
It is time to think more critically about the impact of anthropomorphizing software. The first step is to see anthropomorphized AI for the marketing ploy it is. But the real antidote is to actively look for ways non-technical people can be involved in the design of the systems that affect their lives.
One recent project at Columbia University's Gang Intervention and Computer Science Project provides an example of how this might be done. William Frey and his colleagues describe how formerly gang-involved young people in Chicago were hired as subject matter experts in the development of a system to monitor and prevent gang violence. Such social media monitoring systems are already used by police — but without the important context and nuance that former members of those communities provide, the systems can err, and innocent people can be arrested and criminalized based on little more than a tweet.
While a number of big companies have launched internal programs to implement ethical guardrails around the use of AI, these remain largely opaque processes enacted at the executive levels of private corporations. As for Google's Duplex, a second demo in June was explicit in highlighting ways the system might disclose its status as a machine at the outset of a robocall.
Still, there are good questions to ask of software made to mimic us: What aspect of the human is being imitated, and to what end? Does the specific product or system have to be anthropomorphic in its design to be effective? How we answer these questions could have broad implications for how we interact with AI in the future. We can move through the digital age in deference to our robotic overlords, or we might be savvy enough to realize we are the ones in charge.
Kat Mustatea is a playwright, technologist, and contributor at Forbes, whose work focuses on the intersection of technology and the arts. Her forthcoming TED talk is about the role of artist and audience in the age of machine intelligence.