Google: An uncannily human-sounding AI
The robots are coming, and it turns out they “won’t sound like an overlord, but a, uhh, Millennial,” said Alexis Madrigal in TheAtlantic.com. Google last week debuted a “jaw-droppingly” lifelike artificial-intelligence tool at its annual developer conference: an automated voice assistant capable of making salon appointments, booking restaurant reservations, and conducting other tasks over the phone. The bot, known as Duplex, “sounds like a human,” complete with pauses, “cheery colloquialisms,” and filler sounds like “um” and “hmm” for added realism. In a prerecorded onstage demonstration, Duplex spoke with a hair salon receptionist and chatted with a restaurant employee to book a table; at no point did the humans on the line appear to realize they were talking to a robot, and the audience of coders cheered. In the outside world, critics pounced, accusing Google of developing “deceitful and unethical” technology, said Alex Hern in TheGuardian.com. “Horrifying,” tweeted social media theorist Zeynep Tufekci, who added that “Silicon Valley is ethically lost.” In response to the outcry, Google said it would build in features that “explicitly let people know they are interacting with a machine,” but didn’t provide specifics.
“The robot future, it’s worth pointing out, isn’t exactly here yet,” said Molly Roberts in WashingtonPost.com. Google’s AI can converse only in certain narrow situations, so it’s quite limited in what it can do. Still, this technology appears able to ace the famous test of machine intelligence devised by British computer scientist Alan Turing in 1950—to pass the test, a machine must behave in a way indistinguishable from a human. To date, our machines have been, well, machine-like enough to spare us from having to answer thorny questions about where advanced AI fits into our societal codes. But we could soon face a world where we don’t know “who’s the human and who’s the machine.” It’s not hard to imagine “how this kind of technology could be used for all kinds of questionable or dangerous tasks,” said Kevin Roose in NYTimes.com. A hacker could mimic a person’s voice and fool a listener into handing over valuable information, or use AI to spread misinformation to thousands of people as part of an automated attack. The fact that the backlash to Duplex “caught Google by surprise” is, “to me, the most disturbing piece” of news.
Google is “not the only company developing these services,” said James Vincent in TheVerge.com. It bears asking whether we should allow tech companies to manage AI’s thorny ethical questions on their own, or if we need new laws to protect the public. This technology is coming soon, and it will be widespread. Google has pledged to “do the right thing,” but will others? “We need to have a conversation about all this, before the robots start doing the talking for us.” ■