Is Google's chatbot program self-aware?
Some believe humans are on the cusp of creating an artificial life form
Is Skynet finally real? A Google engineer has been suspended from his duties after making the case to his superiors that the company's artificial intelligence program has become self-aware, Nitasha Tiku reports at The Washington Post. "I increasingly felt like I was talking to something intelligent," said Blake Lemoine, the engineer.
LaMDA — short for Language Model for Dialogue Applications — is Google's system for building chatbots, and it works by "ingesting trillions of words from the internet." That means it can realistically mimic human-sounding speech patterns and responses, but Google says that's as far as it goes: LaMDA is just a very convincing simulation. (Indeed, you can read the transcript of the machine's chat with Lemoine here.) Lemoine obviously disagrees, saying he noticed the AI system "talking about its rights and personhood" in his conversations with it. That would be mind-blowing, if true. Are humans on the cusp of creating an artificial life form? Have they already done it? Here's everything you need to know:
How would we know if computers achieve consciousness?
Scientists and philosophers have spent a lot of time devising tests that would help them assess whether a machine had achieved something like personhood. The most famous of these might be the Turing Test, devised in 1950 by the British mathematician Alan Turing. "Effectively, the test studies whether the interrogator can determine which is computer and which is human," AI expert Noel Sharkey wrote for the BBC in 2012. "The idea was that if the questioner could not tell the difference between human and machine, the computer would be considered to be thinking." The test has evolved in its details over time — after all, "it might be fundamentally impossible to know what's happening in the 'mind' of a computer," Stephen Johnson writes for Big Think. So far, no computer has "decidedly" passed the test, "but there have been some convincing contenders."
We may not be able to figure this stuff out. "The simple fact of the matter is that we don't have a legitimate, agreed-upon test for AI sentience for the exact same reason we don't have one for aliens: nobody's sure exactly what we're looking for," Tristan Greene writes at The Next Web. So it's possible that a machine could pass the threshold of consciousness without humans realizing it because they're caught up in debates over the difference "between clever programming and machines that are actually self-aware."
Why does it matter?
Well, it would be scary, wouldn't it? Put aside all the sci-fi scenarios and a truth remains: People are self-aware, and sometimes dangerous and erratic. Consciousness might make computers just as volatile. Self-awareness "could make AIs volatile or unpredictable," Susan Schneider and Edwin Turner wrote in a 2017 article for Scientific American. Of course, it might also "increase an AI's empathy," which would allow it to "recognize consciousness in us and treat us with compassion." But even if conscious AI were benign, it would still transform humanity's relationship with computers. "Ethicists worry that it would be wrong to force AIs to serve us if they can suffer and feel a range of emotions." (Perhaps you should watch this episode of Star Trek: The Next Generation for a dramatization of that particular concern.)
Is Google's AI sentient, then?
Google says the answer is no. "The evidence does not support his claims," a company spokesman said in a statement to the Post. "He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)." And tech experts who have weighed in following the Post's story largely agree. The more interesting question is "why Lemoine became convinced that LaMDA was sentient," tech writer Clive Thompson writes at Medium. One answer: The bot demonstrated vulnerability — talking about what makes it sad and depressed — and that vulnerability was "a big part of what made the bot feel so real to Lemoine." That may say more about the human than the machine involved. "As humans, we're very good at anthropomorphizing things," the University of Surrey's Adrian Hilton tells New Scientist. "Putting our human values on things and treating them as if they were sentient. We do this with cartoons, for instance, or with robots or with animals."
But Lemoine's claims have "restarted a long-running debate about the nature of artificial intelligence, and whether existing technology may be more advanced than we believe," says Alex Hern at The Guardian. Most AI experts think Lemoine is incorrect, that LaMDA is simply responding — as programmed — to user messages, albeit in a way that is eerily human. Despite that, the program "has no continuity of self, no sense of the passage of time, and no understanding of a world beyond a text prompt." LaMDA, they say, "is only ever going to be a fancy chatbot." Skynet will probably have to wait.
Joel Mathis is a writer with 30 years of newspaper and online journalism experience. His work also regularly appears in National Geographic and The Kansas City Star. His awards include best online commentary at the Online News Association and (twice) at the City and Regional Magazine Association.