Is Google's chatbot program self-aware?

Some believe humans are on the cusp of creating an artificial life form

Is Skynet finally real? A Google engineer has been suspended from his duties after making the case to his superiors that the company's artificial intelligence program has become self-aware, Nitasha Tiku reports at The Washington Post. "I increasingly felt like I was talking to something intelligent," said Blake Lemoine, the engineer.

LaMDA — short for Language Model for Dialogue Applications — is Google's system for building chatbots, and it works by "ingesting trillions of words from the internet." That means it can realistically mimic human-sounding speech patterns and responses, but Google says that's as far as it goes: LaMDA is just a very convincing simulation. (Indeed, you can read the transcript of the machine's chat with Lemoine here.) Lemoine obviously disagrees, saying he noticed the AI system "talking about its rights and personhood" in his conversations with it. That would be mind-blowing, if true. Are humans on the cusp of creating an artificial life form? Have they already done it? Here's everything you need to know:

How would we know if computers achieve consciousness?

Scientists and philosophers have spent a lot of time devising tests that would help them assess whether a machine had achieved something like personhood. The most famous of these might be the Turing Test, devised in 1950 by the British mathematician Alan Turing. "Effectively, the test studies whether the interrogator can determine which is computer and which is human," AI expert Noel Sharkey wrote for the BBC in 2012. "The idea was that if the questioner could not tell the difference between human and machine, the computer would be considered to be thinking." The test has evolved in its details over time — after all, "it might be fundamentally impossible to know what's happening in the 'mind' of a computer," Stephen Johnson writes for Big Think. So far, no computer has "decidedly" passed the test, "but there have been some convincing contenders."

We may not be able to figure this stuff out. "The simple fact of the matter is that we don't have a legitimate, agreed-upon test for AI sentience for the exact same reason we don't have one for aliens: nobody's sure exactly what we're looking for," Tristan Greene writes at The Next Web. So it's possible that a machine could pass the threshold of consciousness without humans realizing it because they're caught up in debates over the difference "between clever programming and machines that are actually self-aware."

Why does it matter?

Well, it would be scary, wouldn't it? Put aside all the sci-fi scenarios and a truth remains: People are self-aware, and they are sometimes dangerous and erratic. Consciousness might make computers the same way. Self-awareness "could make AIs volatile or unpredictable," Susan Schneider and Edwin Turner wrote in a 2017 post at Scientific American. Of course, it might also "increase an AI's empathy," which would allow it to "recognize consciousness in us and treat us with compassion." But even if conscious AI were benign, it would still transform humanity's relationship with computers. "Ethicists worry that it would be wrong to force AIs to serve us if they can suffer and feel a range of emotions." (Perhaps you should watch this episode of Star Trek: The Next Generation for a dramatization of that particular concern.)

Is Google's AI sentient, then?

Google says the answer is no. "The evidence does not support his claims," a company spokesman said in a statement to the Post. "He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)." And tech experts who have weighed in following the Post's story largely agree. The more interesting question is "why Lemoine became convinced that LaMDA was sentient," tech writer Clive Thompson writes at Medium. One answer: The bot demonstrated vulnerability — talking about what makes it sad and depressed — and that vulnerability was "a big part of what made the bot feel so real to Lemoine." That may say more about the human than the machine involved. "As humans, we're very good at anthropomorphizing things," the University of Surrey's Adrian Hilton tells New Scientist. "Putting our human values on things and treating them as if they were sentient. We do this with cartoons, for instance, or with robots or with animals."

But Lemoine's claims have "restarted a long-running debate about the nature of artificial intelligence, and whether existing technology may be more advanced than we believe," says Alex Hern at The Guardian. Most AI experts think Lemoine is mistaken and that LaMDA is simply responding — as programmed — to user messages, albeit in a way that is eerily human. Even so, the program "has no continuity of self, no sense of the passage of time, and no understanding of a world beyond a text prompt." LaMDA, they say, "is only ever going to be a fancy chatbot." Skynet will probably have to wait.
