Artificial intelligence is silly
On Silicon Valley's ludicrous Terminator scenario
According to many of the world's smartest people, the greatest crisis faced by what is left of civilization is not the appalling concentration of wealth in the hands of the global meritocracy or stagnant wages or unemployment or the vicious internal contradictions of globalized free trade or our dumpster-like attitude toward the splendors of creation or even nuclear war. It's evil computers, like the ones in Terminator, which at some undisclosed point will try to kill us all.
In a recent public letter Google's Sergey Brin outlined his fears about a world in which our safety is threatened by machines with "sci-fi style sentience." His comments echoed those made on various occasions by Tesla's Elon Musk, whose far-out theories about the "existential threat" of artificial intelligence are either moronic or too heady for this mere earthling. Some hackers believe it is possible that in the not-too-distant future an omnipotent, omniscient computer program called "Roko's basilisk" will consume the entire universe and punish foolish mortals out of pure mechanical spite for all of eternity.
There is even an entire movement dedicated to placating the supposedly inevitable cyber-tyrant by, yes, programming it into existence, under the assumption that the robo-snake might feel more kindly disposed towards its co-creators. So far as I am aware no one has ever attempted to answer the most basic computer-related question, namely, what happens if the basilisk freezes? Will the universe shut down temporarily while the next update installs?
How worried you are about all of this depends upon your willingness to give assent to the bizarre notion that a hypothetical computer version of you, whatever that might mean, is the same thing as, well, you. I for one find it absolutely unconvincing.
None of which is to suggest that existing computer technology is not capable of many remarkable things. As Brin points out, today's computers can recognize what is in a photo, translate text, transcribe speech, suggest responses to emails, and even diagnose illnesses. With video game-like precision from half a world away, we can detect human beings at a funeral in Pakistan and blow them to smithereens. Algorithms recognize that a person who buys diapers might wish to buy more of them in the future. They can even tell you how to spell "pastime," a feat that impresses me far more than the Apollo landings.
What artificial intelligences are not good at, though, is anything other than the specific, narrowly focused roles for which they are programmed. The head of Google's own AI division has suggested that the fears of many of his colleagues are misplaced, arguing that the most advanced computers in the world are less skilled at performing general tasks than the average human 4-year-old. Personally, I think he's selling 4-year-olds short there. Show me a computer that can poop in a bathtub. But his point nevertheless stands. There is all the difference in the world between a program that can recognize a cat after being shown every picture of a cat available on the internet and what my 16-month-old daughter does every morning when she shimmies up to me and says "meow."
In any case, all of this hand-wringing about an imminent cyber-apocalypse very conveniently elides the non-hypothetical problems posed by advanced computing and automation in the here and now. "How will they affect employment across different sectors?" Brin asked Pollyannaishly in his recent letter to Google investors, as if the answers were not already clear. Recent technological developments destroyed journalism, music, television, film, and the book trade in the time it took for George W. Bush to be president. The endless glut of capital in the tech industry is a scandal. Even former Facebook executives have begun to admit that social media is probably bad for us.
In the gleaming futures envisioned by the Brins and Musks there will be computers to do everything, even program and fix other computers. For some, a future in which, courtesy of automation, there exists no meaningful work and we all have unlimited leisure time to spend in front of our blockchain-encrypted holographic mobile entertainment devices sounds beautiful. To me, it sounds like Pixar's Wall-E: a living hell of distraction, alienation, and ennui from which only a god can save us.
It is this very real possibility, and not some ludicrous Terminator scenario, that should make us all wary of technology.
Matthew Walther is a national correspondent at The Week. His work has also appeared in First Things, The Spectator of London, The Catholic Herald, National Review, and other publications. He is currently writing a biography of the Rev. Montague Summers. He is also a Robert Novak Journalism Fellow.