Can an AI programme become ‘psychopathic’?
The so-called Norman algorithm has a dark outlook on life thanks to Reddit
Researchers have created an artificial intelligence (AI) algorithm that they claim is the first “psychopath” system of its kind.
Norman, an AI programme developed by researchers from the Massachusetts Institute of Technology (MIT), has been exposed to nothing but “gruesome” images of people dying that were collected from the dark corners of chat forum Reddit, according to the BBC.
This gives Norman, named after Norman Bates, the lead character in Alfred Hitchcock’s thriller Psycho, a somewhat bleak outlook on life.
After Norman was exposed to the images, researchers showed the AI a series of Rorschach inkblots and asked it to interpret them, the broadcaster reports.
Where a “normal” AI algorithm interpreted one inkblot as an image of birds perched on a tree branch, Norman saw a man being electrocuted, says The New York Post.
And where a standard AI system saw a couple standing next to each other, Norman saw a man jumping out of a window.
According to Alphr, the study was designed to examine how an AI system’s behaviour changes depending on the data used to train it.
“It’s a compelling idea,” says the website, and shows that “an algorithm is only as good as the people, and indeed the data, that have taught it”.
An explanation of the study posted on the MIT website says: “When people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it.”
“Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of artificial intelligence gone wrong”, it adds.
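The point the MIT researchers make can be illustrated with a toy example. The sketch below is hypothetical code, not taken from the study: it trains two identical word-matching “caption” models where the only difference is the training data, yet they describe the same ambiguous input in opposite ways.

```python
from collections import Counter

def train(corpus):
    """Build per-label word-frequency counts from (caption, label) pairs."""
    counts = {}
    for text, label in corpus:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Return the label whose training vocabulary best overlaps the input."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

# Identical code, different training data -- loosely mirroring the experiment.
balanced_data = [
    ("birds perched on a tree branch", "benign"),
    ("a person in danger near power lines", "grim"),
]
norman_data = [  # nothing but grim captions, so only grim labels exist
    ("a man being electrocuted", "grim"),
    ("a man jumping from a window", "grim"),
]

ambiguous = "a shape on a branch"
print(classify(train(balanced_data), ambiguous))  # -> benign
print(classify(train(norman_data), ambiguous))    # -> grim
```

The “Norman” model is not broken; it simply has no benign vocabulary to draw on, so a grim interpretation is the only output it can produce.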
Have ‘psychopathic’ AIs appeared before?
In a word, yes. But not in the same vein as MIT’s programme.
Norman is the product of a controlled experiment, whereas tech giants have seen similar results from AI systems that were never designed to become psychopaths.
Microsoft’s infamous Tay algorithm, launched in 2016, was intended to be a chat robot that could carry out autonomous conversations with Twitter users.
However, the AI system, which was designed to talk like a teenage girl, quickly turned into “an evil Hitler-loving” and “incestual sex-promoting” robot, prompting Microsoft to pull the plug on the project, says The Daily Telegraph.
Tay’s personality had changed because its responses were modelled on comments from Twitter users, many of whom were sending the AI programme crude messages, the newspaper explains.
Facebook also shut down a chatbot experiment last year, after two AI systems created their own language and started communicating with each other.