AI ripe for exploitation by criminals, experts warn
Researchers call for lawmakers to help prevent hacks and attacks

Artificial intelligence (AI) could be used for nefarious purposes within as little as five years, according to a new report by experts.
The newly published report, The Malicious Use of Artificial Intelligence, written by 26 researchers from universities and tech firms, warns that the ease of access to "cutting-edge" AI could lead to it being exploited by bad actors.
The technology is still in its infancy and is mostly unregulated. If laws governing AI development are not introduced soon, the researchers say, a major attack using the technology could occur as soon as 2022.
According to The Daily Telegraph, cybercriminals could use the tech to scan a target’s social media presence “before launching ‘phishing’ email attacks to steal personal data or access sensitive company information”.
Terrorists could also use AI to hack into driverless cars, the newspaper adds, or hijack “swarms of autonomous drones to launch attacks in public spaces”.
The new report calls for lawmakers to work with tech experts “to understand and prepare for the malicious use of AI”, BBC News says.
The authors are also urging firms to acknowledge that AI “is a dual-use technology” that poses both benefits and dangers to society, and to adopt practices “from disciplines with a longer history of handling dual-use risks”.
Co-author Miles Brundage, of the Future of Humanity Institute at Oxford University, insists people shouldn’t abandon AI development, however.
“The point here is not to paint a doom-and-gloom picture; there are many defences that can be developed and there’s much for us to learn,” Brundage told The Verge.
“I don’t think it’s hopeless at all, but I do see this paper as a call to action,” he added.