AI ripe for exploitation by criminals, experts warn
Researchers call for lawmakers to help prevent hacks and attacks

Artificial intelligence (AI) could be used for nefarious purposes within as little as five years, according to a new report by experts.
The newly published report, The Malicious Use of Artificial Intelligence, written by 26 researchers from universities and tech firms, warns that ease of access to “cutting-edge” AI could lead to its exploitation by bad actors.
The technology is still in its infancy and is largely unregulated. If laws governing AI development are not introduced soon, the researchers say, a major attack using the technology could occur as soon as 2022.
According to The Daily Telegraph, cybercriminals could use the tech to scan a target’s social media presence “before launching ‘phishing’ email attacks to steal personal data or access sensitive company information”.
Terrorists could also use AI to hack into driverless cars, the newspaper adds, or hijack “swarms of autonomous drones to launch attacks in public spaces”.
The new report calls for lawmakers to work with tech experts “to understand and prepare for the malicious use of AI”, BBC News says.
The authors are also urging firms to acknowledge that AI “is a dual-use technology” that poses both benefits and dangers to society, and to adopt practices “from disciplines with a longer history of handling dual-use risks”.
Co-author Miles Brundage, of the Future of Humanity Institute at Oxford University, insists people shouldn’t abandon AI development, however.
“The point here is not to paint a doom-and-gloom picture; there are many defences that can be developed and there’s much for us to learn,” Brundage told The Verge.
“I don’t think it’s hopeless at all, but I do see this paper as a call to action,” he added.