AI ripe for exploitation by criminals, experts warn
Researchers call for lawmakers to help prevent hacks and attacks

Artificial intelligence (AI) could be used for nefarious purposes within as little as five years, according to a new report by experts.
The newly published report, The Malicious Use of Artificial Intelligence, written by 26 researchers from universities and tech firms, warns that the ease of access to “cutting-edge” AI could lead to it being exploited by bad actors.
The technology is still in its infancy and is mostly unregulated. If laws governing AI development are not introduced soon, the researchers say, a major attack using the technology could occur as soon as 2022.
According to The Daily Telegraph, cybercriminals could use the tech to scan a target’s social media presence “before launching ‘phishing’ email attacks to steal personal data or access sensitive company information”.
Terrorists could also use AI to hack into driverless cars, the newspaper adds, or hijack “swarms of autonomous drones to launch attacks in public spaces”.
The new report calls for lawmakers to work with tech experts “to understand and prepare for the malicious use of AI”, BBC News says.
The authors are also urging firms to acknowledge that AI “is a dual-use technology” that poses both benefits and dangers to society, and to adopt practices “from disciplines with a longer history of handling dual-use risks”.
Co-author Miles Brundage, of the Future of Humanity Institute at Oxford University, insists people shouldn’t abandon AI development, however.
“The point here is not to paint a doom-and-gloom picture; there are many defences that can be developed and there’s much for us to learn,” Brundage told The Verge.
“I don’t think it’s hopeless at all, but I do see this paper as a call to action,” he added.