AI ripe for exploitation by criminals, experts warn
Researchers call for lawmakers to help prevent hacks and attacks

Artificial intelligence (AI) could be used for nefarious purposes within as little as five years, according to a new report by experts.
The newly published report, The Malicious Use of Artificial Intelligence, written by 26 researchers from universities and tech firms, warns that easy access to “cutting-edge” AI could lead to its exploitation by bad actors.
The technology is still in its infancy and is mostly unregulated. If laws governing AI development are not introduced soon, the researchers say, a major attack using the technology could occur as soon as 2022.
According to The Daily Telegraph, cybercriminals could use the tech to scan a target’s social media presence “before launching ‘phishing’ email attacks to steal personal data or access sensitive company information”.
Terrorists could also use AI to hack into driverless cars, the newspaper adds, or hijack “swarms of autonomous drones to launch attacks in public spaces”.
The new report calls for lawmakers to work with tech experts “to understand and prepare for the malicious use of AI”, BBC News says.
The authors are also urging firms to acknowledge that AI “is a dual-use technology” that poses both benefits and dangers to society, and to adopt practices “from disciplines with a longer history of handling dual-use risks”.
Co-author Miles Brundage, of the Future of Humanity Institute at Oxford University, insists people shouldn’t abandon AI development, however.
“The point here is not to paint a doom-and-gloom picture, there are many defences that can be developed and there’s much for us to learn,” Brundage told The Verge.
“I don’t think it’s hopeless at all, but I do see this paper as a call to action,” he added.