Police to use AI to identify child abuse images
Plan would cut costs and help officers avoid psychological trauma

Police forces are planning to use artificial intelligence (AI) systems to identify images of child abuse, in a bid to prevent officers from suffering psychological trauma.
Image recognition software is already used by the Metropolitan Police’s forensics department, which last year searched more than 53,000 seized devices for incriminating evidence, The Daily Telegraph reports. But the systems are not “sophisticated enough to spot indecent images and video”.
However, plans are being developed to move sensitive data collected by police to cloud providers such as Google and Microsoft, according to the newspaper.
This would allow specialists to harness the tech giants’ massive computing power for analytics, without needing to invest in a multimillion-pound hardware infrastructure.
It would also reduce the risk of police officers suffering psychological trauma as a result of analysing the images, as they would largely be removed from the process.
The Metropolitan Police’s chief of digital forensics, Mark Stokes, told The Daily Telegraph: “We have to grade indecent images for different sentencing, and that has to be done by human beings right now.
“You can imagine that doing that for year on year is very disturbing.”
With the help of Silicon Valley providers, AI could be trained to detect abusive images “within two to three years”, Stokes adds.
Image searching is not the only use of AI technology by the authorities. In May, The Verge reported that Durham Police were planning to use AI technology to determine whether arrested suspects should remain in custody.
The system, which was trialled over the summer, gauges a suspect’s risk to society based on a range of factors including the severity of their crime and whether they are a “flight risk”.