Police to use AI to identify child abuse images
Plan would cut costs and help officers avoid psychological trauma

Police forces are planning to use artificial intelligence (AI) systems to identify images of child abuse, in a bid to prevent officers from suffering psychological trauma.
Image recognition software is already used by the Metropolitan Police’s forensics department, which last year searched more than 53,000 seized devices for incriminating evidence, The Daily Telegraph reports. But the systems are not “sophisticated enough to spot indecent images and video”.
To overcome this limitation, plans are being developed to move sensitive data collected by police to cloud providers such as Google and Microsoft, according to the newspaper.
This would allow specialists to harness the tech giants’ massive computing power for analytics, without needing to invest in a multimillion-pound hardware infrastructure.
It would also reduce the risk of police officers suffering psychological trauma as a result of analysing the images, as they would largely be removed from the process.
The Metropolitan Police’s chief of digital forensics, Mark Stokes, told The Daily Telegraph: “We have to grade indecent images for different sentencing, and that has to be done by human beings right now.
“You can imagine that doing that for year on year is very disturbing.”
With the help of Silicon Valley providers, AI could be trained to detect abusive images “within two to three years”, Stokes adds.
Image searching is not the only police application of AI. In May, The Verge reported that Durham Police were planning to use the technology to determine whether arrested suspects should remain in custody.
The system, which was trialled over the summer, gauges a suspect’s risk to society based on a range of factors including the severity of their crime and whether they are a “flight risk”.