Durham police to use AI for custody decisions
System has an 89 per cent success rate in identifying suspects who are likely to offend

Police in Durham are preparing to use artificial intelligence (AI) to assist officers in deciding whether to keep a suspect in custody, reports the BBC.
The system, developed using five years of criminal history data, categorises suspects as at "low, medium or high risk of offending".
Sheena Urwin, head of criminal justice at Durham Constabulary, told the BBC: "I imagine in the next two to three months we'll probably make it a live tool to support officers' decision making".
Police trialled the Harm Assessment Risk Tool (Hart) for a two-year period starting in 2013, says Alphr, during which researchers found it had a 98 per cent success rate in identifying low-risk suspects and an 89 per cent rate for high-risk suspects.
Its decisions are based on factors such as the "seriousness of alleged crime and previous criminal history".
Hart "leans towards a cautious outlook", says Alphr, so it is more likely to label a suspect as medium or high risk, reducing the danger of "releasing dangerous criminals".
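Hart's internals have not been published in detail, so the behaviour described above can only be illustrated. The sketch below is a minimal, hypothetical Python example: the scoring rule, feature names and cut-offs are all assumptions, chosen only to show how banding thresholds can be set so borderline cases fall into the higher risk category, the "cautious outlook" the article describes.

```python
# Hypothetical sketch only - Hart's real model, features and
# thresholds are not public; everything below is invented for
# illustration.

def score_suspect(offence_seriousness: int, prior_convictions: int) -> float:
    """Toy scoring rule: weight alleged-offence seriousness and
    criminal history into a 0-1 reoffending score (assumed weights)."""
    raw = 0.1 * offence_seriousness + 0.05 * prior_convictions
    return min(raw, 1.0)

def risk_band(score: float, low_cut: float = 0.3, high_cut: float = 0.6) -> str:
    """Map a 0-1 score to 'low'/'medium'/'high'.

    Setting the cut-offs deliberately low means a borderline suspect
    lands in the higher band - a cautious bias like the one reported,
    which trades more false 'high' labels for fewer released
    dangerous suspects.
    """
    if score < low_cut:
        return "low"
    if score < high_cut:
        return "medium"
    return "high"

# A moderately serious alleged offence with two prior convictions:
band = risk_band(score_suspect(offence_seriousness=3, prior_convictions=2))
print(band)  # "medium"
```

The asymmetry lives entirely in the cut-offs: lowering `low_cut` and `high_cut` shifts marginal cases upward without changing the underlying score.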
Such technology is becoming a vital tool in police investigations.
Last month, a man was charged with murdering his wife after investigators were able to work out her final moments using her Fitbit health tracker.
Information on how many steps the victim had walked indicated she had been active for an hour after the time her husband said she died, says The Guardian.
It also suggested she had "traveled more than 1,200ft after arriving home", adds the paper, whereas her husband claimed she was murdered by intruders immediately after she arrived.
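The investigators' reasoning amounts to comparing logged tracker activity against a stated time of death. A minimal sketch, with entirely invented timestamps and step counts (the real case data is not reproduced here):

```python
from datetime import datetime, timedelta

# Hypothetical step log: (timestamp, steps recorded in that interval).
# All values are invented for illustration.
step_log = [
    (datetime(2015, 12, 23, 9, 5), 120),
    (datetime(2015, 12, 23, 9, 30), 340),
    (datetime(2015, 12, 23, 9, 55), 410),
]

stated_time_of_death = datetime(2015, 12, 23, 9, 0)

# Any steps recorded after the stated time contradict the account.
steps_after = sum(steps for t, steps in step_log if t > stated_time_of_death)
last_activity = max(t for t, steps in step_log if steps > 0)
gap = last_activity - stated_time_of_death

print(steps_after)  # 870
print(gap)          # 0:55:00 - activity nearly an hour later
```

The point is not the arithmetic but the method: wearable data provides an independent timeline that a witness statement can be checked against.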