Why are facial recognition technology rules changing in Detroit?
A wrongful arrest leads to a big settlement


Detroit police are changing their rules for facial recognition software after a "grainy photo made from poorly lit footage" led to the arrest of an innocent man for theft, said The Detroit Free Press. Robert Williams will receive a $300,000 settlement for the wrongful 2020 arrest, which he said "upended" his life. "My wife and young daughters had to watch helplessly as I was arrested for a crime I didn't commit," he said. Investigators will no longer be allowed to make arrests or conduct lineups based solely on software identifications.
But Detroit isn't the only city implementing such restrictions. "The powerful but imperfect artificial intelligence technology" has led to at least a half-dozen wrongful arrests, said The Washington Post, which is why a growing number of states and cities are drawing boundaries around the use of facial recognition. That hasn't always worked: Investigators in Austin and San Francisco reportedly skirted the rules by "asking for help from other law enforcement agencies that still have access." Disclosure can be an issue. "Police are using it" to make suspect identifications, said Chesa Boudin, San Francisco's former district attorney, "but not saying they are using it."
What did the commentators say?
Facial recognition software can be "both racist and faulty" when confronted with blurry security footage, Robert Williams said in a first-person column for Time. At the time he was arrested, one database included 49 million photos, including every Michigan driver's license photo "going back years." That database wrongly linked security footage of an actual theft to Williams' license. He spent 30 hours in jail. Now? Detroit police will train investigators on the "limitations and inaccuracies of facial recognition technology," creating guardrails that help preserve civil liberties. That's a "big step" toward ensuring other innocent people don't lose their liberty.
Others aren't so sure. "There is no safe way for police to use facial recognition," the ACLU's Nathan Freed Wessler said in The Sacramento Bee. A bill in California would regulate, not ban, police use of the technology. That isn't enough. Investigations often "exacerbate and compound the unreliability of facial recognition technology" because the tech generates a list of faces that are similar to, but do not match, the security footage. That naturally leads to problems. "Rather than being an asset to police investigations, facial recognition poisons them."
What next?
Police say facial recognition is usually more helpful than hurtful. In Gary, Indiana, for example, investigators said this month that the technology helped them track down a suspect in a fatal gas station shooting. And the FBI has used facial recognition software to identify and find suspects in the Jan. 6 insurrection. Others are more cautious: Microsoft in May banned American police departments from using its artificial intelligence service for facial recognition, Quartz said, apparently over worries the tech could generate "false or nonsensical information."
The two sides are sparring over the California bill, said CalMatters. A coalition of civil liberties groups is arguing facial recognition technology will "increase unnecessary police interactions that too often have the potential to escalate into fatal encounters." But police agencies and their allies say regulation will help them use the technology appropriately. Still, one lobbyist said, "there's a clear need to bolster public trust" in the use of facial recognition.
Joel Mathis is a writer with 30 years of newspaper and online journalism experience. His work also regularly appears in National Geographic and The Kansas City Star. His awards include best online commentary at the Online News Association and (twice) at the City and Regional Magazine Association.