Why are facial recognition technology rules changing in Detroit?
A wrongful arrest leads to a big settlement


Detroit police are changing their rules for facial recognition software after a "grainy photo made from poorly lit footage" led to the arrest of an innocent man for theft, said The Detroit Free Press. Robert Williams will receive a $300,000 settlement for the wrongful 2020 arrest, which he said "upended" his life. "My wife and young daughters had to watch helplessly as I was arrested for a crime I didn't commit," he said. Investigators will no longer be allowed to make arrests or conduct lineups based solely on software identifications.
But Detroit isn't the only city implementing such restrictions. "The powerful but imperfect artificial intelligence technology" has led to at least a half-dozen wrongful arrests, said The Washington Post, which is why a growing number of states and cities are drawing boundaries around the use of facial recognition. That hasn't always worked: Investigators in Austin and San Francisco reportedly skirted the rules by "asking for help from other law enforcement agencies that still have access." Disclosure can be an issue. "Police are using it" to make suspect identifications, said Chesa Boudin, San Francisco's former district attorney, "but not saying they are using it."
What did the commentators say?
Facial recognition software can be "both racist and faulty" when confronted with blurry security footage, Robert Williams said in a first-person column for Time. At the time he was arrested, one database included 49 million photos, including every Michigan driver's license photo "going back years." That database wrongly linked security footage of an actual theft to Williams' license. He spent 30 hours in jail. Now? Detroit police will train investigators on the "limitations and inaccuracies of facial recognition technology," creating guardrails that help preserve civil liberties. That's a "big step" toward ensuring other innocent people don't lose their liberty.
Others aren't so sure. "There is no safe way for police to use facial recognition," the ACLU's Nathan Freed Wessler said in The Sacramento Bee. A bill in California would regulate, not ban, police use of the technology. That isn't enough. Investigations often "exacerbate and compound the unreliability of facial recognition technology" because the tech generates a list of faces that are similar to, but don't match, the security footage. That naturally leads to problems. "Rather than being an asset to police investigations, facial recognition poisons them."
What next?
Police say facial recognition is usually more helpful than hurtful. In Gary, Indiana, for example, investigators said this month that the technology helped them track down a suspect in a fatal gas station shooting. And the FBI has used facial recognition software to identify and find suspects in the Jan. 6 insurrection. Others are more cautious: Microsoft in May banned American police departments from using its artificial intelligence service for facial recognition, Quartz said, apparently over worries the tech could generate "false or nonsensical information."
The two sides are sparring over the California bill, said CalMatters. A coalition of civil liberties groups is arguing facial recognition technology will "increase unnecessary police interactions that too often have the potential to escalate into fatal encounters." But police agencies and their allies say regulation will help them use the technology appropriately. Still, one lobbyist said, "there's a clear need to bolster public trust" in the use of facial recognition.
Joel Mathis is a writer with 30 years of newspaper and online journalism experience. His work also regularly appears in National Geographic and The Kansas City Star. His awards include best online commentary at the Online News Association and (twice) at the City and Regional Magazine Association.