Police departments may be frequently requesting alterations to alerts generated by ShotSpotter, a gunshot-detecting artificial intelligence surveillance system, Vice News reports, citing court documents. The investigation raises the question of whether law enforcement is using ShotSpotter to fabricate evidence supporting its narrative of events, though it remains unclear to what extent, if any, that is the case.
While Vice looked at multiple incidents, the report went into detail about a shooting that took place in Chicago in May 2020. The victim was fatally shot in the head by a man who was subsequently charged with his murder. The suspect maintained that the victim was shot in a drive-by, but the prosecution relied on a key piece of evidence that placed the suspect at a location and time matching gunfire picked up by ShotSpotter microphones.
But the sensors initially detected the sound about a mile away from the crime scene and classified the noise as a firework. Later, a ShotSpotter analyst reportedly overrode the algorithms manually and "reclassified" the sound as a gunshot, and a few months after that, another analyst altered the alert's coordinates to a new location closer to the crime scene. When the suspect's lawyer raised the issue in court and asked the judge to examine whether the forensic evidence was scientifically valid, the prosecution withdrew all ShotSpotter evidence related to the case.
Vice reports that the case wasn't an anomaly, pointing to multiple testimonies in which a ShotSpotter employee named Paul Greene stated that alerts were changed after the company received requests from multiple police departments to re-examine audio. Greene, who often serves as an expert witness, did not say that the revised evidence was fabricated, however. Read more at Vice News.