Surveillance: Escaping the camera’s eye
The abuse of surveillance technology was the target last week of a new ban in San Francisco, said Kate Conger in The New York Times. The city’s Board of Supervisors voted to stop city departments from using facial recognition software. Civil liberties groups have warned that the widespread adoption of such software, combined with ever-present video cameras, could push the U.S. “in the direction of an overly oppressive surveillance state.” Facial recognition is already a popular tool of law enforcement; U.S. customs officials use it in many airports, and it helped “identify the suspect in the mass shooting at an Annapolis, Md., newspaper.” But critics say “the technology could easily be misused to surveil immigrants or unfairly target African-Americans or low-income neighborhoods.” In China the government uses pervasive surveillance “to keep tabs on the Uighurs,” a persecuted Muslim minority, singling them out by their appearance and tracking their comings and goings.
Facial recognition technology “imperils the space for disobedience built into any humane system of laws,” said The Economist. Do you want to be fined every time you jaywalk or exceed the speed limit? “Bear in mind how local governments’ budgets depend on fines and fees,” and that facial recognition cameras are a lot cheaper than a cop on every block. “I worry that we’re stumbling dumbly into a surveillance state,” said Farhad Manjoo in The New York Times. When protests erupted in Baltimore in 2015, police there used facial recognition software to search the crowds for people with outstanding warrants and arrested them immediately. This has “chilling implications for speech and assembly protected by the First Amendment.”
The San Francisco ordinance still won’t stop private companies from creeping on us, said Angela Chen in MIT Technology Review. Most people’s encounters with facial recognition won’t come from law enforcement. Rather, they’ll come from “school security cameras or stores that show customers targeted ads. These uses come with the same risks of misidentification and discrimination,” but regulating them is more complicated. Should people have the right to opt out? How difficult should that be? And what happens to the data?
“This powerful technology requires oversight and caution to prevent it from being abused,” but “the technology itself isn’t evil,” said the Los Angeles Times in an editorial. There’s potential to help doctors diagnose diseases, “stop the use of stolen credit cards, and let blind people discern facial expressions.” We may welcome its use in helping police locate missing children or provide real-time security at public events. Facial recognition needs safeguards, but a total ban throws “the good uses out with the bad ones.” ■