Facial recognition technology is an increasingly common part of modern life, used for everything from unlocking iPhones to large-scale surveillance. But concerns are being raised that face ID technology can exhibit racial biases, with potentially serious ramifications.
Commercial face recognition software has repeatedly been shown to be “less accurate” on people with darker skin, Gizmodo reports, and civil rights advocates worry about the “disturbingly targeted” ways face-scanning can be used by police.
The most recent example involved Amazon and its facial recognition technology, Rekognition.
The facial recognition tool, which Amazon sells to web developers, wrongly identified 28 members of the US Congress – a disproportionate number of them people of colour – as police suspects from mugshots, Reuters reports.
This is not the first incident of its kind. As face ID technology becomes increasingly widespread, more and more companies are finding that their algorithms exhibit racial bias.
Earlier this year, Google came under fire for failing to entirely fix a racist algorithm that was originally pointed out in 2015 by software engineer Jacky Alciné. He noticed that the image recognition algorithms in Google Photos were classifying his black friends as gorillas. Instead of fixing its facial recognition technology, Google blocked its image recognition algorithms from identifying gorillas altogether.
A similar issue arose with Apple’s Face ID system for unlocking its phones.
Last December, it was found that Apple’s Face ID tech couldn’t tell two Chinese women apart. Apple boasts that its Face ID technology is the most advanced in the world and says the probability that a random person could successfully use it to unlock a smartphone is “approximately 1 in 1,000,000”, according to the Inquirer. But the company was forced to issue a refund to a Chinese woman who reported that her co-worker was able to unlock her iPhone X using the face-scanning tech, “despite having reconfigured the facial recognition settings multiple times”.
The woman, known as Yan, was given a replacement phone along with the refund – but encountered the same problem again.
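Apple’s 1-in-1,000,000 figure describes the chance that a single random person triggers a false match. Assuming that interpretation (the article does not give Apple’s exact definition), the chance that at least one person in a larger pool could unlock a given phone compounds quickly, as a quick sketch shows:

```python
# Chance that at least one of n random people triggers a false match,
# given a per-person false-match probability p: 1 - (1 - p)^n.
def any_false_match(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

p = 1 / 1_000_000  # Apple's quoted Face ID false-match probability

for n in (1, 1_000, 1_000_000):
    print(f"pool of {n:>9,} people: {any_false_match(p, n):.6f}")
```

With a pool the size of the quoted odds – one million people – the probability of at least one false match is roughly 63%. And the per-person figure says little about look-alikes, such as close relatives or, as in Yan’s case, a co-worker, whose faces are far from random.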
This raises the question: why do these issues occur, and can they be solved?
Gizmodo reports that MIT researchers Joy Buolamwini and Timnit Gebru have found that darker-skinned faces are “underrepresented” in the datasets used to train facial recognition algorithms, leaving the software “more inaccurate” when looking at dark faces.
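The mechanism behind that finding can be illustrated with a toy model – all of the distributions and numbers below are invented for illustration and are not drawn from the researchers’ work. If a single accept/reject threshold is tuned on data dominated by one group, the threshold ends up fitted to that group, and the underrepresented group pays in accuracy:

```python
import random

random.seed(0)

# Toy "match scores": genuine comparisons should score high, impostor
# comparisons low. Group B's scores sit in a different range than group
# A's - a stand-in for a system behaving differently on faces unlike
# its training data. All numbers here are illustrative.
def make_scores(n, genuine_mu, impostor_mu, sd=1.0):
    genuine = [random.gauss(genuine_mu, sd) for _ in range(n)]
    impostor = [random.gauss(impostor_mu, sd) for _ in range(n)]
    return genuine, impostor

def error_rate(genuine, impostor, threshold):
    # Errors = genuine pairs rejected + impostor pairs accepted.
    misses = sum(score < threshold for score in genuine)
    false_matches = sum(score >= threshold for score in impostor)
    return (misses + false_matches) / (len(genuine) + len(impostor))

# Group A dominates the data; group B is underrepresented 9-to-1.
gen_a, imp_a = make_scores(9_000, genuine_mu=3.0, impostor_mu=0.0)
gen_b, imp_b = make_scores(1_000, genuine_mu=1.0, impostor_mu=-2.0)

# Tune one decision threshold on the pooled data: with group A
# supplying 90% of the examples, the threshold is fitted almost
# entirely to group A.
candidates = [t / 20 for t in range(-80, 101)]  # -4.0 to 5.0
pooled = min(candidates,
             key=lambda t: error_rate(gen_a + gen_b, imp_a + imp_b, t))

# The threshold group B would get if tuned on its data alone.
own = min(candidates, key=lambda t: error_rate(gen_b, imp_b, t))

print(f"group A error at pooled threshold:  {error_rate(gen_a, imp_a, pooled):.3f}")
print(f"group B error at pooled threshold:  {error_rate(gen_b, imp_b, pooled):.3f}")
print(f"group B error at its own threshold: {error_rate(gen_b, imp_b, own):.3f}")
```

In this sketch the pooled threshold leaves group B with a far higher error rate than group A – even though a threshold tuned on group B’s own data would serve it about as well as group A is served. The accuracy gap comes from the imbalance in the data, not from the faces themselves.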
Solving these issues of racial bias, the researchers argue, will require not only technical interventions but also “hard limits” on how and when face-scanning can be used, in order to protect vulnerable communities.
Even then, they say, fixing face recognition will be “impossible without addressing racism in the criminal justice system it will inevitably be used in”.