From Google to Amazon to Self-Driving Cars: How Racially Biased Products Hurt Users with Dark Skin
Across the board, facial- and human-recognition tools have proven error-prone when it comes to accurately identifying people with dark skin. These errors show up in two ways:
I. The product mistakes people with dark skin for something/someone else.
II. The product is unable to detect dark skin.
I. Product Mistakes People with Dark Skin for Something / Someone Else
Google Photos
Google Photos is a platform that gives users a place to organize, manage, and back up their personal photographs. It uses machine-learning technology to categorize photos with similar content. In 2015, Jacky Alciné, a Black software developer, brought Google under fire by highlighting the fact that the platform’s recognition feature had labeled photos of him and a friend as gorillas.
This error is extremely dehumanizing, especially given the historical context: Black people have long been depicted as monkeys, gorillas, and anything but human. When someone is dehumanized, it becomes easier to accept or justify their degradation.
Amazon Rekognition
Amazon has been marketing and selling a facial-recognition system, called Rekognition, to police. The technology can identify people’s faces in digital images and video. The ACLU recently conducted a study showing that the software misidentifies people of color at disproportionate rates. The ACLU used Rekognition to compare images of members of Congress against a database of mugshots. The software incorrectly matched 28 members of Congress, identifying them as other people who had been arrested for a crime, and the false matches were disproportionately people with darker skin.
The ACLU points to the possibility of a police officer getting a “match” that incorrectly tells them a person of color has a previous concealed-weapon arrest. We already live in a society that disproportionately and unjustly polices people of color; this software only exacerbates the problem. The implications are life-threatening.
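For context on how such a system is queried in practice, here is a minimal sketch using Amazon’s boto3 SDK. The bucket, collection, and image names are hypothetical placeholders, and the FaceMatchThreshold of 80 mirrors the service default, which is the setting the ACLU reported using in its test. Raising the threshold reduces false matches overall, but it does not address the disparity in who gets falsely matched.

```python
# Minimal sketch of querying Amazon Rekognition's face search with boto3.
# Bucket, collection, and image names are hypothetical placeholders.
import boto3

client = boto3.client("rekognition")

response = client.search_faces_by_image(
    CollectionId="mugshot-collection",          # hypothetical mugshot index
    Image={"S3Object": {"Bucket": "my-bucket",  # hypothetical probe photo
                        "Name": "probe.jpg"}},
    FaceMatchThreshold=80,  # the service default, as used in the ACLU test
    MaxFaces=5,
)

# Each "match" carries a similarity score; an officer could treat any of
# these as a hit, which is where false matches become dangerous.
for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], f'{match["Similarity"]:.1f}%')
```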
II. Product is Unable to Detect Dark Skin
Automatic Soap Dispensers
A little while ago, a Facebook employee named Chukwuemeka Afigbo shared a video showing an automatic bathroom soap dispenser failing to detect the hand of a dark-skinned man. The video went viral. In it, a white man waves his hand under the dispenser and instantly gets soap. A Black man then waves his hand under the dispenser in various ways, and no soap comes out. To demonstrate that skin color is the issue, the Black man places a white paper towel under the dispenser, and the soap emerges.
It is easy to dismiss this as a simple technology malfunction. But I thought about all the times I’ve had this problem with soap and paper-towel dispensers, and how I always assumed the machine wasn’t working. The machine may actually be working, just not for those with darker skin tones like mine.
Self-Driving Cars
Earlier this week, the Georgia Institute of Technology released findings from a study that looked at how accurately object-detection models (like those used by self-driving cars) detect pedestrians with different skin tones. The results indicated that object-detection models were roughly 5% less accurate at detecting darker-skinned individuals.
While the study didn’t test the exact object-detection models used by self-driving cars, the findings should still be taken seriously. In 2018, Sam Huang wrote a Medium article describing a ride in the back of a self-driving car. Each time a pedestrian came into the vicinity of the car, an alert would pop up on a tablet screen indicating that a human was present. However, there were two people the car failed to detect:
“There were two pedestrians that the car failed to detect. Two men. Both walking on the sidewalk like any one of the other pedestrians we had encountered. The only difference this time was their race. All the previous other pedestrians had been white. These two men the car had failed to detect were black.”
What this means for self-driving cars is daunting. In essence, people with dark skin are more likely to be hit by a self-driving car than their white or lighter-skinned counterparts. This is a life-or-death issue.
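To make this kind of measurement concrete, here is a minimal sketch of a simplified version of the disaggregated evaluation the Georgia Tech study performed: compute the detector’s recall separately for each annotated skin-tone group rather than as one aggregate number. The sample data is hypothetical.

```python
# Minimal sketch (hypothetical data): computing a pedestrian detector's
# recall separately per annotated skin-tone group. An aggregate metric
# would hide the gap that the per-group breakdown exposes.
from collections import defaultdict

# Each entry: (annotated skin-tone group, whether the detector found the person)
annotations = [
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False), ("darker", True),
]

totals = defaultdict(int)
detected = defaultdict(int)
for group, was_detected in annotations:
    totals[group] += 1
    detected[group] += was_detected

for group in totals:
    print(f"{group}: recall = {detected[group] / totals[group]:.2f}")
```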
How Does Technology Become Racially Biased?
There are two main causes of racial bias in technology: data bias and creator bias. With data bias, the datasets used to train software are polluted with prejudices and skewed sampling, for example far more light-skinned faces than dark-skinned ones. The software then learns and reproduces those patterns.
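Here is a minimal sketch of how that happens, using synthetic data and scikit-learn (both the data and the 95/5 group split are hypothetical): a classifier trained on a dataset where one group is badly underrepresented ends up far less accurate for that group, even though nothing in the code mentions race.

```python
# Minimal sketch (synthetic data, scikit-learn): a model trained on skewed
# data performs worse for the underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Toy two-feature samples for one group; labels depend on the features."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Skewed training pool: 950 samples from group A, only 50 from group B,
# and group B occupies a different region of feature space.
XA, yA = make_group(950, shift=0.0)
XB, yB = make_group(50, shift=2.0)
X = np.vstack([XA, XB])
y = np.concatenate([yA, yB])
group = np.array(["A"] * 950 + ["B"] * 50)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group)

model = LogisticRegression().fit(X_tr, y_tr)

# A single overall score hides the per-group gap, so report both.
print(f"Overall accuracy: {model.score(X_te, y_te):.2f}")
for g in ("A", "B"):
    mask = g_te == g
    print(f"Group {g} accuracy: {model.score(X_te[mask], y_te[mask]):.2f}")
```

The model learns a decision boundary dominated by group A, so group B’s accuracy drops to roughly chance while the overall number still looks respectable.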
With creator bias, those designing and developing the software lack concern for all end users. This can look like white engineers writing algorithms with only their own race in mind, or UX designers conducting user testing only on white people. Either way you spin it, the effects can be devastating.
Moving Forward
In order to ensure the safety of all users, there are a few things that need to happen moving forward:
- Make sure engineering and development teams are racially diverse. This brings more people into the room who write code while consciously thinking about different races.
- Make sure that those conducting user tests recruit diverse samples, in order to limit coverage error.
- Train your software on racially diverse datasets; do not rely on datasets that consist mostly of white faces. A simple composition audit, sketched after this list, is one place to start.
- Check out the Algorithmic Justice League’s Bias Checker to see if your product has any issues with bias.
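As one concrete starting point for the dataset recommendation above, here is a minimal sketch of a composition audit. The group labels and the 15% minimum-share threshold are hypothetical; the point is simply to surface imbalance before training rather than discover it in production.

```python
# Minimal sketch (hypothetical labels and threshold): audit a training
# set's demographic composition and flag underrepresented groups.
from collections import Counter

def audit_representation(group_labels, min_share=0.15):
    """Print each group's share of the dataset; return flagged groups."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    flagged = []
    for group, n in sorted(counts.items()):
        share = n / total
        status = "OK" if share >= min_share else "UNDERREPRESENTED"
        print(f"{group:>8}: {n:5d} samples ({share:.1%}) {status}")
        if share < min_share:
            flagged.append(group)
    return flagged

# Hypothetical skin-tone annotations for a face dataset.
labels = ["light"] * 820 + ["medium"] * 130 + ["dark"] * 50
needs_more_data = audit_representation(labels)  # flags "medium" and "dark"
```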
Originally published here by the BlackUX Collective