From Cars That Can’t ‘See’ Dark Skin To Tech That Can’t ‘Hear’ Diverse Accents: These Researchers Are Tackling Bias
If you’re a person with dark skin, you may be more likely than your white friends to get hit by a self-driving car, according to a 2019 study out of the Georgia Institute of Technology. That’s because automated vehicles may better detect pedestrians with lighter skin tones.
The study’s authors started out with a simple question: How accurately do state-of-the-art object-detection models, like those used by self-driving cars, detect people from different demographic groups?
To find out, they looked at a large dataset of images that contain pedestrians. They divided up the people using the Fitzpatrick scale, a system for classifying human skin tones from light to dark.
The researchers then analyzed how often the models correctly detected the presence of people in the light-skinned group versus how often they got it right with people in the dark-skinned group.
The result? Detection was five percentage points less accurate, on average, for the dark-skinned group. That disparity persisted even when researchers controlled for variables like the time of day in images or the occasionally obstructed view of pedestrians.
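To make the comparison concrete, here is a minimal illustrative sketch, not the study's actual code, of how per-group detection accuracy might be tallied once each pedestrian has been labeled detected-or-missed and assigned a Fitzpatrick group (the data below is hypothetical):

```python
# Illustrative sketch only -- not the Georgia Tech study's code.
# Assumes each record notes the pedestrian's Fitzpatrick group and
# whether the object-detection model found them.
from collections import defaultdict

# Hypothetical evaluation results: (fitzpatrick_group, was_detected)
results = [
    ("light", True), ("light", True), ("light", False),
    ("dark", True), ("dark", False), ("dark", False),
]

totals = defaultdict(lambda: [0, 0])  # group -> [detected_count, total_count]
for group, detected in results:
    totals[group][0] += int(detected)
    totals[group][1] += 1

rates = {group: detected / total for group, (detected, total) in totals.items()}
gap = (rates["light"] - rates["dark"]) * 100
print(f"Detection rate by group: {rates}")
print(f"Gap: {gap:.1f} percentage points")
```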
“A lot of people will look toward technology as the end-all, be-all solution to a lot of social issues, but often social issues are not solved by technology, and technology often exacerbates these social issues,” said Cierra Robson, associate director of the Ida B. Wells Just Data Lab, which brings students, educators, and activists together to develop creative approaches to data conception, production, and circulation.
Founded in 2018 and led by Ruha Benjamin, a sociologist and professor in the Department of African American Studies at Princeton University, the lab focuses on finding ways to “rethink and retool the relationship between stories and statistics, power and technology, data and justice.”
“It’s not simply the technology that is creating the problem. It’s what’s imbued in the technology,” Benjamin said in a recent interview. “It’s the values and the norms and the hierarchies that are encoded in it.”
For example, researchers have pointed out that digital assistants like Siri have trouble understanding diverse accents. Benjamin recalled the experience of one of her acquaintances, who worked for a tech giant. When he pointed out that African-American Vernacular English (AAVE) had not been considered in the company’s voice recognition system, his supervisor responded, “No, this is for a high-end market. We’re not doing that.”
Benjamin says this is an example of biased design decisions. “It’s not that we’re technically unable to do it, it’s that in terms of the market viability and market identity of certain technologies, these judgments are made.”
In her role at the Just Data Lab, Robson works closely with Princeton students on a variety of projects examining how bias in technology contributes to bias in all areas of our lives, from healthcare to labor and education.
Through her work with the lab and the Civics of Technology conference, Robson hopes to inspire more students to ask critical questions about how data is sourced and how technology is used in Black and brown communities.
She hopes they will carry that knowledge into better practices in whatever fields of work and study they go on to pursue.