March 3, 2022

Can Better Tech Fix Darker Skin Bias In AI Cameras? Google Seems To Think So

The accuracy of facial recognition has improved drastically since ‘deep learning’ techniques were introduced into the field about a decade ago, but there’s still a long way to go.

A few years ago, the world’s largest scientific computing society, the Association for Computing Machinery in New York City, urged a suspension of private and government use of facial-recognition technology because of “clear bias based on ethnic, racial, gender, and other human characteristics”, which it said injured the rights of individuals in specific demographic groups.

So this is clearly a big issue, but it seems Google wants to end this type of bias one step at a time, starting with its latest mobile phone. The tech giant seems to think its new phone will help solve an aspect of the problem: capturing darker skin tones well.

In a one-minute ad that reportedly cost millions, Google told Super Bowl fans about something Black people have known for a long time: most cameras aren’t great at capturing darker skin. It was promoting its latest smartphone, the Pixel 6, the first with its Real Tone feature, which the company says uses artificial intelligence to take better pictures of people with darker skin tones.

The phone debuted in October, and its new ad, which featured a new track by Lizzo, cost $14 million US for the air time alone based on reported rates, according to The Hollywood Reporter.

Google says it worked with a team of darker-skinned image experts, who helped the company “acknowledge our own gaps,” improving the exposure of faces with dark skin and expanding the color range captured by the Pixel camera.

In a blog post, Lorraine Twohill, Google’s chief marketing officer, explains that smartphone cameras, including the company’s own, have historically failed to accurately capture images of people of color, often making them look washed out or unnaturally bright or dark.

“Representation and equity in everything should always be the norm and the default. And until we reach it, our goal at Google will always be to make gains in the world every day through our products and storytelling,” Twohill said in a statement.

The tech giant’s attempt at reducing bias in tech comes after campaigners and AI experts have long complained about inbuilt bias in such systems.

To test the camera, Google’s own employees spent time with it, snapping images and providing feedback on what needed improvement. Twohill said in the post that the company also consulted outside experts, who made recommendations including that Google needed to significantly increase the number of portraits used to train its camera models.
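Why would adding more portraits of darker-skinned people change the camera’s behavior? A deliberately over-simplified sketch can show the mechanism. Here a hypothetical “auto-exposure” model learns a single brightness target from its training set; all the numbers and group sizes are invented for illustration, and real camera pipelines are vastly more sophisticated than this:

```python
# Hypothetical sketch of how an imbalanced training set skews a learned
# exposure target. All values are invented for illustration only.

def learned_target(samples):
    """Mean 'ideal brightness' learned from a set of training portraits."""
    return sum(samples) / len(samples)

# Assumed ideal exposure targets (0-255 scale) for two groups of faces.
lighter = [180] * 90   # 90 portraits of lighter-skinned subjects
darker  = [110] * 10   # only 10 portraits of darker-skinned subjects

skewed   = learned_target(lighter + darker)            # trained on 90/10 split
balanced = learned_target(lighter[:50] + darker * 5)   # trained on 50/50 split

# Exposure error for darker-skinned faces under each model:
print(abs(skewed - 110))    # large error: darker faces badly overexposed
print(abs(balanced - 110))  # smaller error after rebalancing the data
```

The skewed model lands near the majority group’s target, so the minority group pays almost all of the error; rebalancing the portraits moves the learned target toward something fairer for both groups, which is the intuition behind the experts’ recommendation.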

Camera Specs

The main rear camera, shared by the Pixel 6 and Pixel 6 Pro, is a 50-megapixel beast with decent-sized pixel wells and an f/1.85 equivalent aperture (no, it doesn’t capture as much light as an f/1.8 on a DSLR, but it’s still good).

The ultrawide one, also shared, is 12 megapixels and f/2.2 on a smaller sensor, so don’t expect mind-blowing image quality. The 6 Pro gets a 48-megapixel telephoto with less low-light capability but a 4x equivalent zoom. They’re all stabilized and have laser-assisted autofocus.

AI Bias

In interviews we’ve done in the past, Marcel Hedman, the founder of AI group Nural Research, a company that explores how artificial intelligence is tackling global challenges, described the long-standing issue as a “multi-layered” problem that can “definitely be solved.”

He previously told us: “But I think it’s something that naturally happens when you look at the way that datasets are constructed, and they are constructed based on data compiled from sources that each person chooses.

“So it’s not surprising at all when those preferences are tailored to the preferences of each person, which again represents natural bias.”

Tech entrepreneur Mr. Hedman said he believed the advancement of technology was necessary but urged people to develop digital literacy.

“I think we need to ensure that we have a high level of what I’d say is digital literacy, which will basically enable anyone who is receiving data to understand what that data actually means, in context, and where it comes from.

“But the other thing is, and this is from my own standpoint, I believe that it’s super important for us to ask, in which situations should we specifically be using machine learning and AI? And even if we could, we should definitely ask ourselves about where do we want to use and deploy the technology; just because it’s effective doesn’t mean we should use it.”

Abbianca Makoni

Abbianca Makoni is a content executive and writer at POCIT! She has years of experience reporting on critical issues affecting diverse communities around the globe.