October 23, 2024

Meta Revives Facial Recognition—But What About Racial Bias Concerns?

Meta AI facial recognition

Meta is reintroducing facial recognition technology across Facebook and Instagram to fight scammers who use celebrity images in fraudulent ads.

This move comes after the company abandoned facial recognition in 2021 amid privacy, accuracy, and racial bias concerns.

A Return to Controversial Technology

In its latest effort to combat fraudulent ads, Meta’s facial recognition system will compare flagged images with the profile pictures of celebrities on Facebook and Instagram. If a match is found, the ad will be automatically removed.

Meta’s initial rollout focuses on 50,000 high-profile public figures. 

In addition to tackling fraudulent ads, Meta is testing the use of facial recognition to help users regain access to locked accounts. 

Previously, users had to provide official IDs for verification, but now they can upload video selfies that will be compared to their profile images.

This feature will be rolled out more broadly across Facebook and Instagram, and users trying to regain access to their accounts can opt in to the service.

Read: Meta’s AI Council Criticized For Excluding Women And People Of Color

Privacy Concerns

Meta promises that all biometric data from this process will be encrypted and securely stored.

The company also stressed that the facial data collected for ID verification will be deleted immediately after use, and users will have the option to opt out.

Given Meta’s controversial past, including a $1.4 billion settlement over unauthorized facial recognition use, critics are skeptical about whether this is the best solution to combat scammers.

Meta’s new feature will not be available in regions like the UK and the EU, where stricter privacy regulations apply.

Accuracy and Bias Concerns

Facial recognition technology has consistently been shown to be less accurate for people of color, particularly Black individuals, often misidentifying faces with darker skin tones at higher rates than those with lighter skin tones.

The implications of these biases are particularly concerning in areas like law enforcement and surveillance.

ABC News reports that Meta’s director of global threat disruption, David Agranovich, declined to share figures on the accuracy of Meta’s facial recognition system. 

Although Meta claims its new measures have undergone a “robust privacy and risk review process,” the company’s reluctance to release overall accuracy data—or provide a breakdown by demographic groups—remains troubling.

POCIT has reached out to Meta for comment.

Sara Keenan

A multi-hyphenate journalist and podcaster based in London. Previously, a tech reporter at POCIT.