September 6, 2024

YouTube Unveils AI Tools To Protect Creators From Unauthorized Face And Voice Use


YouTube has announced new AI detection tools designed to protect creators from the unauthorized use of their faces and voices in AI-generated content. 

Expanding its existing Content ID system, YouTube will introduce tools that can identify when someone’s likeness or voice has been simulated by artificial intelligence, with a particular focus on music and facial simulations. 

This move comes as AI-generated media continues to rise, raising significant concerns for artists, actors, and musicians whose voices, likenesses, and work risk being used without consent.

Tackling Unauthorized Use of AI-Generated Content

As AI technology becomes more advanced, YouTube aims to address creators’ fears of being replaced or misrepresented. 

Its Content ID system, currently used to detect copyrighted material, will expand to detect AI-generated music, particularly synthetic singing voices. 

This new feature will begin pilot testing early next year, with YouTube working closely with partners, including Universal Music Group (UMG), to ensure that rightsholders are compensated when their work is used by AI.

Moreover, YouTube plans to implement technology that identifies AI-generated faces, protecting high-profile creators, athletes, and artists from having their likeness used to mislead viewers or promote products and services without their consent.

Giving Creators Control Over AI Training

A critical aspect of YouTube’s initiative is the development of tools to give creators more control over how their content is used to train AI models. 

For years, companies like OpenAI and Google have trained models using data from platforms like YouTube without direct permission from creators. 

YouTube is now working on a system to help creators manage how their content is used for AI training, although specific details and compensation models have yet to be revealed.

Addressing Racial Bias in AI Detection

YouTube’s development of these AI detection tools comes at a time when the tech industry is under scrutiny for the racial biases inherent in many AI systems.

Research has shown that AI systems often struggle to accurately detect faces and voices from people of color, leading to concerns about fairness and equity. 

These biases can exacerbate the challenges faced by Black and other underrepresented creators when their work is misused or misrepresented by AI tools.

By focusing on refining these tools, YouTube has the opportunity to not only protect creators but also ensure that its AI systems are more inclusive and fair across all demographics.


Sara Keenan

Tech Reporter at POCIT. Following her master's degree in journalism, Sara cultivated a deep passion for writing and driving positive change for Black and Brown individuals across all areas of life. That passion grew to include the experiences of Black and Brown people in tech after her internship as an editorial assistant at a tech startup.