March 8, 2023

Timnit Gebru On AI Oversight: We Have Food And Drug Agencies, Why Is Tech Any Different?

Whether we’re talking about ChatGPT, AI lawyers, or the chatbot that lets you chat with Tupac in real time, generative AI is taking the world by storm.

But these systems are also reproducing many of the same biases we see in the real world – from sexist performance reviews to racist images.

Leading AI ethicist Dr Timnit Gebru, known for her groundbreaking research on the risks of large language models, was forced out of her position as the co-head of Google’s AI ethics team after raising issues of workplace discrimination.

Now the founder and executive director of the Distributed Artificial Intelligence Research Institute, Gebru says training AI models on mountains of indiscriminately scraped internet data leads to biased outputs.

Big Data is Still Biased Data

“Size doesn’t guarantee diversity,” Gebru told Lesley Stahl on CBS News’ 60 Minutes.

Although there is a wealth of diverse data on the internet, Gebru explained that marginalized groups are less likely to have access to it. And those who do use the internet are more likely to face harassment and bullying, leading them to spend less time online.

“We were not surprised to see racist, and sexist, and homophobic, and ableist, et cetera, outputs.”

“The text that you’re using from the internet to train these models is going to be encoding the people who remain online, who are not bullied off—all of the sexist and racist things that are on the internet, all of the hegemonic views that are on the internet,” Gebru said. “So, we were not surprised to see racist, and sexist, and homophobic, and ableist, et cetera, outputs.”

While organizations and research groups are building models that can sift through AI-generated content and filter out toxicity, they are struggling to keep pace with the sheer volume of content these models generate. There are also ethical concerns about who is tasked with moderating harmful content and about the psychological toll of that work.

Building in Oversight

“If you’re going to put out a drug, you gotta go through all sorts of hoops to show us that you’ve done clinical trials, you know what the side effects are, you’ve done your due diligence. Same with food, right?” Gebru continued.

“There are agencies that inspect the food. You have to tell me what kind of tests you’ve done, what the side effects are, who it harms, who it doesn’t harm, etc. We don’t have that for a lot of things that the tech industry is building.”

“I do think that there should be an agency that is helping us make sure that some of these systems are safe, that they’re not harming us, that it is actually beneficial. There should be some sort of oversight. I don’t see any reason why this one industry is being treated so differently from everything else.”

Samara Linton

Community Manager at POCIT | Co-editor of The Colour of Madness: Mental Health and Race in Technicolour (2022), and co-author of Diane Abbott: The Authorised Biography (2020)