Black Computer Scientist Fired By Google For Speaking Out On AI’s Flaws Has Been Featured As The Next Harvard Business School Case Study
Timnit Gebru, a widely respected leader in AI ethics research, is known for co-authoring a groundbreaking paper showing that facial recognition systems are less accurate at identifying women and people of color, meaning their use can end up discriminating against those groups.
Gebru, who recently founded her own independent artificial intelligence research institute, was awarded $3.7 million in funding after she was fired by Google.
And now she has been featured as a Harvard Business School case study in a new paper that details her experience of being ‘silenced’ by her previous employer.
The case study, titled ‘Silenced No More on AI Bias and the Harms of Large Language Models’, was written by Professor Tsedal Neeley.
What happened to Gebru?
In December 2020, Google fired Gebru after she criticized the company’s approach to minority hiring and pushed to publish a research paper that pointed out flaws in a new type of AI system for learning languages.
Before she was let go, Gebru had been seeking permission to publish a research paper about how AI-based language systems, including technology built by Google, may end up reproducing the biased and hateful language they learn from text in books and on websites.
She previously told media outlets that she had grown tired of Google’s response to such concerns, including its refusal to publish the paper.
Jeff Dean, the head of Google AI, told colleagues in an internal email, which he later put online, that the paper “didn’t meet our bar for publication” and that Gebru had said she would resign unless Google met a number of conditions, which it was unwilling to meet.
Gebru tweeted that she had asked to negotiate “a last date” for her employment after she got back from vacation.
But she was cut off from her corporate email account before her return.