Kenyan ChatGPT Moderators Call On Lawmakers To Stop Big Tech Exploitation
Kenyan content moderators who removed harmful content produced by OpenAI’s chatbot, ChatGPT, have petitioned the country’s lawmakers to investigate the nature of their work.
The petitioners are calling for an investigation into the “nature of the work, the conditions of the work, and the operations” of the Big Tech companies that outsource services in Kenya through companies like Sama.
Sama has faced several lawsuits alleging exploitation, union-busting, and illegal mass layoffs of content moderators.
The workers are asking lawmakers to “regulate the outsourcing of harmful and dangerous technology” while protecting the workers that do the job.
How did we get here?
In 2019, former Facebook content moderator Daniel Motaung was fired after attempting to unionize workers at the outsourcing company Sama. Motaung then brought a legal case against Meta and Sama, which exposed the horrific circumstances facing content moderators.
Then at the beginning of this year, a TIME investigation exposed conditions many Kenyan workers had been subjected to while working under Sama.
The report states that in 2021, Sama was contracted by OpenAI to “label textual descriptions of sexual abuse, hate speech, and violence.” Some 36 Kenyan moderators were paid just $1.32 to $2 an hour to sift through the harmful content, which left some workers “mentally scarred.”
Sama was also recently taken to court for union-busting and for unlawfully laying off 260 workers earlier this year, when it dropped its content moderation services to concentrate on computer vision data annotation.
Most recently, in May, in a watershed moment for the global tech industry, more than 150 African content moderators who had provided services for AI tools used by Facebook, TikTok, and ChatGPT voted to unionize and establish the first African Content Moderators Union.
What Have OpenAI and Sama Said?
OpenAI responded to the allegations of exploitation by acknowledging that the work is challenging. It said it had established and shared ethical and wellness standards with its data annotators so the work could be delivered “humanely and willingly,” TechCrunch reported.
“We recognize this is challenging work for our researchers and annotation workers in Kenya and around the world – their efforts to ensure the safety of AI systems have been precious,” said OpenAI’s spokesperson.
Sama additionally told TechCrunch it was open to working with the Kenyan government “to ensure that baseline protections are in place at all companies.”
It said it welcomes third-party audits of its working conditions, adding that employees have multiple channels to raise concerns.