OpenAI Paid Kenyan Workers Less Than $2 Per Hour, Leaving Many ‘Mentally Scarred’
OpenAI’s toxic working conditions
Microsoft’s $10 billion investment in OpenAI, alongside the recent announcement of the company’s $29 billion valuation, has drawn intense scrutiny.
According to a recent TIME investigation, Kenyan workers hired to moderate the platform’s content were left ‘mentally scarred’ by the harsh working conditions they were exposed to.
ChatGPT had reportedly been prone to blurting out violent, sexist, and racist remarks because it was trained on billions of words scraped from the internet.
To help train the program, OpenAI followed in the footsteps of Meta by outsourcing the work through Sama, then a leading subcontractor for content moderation. The San Francisco-based firm reportedly hired around 36 Kenyan employees to help improve the bot.
“OpenAI sent tens of thousands of text snippets to an outsourcing firm in Kenya, beginning in November 2021,” TIME reported. “Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest.”
Despite spending hours sifting through some of the most harmful content on the internet, workers were paid between $1.32 and $2 an hour.
The psychological impact of human moderation
According to the report, one worker was left severely distressed by the harmful content they had been moderating. Four workers said they had been “mentally scarred by the work.”
The workers told TIME that Sama provided only group sessions to them, while one said their requests to see counselors on a one-to-one basis were repeatedly denied by management.
In response to the claims, Sama’s spokesperson said it was “incorrect” that employees had only had access to group sessions; according to the spokesperson, therapists were accessible at any time.
Sama’s work for OpenAI took a turn for the worse after the firm received over 1,000 images of harmful content, most of which were illegal in the US. Shortly afterward, Sama terminated its contract with OpenAI.
This is not the first time Sama has come under fire for harmful working conditions. Ethiopian and Kenyan human rights groups filed a lawsuit against Meta and Sama for allegedly failing to put adequate safety and moderation measures in place on the platform.
Earlier this month, Sama announced it would be closing its content moderation hub in Kenya.