Kenyan content moderators behind ChatGPT ask Parliament to probe OpenAI and Sama

Beyond the viral ChatGPT, a generative AI product released last November by OpenAI, are the stories of Kenyan content moderators who worked to make the product less toxic while earning less than $2 per hour.

In November 2021, OpenAI outsourced the product's content moderation to Sama, a San Francisco-based firm that employs workers in Kenya, Uganda and India to label data for Silicon Valley clients like Google, Meta and Microsoft. According to documents reviewed by TIME, OpenAI signed three contracts worth about $200,000 in total with Sama in late 2021 to label textual descriptions of sexual abuse, hate speech and violence.

Since the news about the exploitation of content moderators on the ChatGPT project broke, Sama has argued that it pays almost double what other content moderation firms in East Africa pay and offers “a full benefits and pension package”. “Sama pays between Sh26,600 and Sh40,000 ($210 to $323) per month, which is more than double the minimum wage in Kenya and also well above the living wage. A comparative US wage would be between $30 and $45 per hour,” Sama wrote to Quartz Africa in January.

Pressing on in their pursuit of justice since the story was reported by TIME earlier this year, these Kenyan content moderators have filed a petition in parliament seeking a probe into OpenAI and Sama.

“Sama engaged us and other young Kenyans on temporary contracts to do this work. The contracts did not sufficiently describe the nature of the job,” according to court documents filed by Richard Mwaura Mathenge, Mophat Ochieng Okinyi, Alex Mwaura Kairu and Bill Kelvin Mulinya on behalf of other moderators.

“Examples of the content that we were exposed to include acts of bestiality, necrophilia, incestuous sexual violence, rape, defilement of minors, self-harm (e.g. suicide), and murder, just to mention a few. [All of these] with no psychological support,” they said.

According to the court documents quoted by local media Citizen Digital: “The outsourcing model is commonly used by big technological companies based in the United States to export harmful and dangerous work to Kenyan youth. The outsourced workers are paid poorly and are not provided with the care they need to undertake such jobs. They are disposed of at will.”

“We recognise this is challenging work for our researchers and annotation workers in Kenya and around the world – their efforts to ensure the safety of AI systems have been immensely valuable,” an OpenAI spokesperson said. However, the company has not responded to specific claims made by the moderators.

Other exploitation allegations in Kenya involving Sama

Allegations around the exploitation of content moderators in Kenya by Sama are not new. Some former employees in Kenya sued Sama and its former client Meta—Facebook's parent—over illegal sacking and blacklisting. This is the third time these companies have been sued over issues related to the exploitation of content moderators in Kenya.

In January, Meta argued that the local employment and labour relations court had no jurisdiction over it because it is neither based in nor trades in Kenya, but its request to strike out the case was denied.

In March 2023, Sama discontinued its content moderation operations for Meta. The social media company has since contracted Luxembourg-based Majorel as its new content moderation partner in the region. Majorel previously worked as TikTok's content moderator in the Middle East and North Africa, where its employees claimed that they were treated like robots, reviewing videos of suicide and animal cruelty for less than $3 an hour.

Sama's contract to review harmful content for Meta was worth $3.9 million in 2022, according to internal Sama documents reviewed by TIME. The decision to drop Meta's contract came after a lawsuit filed in Kenya last year by Daniel Motaung, a South African national and ex-Sama content moderator, accusing the two firms of forced labour and human trafficking, unfair labour relations, union busting and failure to provide "adequate" mental health and psychosocial support.

Over 150 moderators working for Facebook, TikTok, and ChatGPT have formed a workers union, the Content Moderators Union, open to moderators from any major tech firm, to better press their grievances.

“Unionisation signals that gig work rights are labour rights, and workers deserve the protections provided by law in this field,” says Nanjira Sambuli, a Nairobi-based tech and international affairs fellow at the Carnegie Endowment for International Peace.