I spent a year inside the content moderation workforce in Nairobi and Manila - the human cost of making AI 'safe' is a class story nobody wants to tell - Silicon Canals
Briefly

"Grace earns roughly 30,000 Kenyan shillings a month. That's about $230 USD. The AI model she helps train, the one that learns from her thousands of daily labeling decisions about what counts as "violating" versus "borderline" versus "acceptable," generates billions of dollars in value for a company headquartered 15,000 kilometers away in California."
"What I found confirmed something I've been circling in my writing for a while now: the story of making AI "safe" is fundamentally a story about class, about which humans absorb the costs so that other humans never have to think about them."
"We talk a lot about AI supply chains in terms of data centers, GPU chips, energy consumption, rare earth minerals. These are real and important. But there's another supply chain, a human one, that runs through specific neighborhoods in specific cities in the Global South. Nairobi. Manila. Bogotá. Hyderabad."
Content moderators like Grace in Nairobi earn roughly $230 a month reviewing graphic content, including violence, abuse, and exploitation, to train the AI systems that filter social media feeds. These workers spend eight hours a day labeling content as violating, borderline, or acceptable, directly training AI models that generate billions in revenue for tech companies headquartered 15,000 kilometers away in California. Reporting from Nairobi and Manila reveals a hidden human supply chain running through the Global South, including cities like Bogotá and Hyderabad, where workers absorb the psychological and emotional costs of content moderation. This infrastructure remains largely unmapped in discussions of AI supply chains, which typically focus on data centers, GPU chips, energy consumption, and rare earth minerals. The arrangement shows how the work of making AI safe fundamentally reflects class divisions: lower-income workers bear the burden so that more affluent users never have to confront harmful content.
Read at Silicon Canals