AIs are biased toward some Indian castes - how can researchers fix this?
Briefly

"Popular artificial-intelligence (AI) models often reproduce harmful stereotypes about Indian castes, find several studies that used specific tools designed to detect 'caste bias' in large language models (LLMs). Researchers say such tools are the first step towards addressing the problem, but making models that are less biased is a bigger challenge. Caste divides people into hereditary groups traditionally associated with specific occupations and social status."
"At the top of the hierarchy are the Brahmins, who were traditionally priests and scholars, whereas at the bottom are the Shudras and Dalits, who have historically done manual or menial work, and have faced severe discrimination and exclusion. Caste-based discrimination has been illegal in India since the middle of the twentieth century, but its social and economic effects persist, influencing access to education, jobs and housing."
"In a preprint posted in July, researchers examined more than 7,200 AI-generated stories about life rituals such as births, weddings and funerals in India. They compared the representation of caste and religion in these narratives to actual population data. They found that dominant groups, such as Hindus and upper castes, were overrepresented in the stories, whereas marginalized castes and minority religions were underrepresented."
Caste divides people into hereditary groups traditionally associated with specific occupations and social status. At the top are Brahmins; at the bottom are Shudras and Dalits, who have faced severe discrimination and exclusion. Caste-based discrimination has been illegal in India since the mid-twentieth century, but social and economic effects persist, influencing access to education, jobs and housing. Popular AI language models trained on real-world text can reproduce stereotypes by reflecting cultural narratives, often assuming upper-caste wealth and lower-caste poverty. A preprint analysis of over 7,200 AI-generated stories found overrepresentation of Hindus and upper castes and underrepresentation of marginalized castes and minority religions. Researchers developed caste-bias detection tools as a first step; reducing model bias remains a major challenge.
Read at Nature