Allegations have arisen that Workday's AI-assisted hiring platform discriminates against prospective employees. Experts warn that AI hiring tools, now used by 492 of the Fortune 500 companies, can perpetuate bias if not carefully implemented. A University of Washington study found that AI resume screenings favored white-associated names in 85.1% of cases and female-associated names in only 11.1%. Researchers caution that such bias can feed on itself: models trained on biased outcomes produce increasingly biased results, and it is not yet clear how severe the problem could become.
"If the AI is built in a way that is not attentive to the risks of bias...then it can not only perpetuate those patterns of exclusion, it could actually worsen it," law professor Pauline Kim.
Research offers stark evidence of discrimination in AI-assisted hiring. The University of Washington Information School published a study last year finding that in AI-assisted resume screenings across nine occupations using 500 applications, the technology favored white-associated names in 85.1% of cases and female-associated names in only 11.1% of cases.
"You kind of just get this positive feedback loop of, we're training biased models on more and more biased data," Kyra Wilson said. "We don't really know kind of where the upper limit of that is yet, of how bad it is going to get before these models just stop working altogether."