
"There is a persistent myth of objectivity around AI, perhaps because people assume that once the systems are deployed, they can function without any human intervention. In reality, developers constantly tweak and refine algorithms with subjective decisions about which results are more relevant or appropriate. Moreover, the immense corpus of data that machine learning models train on can also be polluted."
"Tech giants learnt this the hard way when their AI advised people to consume rocks and make cheese stick on pizza with glue, perpetuated housing discrimination, hiring discrimination, severely cut healthcare benefits to deserving beneficiaries, amplified demographic stereotypes in generated images, recorded nearly 35% error rate in recognizing darker-skinned women in facial recognition systems and tried to address the issue in ethically murky waters before backing off."
"Objectivity and neutrality remain a tall ask for AI at present. While it's relatively easy to remove the human from the loop, our biases tend to stick around like rust in the machinery and AI has a way of amplifying it manifold. It has become common practice to blame biased or corrupted training data for algorithmic biases. While it is common for data to have gaps or be inaccurate and also reflect existing biases."
AI search promises more relevant results and relief from SEO- and ad-driven noise, but neutrality is not guaranteed. Developers continually make subjective decisions when tuning algorithms, and large training corpora often contain polluted or biased data. Real-world failures have included dangerous advice, discriminatory outcomes in housing and hiring, reductions in healthcare benefits, amplified stereotypes in generated images, and error rates approaching 35% in facial recognition for darker-skinned women. Regulatory responses are emerging because removing humans from the loop does not remove human bias; instead, biases can persist and be amplified by AI systems.
Read at www.computer.org