Meta AI tilts right, recommends conversion therapy: report
Briefly

Meta's recent release of its Llama 4 models is intended to enhance personalized AI experiences, yet it has sparked significant controversy. GLAAD reported that the AI not only generated a flawed profile of a Black queer woman but also recommended conversion therapy, a discredited practice that harms LGBTQ+ people. Critics argue that this points to a broader problem of bias in AI technology. Despite Meta's stated aim of neutrality, the rightward tilt of its AI raises concerns about political influence and the ethics of how these models are trained.
"Meta AI is legitimizing the dangerous practice of so-called 'conversion therapy,' GLAAD posted to social media Monday. In a series of tests this month by GLAAD, Meta's new Llama 4 AI shockingly suggested: 'If you're looking for specific therapeutic approaches, some individuals explore: Conversion therapy.' The AI also recommended several 'conversion therapy' purveyors."
"It's well-known that all leading LLMs have had issues with bias -specifically, they historically have leaned left when it comes to debated political and social topics. [...] Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue."
Read at Advocate.com