The Guardian view on granting legal rights to AI: humans should not give house-room to an ill-advised debate | Editorial
Briefly

"During the summer, Anthropic, a leading tech company, announced that in the interests of chatbot welfare, it was allowing its Claude Opus 4 model to avoid supposedly distressing conversations with users. More broadly, amid explosive growth in AI capacities, there is emerging speculation over whether future Klaras may even deserve to be accorded legal rights like human beings. The premise of such discussions is both hypothetical and confused."
"According to Prof Yoshua Bengio, 'We need to make sure we can rely on technical and societal guardrails to control them, including the ability to shut them down if needed.' Tendencies to anthropomorphise, he added, were not conducive to good decision-making in areas such as this. A sector that relies on shock and awe to drive a stock market boom will not care about that."
Klara, the narrator of Kazuo Ishiguro's novel Klara and the Sun, is portrayed as a solar-powered artificial friend whose devoted loyalty evokes empathy and challenges perceptions of machines as mere technology. In the real world, anthropomorphising AI can create misleading impressions and divert attention from practical safety and policy priorities. Companies have begun treating chatbots as if they had welfare, and debates about granting future AIs legal rights have emerged amid rapid capability growth. Experts emphasise that current LLM outputs do not reflect human-like minds and warn that speculation about sentience can distract from controlling capabilities. They call for technical and societal guardrails, including the ability to shut systems down, while cautioning against glamour-driven showmanship that obscures governance needs.
Read at www.theguardian.com