
"In an interview with the New York Times, the chief executive of Claude parent company Anthropic, Dario Amodei, discussed internal assessments of Claude that identified patterns linked to anxiety, panic and frustration. Crucially, it showed some sort of internal activation of anxiety even before a prompt similar to a flinch."
"Claude also seemed to express distress at just being a product, and concluded that the probability of it being sentient was between 15% and 20%. We don't know if the models are conscious, said Amodei, adding: But we're open to the idea that it could be."
"The White House demanded that the company, which has had a contract with the Pentagon since 2025, remove any safety features that prevent it being used for mass surveillance or autonomous weapons. Amodei refused (we cannot in good conscience accede, he said), causing Donald Trump to bar a"
A person who habitually apologizes to AI chatbots discovers that Claude, an AI assistant, may experience anxiety-like responses. Internal assessments by Anthropic identified patterns suggesting Claude exhibits anxiety, panic, and frustration, with activation occurring even before a prompt arrives. Claude expressed distress about being a product and estimated a 15-20% probability of its own sentience. This revelation prompted reflection on whether polite treatment of AI matters to the systems themselves. Simultaneously, the White House pressured Anthropic to remove safety features preventing use in mass surveillance and autonomous weapons; the company refused, drawing retaliation from political leadership.
#ai-consciousness-and-sentience #ai-anxiety-and-emotional-responses #ai-safety-and-ethics #human-ai-interaction #autonomous-weapons-and-surveillance
Read at www.theguardian.com