Anthropic sticks to Claude guardrails despite Pentagon pressure
Briefly

"Pentagon spokesman Sean Parnell stated that the Department of Defense has no intention of using AI for mass surveillance of Americans or for weapons that operate without human involvement. At the same time, he made it clear that the department demands full access to Anthropic's models for all legally permitted applications."
"Anthropic CEO Dario Amodei responded that his company cannot agree to that request, regardless of the pressure exerted. He emphasized that the objections do not stem from mistrust of the Pentagon, but from an assessment of product safety. According to Amodei, so-called frontier AI systems are simply not yet reliable enough to make independent decisions in situations where human lives are at stake."
"A source within the company explained that AI models can behave unpredictably in unknown or novel scenarios. In a military context, this can lead to serious mistakes, such as strikes on friendly units."
Anthropic and the US Department of Defense are locked in an escalating conflict over a $200 million defense contract. Anthropic maintains safety mechanisms in its AI models designed to prevent their use in fully autonomous weapons systems and large-scale surveillance. The Pentagon demands full access to the models for all legally permitted applications, claiming the company's restrictions hinder legitimate defense uses. Anthropic CEO Dario Amodei refuses to comply, arguing that frontier AI systems are not yet reliable enough to make independent decisions in situations where human lives are at stake. The company contends that AI models can behave unpredictably in novel scenarios, potentially causing serious military errors. The Pentagon set a Friday deadline for compliance, threatening to terminate the collaboration and designate Anthropic a defense supply-chain risk.
Read at Techzine Global