
"No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court." Anthropic cited 10 U.S.C. § 3252, which it says limits a supply chain risk designation to contracts with the Pentagon.
"If you are an individual customer or hold a commercial contract with Anthropic, your access to Claude (through our API, claude.ai, or any of our products) is completely unaffected. If you are a Department of War contractor, this designation, if formally adopted, would only affect your use of Claude on Department of War contract work."
"Defense officials want to use AI models for all lawful purposes in classified settings and not have to adhere to a company's views of what is and isn't safe, particularly in matters of national security. The Pentagon has separately accepted the safety measures and limitations of Anthropic rival OpenAI."
President Trump and Defense Secretary Pete Hegseth threatened Anthropic with a supply chain risk designation to pressure the AI company into complying with Pentagon demands for unrestricted use of its Claude model in classified operations, free of safety constraints. Anthropic rejected the ultimatum, stating that no intimidation would change its position on mass domestic surveillance and fully autonomous weapons. The company argues that such a designation cannot legally extend beyond direct Pentagon contracts to affect commercial customers or military contractors using Claude for non-defense purposes. Anthropic committed to a legal challenge and a six-month transition period for replacing Claude in Pentagon operations. Meanwhile, the Pentagon has reportedly accepted OpenAI's safety measures as an alternative.
Read at Axios