
"The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain deadly operations. The so-called Department of War might even designate Anthropic as a "supply chain risk," a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon would not do business with firms using Anthropic's AI in their defense work."
"In a statement to WIRED, chief Pentagon spokesperson Sean Parnell confirmed that Anthropic was in the hot seat. "Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people," he said. This is a message to other companies as well: OpenAI, xAI and Google, which currently have Department of Defense contracts for unclassified work, are jumping through the requisite hoops to get their own high clearances."
"There's plenty to unpack here. For one thing, there's a question of whether Anthropic is being punished for complaining about the fact that its AI model Claude was used as part of the raid to remove Venezuela's president Nicolás Maduro (that's what's being reported; the company denies it). There's also the fact that Anthropic publicly supports AI regulation-an outlier stance in the industry and one that runs counter to the administration's policies."
Anthropic secured US clearance for classified use last year but now faces Pentagon scrutiny, including possible reconsideration of a $200 million contract. The company objects to participating in certain lethal military operations and could be designated a "supply chain risk," a label that would bar the Pentagon from working with firms that use Anthropic's AI in their defense work. Pentagon officials stressed that partners must be willing to help warfighters win, framing the dispute around troop safety, while other AI companies pursue high clearances for defense work. Questions center on the alleged use of Claude in the raid on Venezuela's President Nicolás Maduro, Anthropic's outlier pro-regulation stance, and whether military demands may undermine AI safety.
Read at WIRED