Pentagon Designates Anthropic Supply Chain Risk Over AI Military Dispute
Briefly

"This action follows months of negotiations that reached an impasse over two exceptions we requested to the lawful use of our AI model, Claude: the mass domestic surveillance of Americans and fully autonomous weapons. No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."
"We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties."
"In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply Chain Risk to National Security."
Anthropic rejected Pentagon demands to remove restrictions on its AI model Claude, specifically refusing to permit mass domestic surveillance of Americans and fully autonomous weapons development. Following failed negotiations, Secretary of Defense Pete Hegseth designated Anthropic a supply chain risk and directed all military contractors to cease commercial activity with the company. President Trump ordered federal agencies to phase out Anthropic technology within six months. Anthropic maintained its position that mass domestic surveillance contradicts democratic values and poses risks to fundamental liberties, while supporting lawful foreign intelligence and counterintelligence applications of AI.
Read at The Hacker News