"Our strong preference is to continue to serve the Department and our warfighters - with our two requested safeguards in place. We remain ready to continue our work to support the national security of the United States."
"The Pentagon demanded that Anthropic make its Claude AI product available for 'all lawful purposes' - including mass surveillance and the development of fully autonomous weapons that can kill without human supervision. Anthropic refused to offer its tech for those things, even with a 'safety stack' built into that model."
"Hegseth gave Anthropic a deadline of 5:01 PM on Friday to agree to the Pentagon's terms. At the same time, the DoD requested an assessment of its reliance on Claude, an initial step toward potentially labelling Anthropic as a 'supply chain risk' - a designation usually reserved for firms from adversaries like China and 'never before applied to an American company.'"
Anthropic rejected Defense Secretary Pete Hegseth's ultimatum to remove guardrails from its Claude AI system for military use. The Pentagon demanded the AI be available for all lawful purposes, including mass surveillance and fully autonomous weapons development. Anthropic refused, citing ethical concerns about operating without safeguards. The DoD threatened to cancel a $200 million contract and designate Anthropic as a supply chain risk—a designation typically reserved for adversarial nations and never before applied to an American company. CEO Dario Amodei stated the company cannot comply in good conscience while remaining committed to supporting U.S. national security through other means.
Read at Engadget