The government's AI standoff could decide who really controls America's military tech
Briefly

"Anthropic CEO Dario Amodei said he could not "in good conscience" allow the tech to be used for mass domestic surveillance or to independently direct autonomous weapons - two use cases he says violate the company's ethical guardrails."
"Anthropic argued that restrictions around its system's use for surveillance and autonomous weapons systems were not adequately reflected - or enforceable - in the government's draft contract language. Defense officials responded that they need to be able to deploy Claude for any "lawful use" - a term that would give the military broad discretion."
"The dispute has turned into a broader confrontation between the Pentagon and the private AI sector - not just over military contracts, but who ultimately sets the terms for how these powerful systems are used."
The Pentagon designated Anthropic a supply-chain risk and barred it from working with defense contractors after the company refused to permit military use of its Claude model for mass domestic surveillance and autonomous weapons systems. CEO Dario Amodei said these applications violate the company's ethical guidelines. Simultaneously, OpenAI announced a Department of Defense deal to deploy its AI models in classified environments. The dispute has grown into a broader confrontation between the Pentagon and private AI companies, not just over military contracts but over who ultimately sets the terms for how these powerful systems are used. Anthropic argued that the government's draft contract language inadequately reflected restrictions on surveillance and autonomous weapons use, while defense officials insisted they need broad "lawful use" discretion to deploy the technology flexibly.
Read at Business Insider