
"I need a reliable, steady partner that gives me something, that'll work with me on autonomous, because someday it'll be real and we're starting to see earlier versions of that. I need someone who's not going to wig out in the middle."
"The Pentagon formally designated San Francisco-based Anthropic a supply chain risk, cutting off its defense work using a rule designed to prevent foreign adversaries from harming national security systems. Anthropic has vowed to sue over the designation, which affects its business partnerships with other military contractors."
"Anthropic said it only sought to restrict its technology from being used for two high-level usages: mass surveillance of Americans or fully autonomous weapons."
Pentagon Undersecretary Emil Michael stated that Anthropic's refusal to support fully autonomous weapons development stems from disagreements over AI integration in military systems, particularly for drone swarms and the Golden Dome missile defense program. Michael characterized the company's ethical restrictions as irrational obstacles to national security. The Pentagon formally designated Anthropic a supply chain risk, restricting its defense contracts. Anthropic restricts its Claude AI from mass surveillance and fully autonomous weapons applications. President Trump ordered federal agencies to stop using Claude, though the Pentagon received six months to phase out the technology from classified military systems.
#ai-ethics-and-autonomous-weapons #pentagon-anthropic-conflict #military-ai-development #defense-technology-policy #autonomous-systems-regulation
Read at SecurityWeek