Steve Bannon says Anthropic 'had it right' in rejecting deal with the Pentagon
Briefly

"I think Anthropic had it right. Allowing the Pentagon to operate Anthropic's frontier model Claude with little guardrails is too dangerous. We have no earthly idea what is happening in the weapons lab with AI."
"Anthropic cannot in good conscience accede to the Pentagon's requests due to concerns over mass domestic surveillance and fully autonomous weapons."
"The Pentagon effectively blacklisted Anthropic by labeling it a supply chain risk, barring federal agencies from using the tech, which led to a lawsuit from Anthropic."
"Despite the legal and business fallout, Anthropic won big in the court of public opinion, with Claude temporarily overtaking ChatGPT in the App Store."
Steve Bannon expressed support for Anthropic's decision to reject a Pentagon deal, emphasizing the dangers of allowing the military to operate AI models without sufficient oversight. He highlighted the need for transparency regarding AI's use in weapons labs. The conflict began in February, when the Pentagon pressured Anthropic to accept its terms for using the company's AI model, Claude. Anthropic's CEO raised concerns about mass surveillance and autonomous weapons, prompting the Pentagon to blacklist the company as a supply chain risk. Despite this, Anthropic gained public support and recently announced a new model, Mythos, whose release was paused due to cybersecurity issues.
Read at www.businessinsider.com