
"AI has been used in the planning and execution of military operations that have led to an unknown number of casualties, and roiled the Middle East. If that makes you feel uneasy, you're not alone. The CEO of Anthropic, Dario Amodei, has been embroiled in a very ugly, public spat with the US president after he refused to relax two red lines for Claude: that it should not be used for mass domestic surveillance, nor to build fully autonomous weapons that select and engage targets without meaningful human control."
"In the last three months Donald Trump's White House has reportedly used AI twice to effect regime change, or, in the most recent case in Iran, to get as close to doing so as possible, leaving it up to rank-and-file Iranians to finish the job. First, Anthropic's Claude AI model, which most people use as a slightly more discerning alternative to ChatGPT, was supposedly used both to plan and execute the snatching of Nicolas Maduro from his compound in Venezuela."
While AI offers numerous beneficial applications, from personal assistance to government efficiency, the militarization of AI presents serious concerns that demand greater attention. Recent reports indicate the Trump administration used Anthropic's Claude AI model twice in military contexts: allegedly planning operations in Venezuela and analyzing intelligence for missile strikes on Iran. These operations resulted in an unknown number of casualties and destabilized the region. Anthropic's CEO Dario Amodei publicly refused to relax safeguards against mass surveillance and fully autonomous weapons systems, while OpenAI signed a Pentagon agreement claiming stronger protections. The deployment of AI in military planning and execution represents a significant escalation, with profound implications for global security and for human control over military decisions.
Read at www.theguardian.com