
"The main reason [OpenAI] accepted [the DoD's deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses. In a letter to staff, Amodei refers to OpenAI's messaging as "straight up lies," stating that Altman is falsely "presenting himself as a peacemaker and dealmaker.""
"Anthropic specifically took issue with the DoD's insistence on the company's AI being available for "any lawful use." OpenAI said in a blog post that its contract allows use of its AI systems for "all lawful purposes." Critics have pointed out that the law is subject to change, and what is considered illegal now might end up being allowed in the future."
"It was clear in our interaction that the DoW considers mass domestic surveillance illegal and was not planning to use it for this purpose. We ensured that the fact that it is not covered under lawful use was made explicit in our contract."
Anthropic and the Department of Defense failed to reach an agreement on military access to the company's AI technology. Anthropic, which holds an existing $200 million DoD contract, demanded assurances that its models would not be used for domestic mass surveillance or autonomous weaponry. The DoD subsequently partnered with OpenAI instead. Anthropic's CEO Dario Amodei criticized OpenAI's deal as "safety theater," claiming OpenAI prioritized employee appeasement over abuse prevention. OpenAI stated that its contract includes protections covering these same red lines and restricts use to "all lawful purposes." However, critics note that legal definitions change over time, potentially rendering current protections ineffective if laws evolve to permit previously prohibited uses.
#ai-safety-and-governance #department-of-defense-contracts #autonomous-weapons-and-surveillance #openai-vs-anthropic
Read at TechCrunch