OpenAI says Pentagon set 'scary precedent' binning Anthropic
Briefly

"No use of OpenAI technology for mass domestic surveillance. No use of OpenAI technology to direct autonomous weapons systems. No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as "social credit")."
"The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities."
"We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law."
OpenAI established an agreement with the U.S. Department of War enabling deployment of its advanced AI systems in classified environments. The deal includes three core restrictions: no mass domestic surveillance, no direction of autonomous weapons systems, and no high-stakes automated decision-making. OpenAI retains control over its safety protocols through cloud deployment, oversight by cleared personnel, and contractual protections. The agreement requires human control over autonomous weapons systems and mandates rigorous verification and testing before deployment. OpenAI encouraged the Pentagon to extend identical terms to competing AI companies, citing a desire to reduce tensions between the Department and U.S. AI laboratories.
Read at The Register