
"Two and a half years later, our honest assessment is that some parts of this theory of change have played out as we hoped, but others have not. Now, its updated policy approaches safety relatively, rather than with strict red lines."
"We felt that it wouldn't actually help anyone for us to stop training AI models. We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments... if competitors are blazing ahead."
Anthropic announced modifications to its Responsible Scaling Policy on the same day reports emerged that US Defense Secretary Pete Hegseth is pressuring the company to remove AI safeguards and grant the military unrestricted access to Claude. The policy change represents a significant shift from Anthropic's original commitment to halt model training unless specific safety guidelines could be guaranteed. The company justified the modification by stating that strict safety tripwires proved ineffective and that unilateral safety commitments made little sense amid rapid AI advancement and competitive pressure. This development mirrors a historical pattern in which startup ethics standards weaken as valuations increase, exemplified by Google's removal of its "Don't be evil" motto. Anthropic's $380 billion valuation and recent $30 billion funding round underscore the competitive pressure it faces from rivals like OpenAI.
Read at Engadget