"We felt that it wouldn't actually help anyone for us to stop training AI models. We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments ... if competitors are blazing ahead."
"Anthropic said that amid heightened competition and a lack of government regulation, it will no longer abide by its commitment to pause the scaling and/or delay the deployment of new models when such advancements would have outpaced its own safety measures."
Anthropic, an AI startup founded by former OpenAI employees and known for prioritizing safety, has weakened a foundational safety commitment. The company announced it will no longer adhere to its earlier pledge to pause scaling and delay deploying new models whenever capability advances outpace its safety measures. It attributed the shift to competitive pressure in the AI race and the absence of government regulation. Anthropic introduced a new Responsible Scaling Policy, modeled after US biosafety standards, with separate recommendations for itself and the broader industry. Chief Science Officer Jared Kaplan said unilateral safety commitments no longer make sense given rapid AI advancement and competitor progress. The new policy retains only limited commitments to delay highly capable models in restricted circumstances.
Read at Business Insider