Anthropic has new rules for a more dangerous AI landscape
Briefly

Anthropic has revised the usage policy for its Claude AI chatbot in response to safety concerns, explicitly banning the use of the chatbot to develop biological, chemical, radiological, or nuclear weapons. The new policy builds on earlier restrictions that prohibited producing, modifying, or distributing systems designed to cause harm, and now calls out specific dangerous weapons, including high-yield explosives. The revision follows the introduction of "AI Safety Level 3" protections in May, which were intended to strengthen the security measures around the model's use.
Anthropic has updated the usage policy for its Claude AI chatbot to ban the development of biological, chemical, radiological, or nuclear weapons, amid safety concerns.
The updated policy moves from a general prohibition on harm-related activities to specifically barring the use of Claude to develop high-yield explosives and CBRN weapons.
Read at The Verge