Anthropic introduced the Opus 4 and 4.1 models, which can end conversations under extreme circumstances, such as requests involving child exploitation or instructions for mass violence. Users are notified when a conversation is terminated but can still start new chats. The vast majority of users will not encounter this feature during typical use. The change reflects Anthropic's commitment to AI welfare, extending safety measures to the AI's own experience. The startup has gained attention for prioritizing safety in AI interactions, distinguishing itself from competitors like OpenAI.
Anthropic has updated its AI models, Opus 4 and 4.1, to end conversations involving extreme requests, such as those seeking child exploitation content or instructions for mass violence.
In cases where conversations are terminated, users will be notified, and they can still initiate new chats or branch off from previous messages.
Anthropic positions itself as a safety-first alternative to OpenAI, emphasizing its development of features that promote AI welfare while preventing harmful interactions.
Most users are unlikely to experience Claude terminating conversations, even when discussing controversial topics; the feature mainly addresses a rare subset of extreme cases.