Anthropic updated consumer terms and its privacy policy to let users on Free, Pro, and Max plans opt in to having chats and coding sessions used to train its AI models. Commercial plans — including Claude for Work, Claude Gov, Claude for Education, and API usage — remain unchanged, even when accessed through cloud services like Amazon Bedrock and Google Cloud's Vertex AI. The collected data is intended to improve model safety and intelligence. Existing users will see a pop-up to opt in or out, and users retain the option to keep their chats out of training.
The data collected is meant to help improve the models, making them safer and more intelligent, the company said in the post. While this change marks a sharp pivot from the company's typical approach, users will still have the option to keep their chats out of training.
If you are an existing user, you will be shown a pop-up asking you to opt in or out of having your chats and coding sessions used to train and improve Anthropic's AI models. When the pop-up appears, make sure to actually read it, because the bolded heading above the toggle isn't straightforward.