Anthropic users face a new choice – opt out or share your chats for AI training

Anthropic is making some major changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used to train AI models. While the company directed us to its blog post on the policy changes when we asked what prompted the move, we have formed some theories of our own.
But first, what's changing: previously, Anthropic did not use consumer chat data for model training. Now the company wants to train its AI systems on user conversations and coding sessions, and it said it is extending data retention to five years for those who do not opt out.
That is a huge update. Previously, users of Anthropic's consumer products were told that their prompts and conversation outputs would be automatically deleted from Anthropic's back end within 30 days "unless legally or policy-required to keep them longer," or unless their input was flagged as violating its policies, in which case a user's inputs and outputs might be retained for longer.
By consumer, we mean that the new policies apply to Claude Free, Pro, and Max users, including those using Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access will be unaffected, which mirrors how OpenAI similarly shields its enterprise customers from data training policies.
So why is this happening? In its post about the update, Anthropic frames the changes around user choice, saying that users who do not opt out will "help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations." Users will "also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users."
In short, help us help you. But the full truth is probably a little less selfless.
Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand. Training AI models requires vast amounts of high-quality conversational data, and access to millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic's competitive positioning against rivals like OpenAI and Google.
Beyond the competitive pressures of AI development, the changes also seem to reflect broader industry shifts in data policies, as companies like Anthropic and OpenAI face increasing scrutiny over their data retention practices. OpenAI, for example, is currently fighting a court order that forces the company to retain all consumer ChatGPT conversations indefinitely, including deleted chats, because of a lawsuit filed by The New York Times and other publishers.
In June, OpenAI COO Brad Lightcap called this "a sweeping and unnecessary demand" that "fundamentally conflicts with the privacy commitments we have made to our users." The court order affects ChatGPT Free, Plus, Pro, and Team users, although enterprise customers and those with Zero Data Retention agreements are still protected.
What's alarming is how much confusion all of this shifting usage policy is creating for users, many of whom remain oblivious to it.
In fairness, everything is moving quickly right now, so as the technology changes, privacy policies are bound to change as well. But many of these changes are fairly sweeping and mentioned only fleetingly amid the companies' other news. (You would not think Tuesday's policy changes were very big news for Anthropic users based on where the company placed this update on its press page.)

But many users do not realize that the guidelines they once agreed to have changed, because the design practically guarantees it. Most ChatGPT users keep clicking "delete" toggles that are not technically deleting anything. Meanwhile, Anthropic's implementation of its new policy follows a familiar pattern.
How so? New users will choose their preference during sign-up, but existing users face a pop-up with "Updates to Consumer Terms and Policies" in large text and a prominent black "Accept" button, with a much smaller toggle for training permissions below it in smaller print, automatically set to "On."
As observed earlier today by The Verge, the design raises concerns that users might quickly click "Accept" without noticing they are agreeing to data sharing.
Meanwhile, the stakes for user awareness could not be higher. Privacy experts have long warned that the complexity surrounding AI makes meaningful user consent nearly unattainable. Under the Biden administration, the Federal Trade Commission even stepped in, warning that AI companies risk enforcement action if they engage in "surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print."
Whether the commission, now operating with just three of its five commissioners, still has its eye on these practices today is an open question, one we have put directly to the FTC.