Anthropic changed Claude's policy in 2025 to allow training on user data by default. Discover why monitoring vendor policies is essential governance under the EU AI Act.
Trust This Team

In August 2025, Anthropic announced significant changes to Claude's terms of use for consumer users. The main change? Conversations would now be used to train AI models by default, with an opt-out available but not preselected. On top of that, the data retention period jumped to five years.
If you think this doesn't affect your company because you use corporate accounts, think again. The problem isn't just what changed; it's that the change arrived quietly, and you might not have noticed in time.
This case echoes what happened with Slack in May 2024, when the platform updated its policy to train global models on user message data. That change was also opt-out, handled via an email to support rather than a setting in the admin panel.
The lesson is clear: public privacy policies change. They change discreetly. And when you find out, your data may have already been processed in ways your company didn't authorize.
Claude's terms update brought three critical changes that went unnoticed by many users:
Previously, consumer conversations weren't used to train models. After the change, chat transcripts began feeding the development of future Claude models unless the user actively opted out. This reverses the consent logic: instead of asking for permission, Anthropic assumed everyone agreed by default.
The data storage period increased significantly. Conversations that previously had shorter lifecycles can now remain on Anthropic's servers for up to five years. For companies concerned with data minimization and long-term exposure, this is a red flag.
Anthropic was explicit that accounts under commercial terms (Team, Enterprise and API) wouldn't have their data used to train models. But here's the problem: if your company's employees use personal accounts, free or paid, to test prompts or speed up work, those interactions aren't protected. And you probably don't monitor that.
The official documentation makes it clear: "For Consumers who do not opt-out, we may now use Feedback to help develop and improve our Services". Feedback, as Anthropic defines it, includes your complete conversations.
Before Claude, Slack had already tested the limits of transparency with a silent update to its AI usage policy.
In May 2024, users discovered that Slack was training "AI and ML models" with channel and direct message data. The company called this a "global model" – a vague term that generated immediate panic. Companies feared their confidential information was being shared between organizations.
The reaction was so intense that Slack had to publish clarifications: the models were for search and recommendations within the organization itself, not generalized external training. But the damage was done.
The opt-out process wasn't simple. There was no button in the admin panel; deactivation required a formal email to Slack support. For large corporations with multiple workspaces, this meant a manual, time-consuming, error-prone process.
Many IT administrators and DPOs only learned about the change weeks later, when the news went viral on social media and specialized publications like TechCrunch and Ars Technica.
After the backlash, Slack adjusted its policy language, making it "clearer" – but the underlying practices remained. The lesson? Changes can be reversed or refined under pressure, but only if detected in time.
Ignoring changes in privacy policies isn't just operational negligence. It's a governance risk with tangible consequences.
Many corporate contracts include specific clauses on data use, retention limits, and prohibitions on AI training with company information. If a vendor changes its policy and you don't act, you may inadvertently be violating your own agreements with clients or partners.
Imagine explaining to a financial or healthcare sector client that sensitive data was used to train an AI model because you didn't notice a terms of use update.
Data protection authorities expect data controllers to monitor their vendors continuously. Under the GDPR you remain accountable for how third parties process the data you share, and the EU AI Act adds its own transparency and oversight obligations for the AI systems you deploy.
Not knowing about a critical change isn't a valid defense. It's evidence of due diligence failure.
When employees or clients discover their data was processed in unauthorized ways, even if technically within updated terms, trust is shaken. And rebuilding a reputation costs far more than implementing a monitoring system.
Manually monitoring hundreds of vendors is unfeasible. The good news is there are practices and tools that make this task manageable.
Start by mapping all the software and services your company uses and classify them by criticality. Prioritize monitoring the high-criticality vendors.
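As a concrete starting point, an inventory can be as simple as a small script or spreadsheet. The minimal Python sketch below illustrates the idea; the vendors, fields, and classification rule are hypothetical examples, not a prescribed scheme.

```python
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    policy_url: str            # public privacy policy / terms page to watch
    handles_personal_data: bool
    used_company_wide: bool

def criticality(v: Vendor) -> str:
    """Toy classification rule: sensitive data plus broad usage means high criticality."""
    if v.handles_personal_data and v.used_company_wide:
        return "high"
    if v.handles_personal_data or v.used_company_wide:
        return "medium"
    return "low"

inventory = [
    Vendor("chat-assistant", "https://example.com/terms", True, True),
    Vendor("design-tool", "https://example.com/privacy", False, True),
]

# Watch the high-criticality vendors first
watchlist = [v for v in inventory if criticality(v) == "high"]
```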
Tools like Trust This use AI to track changes in privacy policies and terms of use. When a relevant change is detected, such as a new clause allowing data use for AI training, you receive an alert with a diff (a side-by-side comparison) of what changed. This eliminates periodic manual review and ensures a timely response.
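A production service layers page rendering, noise filtering, and relevance detection on top of this, but the core mechanic of snapshot-and-diff can be sketched in a few lines of standard-library Python. The URL and snapshot file below are placeholders.

```python
import difflib
import urllib.request
from pathlib import Path

POLICY_URL = "https://example.com/privacy"   # placeholder: the policy page to watch
SNAPSHOT = Path("privacy_snapshot.txt")      # last known version, kept locally

def fetch(url: str) -> str:
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read().decode("utf-8", errors="replace")

def check_for_changes() -> None:
    current = fetch(POLICY_URL)
    previous = SNAPSHOT.read_text() if SNAPSHOT.exists() else ""
    if current != previous:
        # Produce a unified diff of old vs. new text: the raw material for an alert
        diff = difflib.unified_diff(
            previous.splitlines(), current.splitlines(),
            fromfile="previous", tofile="current", lineterm="",
        )
        print("\n".join(diff))   # in practice: route to email, Slack, or ticketing
        SNAPSHOT.write_text(current)

if __name__ == "__main__":
    check_for_changes()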
The moment of greatest leverage is before signing or at renewal, so integrate policy monitoring into your procurement process.
Not every change requires immediate action. Establish clear triage criteria upfront, along the lines of the sketch below.
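As an illustration, triage rules can be expressed as a simple lookup from change type to severity and action. The categories, severities, and deadlines below are invented examples, not a legal standard.

```python
# Illustrative triage rules: map a detected change type to a response tier.
TRIAGE_RULES = {
    "ai_training_default_on":    ("critical", "convene response team within 24h"),
    "retention_period_extended": ("high",     "legal review within 1 week"),
    "subprocessor_added":        ("medium",   "review at next vendor check-in"),
    "contact_address_updated":   ("low",      "log only, no action"),
}

def triage(change_type: str) -> tuple[str, str]:
    """Return (severity, action) for a detected policy change."""
    return TRIAGE_RULES.get(change_type, ("medium", "manual review"))

print(triage("ai_training_default_on"))
# ('critical', 'convene response team within 24h')
```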
You don't need to build everything from scratch. There are consolidated resources that facilitate privacy governance implementation.
The AITS (AI Trust Score) evaluates software against 20 criteria based on the EU AI Act, the GDPR, and ISO privacy and AI standards.
This allows Purchasing, Compliance and IT teams to make data-driven decisions, not impressions.
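AITS's actual methodology isn't reproduced here, but the general mechanic of criteria-based scoring, a weighted average over rated criteria, can be sketched generically. The criteria names, weights, and ratings below are illustrative only.

```python
# Generic criteria-based scoring sketch (not AITS's actual methodology):
# each criterion gets a 0-5 rating and a weight; the score is a weighted average.
CRITERIA_WEIGHTS = {
    "data_minimization": 2.0,
    "ai_training_opt_out": 3.0,
    "retention_limits": 2.0,
    "breach_history": 1.0,
}

def trust_score(ratings: dict[str, int]) -> float:
    total_weight = sum(CRITERIA_WEIGHTS.values())
    weighted = sum(CRITERIA_WEIGHTS[c] * ratings.get(c, 0) for c in CRITERIA_WEIGHTS)
    return round(weighted / total_weight, 2)   # 0 (worst) to 5 (best)

print(trust_score({"data_minimization": 4, "ai_training_opt_out": 2,
                   "retention_limits": 3, "breach_history": 5}))
```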
Trust This provides RFI (Request for Information) templates focused on privacy and AI, with 24 essential questions any vendor should answer before you sign.
This standardizes initial evaluation and creates a documented baseline for future comparison.
Beyond policies, it's crucial to track publicly reported breaches and incidents. Monitoring tools aggregate regulatory-authority notifications, public postmortems, and security alerts related to your vendors.
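As a simplified example of how such aggregation can work, the sketch below polls a single RSS feed with the Python standard library and flags items that mention watched vendors. The feed URL is a placeholder, and real monitoring spans many sources.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder feed; a real setup would aggregate regulator feeds, vendor
# status pages, and security-news sources.
FEED_URL = "https://example.com/security-news.rss"
WATCHED_VENDORS = {"anthropic", "slack"}

def vendor_mentions(feed_url: str) -> list[str]:
    with urllib.request.urlopen(feed_url, timeout=30) as resp:
        root = ET.fromstring(resp.read())
    hits = []
    for item in root.iter("item"):                    # RSS 2.0 items
        title = (item.findtext("title") or "").lower()
        if any(v in title for v in WATCHED_VENDORS):
            hits.append(title)
    return hits
```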
Policy monitoring isn't the exclusive responsibility of one area. Each stakeholder has a specific role.
When two vendors seem equivalent in functionality and price, the privacy score can be the differentiator. Use AITS as a defensible criterion in decision committees, and negotiate more robust clauses when you detect policy gaps.
Policy change alerts are triggers to revisit earlier assessments. If a tool approved six months ago now retains data for five years, the original legal opinion may no longer hold.
Not every software needs a pentest or complete vulnerability analysis. Use the privacy score to prioritize where to invest limited security resources. Tools with red flags in data sharing or excessive retention deserve greater scrutiny.
Receiving an alert is just the beginning. Effective response follows a clear protocol.
Gather the involved areas (Compliance, IT, Purchasing, Legal) and establish what changed, which data and accounts are affected, and whether any contractual or regulatory obligation is now at risk.
In many cases, changes only affect free accounts or can be reversed upon request. Contact your CSM (Customer Success Manager) or technical support to negotiate an exception, an opt-out, or additional contractual safeguards.
Maintain a log of the alert, the assessment, the communication with the vendor, and the decision taken. This serves as due diligence evidence in future audits.
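A plain append-only log is enough to start. The sketch below records each event as one JSON line, mirroring the alert > assessment > decision flow described above; the field names and values are illustrative.

```python
import json
import time
from pathlib import Path

LOG_FILE = Path("vendor_policy_log.jsonl")   # append-only, one JSON object per line

def log_event(vendor: str, change: str, assessment: str, action: str) -> None:
    """Append one due-diligence record; never rewrite past entries."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "vendor": vendor,
        "change_detected": change,
        "assessment": assessment,
        "action_taken": action,
    }
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(entry) + "\n")

log_event("example-vendor", "retention extended to 5 years",
          "affects free accounts only", "blocked personal-account usage")
```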
If the change is unacceptable and no negotiated solution emerges, prepare a migration plan to an alternative vendor. AITS facilitates this with ready-made comparisons by category.
Practically every privacy policy includes the phrase: "We may update these terms periodically". But this doesn't exempt your company from responsibility.
The GDPR establishes that controllers are liable for damage caused by non-compliant processing, even when it is performed by third parties. This means you can't simply trust a vendor blindly after the initial contract is signed.
European jurisprudence under GDPR reinforces: due diligence is a continuous process, not a one-time event.
Consider the scenario: you discover six months later that employee data was used to train an AI model because no one reviewed the updated policy. Now you're left with after-the-fact remediation: containing the exposure, explaining it to stakeholders, and documenting the failure.
The cost of preventive monitoring is a fraction of this.
Companies that monitor proactively build a reputation for solid governance. This translates into smoother audits, stronger client trust, and a genuine differentiator in sales and procurement conversations.
The Anthropic and Slack cases aren't exceptions. They're the new normal in a market where generative AI constantly redefines the limits of data processing.
Public policies will continue changing. Vendors will seek to maximize data use within legal limits. And the responsibility to protect your company and stakeholders is yours – not theirs.
The question isn't "will my vendors change their policies?". The question is: "when they change, how long will it take me to find out?"
Start today. Map your critical vendors. Implement automated alerts. Integrate monitoring into your purchasing and renewal flows. And transform reactive compliance into proactive governance.
Because when it comes to privacy, finding out too late isn't just an operational problem. It's a leadership failure.
Learn about Trust This AITS and see how to monitor dozens of vendors in minutes, not months.
[Flow diagram: the response process when a critical change is detected: alert > assessment > communication > decision > documentation]