Trust This Team

Do you still think monitoring privacy policies is a waste of time? Remember the Anthropic Claude case
Why do companies still ignore changes in privacy policies?
In August 2025, Anthropic announced significant changes to Claude's terms of use for consumer users. The headline change? Conversations would now be used to train AI models by default, with an opt-out available but left to the user to activate. On top of that, the data retention period jumped to five years.
If you think this doesn't affect your company because you use corporate accounts, think again. The problem isn't just what changed; it's that the change arrived quietly, and you might not have noticed in time.
This case echoes what happened with Slack in May 2024, when the platform updated its policy to train global models with user message data – also opt-out only, via an email to support rather than a setting in the admin panel.
The lesson is clear: public privacy policies change. They change discreetly. And when you find out, your data may have already been processed in ways your company didn't authorize.
What exactly changed in Anthropic's Claude policy in 2025?
Claude's terms update brought three critical changes that went unnoticed by many users:
Use of conversations for AI training
Previously, consumer conversations weren't used to train models. After the change, chat transcripts began feeding the development of future Claude models – unless the user actively opted out. This inverts the consent logic: instead of asking permission, Anthropic assumed everyone agreed by default.
Retention extended to five years
The data storage period increased significantly. Conversations that previously had shorter lifecycles can now remain on Anthropic's servers for up to five years. For companies concerned with data minimization and long-term exposure, this is a red flag.
Exclusions for commercial accounts – but with caveats
Anthropic was explicit: commercial plans (Team, Enterprise and API usage) wouldn't have their data used to train models. But here's the problem: if your company's employees use personal accounts – free or paid – to test prompts or speed up work, those interactions aren't protected. And you probably don't monitor that.
The official documentation makes it clear: "For Consumers who do not opt-out, we may now use Feedback to help develop and improve our Services". Feedback, as Anthropic defines it, includes your complete conversations.
How did the Slack case in 2024 serve as a market warning?
Before Claude, Slack had already tested the limits of transparency with a silent update to its AI usage policy.
The "global model" problem and the confusion generated
In May 2024, users discovered that Slack was training "AI and ML models" with channel and direct message data. The company called this a "global model" – a vague term that generated immediate panic. Companies feared their confidential information was being shared between organizations.
The reaction was so intense that Slack had to publish clarifications: the models were for search and recommendations within the organization itself, not generalized external training. But the damage was done.
Opt-out by email: an intentional barrier?
The opt-out process wasn't simple. There was no button in the admin panel; opting out required a formal email to Slack support. For large corporations with multiple workspaces, this meant a manual, time-consuming and error-prone process.
Many IT administrators and DPOs only learned about the change weeks later, when the news went viral on social media and specialized publications like TechCrunch and Ars Technica.
Forced language revision
After the backlash, Slack adjusted its policy language, making it "clearer" – but the underlying practices remained. The lesson? Changes can be reversed or refined under pressure, but only if detected in time.
What are the real risks of not monitoring vendor privacy policies?
Ignoring changes in privacy policies isn't just operational negligence. It's a governance risk with tangible consequences.
Involuntary violation of contracts and data protection clauses
Many corporate contracts include specific clauses on data use, retention limits and prohibitions on training AI with company information. If a vendor changes its policy and you don't act, you may inadvertently be violating your own agreements with clients or partners.
Imagine explaining to a financial or healthcare sector client that sensitive data was used to train an AI model because you didn't notice a terms of use update.
Exposure to audits and regulatory penalties
Data protection authorities expect controllers to monitor their vendors continuously. Under the GDPR, you must know how third parties process the data you share with them, and the EU AI Act adds further obligations for organizations deploying AI systems.
Not knowing about a critical change isn't a valid defense. It's evidence of a due diligence failure.
Loss of trust and reputational damage
When employees or clients discover their data was processed in unauthorized ways – even if technically within the updated terms – trust is shaken. And rebuilding a reputation costs far more than implementing a monitoring system.
How to implement an effective policy monitoring system?
Manually monitoring hundreds of vendors is unfeasible. The good news is there are practices and tools that make this task manageable.
Updated inventory with assigned criticality
Start by mapping all software and services your company uses. Classify them by criticality:
- High: Process sensitive data or have large user volumes
- Medium: Productivity tools with moderate data access
- Low: Peripheral services with limited impact
Prioritize monitoring high-criticality vendors.
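To make this concrete, here is a minimal sketch of such an inventory in Python. The vendor names, URLs and fields are illustrative assumptions, not a prescribed schema:

```python
# Minimal vendor inventory with criticality tiers.
# Vendor names and URLs are illustrative placeholders, not real data.
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    policy_url: str   # the public privacy policy to watch
    criticality: str  # "high", "medium" or "low"

inventory = [
    Vendor("ai-chat-assistant", "https://example.com/legal/privacy", "high"),
    Vendor("team-messaging", "https://example.com/policies", "high"),
    Vendor("screenshot-tool", "https://example.com/privacy", "low"),
]

# Watch high-criticality vendors first.
for vendor in (v for v in inventory if v.criticality == "high"):
    print(f"monitor: {vendor.name} -> {vendor.policy_url}")
```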
Automation with configurable alerts
Tools like Trust This use AI to track changes in privacy policies and terms of use. When a relevant change is detected – like the addition of data use for AI training – you receive an alert with a diff (side-by-side comparison) of what changed.
This eliminates the need for periodic manual review and ensures timely response.
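Under the hood, the core of this kind of monitoring can be as simple as periodically fetching each policy page and diffing it against the last stored snapshot. Here is a rough sketch of the idea, with an illustrative URL and a local snapshot directory as assumptions; it is not a description of how Trust This itself works:

```python
# Rough sketch of policy-change detection: fetch, snapshot, diff.
# The URL, vendor name and snapshot directory are illustrative; a real
# system would also handle HTML parsing, rate limits and alert routing.
import difflib
import pathlib
import urllib.request

SNAPSHOT_DIR = pathlib.Path("snapshots")

def check_policy(name: str, url: str) -> None:
    new_text = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    snapshot = SNAPSHOT_DIR / f"{name}.txt"
    if snapshot.exists():
        old_text = snapshot.read_text(encoding="utf-8")
        if old_text != new_text:
            diff = difflib.unified_diff(
                old_text.splitlines(), new_text.splitlines(),
                fromfile="previous", tofile="current", lineterm="",
            )
            print(f"ALERT: policy changed for {name}")
            print("\n".join(diff))
    SNAPSHOT_DIR.mkdir(exist_ok=True)
    snapshot.write_text(new_text, encoding="utf-8")

check_policy("ai-chat-assistant", "https://example.com/legal/privacy")
```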
Integration with purchasing and renewal flows
The moment of greatest leverage is before signing or at renewal. Integrate monitoring into your procurement process (a minimal gate sketch follows the list):
- New software: Mandatory policy check before approval
- Renewals: Review of changes since the contract was last signed
- Critical changes: Trigger for contract review or renegotiation
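One way to encode these gates is a small approval check in the procurement workflow. The request type and its fields below are hypothetical, purely to illustrate the three rules above:

```python
# Illustrative procurement gate: block approval until a policy check exists.
# The Request type and its field names are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Request:
    vendor: str
    kind: str                   # "new" or "renewal"
    policy_checked: bool        # policy reviewed for this request?
    critical_change_open: bool  # unresolved critical policy change?

def gate(req: Request) -> str:
    if not req.policy_checked:
        return "blocked: mandatory policy check missing"
    if req.critical_change_open:
        return "escalate: trigger contract review or renegotiation"
    return "approved"

print(gate(Request("team-messaging", "renewal", True, True)))
# -> escalate: trigger contract review or renegotiation
```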
Defining red flags and action thresholds
Not every change requires immediate action. Establish clear criteria (a triage sketch follows the list):
- Immediate action: Data use for AI training added, retention significantly increased, sharing with new third parties
- Scheduled review: Language adjustments, new features without data impact
- Record only: Technical corrections or contact updates
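One way to operationalize these thresholds is a simple keyword triage over the diff text. The keyword lists below are assumed examples, not a complete rule set; tune them to your own risk criteria:

```python
# Illustrative triage of a policy diff into the three action levels above.
# Keyword lists are examples only; adapt them to your own red flags.
IMMEDIATE = ["train", "machine learning", "retention", "third part", "share"]
SCHEDULED = ["feature", "clarif", "layout"]

def triage(diff_text: str) -> str:
    text = diff_text.lower()
    if any(keyword in text for keyword in IMMEDIATE):
        return "immediate-action"
    if any(keyword in text for keyword in SCHEDULED):
        return "scheduled-review"
    return "record-only"

print(triage("+ We may now use conversations to train our models."))
# -> immediate-action
```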
What tools and frameworks can help your company?
You don't need to build everything from scratch. There are consolidated resources that facilitate privacy governance implementation.
Trust This AITS index
The AITS (AI Trust Score) evaluates software against 20 criteria based on the EU AI Act, the GDPR, and ISO privacy and AI standards. It offers:
- Standardized privacy transparency score
- Comparisons by software category
- History of policy changes over time
- Risk flags (red, yellow, green) by criteria groups
This allows Purchasing, Compliance and IT teams to make decisions based on data, not impressions.
Due diligence checklists and minimum RFI
Trust This provides RFI (Request for Information) templates focused on privacy and AI, with 24 essential questions any vendor should answer before you sign.
This standardizes initial evaluation and creates a documented baseline for future comparison.
Public incident monitoring
Besides policies, it's crucial to track publicly reported breaches and incidents. Monitoring tools aggregate regulatory authority notifications, public postmortems and security alerts related to your vendors.
How should different company areas use this information?
Policy monitoring isn't the exclusive responsibility of one area. Each stakeholder has a specific role.
Purchasing and Procurement: Objective tiebreaker criterion
When two vendors seem equivalent in functionality and price, the privacy score can be the differentiator. Use AITS as a defensible criterion in decision committees and negotiate more robust clauses when detecting policy gaps.
DPO and Legal: Basis for opinions and contractual clauses
Policy change alerts are triggers to revisit previous opinions. If a tool approved six months ago now retains data for five years, this may require:
- New risk assessment
- Addition of data minimization and deletion-on-request clauses
- Communication to employees about proper use
IT and Security: Prioritization of deep technical analyses
Not every piece of software needs a pentest or a full vulnerability assessment. Use the privacy score to prioritize where to invest limited security resources. Tools with red flags in data sharing or excessive retention deserve extra scrutiny.
What to do when a critical change is detected?
Receiving an alert is just the beginning. Effective response follows a clear protocol.
Immediate impact assessment
Bring together the areas involved (Compliance, IT, Purchasing, Legal) and answer:
- What data is exposed?
- How many users/employees are affected?
- Is there violation of contracts or regulatory obligations?
- Is there a viable market alternative?
Communication with vendor
In many cases, changes only affect free accounts or can be reversed upon request. Contact your CSM (Customer Success Manager) or technical support:
- Confirm whether corporate accounts are exempt
- Request formal opt-out, if applicable
- Negotiate contractual addendums to guarantee specific protections
Logging and documentation
Maintain a log of:
- Date change detected
- Actions taken
- Vendor responses
- Final decision (continue using, migrate, accept with mitigations)
This serves as due diligence evidence in future audits.
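A minimal, append-only log can live in something as simple as a JSON Lines file. The field names below mirror the checklist above and the values are illustrative; treat this as a suggestion, not a mandated schema:

```python
# Sketch of an append-only due diligence log in JSON Lines format.
# Field names mirror the checklist above; values are illustrative.
import json
from datetime import date

entry = {
    "vendor": "ai-chat-assistant",  # hypothetical vendor name
    "detected_on": date.today().isoformat(),
    "change": "Retention extended to five years for consumer accounts",
    "actions_taken": ["opened ticket with CSM", "requested formal opt-out"],
    "vendor_response": "Confirmed that Enterprise accounts are exempt",
    "decision": "continue-with-mitigations",
}

with open("due_diligence_log.jsonl", "a", encoding="utf-8") as log:
    log.write(json.dumps(entry) + "\n")
```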
Contingency and migration plan
If the change is unacceptable and there's no negotiated solution, prepare a migration plan to an alternative vendor. AITS facilitates this by providing ready comparisons by category.
Why "policies may change at any time" cannot be an excuse?
Practically every privacy policy includes the phrase: "We may update these terms periodically". But this doesn't exempt your company from responsibility.
Legal obligation of continuous monitoring
The GDPR establishes that controllers remain responsible for damage caused by inadequate processing – even when it's performed by third parties. This means you can't simply trust a vendor blindly after the initial contract is signed.
European case law under the GDPR reinforces the point: due diligence is a continuous process, not a one-time event.
The real cost of not monitoring
Consider the scenario: you discover six months later that employee data was used to train an AI model because no one reviewed the updated policy. Now you need to:
- Notify authorities under the GDPR (if applicable)
- Notify affected employees
- Review and possibly terminate contracts
- Implement emergency remediations
- Deal with possible negative press coverage
The cost of preventive monitoring is a fraction of this.
Transforming obligation into competitive advantage
Companies that monitor proactively build a solid reputation for governance. This translates into:
- Greater client and partner trust
- Easier compliance in audits
- Lower exposure to legal and reputational risks
- Ability to negotiate better terms with vendors
Monitoring isn't paranoia, it's governance
The Claude Anthropic and Slack cases aren't exceptions. They're the new normal in a market where generative AI constantly redefines data processing limits.
Public policies will continue changing. Vendors will seek to maximize data use within legal limits. And the responsibility to protect your company and stakeholders is yours – not theirs.
The question isn't "will my vendors change their policies?" It's: "when they change, how long will it take you to find out?"
Start today. Map your critical vendors. Implement automated alerts. Integrate monitoring into your purchasing and renewal flows. And transform reactive compliance into proactive governance.
Because when it comes to privacy, finding out too late isn't just an operational problem. It's a leadership failure.
Learn about Trust This AITS and see how to monitor dozens of vendors in minutes, not months.
Flow diagram: the response process when a critical change is detected (alert → assessment → communication → decision → documentation)