How to Opt Out Before Claude AI Begins Using Your Data

Tech
By 24matins.uk, published 31 August 2025 at 11:30, updated 31 August 2025 at 11:31.

Anthropic will soon begin using Claude conversations to train its AI models, prompting privacy concerns. Users have a limited window to opt out before the new policy takes effect. Here’s what you need to know to safeguard your information before the deadline.

Tl;dr

  • Anthropic shifts Claude’s data use from privacy by default to opt-out.
  • User conversations are kept for up to five years for AI training.
  • Users who make no choice by 28 September 2025 will have their access suspended.

Anthropic’s New Data Policy: A Paradigm Shift

For users of the renowned AI assistant Claude, a fundamental change is taking shape. As of today, everyone engaging with Anthropic’s flagship service faces a pivotal choice: permit their conversations to be used for future AI model training or actively decline this usage. This updated policy marks a notable departure from the company’s prior commitment to privacy.

From Default Privacy to Extended Data Collection

Previously, Anthropic adhered strictly to an automatic deletion protocol—discussions and code vanished after thirty days, barring legal requirements. But the rules now take a sharp turn. Unless users explicitly refuse, their exchanges will be stored for as long as five years. The reasoning? To enhance and refine Claude through real-world user input—a decision impacting everyone on Free, Pro, or Max plans, and even those using Claude Code. It’s worth noting, however, that business accounts—whether educational institutions, public entities, or API clients—remain exempt from this broadened collection.

User Choices and Practical Implications

Rolling out gradually, the change greets both newcomers and veterans with a prompt labeled “Updates to Consumer Terms and Policies”. Here’s where it gets practical: users must either click “Accept”, consenting to data retention and training use, or toggle a small switch to decline. There’s a firm deadline too: anyone who fails to choose by 28 September 2025 will find their access suspended.

In concrete terms:

  • Only future conversations are affected;
  • If you delete a chat, it won’t be used for training;
  • Once integrated into the AI model, data cannot be removed.

A Necessary Move or an Alarming Precedent?

According to Anthropic, such data collection is essential for progress in reasoning capabilities and security, not just code generation. The company emphasizes that user data isn’t sold to third parties and that automated filters aim to strip sensitive content before any processing occurs. Still, shifting from “privacy by default” to sharing by default undeniably places more responsibility on individuals. Many may shrug off the implications, yet those relying on Claude for sensitive matters might see this as a serious warning signal about evolving digital trust.
