Anthropic Gives Users a Dilemma: Keep Chats Private or Let AI Learn from Them

Anthropic is changing how it handles Claude user data, requiring users to decide by September 28 whether their conversations can be used to train its AI models. Previously, prompts and outputs were deleted within 30 days, or kept for up to two years if flagged. The new retention period extends to five years for users who do not opt out. Business users remain exempt from this policy.

The company frames the update as a benefit, stating that shared data improves Claude’s safety and enhances its coding and reasoning skills. The underlying motive, however, is access to large-scale, real-world conversational data to compete with rivals like OpenAI and Google — data that is critical for building more accurate and capable models.

The changes reflect broader concerns over AI data management. OpenAI is currently required by a court order to retain all ChatGPT conversations indefinitely. Many users may accept Anthropic’s new policy without reading it carefully, illustrating challenges in achieving meaningful consent.

Anthropic’s consent interface places a large “Accept” button prominently, with a smaller data-sharing toggle preset to “On.” Privacy experts warn this design may lead users to consent unintentionally, underscoring the delicate balance between AI development, ethical design, and user privacy.
