Anthropic to train its AI models on your chats from September: Here’s how to stop it


Anthropic has announced that it will be updating its terms of service and privacy policy to permit the use of chat transcripts for training its popular AI chatbot, Claude.

The Amazon-backed AI startup said on Thursday, August 28, that users on all subscription tiers, including Claude Free, Pro, Max, and Claude Code, will be affected by the change. Anthropic’s revised Consumer Terms and Privacy Policy takes effect on September 28 this year.

However, users who access Claude under specific licences such as Claude for Work (Team and Enterprise plans), Claude Gov, and Claude Education, will not be affected. In addition, third-party users who use the Claude API (Application Programming Interface) via Amazon Bedrock and Google Cloud’s Vertex AI are also exempted from the updated policy.


Claude users can delay accepting the updated policy by clicking ‘Not now’, but starting September 28, most user accounts will be opted in by default to share their chat transcripts for AI training.

The move comes amid the generative AI boom, which is fuelled by vast troves of data and has prompted several tech companies to quietly update their privacy policies and terms of service so that they may use your data to train their AI models or licence it out to other companies for the same purpose.

In July this year, popular file-sharing platform WeTransfer faced immediate backlash from users after it revised its terms of service agreement to suggest that files uploaded by users could be used to “improve machine learning models.” The company has since tried to patch things up by removing any mention of AI and machine learning from the document.

With growing backlash over the use of personal data for AI training, many companies are now giving individual and enterprise users the option to opt out of having their content used in AI training or being sold for training purposes. Here’s how you can opt out and avoid having your chat transcripts used to train Anthropic’s Claude chatbot.


How to opt out

New users will be shown an option to ‘Help improve Claude’ when they sign up to use the AI chatbot. They can toggle it off to opt out. Meanwhile, users who have already signed up to use Claude have until September 28 to opt out of the policy update.

After the deadline, users can still turn the option off by visiting Claude’s privacy settings. Follow these steps:

If you are using the Claude mobile app:
– Tap the three-lines icon at the top left
– Tap the Settings icon > Privacy
– Toggle the ‘Help improve Claude’ option off

If you are using the web version of the AI chatbot:
– Click the user icon at the bottom left
– Click the Settings icon
– Click Privacy in the side panel
– Toggle the ‘Help improve Claude’ option off

If you have accidentally agreed to Anthropic’s updated terms of service, you can still opt out by following the steps above.


For those who choose to opt in, only their new and resumed chat transcripts with Claude will be used for AI training purposes; older chats will not be used. Anthropic has further said it will store the data from opted-in users for a period of five years in order to identify misuse and detect harmful usage patterns.

Previously, Anthropic’s data retention policy allowed the company to store user data for only 30 days.
