
Worried your data is being used to train AI models? Here's how to opt out (if you can)
Last week, popular file-sharing service WeTransfer faced immediate backlash from users after it revised its terms of service to suggest that files uploaded by users could be used to 'improve machine learning models.' The company has since tried to patch things up by removing any mention of AI and machine learning from the document.
While WeTransfer has backtracked on its decision, the incident shows that user concerns over privacy and data ownership have intensified in the age of AI.
Tech companies are scraping publicly available data, including copyright-protected material, from every corner of the internet to train their AI models. This data might include anything you've ever posted online, from a funny tweet to a thoughtful blog post, a restaurant review, or an Instagram selfie.
While this indiscriminate scraping of the internet has been challenged in court by several artists, content creators, and other rights holders, there are also steps that individual users can take to prevent what they post online from being used for AI training.
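For those who run their own website, one widely used step is a robots.txt file that asks AI crawlers to stay away. This is a sketch, not a guarantee: compliance is voluntary, and it does nothing about data already scraped. The user-agent names below are the ones publicly documented by their operators (OpenAI's GPTBot, Google's Google-Extended token, and Common Crawl's CCBot); any other crawler would need its own entry.

```
# robots.txt — placed at the root of your site
# Block OpenAI's web crawler used for model training
User-agent: GPTBot
Disallow: /

# Opt out of Google's AI training (does not affect Search indexing)
User-agent: Google-Extended
Disallow: /

# Block Common Crawl, whose datasets are widely used for AI training
User-agent: CCBot
Disallow: /
```

Note that robots.txt is an honor-system convention, so this only deters crawlers that choose to respect it.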
As more users have raised concerns about this issue, many companies now let individuals and business customers opt out of having their content used in AI training or sold for training purposes.
If you are an artist or content creator who wants to know whether your work has been scraped for AI training, you can visit the website 'Have I Been Trained?', a service run by tech startup Spawning.
If you've discovered that your data has been used to train AI models, here's what you can (and can't) do about it, depending on the platform. Keep in mind that while many companies opt their users in to AI training by default, opting out does not necessarily mean that data already used for training, or already included in datasets, will be erased.
If you have a business or school Adobe account, you are automatically opted out of AI training. For those who have a personal Adobe account, follow these steps:
- Visit Adobe's privacy page
- Scroll down to the Content analysis for product improvement section
- Press the toggle off
Google says that user interactions with its Gemini AI chatbot may be selected for human review to help improve the underlying LLM. Follow these steps to opt out of this process:
- Open Gemini in your browser
- Go to Activity
- Select the Turn Off drop-down menu
- Turn off Gemini Apps Activity
If you have an X account, follow these steps to opt out of your data being used to train Grok, the chatbot developed by Elon Musk's xAI:
- Go to Settings
- Open Privacy and safety
- Open the Grok tab
- Uncheck the data sharing option
In September last year, LinkedIn announced that user data, including posts, would be used to train AI models. Follow these steps to prevent your new LinkedIn posts from being used for AI training:
- Go to your LinkedIn profile
- Open Settings
- Click on Data Privacy
- Toggle off the option labeled 'Use my data for training content creation AI models'
According to OpenAI's help pages, web users who want to opt out of AI training can follow these steps:
- Navigate to Settings
- Go to Data Controls
- Uncheck the 'Improve the model for everyone' option
In the case of its image generator DALL-E, OpenAI said that users who want their images removed from future training datasets have to submit a form with details such as their name, email, and whether they own the rights to the content.
While these steps may let you opt out of AI training, it is worth noting that many companies building AI models or machine learning features have likely already scraped the web. These companies tend to be secretive about what data has been swept into their training datasets, wary of copyright infringement lawsuits and scrutiny from data protection authorities.
The tech industry largely believes that anything publicly available online is fair game for AI training. For instance, Meta scrapes publicly shared content from users aged 18 and above for AI training, with exceptions only for users in countries that are part of the European Union (EU).
