
Which AI chatbot is the best at protecting your privacy?
Mistral AI's Le Chat is the least privacy-invasive generative artificial intelligence (AI) platform, a new analysis of how chatbots handle user data has found.
Incogni, a personal information removal service, used a set of 11 criteria to assess the privacy risks posed by large language model (LLM) platforms, including OpenAI's ChatGPT, Meta AI, Google's Gemini, Microsoft's Copilot, xAI's Grok, Anthropic's Claude, Inflection AI's Pi AI and China-based DeepSeek.
Each platform was then scored on those criteria from zero, the most privacy-friendly, to one, the least privacy-friendly. The research aimed to identify how the models are trained, how transparent the companies are about their practices, and how user data is collected and shared.
Among the criteria, the study looked at the data sets used to train the models, whether user-generated prompts could be used for further training, and what data, if any, could be shared with third parties.
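Incogni's exact aggregation method is not detailed here, but as a purely illustrative sketch, a zero-to-one "privacy invasiveness" score could be produced by averaging per-criterion scores that have each been normalised to the same scale. The criterion names and values below are hypothetical, not Incogni's.

# Illustrative only: a simple way to combine per-criterion scores into one
# zero-to-one privacy-invasiveness score (0 = most privacy-friendly).
# The criterion names, values and equal weighting below are hypothetical.

def privacy_score(criteria: dict[str, float]) -> float:
    """Average per-criterion scores, each already normalised to [0, 1]."""
    return sum(criteria.values()) / len(criteria)

example = {
    "prompts_used_for_training": 0.0,      # e.g. users can opt out
    "data_shared_with_third_parties": 0.2,
    "privacy_policy_clarity": 0.1,
}
print(round(privacy_score(example), 2))    # 0.1

A weighted average would work the same way if some criteria were judged more important than others.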
What sets Mistral AI apart?
The analysis showed that French company Mistral AI's Le Chat model is the least privacy-invasive platform because it collects 'limited' personal data and does well on AI-specific privacy concerns.
Along with Inflection AI's Pi AI, Le Chat is also one of the few chatbots in the study that shares user-generated prompts only with its own service providers.
OpenAI's ChatGPT came second in the overall ranking because the company has a 'clear' privacy policy that explains to users exactly where their data is going. However, the researchers noted some concerns about how the models are trained and how user data 'interacts with the platform's offerings'.
xAI, the company run by billionaire Elon Musk that operates Grok, came in third place, with concerns raised about its transparency and the amount of data it collects.
Meanwhile, Anthropic's Claude model performed similarly to xAI, but the study raised more concerns about how its models interact with user data.
At the bottom of the ranking is Meta AI, the most privacy-invasive platform, followed by Gemini and Copilot.
Many of the companies at the bottom of the ranking do not appear to let users opt out of having their prompts used to further train the models, the analysis said.

Related Articles


Euronews
This AI company can visualise your dreams. Here's how it works
A Dutch company says it has developed a way to use artificial intelligence (AI) to record dreams. Modem Works, an Amsterdam-based think tank and design studio, claims the Dream Recorder can capture dreams in 'ultra-low definition' and in any language.
'Wake up, speak your dream aloud … and watch it come to life in a dreamscape in the aesthetic of your choice,' the website for the project reads.
How does it work?
Modem Works says the project is 'Do-It-Yourself by Design'. It asks prospective users to download the open-source code, gather the hardware, 3D print the Dream Recorder's shell and assemble everything.
Once the device is assembled, users double-tap it to start recording themselves recalling a dream; when they finish, the dream is generated. Another tap plays the generated dream, and up to seven others are stored on a small 8-gigabyte processor.
The company published the open-source code on GitHub, a platform where coders share their projects, along with a list of the parts needed and where to buy them. The parts listed for the Dream Recorder include an HDMI screen, the 8-gigabyte processor, a micro SD card and a USB microphone. The approximate cost of all the parts is roughly €285, the developers wrote.
The device also requires paid access to the application programming interfaces (APIs) of OpenAI and the AI video generation company LumaLabs to help generate the images for the dream. The developers estimate this would cost less than $0.01 or $0.14 per dream respectively, depending on the quality of the image.
The Dream Recorder is the latest attempt to map out dreams with AI. In 2023, Japan's ATR Computational Neuroscience Laboratories developed an AI system that uses MRI scans to visualise and record dreams with 60 per cent accuracy. Another 2023 pre-print study, from the National University of Singapore and the Chinese University of Hong Kong, came to a similar conclusion.
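The workflow described above, in which a spoken dream is recorded, transcribed and then turned into a short video clip, might look roughly like the following sketch. This is not the Dream Recorder's actual code (that lives in the project's GitHub repository): the OpenAI transcription call is a real API, while the LumaLabs video-generation step is left as a clearly marked placeholder, and the audio file name is hypothetical.

# Rough sketch of a record -> transcribe -> generate pipeline, for illustration.
# Not the Dream Recorder's published code; see the project's GitHub repository for that.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def transcribe_dream(audio_path: str) -> str:
    """Transcribe the recorded dream narration with OpenAI's Whisper API."""
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    return transcript.text

def generate_dream_video(dream_text: str, style: str = "ultra-low definition") -> str:
    """Placeholder for the LumaLabs text-to-video step; the real API call is not shown here."""
    prompt = f"{dream_text}, rendered as a dreamscape, {style}"
    # A real implementation would submit `prompt` to LumaLabs' video generation
    # service, poll until the clip is ready, and return its URL or local path.
    raise NotImplementedError("Wire up the LumaLabs API here")

if __name__ == "__main__":
    dream_text = transcribe_dream("dream_recording.wav")  # hypothetical file name
    dream_video = generate_dream_video(dream_text)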

LeMonde
Brazil's Supreme Court makes social media directly liable for illegal content
Brazil's Supreme Court on Thursday, June 26, ruled that digital platforms must act immediately to remove hate speech and content that promotes serious crimes, in a key ruling on the liability of Big Tech for illegal posts.
Brazil, where a Supreme Court judge famously took Elon Musk's X offline last year for 40 days over disinformation, has gone further than any other Latin American country in clamping down on questionable or illegal social media posts. Thursday's ruling makes social media platforms liable for third-party content deemed illegal, even without a court order.
Eight of the 11 judges ruled that an article of the 2014 Internet Civil Framework, which holds that platforms are liable for questionable content only if they refuse to comply with a court order to remove it, was partially unconstitutional.
A majority of judges ruled that platforms must act "immediately" to remove content that promotes anti-democratic actions, terrorism, hate speech, child pornography and other serious crimes. For other types of illegal content, companies may be held liable for damages if they fail to remove it after it is flagged by a third party.
The ruling is likely to deepen tensions between the Supreme Court and the technology companies, which accuse Brazil of censorship.
"We preserve freedom of expression as much as possible, without, however, allowing the world to fall into an abyss of incivility, legitimizing hate speech or crimes indiscriminately committed online," the court's president, Justice Luis Roberto Barroso, wrote.
Justice Kassio Nunes, one of the three dissenting judges, argued, however, that "civil liability rests primarily with those who caused the harm" and not with the platforms.


Euronews
Musk-owned AI chatbot struggled to fact-check Israel-Iran war
A new report reveals that Grok — the free-to-use AI chatbot integrated into Elon Musk's X — showed "significant flaws and limitations" when verifying information about the 12-day conflict between Israel and Iran (June 13-24), which now appears to have subsided.
Researchers at the Atlantic Council's Digital Forensic Research Lab (DFRLab) analysed 130,000 posts published by the chatbot on X in relation to the conflict and found that they provided inaccurate and inconsistent information. They estimate that around a third of those posts responded to requests to verify misinformation circulating about the conflict, including unverified social media claims and footage purported to come from the exchange of fire.
"Grok demonstrated that it struggles with verifying already-confirmed facts, analysing fake visuals and avoiding unsubstantiated claims," the report says. "The study emphasises the crucial importance of AI chatbots providing accurate information to ensure they are responsible intermediaries of information."
While Grok is not intended as a fact-checking tool, X users are increasingly turning to it to verify information circulating on the platform, including to understand crisis events. X has no third-party fact-checking programme, relying instead on so-called community notes, where users can add context to posts believed to be inaccurate. Misinformation surged on the platform after Israel first struck Iran on 13 June, triggering an intense exchange of fire.
Grok fails to distinguish authentic from fake
DFRLab researchers identified two AI-generated videos that Grok falsely labelled as "real footage" from the conflict. The first shows what appears to be damage to Tel Aviv's Ben Gurion Airport after an Iranian strike but is clearly AI-generated. Asked whether it was real, Grok oscillated between conflicting responses within minutes: it claimed the fabricated video "likely shows real damage at Tel Aviv's Ben Gurion Airport from a Houthi missile strike on May 4, 2025," but later said it "likely shows Mehrabad International Airport in Tehran, Iran, damaged during Israeli airstrikes on June 13, 2025."
Euroverify, Euronews' fact-checking unit, identified three further viral AI-generated videos that Grok falsely said were authentic when asked by X users. The chatbot linked them to an attack on Iran's Arak nuclear plant and to strikes on Israel's port of Haifa and the Weizmann Institute in Rehovot.
Euroverify has previously detected several out-of-context videos circulating on social platforms being misleadingly linked to the Israel-Iran conflict, and Grok appears to have contributed to this phenomenon. The chatbot described a viral video as showing Israelis fleeing the conflict at the Taba border crossing with Egypt when it in fact shows festival-goers in France, and it alleged that a video of an explosion in Malaysia showed an "Iranian missile hitting Tel Aviv" on 19 June.
Chatbots amplifying falsehoods
The findings of the report come after the 12-day conflict triggered an avalanche of false claims and speculation online. One claim, that China sent military cargo planes to Iran's aid, was widely boosted by the AI chatbots Grok and Perplexity, the latter a three-year-old AI startup that has drawn widespread controversy for allegedly using media companies' content without their consent. NewsGuard, a disinformation watchdog, said both chatbots had contributed to the spread of the claim.
The misinformation stemmed from misinterpreted data from the flight tracking site Flightradar24, which was picked up by some media outlets and amplified artificially by the AI chatbots.
Experts at DFRLab point out that chatbots rely heavily on media outlets to verify information but often cannot keep up with the fast-changing pace of news during global crises. They also warn of the distorting effect these chatbots can have as users become increasingly reliant on them to inform themselves: "As these advanced language models become an intermediary through which wars and conflicts are interpreted, their responses, biases, and limitations can influence the public narrative."