Your ChatGPT Therapy Sessions Are Not Confidential, Warns OpenAI CEO Sam Altman

News18 · 2 days ago
Sam Altman has raised concerns about user data confidentiality with AI chatbots like ChatGPT, especially when they are used for therapy, citing the lack of a legal framework to protect sensitive information.
OpenAI CEO Sam Altman has raised concerns about maintaining user data confidentiality when it comes to sensitive conversations, as millions of people, including children, have turned to AI chatbots like ChatGPT for therapy and emotional support.
On a recent episode of the podcast This Past Weekend, hosted by Theo Von on YouTube, Altman responded to a question about how AI fits into the current legal system, cautioning that users shouldn't expect confidentiality in their conversations with ChatGPT because there is no legal or policy framework to protect sensitive information shared with the chatbot.
"People talk about the most personal sh*t in their lives to ChatGPT. People use it – young people, especially, use it – as a therapist, a life coach; having these relationship problems and [asking] what should I do? And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT."
Altman went on to say that confidentiality and privacy for conversations with AI need to be addressed urgently. "So if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up," the Indian Express quoted Altman as saying.
This means that your conversations with ChatGPT about mental health, emotional advice, or companionship are not private: they can be produced in court or shared with others in the event of a lawsuit. Unlike end-to-end encrypted apps such as WhatsApp or Signal, which prevent third parties from reading or accessing your chats, OpenAI can access your ChatGPT conversations and use them to improve its AI models and detect misuse.
OpenAI says it deletes free-tier ChatGPT conversations within 30 days, but it may retain them for legal or security reasons. Adding to the privacy concerns, OpenAI is currently embroiled in a lawsuit with The New York Times that requires the company to preserve the conversations of millions of ChatGPT users, excluding enterprise customers.
First Published: July 26, 2025, 22:27 IST

Related Articles

Samsung Galaxy Z Fold 7 faces major durability test, surprising results inside

India Today · 34 minutes ago

The Samsung Galaxy Z Fold 7 has just been put through its paces in a fresh durability test by popular YouTuber JerryRigEverything, and the results might surprise those who still question the toughness of foldable phones. For those who don't know, Zack Nelson (the man behind the channel) is best known for torturing smartphones on his YouTube channel with several tests, including bending, scratching, and burning, to name a few. As is tradition, he took Samsung's latest and greatest foldable for a spin with his usual tools and techniques, and the Galaxy Z Fold 7 did not disappoint.

First, let's get into the durability side of things. The scratch test started on the cover screen and back panel of the Galaxy Z Fold 7, both of which are protected by Gorilla Glass Ceramic 2. Like most modern smartphones, scratches began to appear at level 6 of the Mohs hardness scale, with deeper grooves at level 7. The main inner display, though more delicate, held up well too. The aluminium frame didn't fare as well under sharp tools, but that's not something everyday users need to worry about.

A notable improvement on the Galaxy Z Fold 7 is the camera ring design. Unlike its predecessor, the Galaxy Z Fold 6, or even the Galaxy S25 Ultra, Samsung has reverted to a more robust metal setup. Zack tried to dig into the camera rings with his blade, but to no avail: the rings stayed in place, showing impressive strength.

While the Samsung Galaxy Z Fold 7 doesn't offer a traditional IP68 rating like regular bar phones (where the '6' stands for complete dust protection), the phone still carries an IP48 rating, with the '4' denoting that it can keep out solid objects 1mm or larger. That said, Zack went all in by sprinkling fine dust over the hinge and inner screen and opening and closing the phone multiple times. To everyone's surprise, the hinge continued to function smoothly, without any visible damage.

The flame test, however, where a lighter is held to the screen, left lasting marks on both displays, as expected. The cover screen showed damage after about 15 seconds, and the inner display lasted roughly 10 seconds. Neither recovered, but that's the typical nature of OLED displays under direct flame.

Then came the all-important bend test. With pressure applied in both directions, folded and unfolded, the Galaxy Z Fold 7 managed to survive without breaking, bending or snapping. Considering this is the thinnest foldable phone yet, that's impressive.

And speaking of thinness, a recent study conducted by the Korean Consumer-Centred Enterprise Association found that Samsung's official thickness claim of 8.9mm is actually slightly overstated: the real number is 8.82mm. In contrast, competitors like Honor, Vivo, and Xiaomi have been caught stating lower thicknesses than what their phones actually measure. Honor, for instance, advertises its Magic V5 as the world's thinnest foldable at 8.8mm, but real-world measurements show it to be over 0.5mm thicker. Vivo and Xiaomi have similar gaps between their claims and reality. Samsung, on the other hand, seems to have gone the other way, not just matching its specs but doing better. This could point to a more conservative approach, or perhaps it's simply a genuine effort to avoid the marketing exaggeration we often see in this category.

For what it's worth, the Samsung Galaxy Z Fold 7 has passed JerryRigEverything's brutal durability test with flying colours, and at the end of the day, that's all that matters.

Will AI take away our sense of purpose? Sam Altman says, ‘People will have to redefine what it means to contribute'

Economic Times · 41 minutes ago

Synopsis: OpenAI CEO Sam Altman, in a conversation with Theo Von, addressed concerns about AI's impact on humanity. Altman acknowledged anxieties surrounding job displacement and data privacy, particularly regarding users sharing personal information with AI. He highlighted the lack of legal protections for AI conversations, creating a privacy risk.

In a rare, thought-provoking conversation that danced between comedy and existential crisis, OpenAI CEO Sam Altman sat down with podcaster Theo Von on This Past Weekend. What unfolded was less a traditional interview and more a deeply human dialogue about the hopes, fears, and massive unknowns surrounding artificial intelligence.

As AI continues its unstoppable advance, Von posed a question many of us have been quietly asking: 'Are we racing toward a future where humans no longer matter?' Altman didn't sugarcoat the situation. He agreed with many of Von's concerns, from data privacy to AI replacing jobs, and even the unnerving pace at which the technology is evolving. 'There's this race happening,' Altman said, referring to the breakneck competition among tech companies. 'If we don't move fast, someone else will — and they might not care as much about the consequences.'

But amid all the alarms, Altman offered a cautious dose of optimism. 'Even in a world where AI is doing all of this stuff humans used to do,' he said, 'we are going to find a way to feel like the main characters.' His tone, however, betrayed a sense of uncertainty: the script isn't written yet.

Perhaps the most powerful moment came when Von bluntly asked: 'What happens to our sense of purpose when AI does everything for us?' Altman acknowledged that work has always been a major source of meaning for people. While he's hopeful that AI will free humans to pursue more creative or emotional pursuits, he conceded that the transition could be deeply painful. 'One of the big fears is like purpose, right?' Von said. 'Like, work gives us purpose. If AI really continues to advance, it feels like our sense of purpose would start to really disappear.' Altman responded with guarded hope: 'People will have to redefine what contribution looks like… but yeah, it's going to be unsettling.'

In what may be one of the most revealing admissions from a tech CEO, Altman addressed the disturbing trend of people, especially young users, turning to AI as a confidant or therapist. 'People talk about the most personal sh*t in their lives to ChatGPT,' he told Von. 'But right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege… We haven't figured that out yet for when you talk to ChatGPT.'

With AI tools lacking legal confidentiality protections, users risk having their most intimate thoughts stored, accessed, or even subpoenaed in court. The privacy gap is real, and Altman admitted the industry is still trying to figure it out. Adding to the complexity, Altman highlighted how the lack of federal AI regulations has created a patchwork of rules that vary wildly across states.
This legal uncertainty is already playing out in real time: OpenAI, for example, is currently required to retain user conversations, even deleted ones, as part of its legal dispute with The New York Times. 'No one had to think about that even a year ago,' Altman said, calling the situation 'very screwed up.'

OpenAI's ChatGPT, Google's Gemini and Microsoft's Copilot: How is AI taking away our drinking water?

India.com · 41 minutes ago

We all know and accept that artificial intelligence (AI) has become an important part of our lives. As it becomes increasingly integrated into daily life, concerns are mounting over AI's environmental footprint, particularly the growing consumption of water and electricity required to operate the massive data centers that run AI queries for apps like OpenAI's ChatGPT, Google's Gemini and Microsoft's Copilot. As per a report by BBC Hindi, the expansion of AI technologies could intensify global water stress, especially in the face of climate change and rising demand. Media reports have revealed that AI systems like ChatGPT rely on vast data centers that consume enormous amounts of energy and water for cooling.

How is AI taking away your drinking water?

The reports also indicate that a single AI query may use significantly more electricity, and therefore more cooling water, than a typical internet search. Supporting the claim, the International Energy Agency (IEA) has estimated that a query made on ChatGPT consumes about 10 times more electricity than a search made on Google. Studies also indicate that the AI industry could use 4–6 times more water annually than a country like Denmark by 2027. Companies such as Google, Microsoft, and Meta have reported major increases in water use as AI adoption grows, and with many data centers being set up in drought-prone areas, they have also faced protests and environmental backlash.

What did Sam Altman say on the future of AI?

As AI begins to transform industries globally, ensuring that the benefits of AGI (Artificial General Intelligence) are broadly distributed is critical, according to OpenAI Co-founder and CEO Sam Altman. The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) get better on average and over the long term, but increasing equality does not seem technologically determined, and getting this right may require new ideas, he emphasised in a new blog post. (With inputs from agencies)
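To make the electricity-to-water link concrete, here is a minimal back-of-envelope sketch in Python. The per-query energy figure and the litres-per-kWh cooling ratio below are illustrative assumptions chosen only to show the arithmetic; the one input taken from the article is the roughly 10x electricity gap between a ChatGPT query and a Google search described by the IEA estimate.

# Back-of-envelope estimate of cooling water per query.
# Every number here is an illustrative assumption, not a measured figure.
GOOGLE_SEARCH_KWH = 0.0003      # assumed ~0.3 Wh for a conventional web search
CHATGPT_MULTIPLIER = 10         # the ~10x electricity gap cited from the IEA estimate
WATER_LITRES_PER_KWH = 1.8      # assumed cooling water per kWh; varies widely by data center


def water_per_query_litres(energy_kwh, litres_per_kwh=WATER_LITRES_PER_KWH):
    """Convert an energy estimate for one query into litres of cooling water."""
    return energy_kwh * litres_per_kwh


chatgpt_kwh = GOOGLE_SEARCH_KWH * CHATGPT_MULTIPLIER
print(f"Google search: ~{water_per_query_litres(GOOGLE_SEARCH_KWH) * 1000:.1f} ml of water")
print(f"ChatGPT query: ~{water_per_query_litres(chatgpt_kwh) * 1000:.1f} ml of water")

Under these assumed inputs the sketch prints roughly 0.5 ml of water for a search and 5.4 ml for a ChatGPT query; the point is the tenfold gap, not the absolute values, which depend entirely on the assumed cooling ratio.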
