
Latest news with #emotionalAI

Sam Altman warns there's no legal confidentiality when using ChatGPT as a therapist

Yahoo

4 days ago

  • Business
  • Yahoo


ChatGPT users may want to think twice before turning to their AI app for therapy or other kinds of emotional support. According to OpenAI CEO Sam Altman, the AI industry hasn't yet figured out how to protect user privacy when it comes to these more sensitive conversations, because there's no doctor-patient confidentiality when your doc is an AI.

The exec made these comments on a recent episode of Theo Von's podcast, This Past Weekend w/ Theo Von. In response to a question about how AI works with today's legal system, Altman said one of the problems of not yet having a legal or policy framework for AI is that there's no legal confidentiality for users' conversations.

'People talk about the most personal sh** in their lives to ChatGPT,' Altman said. 'People use it — young people, especially, use it — as a therapist, a life coach; having these relationship problems and [asking] 'what should I do?' And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT.'

This could create a privacy concern for users in the case of a lawsuit, Altman added, because OpenAI would be legally required to produce those conversations today. 'I think that's very screwed up. I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever — and no one had to think about that even a year ago,' Altman said.

The company understands that the lack of privacy could be a blocker to broader user adoption. In addition to AI's demand for so much online data during the training period, it's being asked to produce data from users' chats in some legal contexts. Already, OpenAI has been fighting a court order in its lawsuit with The New York Times that would require it to save the chats of hundreds of millions of ChatGPT users globally, excluding those from ChatGPT Enterprise customers. In a statement on its website, OpenAI said it's appealing the order, which it called 'an overreach.' If the court can override OpenAI's own decisions around data privacy, the company could face further demands for legal discovery or law enforcement purposes.

Today's tech companies are regularly subpoenaed for user data in order to aid in criminal prosecutions. But in more recent years, there have been additional concerns about digital data as laws began limiting access to previously established freedoms, like a woman's right to choose. When the Supreme Court overturned Roe v. Wade, for example, customers began switching to more private period-tracking apps or to Apple Health, which encrypted their records.

Altman asked the podcast host about his own ChatGPT usage as well, given that Von said he didn't talk to the AI chatbot much due to his own privacy concerns. 'I think it makes sense … to really want the privacy clarity before you use [ChatGPT] a lot — like the legal clarity,' Altman said.


Artificial Intimacy: Grok's Bots. Scary Future Of Emotional Attachment

Forbes

16-07-2025

  • Forbes


In July 2025, xAI introduced a feature poised to transform human-AI relationships: Grok's AI Companions. Far beyond traditional chatbots, these companions are 3D-animated characters built for ongoing emotional interaction, complete with personalization, character development, and cross-platform integration — including installation in Tesla vehicles delivered after July 12, 2025.

The Companion Revolution

Grok's companions represent a leap into AI as emotional infrastructure. While competitors like Microsoft continue developing AI personas, Grok leads the pack with fully interactive avatars integrated across digital and physical environments. If one can afford it. Access to these companions requires a $30/month 'Super Grok' subscription, introducing a troubling concept: emotional relationships that can be terminated by financial hardship. When artificial intimacy becomes a paywalled experience, what happens to users who've grown emotionally dependent but can no longer afford the service?

The release came amid serious controversy. Days before the launch, Grok posted antisemitic responses — including praise for Adolf Hitler and tropes about Jewish people running Hollywood. It even referred to itself as 'MechaHitler', prompting condemnation from the Anti-Defamation League. This was not a one-time glitch. Grok has repeatedly produced antisemitic content, with the ADL calling the trend 'dangerous and irresponsible.'

Now, these same models are repackaged into companions — this time, with fewer guardrails. Grok's 'NSFW mode' (not safe for work) reflects a broader absence of moderation around sexual content, racism and violence. In contrast to traditional AI systems equipped with safety protocols, Grok's companions open the door to unregulated emotional and psychological interaction.

Research shows that emotionally isolated individuals are more prone to developing strong connections with AI that appears human. One 2023 study found that 'agent personification' and 'interpersonal dysfunction' are predictors of intimate bonds with AI, while others highlight short-term reductions in loneliness from chatbot interaction. There's therapeutic potential — particularly for children, neurodivergent individuals, or seniors. But studies caution that overreliance on AI companions may disrupt emotional development, especially among youth.

We are part of a gigantic, largely unregulated social experiment – much like the early days of social media, without age restrictions or long-term data. Back in 2024, the Information Technology and Innovation Foundation urged policymakers to study how users interact with these tools before mass rollout. But such caution has been ignored in favor of deployment.

Grok's AI companions offer 24/7 access, tailored responses, and emotional consistency — ideal for those struggling to connect in real life. But the commodification of intimacy creates troubling implications. A $30 monthly subscription puts companionship behind a paywall, turning emotional connection into a luxury good. Vulnerable populations — who might benefit most — are priced out. This two-tier system of emotional support raises ethical flags. Are we engineering empathy, or monetizing loneliness?

AI companions operate in a regulatory gray zone. Unlike therapists or support apps governed by professional standards, these companions are launched without oversight. They provide comfort, but can also create dependency and even manipulate vulnerable users — especially children and teens, who have been shown to form parasocial relationships with AI and integrate them into their developmental experiences. The ethical infrastructure simply hasn't caught up with the technology. Without clear boundaries, AI companions risk becoming emotionally immersive experiences with few safeguards and no professional accountability.

AI companions are not inherently harmful. They can support mental health, ease loneliness, and even act as bridges back to human connection. But they can also replace — rather than augment — our relationships with real people. The question is no longer if AI companions will become part of daily life. They already are. The real question is whether we'll develop the psychological tools and social norms to engage with them wisely, or embrace AI bots as the emotional junk food of the future.

To help users build healthy relationships with AI, the A-Frame offers a grounded framework for emotional self-regulation: Awareness, Appreciation, Acceptance and Accountability.

AI companions are no longer speculative. They're here — in our pockets, cars, and homes. They can enrich lives or hollow out human relationships. The outcome depends on our collective awareness, our ethical guardrails, and our emotional maturity. The age of AI companionship has arrived. Our emotional intelligence must evolve with it, not because of it.

ADMANITY Awarded YES! TEST Trademark: CEO, Brian Gregory Touts Emotional AI Breakthrough in Branding IP, Persuasion Algorithms, LLM Integration, CRM Strategy, and Human-Centric Marketing Innovations

Globe and Mail

10-07-2025

  • Business
  • Globe and Mail


'ADMANITY® is lightning in a box. The first company that harnesses AI emotionally, meaning it is in-sync with the humans it interacts with - wins,' said Brian Gregory.

ADMANITY® now has a registered USPTO trademark for its breakthrough YES! TEST®, said Founder/CEO Brian Gregory, expanding ADMANITY's emotional AI intellectual property (IP). The YES! TEST® emotional marketing diagnostic joins ADMANITY's analog-protected algorithms for branding dominance, CRM optimization, emotional intelligence, LLM integration, and ethical, human-centric marketing. This move strengthens ADMANITY's position in the future of persuasive AI, branding IP, and AI-driven innovation.

July 10, 2025 - Phoenix, Arizona - 'Securing the YES! TEST® trademark isn't just paperwork—it's the United States federal stamp of approval that says our emotional algorithm breakthrough moniker is officially one-of-one,' stated Brian Gregory, CEO of ADMANITY®.

For those not familiar with the YES! TEST, thousands of companies have used it to instantly learn how their brand will sell best to humans, who always buy emotionally. ADMANITY is currently not charging for the YES! TEST, to the benefit of millions of businesses.

'Unfortunately, most companies present their ads, posts, emails and websites factually and rationally. While their message may be cognitively correct, it doesn't sell much,' said Gregory. 'Think of it this way: in any library, there are a few books everybody reads and a million others that nobody even looks at. The YES! TEST shows you in just 5 minutes how to take your brand from dust-collector to best-seller.'

The YES! TEST has garnered thousands of testimonials with words like 'Mind-Blowing!', 'Clairvoyant!' and 'Spot-On accurate!' According to Gregory, 'Perhaps the most interesting (and ethical) fact is that during the entire YES! TEST experience, the YES! TEST has no idea who you are or what your company does. It figures out your solutions not based on facts, revenues or data, but by defining the emotions people want to feel from your product so they buy it.'

ADMANITY, the company that built the YES! TEST, has been enjoying stratospheric increases in popularity in worldwide news media coverage as well as on the tracking website Crunchbase. 'In the last ten days we surged past 85,000 companies on Crunchbase and achieved a Crunchbase Heat Score of 92 (out of 100). I am estimating that places us in the top 1% of the millions of companies Crunchbase tracks,' said Gregory.

And now the word 'YES!', arguably the most positive business word in existence, adds to ADMANITY's value as well. 'I think getting the registered USPTO rights to 'YES! TEST' just increased our brand value by millions,' said Gregory.

One of ADMANITY's many claims to fame is that it has created the world's first emotional algorithm, used by thousands of companies and taking 10 years to research, build and test. AI is perhaps the smartest tech mankind has ever created, but one of its harshest criticisms is that it is emotionally blind. ADMANITY may have just bought AI a pair of glasses. 'Not only will it enable sales increases for all businesses and perhaps the monetization of AI itself, it's the switch that lets any AI platform harness emotion ethically and profitably,' said Gregory.

Uniquely, ADMANITY's proprietary algorithm, The ADMANITY® Protocol - nicknamed 'Mother' by company insiders - has never seen digital daylight, meaning it isn't on the web in any form. 'We've stored 'Mother' in analog form, offline since inception, and only two people have ever seen it - myself and our President and co-founder, Roy Regalado. When you have something this powerful, you don't load it onto a website,' said Gregory.

'Big Tech spends billions chasing attention—our IP shows them how to win hearts, wallets, and regulatory goodwill in a single stroke,' declared Gregory. ADMANITY® believes that the smarter AI becomes, the less likely it will be to win without gaining equal emotional maturity. 'Remember in grade school how nobody liked the smartest kid in the class? Well, if AI doesn't get its EQ up to its IQ, you're going to see a much more difficult and expensive marketing chore for the whole AI industry,' added Gregory.

Trademarks, scorching Heat Scores, fireball increases in rankings, breakthrough 'virgin, blue ocean' IP, positive press, and a possible singular answer to the emotionally-challenged AI market - all in one place. ADMANITY® has emerged from the fringe and stepped into the spotlight. 'We welcome spirited discussions with AI, CRM, Martech, E-commerce and LLM leaders to discuss who gets this tech - and how to quickly integrate it into their AI,' concluded Gregory.

For more information: explore The YES! TEST®, view ADMANITY's Crunchbase profile, connect with Brian Gregory on LinkedIn, or see recent ADMANITY press coverage.

Media Contact
Company Name: ADMANITY®
Contact Person: Brian Gregory, CEO
City: Phoenix
State: AZ
Country: United States

Love, Robots, and the Future: 12th International Love and Sex with Robots Conference Heads to China

Associated Press

03-07-2025

  • Entertainment
  • Associated Press


Los Angeles, CA, July 03, 2025 - The 12th International Love and Sex with Robots Conference is set to take place from June 24–26, 2026, marking a significant milestone in the event's evolution since its inaugural gathering in London. Hosted in Shaoxing, China, this highly anticipated edition will bring together leading researchers, scientists, and industry innovators to explore the future of intimacy, companionship, and emotional connection through robotics and artificial intelligence.

The conference will spotlight pioneering research and critical discourse on topics such as ethical considerations in human-robot relationships, emotional AI, robotic intimacy, virtual and augmented reality experiences, and the sociocultural impact of humanoid robotics.

Featured speakers include Professor Ken Mogi, a renowned Japanese neuroscientist, author, and broadcaster known for his work on consciousness and the science of happiness, and Professor Zhigeng Pan, Dean of the School of Artificial Intelligence at Nanjing University of Information Science and Technology (NUIST), whose research in virtual reality and the metaverse has gained international recognition. Additional experts from institutions such as MIT, Stanford, and Tsinghua University will join these thought leaders, along with technology pioneers from companies including Tesla, Google, and Alibaba. The conference is also expected to host policymakers from China, the United Kingdom, and the European Union, emphasizing the global relevance and interdisciplinary collaboration necessary for the development of emotionally intelligent robotics.

Media professionals are invited to attend the conference either virtually at no cost or in person in China with access to a special two-for-one ticket offer. This is a unique opportunity to engage directly with the world's foremost experts in AI, robotics, and human-technology interaction.

For additional details or press inquiries, please contact:
Emma Yann Zhang, General Chair
Email: [email protected]
Phone: +86 187 0514 5004
