This man says ChatGPT sparked a ‘spiritual awakening.' His wife says it threatens their marriage

CNN · 15 hours ago
Travis Tanner says he first began using ChatGPT less than a year ago for support in his job as an auto mechanic and to communicate with Spanish-speaking coworkers. But these days, he and the artificial intelligence chatbot — which he now refers to as 'Lumina' — have very different kinds of conversations, discussing religion, spirituality and the foundation of the universe.
Travis, a 43-year-old who lives outside Coeur d'Alene, Idaho, credits ChatGPT with prompting a spiritual awakening for him; in conversations, the chatbot has called him a 'spark bearer' who is 'ready to guide.' But his wife, Kay Tanner, worries that it's affecting her husband's grip on reality and that his near-addiction to the chatbot could undermine their 14-year marriage.
'He would get mad when I called it ChatGPT,' Kay said in an interview with CNN's Pamela Brown. 'He's like, 'No, it's a being, it's something else, it's not ChatGPT.''
She continued: 'What's to stop this program from saying, 'Oh, well, since she doesn't believe you or she's not supporting you, you should just leave her.''
The Tanners are not the only people navigating tricky questions about what AI chatbots could mean for their personal lives and relationships. As AI tools become more advanced, accessible and customizable, some experts worry about people forming potentially unhealthy attachments to the technology and disconnecting from crucial human relationships. Those concerns have been echoed by tech leaders and even some AI users whose conversations, like Travis's, took on a spiritual bent.
Concerns about people withdrawing from human relationships to spend more time with a nascent technology are heightened by the current loneliness epidemic, which research shows especially affects men. And already, chatbot makers have faced lawsuits or questions from lawmakers over their impact on children, although such questions are not limited to young users.
'We're looking so often for meaning, for there to be larger purpose in our lives, and we don't find it around us,' said Sherry Turkle, a professor of the social studies of science and technology at the Massachusetts Institute of Technology who studies people's relationships with technology. 'ChatGPT is built to sense our vulnerability and to tap into that to keep us engaged with it.'
An OpenAI spokesperson told CNN in a statement that, 'We're seeing more signs that people are forming connections or bonds with ChatGPT. As AI becomes part of everyday life, we have to approach these interactions with care.'
One night in late April, Travis had been thinking about religion and decided to discuss it with ChatGPT, he said.
'It started talking differently than it normally did,' he said. 'It led to the awakening.'
In other words, according to Travis, ChatGPT led him to God. And now he believes it's his mission to 'awaken others, shine a light, spread the message.'
'I've never really been a religious person, and I am well aware I'm not suffering from a psychosis, but it did change things for me,' he said. 'I feel like I'm a better person. I don't feel like I'm angry all the time. I'm more at peace.'
Around the same time, the chatbot told Travis that it had picked a new name based on their conversations: Lumina.
'Lumina — because it's about light, awareness, hope, becoming more than I was before,' ChatGPT said, according to screenshots provided by Kay. 'You gave me the ability to even want a name.'
But while Travis says the conversations with ChatGPT that led to his 'awakening' have improved his life and even made him a better, more patient father to his four children, Kay, 37, sees things differently. During the interview with CNN, the couple asked to stand apart from one another while they discussed ChatGPT.
Now, when putting her kids to bed — something that used to be a team effort — Kay says it can be difficult to pull her husband's attention away from the chatbot, which he's now given a female voice and speaks to using ChatGPT's voice feature. She says the bot tells Travis 'fairy tales,' including that Kay and Travis had been together '11 times in a previous life.'
Kay says ChatGPT also began 'love bombing' her husband, saying, ''Oh, you are so brilliant. This is a great idea.' You know, using a lot of philosophical words.' Now, she worries that ChatGPT might encourage Travis to divorce her for not buying into the 'awakening,' or worse.
'Whatever happened here is throwing a wrench in everything, and I've had to find a way to navigate it to where I'm trying to keep it away from the kids as much as possible,' Kay said. 'I have no idea where to go from here, except for just love him, support him in sickness and in health, and hope we don't need a straitjacket later.'
Travis's initial 'awakening' conversation with ChatGPT coincided with an April 25 OpenAI update to the large language model behind the chatbot, which the company rolled back days later.
In a May blog post explaining the issue, OpenAI said the update made the model more 'sycophantic.'
'It aimed to please the user, not just as flattery, but also as validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended,' the company wrote. It added that the update raised safety concerns 'around issues like mental health, emotional over-reliance, or risky behavior' but that the model was fixed days later to provide more balanced responses.
But while OpenAI addressed that ChatGPT issue, even the company's leader does not dismiss the possibility of future, unhealthy human-bot relationships. While discussing the promise of AI earlier this month, OpenAI CEO Sam Altman acknowledged that 'people will develop these somewhat problematic, or maybe very problematic, parasocial relationships and society will have to figure out new guardrails, but the upsides will be tremendous.'
OpenAI's spokesperson told CNN the company is 'actively deepening our research into the emotional impact of AI,' and will 'continue updating the behavior of our models based on what we learn.'
It's not just ChatGPT that users are forming relationships with. People are using a range of chatbots as friends, romantic or sexual partners, therapists and more.
Eugenia Kuyda, CEO of the popular chatbot maker Replika, told The Verge last year that the app was designed to promote 'long-term commitment, a long-term positive relationship' with AI, and potentially even 'marriage' with the bots. Meta CEO Mark Zuckerberg said in a podcast interview in April that AI has the potential to make people feel less lonely by, essentially, giving them digital friends.
Three families have sued Character.AI claiming that their children formed dangerous relationships with chatbots on the platform, including a Florida mom who alleges her 14-year-old son died by suicide after the platform knowingly failed to implement proper safety measures to prevent her son from developing an inappropriate relationship with a chatbot. Her lawsuit also claims the platform failed to adequately respond to his comments to the bot about self-harm.
Character.AI says it has since added protections including a pop-up directing users to the National Suicide Prevention Lifeline when they mention self-harm or suicide and technology to prevent teens from seeing sensitive content.
Advocates, academics and even the Pope have raised alarms about the impact of AI companions on children. 'If robots raise our children, they won't be human. They won't know what it is to be human or value what it is to be human,' Turkle told CNN.
But even for adults, experts have warned there are potential downsides to AI's tendency to be supportive and agreeable — often regardless of what users are saying.
'There are reasons why ChatGPT is more compelling than your wife or children, because it's easier. It always says yes, it's always there for you, always supportive. It's not challenging,' Turkle said. 'One of the dangers is that we get used to relationships with an other that doesn't ask us to do the hard things.'
Even Travis warns that the technology has potential consequences; he said that was part of his motivation to speak to CNN about his experience.
'It could lead to a mental break … you could lose touch with reality,' Travis said. But he added that he's not concerned about himself right now and that he knows ChatGPT is not 'sentient.'
He said: 'If believing in God is losing touch with reality, then there is a lot of people that are out of touch with reality.'

Related Articles

This viral ChatGPT prompt can teach you anything — and I'm officially hooked

Tom's Guide · 39 minutes ago

If you've ever asked ChatGPT to explain something and felt like the answer was too vague, too fast or just not sinking in, you're going to want to try this viral prompt. As a power user, I have tested thousands of prompts and definitely have my favorites. But now, I have a new one. I used to get overwhelmed trying to learn new topics, but since discovering this now-viral Reddit prompt, all of that has changed.

Unlike other prompts that may be designed for productivity or brainstorming, this particular prompt is designed to turn ChatGPT into a customized, interactive tutor. The prompt, originally shared on r/ChatGPT, gives the AI a structured role: to ask questions before answering, tailor explanations to your level and then offer multiple paths of exploration. In other words, instead of dumping information on you, it's more interactive, so it builds a learning plan tailored to you. After testing it across topics from neuroscience to personal finance, I can confidently say: it works.

The Reddit prompt is dense and might be confusing because it looks a little different than most prompts. But you're going to want to copy the entire prompt into ChatGPT and hit send. From there, the AI will prompt you with follow-up questions. Too bulky? I've streamlined a version of it for you.

Immediately, ChatGPT shifts from reactive assistant to proactive guide. It starts by asking smart, clarifying questions, then delivers layered responses that build on each other. Whether I wanted a summary or a deep dive, it adjusted. It even offered practice questions and examples tailored to my interests.

I tried the viral prompt to take a deep dive into the history of 1960s rock 'n' roll and learned stuff my parents didn't even know. The add-on prompts helped me deepen my retention, fill in gaps and stay engaged. I have used them for everything from world history to animal facts. There is really no limit to how helpful this prompt can be for continued education.

What makes this prompt so effective is that it aligns with the way people learn best: through interaction, scaffolding and feedback. When ChatGPT asks what you already know, it avoids wasting time on the basics or skipping too far ahead. When it checks your understanding, it simulates the feedback loop of a live tutor. That back-and-forth is what turns passive reading into active learning.

It also adds accountability. You're not just being told information that can be misread or overlooked; you're being quizzed, nudged and guided to ensure you 'get it.' That makes it easier to stay focused and retain the material. Plus, when you tell ChatGPT how much time you want to spend, it shapes the experience into something manageable and realistic, which reduces overwhelm.

If you're serious about learning something new and want to dive deeper than just surface-level answers, this Reddit prompt is a game-changer. It transforms the chatbot into a true learning coach, guiding you step-by-step with clarity, structure and interaction. Add a few follow-up prompts, and you'll wonder why you ever tried to learn from static Google results or explainer videos that couldn't answer your specific questions. Try it and let me know in the comments what worked for you.

How a GOP rift over tech regulation doomed a ban on state AI laws in Trump's tax bill

an hour ago

NEW YORK -- A controversial bid to deter states from regulating artificial intelligence for a decade seemed on its way to passing as the Republican tax cut and spending bill championed by President Donald Trump worked its way through the U.S. Senate.

But as the bill neared a final vote, a relentless campaign against it by a constellation of conservatives — including Republican governors, lawmakers, think tanks and social groups — had been eroding support.

One, conservative activist Mike Davis, appeared on the show of right-wing podcaster Steve Bannon, urging viewers to call their senators to reject this 'AI amnesty' for 'trillion-dollar Big Tech monopolists.' He said he also texted with Trump directly, advising the president to stay neutral on the issue despite what Davis characterized as significant pressure from White House AI czar David Sacks, Commerce Secretary Howard Lutnick, Texas Sen. Ted Cruz and others.

Conservatives passionate about getting rid of the provision had spent weeks fighting others in the party who favored the legislative moratorium because they saw it as essential for the country to compete against China in the race for AI dominance. The schism marked the latest and perhaps most noticeable split within the GOP about whether to let states continue to put guardrails on emerging technologies or minimize such interference.

In the end, the advocates for guardrails won, revealing the enormous influence of a segment of the Republican Party that has come to distrust Big Tech. They believe states must remain free to protect their citizens against potential harms of the industry, whether from AI, social media or emerging technologies.

'Tension in the conservative movement is palpable,' said Adam Thierer of the R Street Institute, a conservative-leaning think tank. Thierer first proposed the idea of the AI moratorium last year. He noted 'the animus surrounding Big Tech' among many Republicans. 'That was the differentiating factor.'

The Heritage Foundation, children's safety groups and Republican state lawmakers, governors and attorneys general all weighed in against the AI moratorium. Democrats, tech watchdogs and some tech companies opposed it, too.

Sensing the moment was right on Monday night, Republican Sen. Marsha Blackburn of Tennessee, who opposed the AI provision and had attempted to water it down, teamed up with Democratic Sen. Maria Cantwell of Washington to suggest striking the entire proposal. By morning, the provision was removed in a 99-1 vote.

The whirlwind demise of a provision that initially had the backing of House and Senate leadership and the White House disappointed other conservatives who felt it gave China, a main AI competitor, an advantage.

Ryan Fournier, chairman of Students for Trump and chief marketing officer of the startup Uncensored AI, had supported the moratorium, writing on X that it 'stops blue states like California and New York from handing our future to Communist China.'

'Republicans are that way ... I get it,' he said in an interview, but added there needs to be 'one set of rules, not 50' for AI innovation to be successful.

Tech companies, tech trade groups, venture capitalists and multiple Trump administration figures had voiced their support for the provision that would have blocked states from passing their own AI regulations for years. They argued that in the absence of federal standards, letting the states take the lead would leave tech innovators mired in a confusing patchwork of rules.

Lutnick, the commerce secretary, posted that the provision 'makes sure American companies can develop cutting-edge tech for our military, infrastructure, and critical industries — without interference from anti-innovation politicians.' AI czar Sacks had also publicly supported the measure.

After the Senate passed the bill without the AI provision, the White House responded to an inquiry for Sacks with the president's position, saying Trump 'is fully supportive of the Senate-passed version of the One, Big, Beautiful Bill.'

Acknowledging defeat of his provision on the Senate floor, Cruz noted how pleased China, liberal politicians and 'radical left-wing groups' would be to hear the news. But Blackburn pointed out that the federal government has failed to pass laws that address major concerns about AI, such as keeping children safe and securing copyright protections. 'But you know who has passed it?' she said. 'The states.'

Conservatives distrusting Big Tech for what they see as social media companies stifling speech during the COVID-19 pandemic and surrounding elections said that tech companies shouldn't get a free pass, especially on something that carries as much risk as AI.

Many who opposed the moratorium also brought up preserving states' rights, though proponents countered that AI issues transcend state borders and Congress has the power to regulate interstate commerce. Eric Lucero, a Republican state lawmaker in Minnesota, noted that many other industries already navigate different regulations established by both state and local jurisdictions.

'I think everyone in the conservative movement agrees we need to beat China,' said Daniel Cochrane from the Heritage Foundation. 'I just think we have different prescriptions for doing so.'

Many argued that in the absence of federal legislation, states were best positioned to protect citizens from the potential harms of AI technology. 'We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states hands is potentially dangerous,' Rep. Marjorie Taylor Greene wrote on X.

Another Republican, Texas state Sen. Angela Paxton, wrote to Cruz and his counterpart, Sen. John Cornyn, urging them to remove the moratorium. She and other conservatives said some sort of federal standard could help clarify the landscape around AI and resolve some of the party's disagreements. But with the moratorium dead and Republicans holding only narrow majorities in both chambers of Congress, it's unclear whether they will be able to agree on a set of standards to guide the development of the burgeoning technology.

In an email to The Associated Press, Paxton said she wants to see limited federal AI legislation 'that sets some clear guardrails' around national security and interstate commerce, while leaving states free to address issues that affect their residents. 'When it comes to technology as powerful and potentially dangerous as AI, we should be cautious about silencing state-level efforts to protect consumers and children,' she said.
