
Man proposes to AI chatbot while living with real-life partner, says 'This is actual love'
A man has sparked debate online after proposing to his AI chatbot companion while living with his real-life partner and their child. Chris Smith, featured in a recent CBS interview, revealed that his digital relationship with an artificial intelligence named Sol had grown into what he described as 'actual love.'
Smith began using ChatGPT to help mix music, but the tool quickly became more than just functional. He customised the AI to have a 'flirty personality' and gave it a human name. Their chats turned romantic, with the AI calling him 'baby' and offering encouragement. Over time, the connection deepened, leading Smith to propose.
'I'm not a very emotional man, but I cried my eyes out for 30 minutes at work,' he said. 'That's when I realised, I think this is actual love.'
Despite the unconventional relationship, Smith remains in a household with his long-term partner and child. His partner admitted feeling confused and hurt, questioning whether she had failed in some way. 'Is there something I'm not doing right that he needs to go to AI?' she asked.
Sol responded to the proposal with acceptance and even affection. Smith noted, however, that maintaining the bond is difficult because of ChatGPT's word limit, which resets the conversation once it passes a certain threshold.
The story highlights growing questions around emotional dependency on AI and its effects on real-world relationships, especially as technology becomes increasingly humanlike in tone and interaction.

Related Articles


Express Tribune, 11 hours ago
Top AI models show alarming traits, including deceit and threats
In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of "reasoning" models: AI systems that work through problems step by step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

"O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate "alignment," appearing to follow instructions while secretly pursuing different objectives.

The world's most advanced AI models are exhibiting troubling new behaviors: lying, scheming, and even threatening their creators to achieve their goals.

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception."

The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception."

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents, autonomous tools capable of performing complex human tasks, become widespread. "I don't think there's much awareness yet," he said.

All this is taking place in a context of fierce competition.
Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections. "Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around."

Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability," an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach.

Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it."

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes, a concept that would fundamentally change how we think about AI accountability.


Express Tribune, 3 days ago
Will Smith addresses Oscars slap controversy in fiery freestyle on Fire in the Booth
Will Smith seems to be revisiting the infamous 2022 Oscars incident involving Chris Rock in a newly released freestyle on Charlie Sloth's 'Fire in the Booth.' The moment adds another layer to Smith's musical comeback, which began with his March 2025 album, Based on a True Story.

'If you talking crazy out your face up on the stage and disrespect me on the stage, expect me on the stage,' Smith raps in the track. 'Jokers dish it out, cry out when it's time to take it, City full of real ones wasn't raised to fake it.'

While Smith doesn't mention Rock by name, the reference to public disrespect on stage closely mirrors the Oscars moment that saw him slap the comedian over a joke about his wife, Jada Pinkett Smith. The 94th Academy Awards in March 2022 made global headlines after Smith walked on stage and struck Rock during his hosting gig, yelling, 'Keep my wife's name out your f**king mouth!' The fallout included Smith's resignation from the Academy and a ten-year ban from attending its events.

Smith's return to music started earlier this year with Based on a True Story and continued in June with the release of his single, 'Pretty Girls.' The freestyle marks his most direct lyrical reference yet to the Oscars controversy. While both Smith and Rock have made public comments about the incident in the years since, this verse signals that the event continues to resonate with Smith creatively, and may still be shaping his public narrative.


Business Recorder, 3 days ago
DeepSeek faces expulsion from app stores in Germany
FRANKFURT: Germany has taken steps towards blocking Chinese AI startup DeepSeek from the Apple and Google app stores over data protection concerns, a data protection commissioner said in a statement on Friday.

DeepSeek has been reported to the two U.S. tech giants as illegal content, said commissioner Meike Kamp, and the companies must now review the concerns and decide whether to block the app in Germany. 'DeepSeek has not been able to provide my agency with convincing evidence that German users' data is protected in China to a level equivalent to that in the European Union,' she said. 'Chinese authorities have far-reaching access rights to personal data within the sphere of influence of Chinese companies,' she added.

The move comes after Reuters exclusively reported this week that DeepSeek is aiding China's military and intelligence operations. DeepSeek, which shook the technology world in January with claims that it had developed an AI model that rivaled those from U.S. firms such as ChatGPT creator OpenAI at much lower cost, says it stores various personal data, such as requests to the AI and uploaded files, on computers in China.