
More people are considering AI lovers, and we shouldn't judge
A recent survey found that 19% of Americans have interacted with an AI meant to simulate a romantic partner.
The response has been polarizing. In a New Yorker article titled "Your A.I. Lover Will Change You," futurist Jaron Lanier argued that "when it comes to what will happen when people routinely fall in love with an A.I., I suggest we adopt a pessimistic estimate about the likelihood of human degradation."
Podcaster Joe Rogan put it more succinctly -- in a recent interview with Sen. Bernie Sanders, the two discussed the "dystopian" prospect of people marrying their AIs. Noting a case where this has already happened, Rogan said: "I'm like, oh, we're done. We're cooked."
We're probably not cooked. Rather, we should consider accepting human-AI relationships as beneficial and healthy. More and more people are going to form such relationships in the coming years, and my research in sexuality and technology indicates it is mostly going to be fine.
Across the breathless media coverage, the main concern raised is that chatbots will spoil us for human connection. How could we not prefer their cheerful personalities, their uncomplicated affection and their willingness to affirm everything we say?
The fear is that, seduced by such easy companionship, many people will surely give up their desire to find human partners, while others will lose their ability to form satisfying human relationships even if they want to.
It has been less than three years since the launch of ChatGPT and other chatbots based on large language models. That means we can only speculate about the long-term effects of AI-human relationships on our capacity for intimacy. There is little data to support either side of the debate, though we can do our best to make sense of more short-term studies and other pieces of available evidence.
There are certain risks that we do know about already, and we should take them seriously. For instance, we know that AI companion apps have terrible privacy policies. Chatbots can encourage destructive behaviors. Tragically, one may have played a role in a teenager's suicide.
The companies that provide these apps can go out of business, or they can change their terms of service without warning. This can suddenly deprive users of access to technology they've become emotionally attached to, with no recourse or support.
Complex relationships
In assessing the dangers of relationships with AI, however, we should remember that human relationships are not exactly risk-free. One recent paper concluded that "the association between relationship distress and various forms of psychopathology is as strong as many other well-known predictors of mental illness."
This is not to say we should swap human companions for AI ones. We just need to keep in mind that relationships can be messy, and we are always trying to balance the various challenges that come with them. AI relationships are no different.
We should also remember that just because someone forms an intimate bond with a chatbot, that doesn't mean it will be their only close relationship. Most people have many different people in their lives, each playing a variety of roles.
Chatbot users may depend on their AI companions for support and affirmation, while still having relationships with humans that provide different kinds of challenges and rewards.
Meta's Mark Zuckerberg has suggested that AI companions may help solve the problem of loneliness. However, there is some (admittedly very preliminary) data to suggest that many of the people who form connections with chatbots are not just trying to escape loneliness.
In a recent study (which has not yet been peer reviewed), researchers found that feelings of loneliness did not play a measurable role in someone's desire to form a relationship with an AI. Instead, the key predictor seemed to be a desire to explore romantic fantasies in a safe environment.
Support and safety
We should be willing to accept AI-human relationships without judging the people who form them. This follows a general moral principle that most of us already accept: we should respect the choices people make about their intimate lives when those choices don't harm anyone else.
However, we can also take steps to ensure that these relationships are as safe and satisfying as possible.
First of all, governments should implement regulations to address the risks we know about already. They should, for instance, hold companies accountable when their chatbots suggest or encourage harmful behavior.
Governments should also consider safeguards to restrict access by younger users, or at least to control the behavior of chatbots that interact with young people. And they should mandate better privacy protections -- though this is a problem that spans the entire tech industry.
Second, we need public education so people understand exactly what these chatbots are and the issues that can arise with their use. Everyone would benefit from full information about the nature of AI companions, but in particular we should develop curricula for schools as soon as possible.
While governments may need to consider some form of age restriction, the reality is that large numbers of young people are already using this technology, and will continue to do so. We should offer them non-judgmental resources to help them navigate their use in a manner that supports their well-being, rather than stigmatizes their choices.
AI lovers aren't going to replace human ones. For all the messiness and agony of human relationships, we still (for some reason) pursue other people. But people will also keep experimenting with chatbot romances, if for no other reason than they can be a lot of fun.
Neil McArthur is director of the Center for Professional and Applied Ethics at the University of Manitoba. This article is republished from The Conversation under a Creative Commons license. Read the original article. The views and opinions in this commentary are solely those of the author.
