
'Hey man, I'm so sorry for your loss': should you use AI to text?
When a friend's mother died, Nik Vassev turned to the AI chatbot Claude. 'My friend's mom passed away and I'm trying to find the right way to be there for him and send him a message of support like a good friend,' he typed.
Vassev mostly uses AI to answer work emails, but also for personal communications. 'I just wanted to just get a second opinion about how to approach that situation,' he says. 'As guys, sometimes we have trouble expressing our emotions.'
Claude helped Vassev craft a note: 'Hey man, I'm so sorry for your loss. Sending you and your family lots of love and support during this difficult time. I'm here for you if you need anything …' it read.
Thanks to the message, Vassev's friend opened up about their grief. But Vassev never revealed that AI was involved. People 'devalue' writing that is AI-assisted, he acknowledges. 'It can rub people the wrong way.'
Vassev learned this lesson because a friend once called him out for relying heavily on AI during an argument: 'Nik, I want to hear your voice, not what ChatGPT has to say.' That experience left Vassev chastened. Since then, he's been trying to be more sparing and subtle, 'thinking for myself and having AI assist', he says.
Since ChatGPT's release in late 2022, AI adoption has exploded in professional contexts, where it's used as a productivity-boosting tool, and among students, who increasingly use chatbots to cheat.
Yet AI is becoming the invisible infrastructure of personal communications, too – punching up text messages, birthday cards and obituaries, even though we associate such compositions with 'from the heart' authenticity.
Disclosing the role of AI could defeat the purpose of these writings, which is to build trust and express care. Nonetheless, one person anonymously told me that he used ChatGPT while writing his father-of-the-bride speech; another wished OpenAI had been around when he wrote his vows because it would have 'saved [him] a lot of time'. Online, a Redditor shared that they used ChatGPT to write their mom's birthday card: 'She not only cried, she keeps it on her side table and reads [it] over and over, every day since I gave it to her,' they wrote. 'I can never tell her.'
Research about transparency and AI use mostly focuses on professional settings, where 40% of US workers use the tools. However, a recent study from the University of Arizona concluded that 'AI disclosure can harm social perceptions' of the disclosers at work, and similar findings apply to personal relationships.
In one 2023 study, 208 adults received a 'thoughtful' note from a friend; those who were told the note was written with AI felt less satisfied and 'more uncertain about where they stand' with the friend, according to Bingjie Liu, the lead author of the study and an assistant professor of communication at Ohio State University.
On subreddits such as r/AmIOverreacting or r/Relationship_advice, it's easy to find users expressing distress upon discovering, say, that their husband used ChatGPT to write their wedding vows. ('To me, these words are some of the most important that we will ever say to each other. I feel so sad knowing that they weren't even his own.')
AI-assisted personal messages can convey that the sender didn't want to bother with sincerity, says Dr Vanessa Urch Druskat, a social and organizational psychologist and professor specializing in emotional intelligence. 'If I heard that you were sending me an email and making it sound more empathetic than you really were, I wouldn't let it go,' she says.
'There's a baseline expectation that our personal communications are authentic,' says Druskat. 'We're wired to pick up on inauthenticity, disrespect – it feels terrible.'
But not everyone draws the same line when it comes to how much AI involvement is tolerable or what constitutes deceit by omission. Curious, I conducted an informal social media poll among my friends: if I used AI to write their whole birthday card, how would they feel? About two-thirds said they would be 'upset'; the rest said it would be fine. But if I had used AI only in a supplementary role – say, some editing to hit the right tone – the results were closer to 50-50.
Using AI in personal messages is a double gamble: first, that the recipient won't notice, and second, that they won't mind. Still, there are arguments for why taking the risk is worthwhile, and why a hint of AI in a Hinge message might not be so bad. For instance, AI can be helpful for bridging communication gaps rooted in cultural, linguistic or other forms of diversity.
Plus, personal messages have never been totally spontaneous and original. People routinely seek advice from friends, therapists or strangers about disagreements, delicate conversations or important notes. Greeting cards have long come with pre-written sentiments (although Mother's Day founder Anna Jarvis once scolded that printed cards were 'lazy').
Sara Jane Ho, an etiquette expert, says she has used ChatGPT 'in situations where I've been like: "Change this copy to make it more heartfelt." And it's great copy.'
Ho argues that using ChatGPT to craft a personal message actually shows 'a level of consideration'.
Expressing sensitivity helps build relationships, and it makes sense that people who struggle with words would appreciate assistance. Calculators are standard digital tools; why not chatbots? 'I always say that the spirit of etiquette is about putting others at ease,' she says. 'If the end result is something that is nice for the other person and that shows respect or consideration or care, then they don't need to see how the sausage is made.'
I asked Ho what she would say to a person upset by an AI-assisted note. 'I'd ask them: "Why are you so easily offended?"' Ho says.
Plus, she says using AI is convenient and fast. 'Why would you make yourself walk someplace if you have a car?' she asks.
Increasingly, people drift through digitized lives that reject 'the very notion that engagement should require effort', perceiving less value in character building and in experiences like 'working hard' and 'learning well', author and educator Kyla Scanlon argued in an essay last month. This bias toward effortlessness casts the emotional work of relationships as burdensome, even though that work is what creates intimacy.
'People have sort of conditioned themselves to want a completely seamless and frictionless experience in their everyday lives 100% of the time,' says Josh Lora, a writer and sociologist who has written about AI and loneliness. 'There are people who Uber everywhere, who Seamless everything, who Amazon everything, and render their lives completely smooth.'
Amid this convenience-maxxing, Lora says, AI figures as an efficient way out of relational labor: the small mistakes, tensions and inadequacies of communication.
We use language to be understood and to co-create a sense of self. 'So much of our experience as people is rendered in the struggle to make meaning, to self-actualize, to explain yourself to another person,' Lora says.
But when we outsource that labor to a chatbot, we lose out on developing self-expression, nuanced social skills, and emotional intelligence. We also lose out on the feelings of interpersonal gratitude that arise from taking the time to write kindly to our loved ones, as one 2023 study from the University of California, Riverside, found.
Many people already approach life as a series of objectives: get good grades, get a job, earn money, get married. In that mindset, a relationship can feel like something to manage effectively rather than a space of mutual recognition. What happens if it stops feeling worth the effort?
Summer (who requested a pseudonym for privacy), a 30-year-old university tutor, says she became best friends with Natasha (also a pseudonym) while the two pursued their doctoral degrees. They lived four hours apart, and much of their relationship unfolded in long text message exchanges, debating ideas or analyzing people they knew.
About a year ago, Natasha began to use ChatGPT to help with work tasks. Summer says Natasha quickly seemed deeply enamoured with AI's speed and fluency. (Researchers have warned the technology can be addictive, to the detriment of human social engagement.) Soon, subtle changes in tone and content led Summer to suspect Natasha was using AI in their personal messages. (Natasha did not respond to a request for comment.)
After six years of lively intellectual curiosity, their communication dwindled. Occasionally, Natasha asked Summer for her opinion on something, then disappeared for days. Summer felt like she was the third party to a deep conversation happening between her best friend and a machine. 'I'd engage with her as a friend, a whole human being, and she'd engage with me as an obstacle to this meaning-making machine of hers,' Summer tells me.
Summer finally called Natasha to discuss how AI use was affecting their friendship. She felt Natasha was exchanging the messy imperfections of rambling debate for an emotionally bankrupt facsimile of ultra-efficient communication. Natasha didn't deny using chatbots, and 'seemed to always have a reason' for continuing despite Summer's moral and intellectual qualms.
Summer 'felt betrayed' that a close friend had used AI as 'an auxiliary' to talk to her. 'She couldn't find the inherent meaning in us having an exchange as people,' she says. To her, adding AI into relationships 'presupposes inadequacy' in them, and offers a sterile alternative: always saying the right thing, back and forth, frictionless forever.
The two women are no longer friends.
'What you're giving away when you engage in too much convenience is your humanity, and it's creepy to me,' Summer says.
Dr Mathieu Corteel is a philosopher and the author of a book, available only in French, that grapples with the implications of AI as a game we have all entered without 'knowing the rules'.
Corteel is not anti-AI, but believes that overreliance on it alienates us from our own judgment, and by extension, humanity – 'which is why I consider it as one of the most important philosophical problems we are facing right now', he says.
If a couple, for example, expressed love through AI-generated poems, they would be skipping crucial steps of meaning-making, producing 'a combination of symbols' devoid of meaning, he says. You can interpret meaning retrospectively, reading intent into an AI's output, 'but that's just an effect', he says.
'AI is unable to give meaning to something because it's outside of the semantics produced by human beings, by human culture, by human interrelation, the social world,' says Corteel.
If AI can churn out convincingly heartfelt words, perhaps even our most intimate expressions have always been less special than we had hoped. Or, as the tech theorist Bogna Konior recently wrote: 'What chatbots ultimately teach us is that language ain't all that.'
Corteel agrees that language is inherently flawed; we can never fully express our feelings, only try. But that gap between feeling and expression is where love and meaning live. The very act of striving to shrink that distance helps define those thoughts and feelings. AI, by contrast, offers a slick way to bypass that effort. Without the time it takes to reflect on our relationships, the struggle to find words, the practice of communicating, what are we exchanging?
'We want to finish quickly with everything,' says Corteel. 'We want to just write a prompt and have it done. And there's something that we are losing – it's the process. And in the process, there's many important aspects. It is the co-construction of ourselves with our activities,' he says. 'We are forgetting the importance of the exercise.'