'Like junk food': Hinge CEO calls out Mark Zuckerberg's vision for AI friends

Hinge CEO Justin McLeod said he doesn't believe artificial intelligence is the cure for the loneliness plaguing young people.
McLeod had much to say about AI's role in dating during a recent interview with Nilay Patel on the "Decoder" podcast. The Hinge founder, who's been skeptical about using AI for dating, is adamant that it can't fully replace human connections.
His stance contrasts with that of another leader who built a business on online social interactions: Mark Zuckerberg. The Meta CEO recently said in an interview with podcast host Dwarkesh Patel that "the average person has demand for meaningfully more" friends, and suggested AI could fill that demand.
Pointing to Zuckerberg's remarks, McLeod said on the podcast, published Monday, that he disagrees with the sentiment that "AI chatbots can become your friend."
McLeod said that "extraordinarily reductive view" of friendship misses the point of what building relationships is all about.
"The most rewarding parts of being in a friendship are being able to be there for someone else, to risk and be vulnerable, to share experiences with other conscious entities," McLeod said.
While an AI friend, unlike a human one, might say all the right things and always be available, the relationship likely won't feel good in the long run, he said.
"It ultimately, just like junk food, will make people feel over time, like less healthy, more drained, and will displace human relationships that they should be out cultivating in the real world," he said.
Hinge won't be getting virtual romantic partners powered by AI, McLeod said.
Meanwhile, Zuckerberg is going all in on AI on the social platforms he runs. The tech giant launched Meta AI as a stand-alone app in April. It featured AI assistant tools and a scrollable feed where creators can share AI-generated images of themselves. Zuckerberg told Dwarkesh that it's still "very early" in the field responsible for AI girlfriends and therapists that can behave and look like humans.
Not all of Meta's AI efforts have been wins. It rolled out AI assistants that featured the likenesses of celebrities like Kendall Jenner and posted AI-generated content, then shut down the celebrity accounts in 2024, after less than a year. Still, on the "This Past Weekend" podcast in April, Zuckerberg said that AI "probably" won't replace real-life connections.
"There are all these things that are better about physical connections when you can have them," Zuckerberg told host Theo Von. "But the reality is that people just don't have the connections, and they feel more alone a lot of the time than they would like."
McLeod said the idea that AI could solve loneliness and create an "emotional connection" is dangerous.
"That, I think, is really playing with fire," he told Patel. The loneliness epidemic, as he called it, is exacerbated by screens and the internet, resulting in "mental health issues."
Meta did not respond to a request for comment from Business Insider.
Where AI meets Hinge
Despite his stance on chatbots mimicking emotions, McLeod said there are useful ways to incorporate AI into Hinge's technology.
He sees two main areas where AI can improve the dating experience.
"It's going to move much closer to the experience of working with a personal matchmaking service," he said of one approach. That could allow users to speak more directly to Hinge about what they're looking for in a partner to build a curated list of their most compatible matches.
He also sees the potential for an AI dating coach to help people get over hurdles, like preparing for a first date or crafting their dating profiles. For example, Hinge has a trained model that gives feedback on users' answers to prompts displayed on their profiles, he said.
"We can give people those nudges so they write good prompts, so that they choose good photos," McLeod said.

Related Articles

Copyrighted Books Are Fair Use For AI Training. Here's What To Know.

Forbes

Generative AI systems have suddenly become part of our daily lives, prompting many to question the legality of how those systems are created and used. One question relevant to my practice: Does the ingestion of copyrighted works such as books, articles, photographs, and art to train an AI system render the system's creators liable for copyright infringement, or is that ingestion defensible as a 'fair use'? Two recent court rulings answer this novel question, and the answer is: Yes, the use of copyrighted works for AI training is a fair use – at least under the specific facts of those cases and the evidence presented by the parties. But because the judges in both cases were somewhat expansive in their dicta about how their decisions might have been different, they provide a helpful roadmap as to how other lawsuits might be decided, and how a future AI system might be designed so as not to infringe copyright. The rulings in the Anthropic and Meta cases deserve some attention. Let's take a closer look.

More than 30 lawsuits have been filed in the past year or two, in all parts of the nation, by authors, news publishers, artists, photographers, musicians, record companies and other creators against various AI systems, asserting that using the creators' respective copyrighted works for AI training purposes violates their copyrights. The systems' owners invariably assert fair use as a defense.

The Anthropic Case

The first decision, issued in June, involved a lawsuit by three book authors, who alleged that Anthropic PBC infringed the authors' copyrights by copying several of their books (among millions of others) to train its text generative AI system called Claude.
Anthropic's defense was fair use. Judge Alsup, sitting in the Northern District of California, held that the use of the books for training purposes was a fair use, and that the conversion of print books that Anthropic had purchased into digital copies was also a fair use. However, Anthropic's use of pirated digital copies to create a central library of 'all the books in the world' for uses beyond training Claude was not a fair use. Whether Anthropic copied its central library copies for purposes other than AI training (there was apparently some evidence that this was going on, but on a poorly developed record) was left for another day.

It appears that Anthropic decided early in designing Claude that books were the most valuable training materials for a system meant to 'think' and write like a human. Books provide patterns of speech, prose and proper grammar, among other things. Anthropic chose to download millions of free digital copies of books from pirate sites. It also purchased millions of print copies of books from booksellers, converted them to digital copies and threw the print copies away, resulting in a massive central library of 'all the books in the world' that Anthropic planned to keep 'forever.' None of this activity was done with the authors' permission.

Significantly, Claude was designed so that it would not reproduce any of the plaintiffs' books as output. The plaintiffs neither asserted nor offered evidence that it did so. The assertions of copyright infringement were therefore limited to Claude's ingestion of the books for training, to build the central library, and for the unidentified non-training purposes.

Users of Claude ask it questions and it returns text-based answers. Many users use it for free; certain corporate and other users pay, generating over one billion dollars annually in revenue for Anthropic.
The Anthropic Ruling

Both decisions came from the federal district court in Northern California, the situs of Silicon Valley. To summarize the legal analysis, Judge Alsup evaluated each 'use' of the books separately, as required by the Supreme Court's 2023 Warhol v. Goldsmith fair use decision. Turning first to the use of the books as training data, Alsup found that the use of the books to train Claude was a 'quintessentially' transformative use that did not supplant the market for the plaintiffs' books, and as such qualified as fair use.

He further found that the conversion of the purchased print books to digital files, where the print copies were thrown away, was also a transformative use, akin to the Supreme Court's 1984 Betamax decision, which held that home recording of free TV programming for time-shifting purposes was a fair use. Here, Judge Alsup reasoned, Anthropic lawfully purchased the books and was merely format-shifting for space and search capability purposes; since the original print copy was discarded, only one copy remained (unlike the now-defunct ReDigi platform of 2018).

By contrast, the downloading of more than seven million pirated copies from pirate sites, which was illegal at the outset, for central library uses other than training could not be held a fair use as a matter of law, because the central library use was unjustified and the pirated copies could supplant the market for the originals.

Anthropic Is Liable For Unfair Uses – The Cost of Doing Business?

The case will continue on the issue of damages for the pirated copies of the plaintiffs' books used for central library purposes and not for training purposes. The court noted that Anthropic's later purchase of copies of the plaintiffs' books to replace the pirated ones will not absolve it of liability, but might affect the amount of statutory damages it has to pay.
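For scale, the arithmetic behind that exposure is easy to sketch. The figures below are the ones cited in this article ($750 to $150,000 per copy, roughly seven million pirated copies); note that U.S. statutory damages are actually assessed per infringed work, and any award would depend on findings such as willfulness, so this is an illustration, not a prediction:

```python
# Back-of-the-envelope statutory damages exposure, using the per-copy
# framing and copy count cited in this article (a simplification: the
# statute assesses damages per infringed work, not per copy).
MIN_PER_COPY = 750            # USD, statutory minimum cited
MAX_PER_COPY = 150_000        # USD, statutory maximum cited (willful infringement)
COPIES = 7_000_000            # approximate pirated copies in the central library

low = MIN_PER_COPY * COPIES   # $5.25 billion
high = MAX_PER_COPY * COPIES  # $1.05 trillion

print(f"Minimum exposure: ${low:,}")   # Minimum exposure: $5,250,000,000
print(f"Maximum exposure: ${high:,}")  # Maximum exposure: $1,050,000,000,000
```

Even the statutory floor, applied class-wide, would dwarf Claude's reported annual revenue, which is what gives the "cost of doing business" question below its bite.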
The statutory damages range is $750 per copy at a minimum and up to $150,000 per copy at a maximum. One is tempted to wonder about all those other millions of copyright owners beyond the three plaintiffs: might Anthropic have to pay statutory damages for seven million copies if the pending class action is certified? Given the lucrativeness of Claude, could that be just a cost of doing AI business?

The Meta Case

Meta's decision to use shadow libraries to source books was approved by CEO Mark Zuckerberg. The second decision, issued on June 25, two days after the Anthropic decision, involves thirteen book authors, most of them famous non-fiction writers, who sued Meta, the creator of a generative AI model called Llama, for using the plaintiffs' books as training data. Llama, like Claude, is free to download but generates billions of dollars for Meta.

Like Anthropic, Meta initially looked into licensing rights from book publishers, but eventually abandoned those efforts and instead downloaded the books it wanted from pirate sites called 'shadow libraries,' which were not authorized by the copyright owners to store their works. Also like Claude, Llama was designed not to produce output reproducing its source material in whole or substantial part, the record indicating that Llama could not be prompted to reproduce more than 50 words from the plaintiffs' books.

Judge Chhabria, also in the Northern District of California, held that Meta's use of the plaintiffs' works to train Llama was a fair use, but he did so very reluctantly, chiding the plaintiffs' lawyers for making the 'wrong' arguments and failing to develop an adequate record. Chhabria's decision is riddled with his perceptions of the dangers of AI systems potentially flooding the market with substitutes for human authorship and destroying incentives to create.
The Meta Ruling

Based on the parties' arguments and the record before him, Judge Chhabria, like Judge Alsup, found that Meta's use of the books as training data for Llama was 'highly transformative,' noting that the purpose of the use, creating an AI system, was very different from the plaintiffs' purpose for the books, which was education and entertainment. Rejecting the plaintiffs' argument that Llama could be used to imitate the style of their writing, Judge Chhabria noted that 'style is not copyrightable.'

The fact that Meta sourced the books from shadow libraries rather than authorized copies didn't make a difference; Judge Chhabria (in my opinion rightly) reasoned that making fair use depend on whether the source copy was authorized begs the question of whether the secondary copying was lawful. Although the plaintiffs tried to make the 'central library for purposes other than training' argument that succeeded in the Anthropic case, Judge Chhabria concluded that the evidence simply didn't show that copies were used for purposes other than training, and noted that even if some copies were not used for training, 'fair use doesn't require that the secondary user make the lowest number of copies possible.' Since Llama couldn't generate exact or substantially similar versions of the plaintiffs' books, he found there was no substitution harm, noting that the plaintiffs' lost licensing revenue for AI training is not a cognizable harm.

Judge Chhabria's Market Dilution Prediction

Judge Chhabria warns that generative AI systems could dilute the market for lower-value mass market publications.
In dicta, clearly expressing frustration with the outcome in Meta's favor, Judge Chhabria discussed in detail how he thought market harm could, and should, be shown in other cases, through the concept of 'market dilution': a system like Llama, while not producing direct substitutes for a plaintiff's work, could compete with and thus dilute the plaintiff's market. Some types of works may be more susceptible to this harm than award-winning fiction, he said, such as news articles or 'typical human-created romance or spy novels.' But because the plaintiffs before him didn't make those arguments or present any record to support them, he could not rule on the question. That opportunity is left for another day.

AI System Roadmap For Non-Infringement

The court decisions provide an early roadmap as to how to design an AI system. Based on these two decisions, here are my take-aways for building a roadmap for a non-infringing generative AI system using books:

Microsoft to Cut 4% of Workforce as AI Spending Pressures Margins

Yahoo


Microsoft (MSFT, Financials) plans to eliminate nearly 4% of its global workforce, the company confirmed Wednesday, as it continues to prioritize heavy investment in artificial intelligence infrastructure while trimming costs. The move follows earlier layoffs in May that affected 6,000 employees. The new round of cuts is expected to impact several divisions, including sales and gaming. Bloomberg reported that Microsoft's King division in Barcelona, known for Candy Crush, is cutting 200 jobs, or about 10% of its staff.

The tech giant, which employed around 228,000 people globally as of June 2024, is attempting to streamline operations as it ramps up capital spending. Microsoft has earmarked $80 billion for fiscal year 2025, much of it aimed at scaling its AI and cloud computing capabilities. However, those bets are putting pressure on margins; analysts expect Microsoft's cloud margin to shrink in the June quarter compared to the same period last year. The company said the layoffs will help flatten organizational layers, reduce management complexity and simplify product and process workflows.

Microsoft isn't alone. Meta (META, Financials), Alphabet's Google (GOOGL, Financials) and Amazon (AMZN, Financials) have all made job cuts in recent months as tech firms adjust to economic headwinds and rising infrastructure costs.

Meta (META) Hires More OpenAI Talent as It Ramps Up AI Research

Yahoo


Meta Platforms (META, Financials) hired four more artificial intelligence researchers from OpenAI, The Information reported Saturday, as the tech giant continues to aggressively expand its AI capabilities. The new recruits, Shengjia Zhao, Jiahui Yu, Shuchao Bi and Hongyu Ren, have agreed to join Meta, according to a person familiar with the matter. Earlier this week, Meta also hired three other AI scientists from OpenAI's Zurich office, Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai, as reported by The Wall Street Journal.

The hiring spree highlights Meta's accelerating push into high-end AI research, especially in the realm of superintelligence, a term CEO Mark Zuckerberg has recently used to frame the company's long-term ambitions in the space. Neither Meta nor OpenAI responded to requests for comment from Reuters. The move adds to growing competition for elite AI talent, as companies seek dominance in next-gen AI models, infrastructure and research leadership.
