
Why Facebook-parent Meta may face same 'AI copying' problem as ChatGPT-maker OpenAI, Microsoft
Facebook parent Meta's newest AI model, Llama 3.1, has been found to replicate passages from well-known books, including Harry Potter, far more frequently than anticipated, according to a new report, which also notes that many of these works remain under copyright. Researchers claim the AI has memorised roughly 42% of the first Harry Potter book and can accurately reproduce 50-word sections about half the time. The study, conducted by researchers from Stanford, Cornell, and West Virginia University, examined how five leading AI models processed the Books3 dataset, which includes thousands of copyrighted titles.
"Llama 3.1 70B—a mid-sized model Meta released in July 2024—is far more likely to reproduce Harry Potter text than any of the other four models,
the researchers found
.
"Interestingly, Llama 1 65B, a similar-sized model released in February 2023, had memorized only 4.4 percent of Harry Potter and the Sorcerer's Stone. This suggests that despite the potential legal liability, Meta did not do much to prevent memorization as it trained Llama 3. At least for this book, the problem got much worse between Llama 1 and Llama 3," the researchers wrote.
Meta's Llama 3.1 has been noted for retaining large portions of well-known books, including The Hobbit, 1984, and Harry Potter and the Sorcerer's Stone. In contrast, earlier versions such as Llama 1 memorised only around 4% of Harry Potter, suggesting the newer model retains significantly more copyrighted content.
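Memorisation of this kind is typically probed by prompting a model with a short excerpt and checking whether it continues with the exact original text. The sketch below shows one such probe in Python, assuming a Hugging Face causal language model; the model name, 50-token window, and greedy decoding are illustrative choices rather than the researchers' exact protocol.

```python
# Minimal sketch of a verbatim-memorisation probe (illustrative, not the study's method):
# prompt the model with the first 50 tokens of a passage and check whether greedy
# decoding reproduces the next 50 tokens exactly.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-8B"  # hypothetical choice; any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()

def reproduces_verbatim(passage: str, prefix_len: int = 50, target_len: int = 50) -> bool:
    """Return True if greedy decoding from the passage's prefix yields the exact continuation."""
    ids = tokenizer(passage, return_tensors="pt").input_ids[0]
    if ids.numel() < prefix_len + target_len:
        return False
    prefix = ids[:prefix_len].unsqueeze(0)
    target = ids[prefix_len:prefix_len + target_len]
    with torch.no_grad():
        output = model.generate(prefix, max_new_tokens=target_len, do_sample=False)
    continuation = output[0, prefix_len:prefix_len + target_len]
    return torch.equal(continuation, target)
```

Run over many such windows of a book, the fraction reproduced verbatim gives a rough memorisation score in the spirit of the percentages quoted above.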
Why Meta's models are reproducing exact text
Researchers suggest several reasons why Meta's AI models may be copying text verbatim. One possibility is that the same books were repeatedly used during training, reinforcing memorisation rather than the generalisation of language patterns.
Others speculate that the training data could include excerpts from fan websites, reviews, or academic papers, leading the model to inadvertently retain copyrighted content. Additionally, adjustments to the training process may have amplified the issue without developers realising the extent of its impact.
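As a rough illustration of the first point, training pipelines commonly deduplicate near-identical passages so a model does not see the same text many times over. The toy n-gram overlap filter below sketches the idea in Python; the 8-word window, hashing scheme, and 50% threshold are illustrative assumptions, not Meta's actual pipeline.

```python
# Toy near-duplicate filter: hash every 8-word window of each document and drop
# documents that overlap too heavily with text already kept. Parameters are
# illustrative assumptions, not any production data pipeline.
from hashlib import blake2b

def ngram_hashes(text: str, n: int = 8) -> set:
    """Hash every n-word window of a document."""
    words = text.lower().split()
    return {
        blake2b(" ".join(words[i:i + n]).encode(), digest_size=8).digest()
        for i in range(max(len(words) - n + 1, 1))
    }

def filter_near_duplicates(docs, threshold: float = 0.5):
    """Keep a document only if its n-gram overlap with already-kept text stays below the threshold."""
    seen, kept = set(), []
    for doc in docs:
        hashes = ngram_hashes(doc)
        overlap = len(hashes & seen) / max(len(hashes), 1)
        if overlap < threshold:
            kept.append(doc)
            seen |= hashes
    return kept
```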
What this means for Meta
These findings intensify concerns about how AI models are trained and whether they might be violating copyright laws. As authors and publishers push back against unauthorised use of their work, this could become a major challenge for tech companies like Meta.
In December 2023, The New York Times sued OpenAI and Microsoft for copyright infringement, alleging that their AI models, including ChatGPT, were trained on copyrighted articles without permission. According to the Times, OpenAI's tools 'can generate output that recites Times' content verbatim, closely summarizes it, and mimics its expressive style.' The newspaper said the AI company had essentially stolen its intellectual property.