
What Two Judicial Rulings Mean for the Future of Generative AI
More than 40 lawsuits have been filed against AI companies since 2022. The specifics vary, but they generally seek to hold these companies accountable for stealing millions of copyrighted works to develop their technology. (The Atlantic is involved in one such lawsuit, against the AI firm Cohere.) Late last month, there were rulings on two of these cases, first in a lawsuit against Anthropic and, two days later, in one against Meta. Both of the cases were brought by book authors who alleged that AI companies had trained large language models using authors' work without consent or compensation.
In each case, the judges decided that the tech companies were engaged in 'fair use' when they trained their models with authors' books. Both judges said that the use of these books was 'transformative'—that training an LLM resulted in a fundamentally different product that does not directly compete with those books. (Fair use also protects the display of quotations from books for purposes of discussion or criticism.)
At first glance, this seems like a substantial blow against authors and publishers, who worry that chatbots threaten their business, both because of the technology's ability to summarize their work and its ability to produce competing work that might eat into their market. (When reached for comment, Anthropic and Meta told me they were happy with the rulings.) A number of news outlets portrayed the rulings as a victory for the tech companies. Wired described the two outcomes as 'landmark' and 'blockbuster.'
But in fact, the judgments are not straightforward. Each is specific to the particular details of each case, and they do not resolve the question of whether AI training is fair use in general. On certain key points, the two judges disagreed with each other—so thoroughly, in fact, that one legal scholar observed that the judges had 'totally different conceptual frames for the problem.' It's worth understanding these rulings, because AI training remains a monumental and unresolved issue—one that could define how the most powerful tech companies are able to operate in the future, and whether writing and publishing remain viable professions.
So, is it open season on books now? Can anyone pirate whatever they want to train for-profit chatbots? Not necessarily.
When preparing to train its LLM, Anthropic downloaded a number of 'pirate libraries,' collections comprising more than 7 million stolen books, all of which the company decided to keep indefinitely. Although the judge in this case ruled that the training itself was fair use, he also ruled that keeping such a 'central library' was not, and for this, the company will likely face a trial that determines whether it is liable for potentially billions of dollars in damages. In the case against Meta, the judge also ruled that the training was fair use, but Meta may face further litigation for allegedly helping distribute pirated books in the process of downloading—a typical feature of BitTorrent, the file-sharing protocol that the company used for this effort. (Meta has said it 'took precautions' to avoid doing so.)
Piracy is not the only relevant issue in these lawsuits. In their case against Anthropic, the authors argued that AI will cause a proliferation of machine-generated titles that compete with their books. Indeed, Amazon is already flooded with AI-generated books, some of which bear real authors' names, creating market confusion and potentially stealing revenue from writers. But in his opinion on the Anthropic case, Judge William Alsup said that copyright law should not protect authors from competition. 'Authors' complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works,' he wrote.
In his ruling on the Meta case, Judge Vince Chhabria disagreed. He wrote that Alsup had used an 'inapt analogy' and was 'blowing off the most important factor in the fair use analysis.' Because anyone can use a chatbot to bypass the process of learning to write well, he argued, AI 'has the potential to exponentially multiply creative expression in a way that teaching individual people does not.' In light of this, he wrote, 'it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars' while damaging the market for authors' work.
To determine whether training is fair use, Chhabria said that we need to look at the details. For instance, famous authors might have less of a claim than up-and-coming authors. 'While AI-generated books probably wouldn't have much of an effect on the market for the works of Agatha Christie, they could very well prevent the next Agatha Christie from getting noticed or selling enough books to keep writing,' he wrote. Thus, in Chhabria's opinion, some plaintiffs will win cases against AI companies, but they will need to show that the market for their particular books has been damaged. Because the plaintiffs in the case against Meta didn't do this, Chhabria ruled against them.
In addition to these two disagreements is the problem that nobody—including AI developers themselves—fully understands how LLMs work. For example, both judges seemed to underestimate the potential for AI to directly quote copyrighted material to users. Their fair-use analysis was based on the LLMs' inputs—the text used to train the programs—rather than outputs that might be infringing. Research on AI models such as Claude, Llama, GPT-4, and Google's Gemini has shown that, on average, 8 to 15 percent of chatbots' responses in normal conversation are copied directly from the web, and in some cases responses are 100 percent copied. The more text an LLM has 'memorized,' the more it can potentially copy and paste from its training sources without anyone realizing it's happening. OpenAI has characterized this as a 'rare bug,' and Anthropic, in another case, has argued that 'Claude does not use its training texts as a database from which preexisting outputs are selected in response to user prompts.'
But research in this area is still in its early stages. A study published this spring showed that Llama can reproduce much more of its training text than was previously thought, including near-exact copies of books such as Harry Potter and the Sorcerer's Stone and 1984.
That study was co-authored by Mark Lemley, one of the most widely read legal scholars on AI and copyright, and a longtime supporter of the idea that AI training is fair use. In fact, Lemley was part of Meta's defense team for its case, but he quit earlier this year, writing in a LinkedIn post about 'Mark Zuckerberg and Facebook's descent into toxic masculinity and Neo-Nazi madness.' (Meta did not respond to my question about this post.) Lemley was surprised by the results of the study, and told me that it 'complicates the legal landscape in various ways for the defendants' in AI copyright cases. 'I think it ought still to be a fair use,' he told me, referring to training, but we can't entirely accept 'the story that the defendants have been telling' about LLMs.
For some models trained using copyrighted books, he told me, 'you could make an argument that the model itself has a copy of some of these books in it,' and AI companies will need to explain to the courts how that copy is also fair use, in addition to the copies made in the course of researching and training their model.
As more is learned about how LLMs memorize their training text, we could see more lawsuits from authors whose books, with the right prompting, can be fully reproduced by LLMs. Recent research shows that widely read authors, including J. K. Rowling, George R. R. Martin, and Dan Brown, may be in this category. Unfortunately, this kind of research is expensive and requires expertise that is rare outside of AI companies. And the tech industry has little incentive to support or publish such studies.
The two recent rulings are best viewed as first steps toward a more nuanced conversation about what responsible AI development could look like. The purpose of copyright is not simply to reward authors for writing but to create a culture that produces important works of art, literature, and research. AI companies claim that their software is creative, but AI can only remix the work it's been trained with. Nothing in its architecture makes it capable of doing anything more. At best, it summarizes. Some writers and artists have used generative AI to interesting effect, but such experiments arguably have been insignificant next to the torrent of slop that is already drowning out human voices on the internet. There is even evidence that AI can make us less creative; it may therefore prevent the kinds of thinking needed for cultural progress.
The goal of fair use is to balance a system of incentives so that the kind of work our culture needs is rewarded. A world in which AI training is broadly fair use is likely a culture with less human writing in it. Whether that is the kind of culture we should have is a fundamental question the judges in the other AI cases may need to confront.
