
US Court Sides With Meta In AI Training Copyright Case
A US judge on Wednesday handed Meta a victory over authors who accused the tech giant of violating copyright law by training Llama artificial intelligence on their creations without permission.
District Court Judge Vince Chhabria in San Francisco ruled that Meta's use of the works to train its AI model was "transformative" enough to constitute "fair use" under copyright law, in the second such courtroom triumph for AI firms this week.
However, the ruling came with a caveat: Chhabria suggested the authors could have pitched a winning argument that, by training powerful generative AI with copyrighted works, tech firms are creating a tool that could let a sea of users compete with them in the literary marketplace.
"No matter how transformative (generative AI) training may be, it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books," Chhabria said in his ruling.
Tremendous amounts of data are needed to train large language models powering generative AI.
Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment.
AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation.
"We appreciate today's decision from the court," a Meta spokesperson said in response to an AFP inquiry.
"Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology."
In the case before Chhabria, a group of authors sued Meta for downloading pirated copies of their works and using them to train the open-source Llama generative AI, according to court documents.
Books involved in the suit include Sarah Silverman's comic memoir "The Bedwetter" and Junot Diaz's Pulitzer Prize-winning novel "The Brief Wondrous Life of Oscar Wao," the documents showed.
"This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful," the judge stated.
"It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one."
(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)
Related Articles


Hindustan Times
7 minutes ago
Govt plans mobile number verification for apps, banks
NEW DELHI: The telecommunications department has proposed sweeping new cybersecurity rules that would require digital platforms to verify customer mobile numbers through a government-run system, as the country grapples with rising online fraud.

The Department of Telecommunications (DoT) unveiled draft amendments dated Tuesday that would establish a Mobile Number Validation (MNV) platform to check whether phone numbers provided by users actually belong to them, a move that could affect millions of Indians using everything from food delivery apps to digital payment services. India has over 1.16 billion mobile connections and is the world's largest market for digital payments, making it a major target for mobile-based fraud schemes.

The proposed rules target what the government calls Telecommunication Identifier User Entities (TIUEs), essentially any business beyond licensed telecom operators that uses mobile numbers to identify customers or deliver services: 'A person, other than a licensee or authorised entity, which uses telecommunication identifiers for the identification of its customers or users, or for provisioning and delivery of services,' the notification states.

While the notification does not specify examples of TIUEs, a DoT official explained to HT that the term covers OTT platforms and banks, among other digital services. 'If the services are using mobile numbers or any other telecom identifier, then they will be covered under TIUE.' In other words, this broad definition could include ride-hailing apps like Ola and Uber, food delivery platforms like Zomato and Swiggy, fintech companies, e-commerce sites, and banking apps. While companies can voluntarily request mobile number verification, the rules make it mandatory 'upon a direction from central or state government or an agency authorised by the central or state government.'
The move comes as India battles a surge in digital fraud, often carried out through stolen or lost SIM cards used to make calls or send messages in phishing schemes and, more recently, 'digital arrest' rackets. Mule SIMs are used to work around the strict KYC norms that were initially thought to be effective against such crimes. The draft notification briefly specifies two grounds for the move: 'ensuring telecom cyber security and prevent security incidents'.

According to government data, digital frauds have surged in recent years. In March, the government stated in a submission to the Rajya Sabha that the number of digital arrest scams and related cybercrimes in the country almost tripled between 2022 and 2024, with defrauded amounts skyrocketing 21-fold over the period.

Cybersecurity experts are divided on the implications. Sandeep K Shukla, a professor at IIT Kanpur, said the anti-fraud benefits could justify the privacy concerns. 'This might hamper privacy to some extent, but if you are claiming a number to be associated with a business, it better be associated with the claimed business,' Shukla told HT. However, Vikram Jeet Singh, a partner at BTG Advaya specialising in internet regulation, raised data protection concerns. 'There are obvious data privacy concerns, and it is not clear what data can be accessed through such a platform. Will it be a simple 'Yes/No' response on validation of a phone number, or can it be used to obtain more personal details of phone users?' Singh questioned.

The draft rules propose a tiered pricing system: government entities get free access, while government-directed validation costs ₹1.50 per request. Private companies making voluntary requests pay ₹3 per validation. Singh warned this could create new costs for consumers. 'On a more mundane (but important) level, this may mean that banks and other service providers start charging their customers for 'MNV validation' costs.'

The logistical challenge is immense. 'The MNV database will likely be maintained by creating a record of all active phone numbers in India. Given India has more than 1.5 billion phone numbers, this will not be an easy task in itself,' Singh added. Kazim Rizvi, founding director of The Dialogue, a tech policy think tank, said the proposed amendments could lead to an excessive centralisation of user data, raising concerns about proportionality under the Puttaswamy judgment and 'potentially clashing with the privacy safeguards outlined in the Digital Personal Data Protection (DPDP) Act'.

The amendments also target mobile device fraud through stricter IMEI (International Mobile Equipment Identity) controls. Manufacturers must ensure new devices do not reuse IMEI numbers already in use on India's networks. The government will maintain a central database of tampered or blacklisted IMEIs, with second-hand phone sellers required to check this database before any sale, at a cost of ₹10 per IMEI check. The rules also grant authorities sweeping powers to 'temporarily suspend use of the relevant telecommunication identifier' for both telecom operators and TIUEs if security concerns arise.

The proposed rules are open for public consultation for 30 days before implementation. The DoT was not immediately available for comment.


India Today
17 minutes ago
Facebook users beware, Meta AI can scan all your phone photos anytime if you are not careful
Meta has consistently found itself at the centre of privacy debates. There's little doubt that the company has been using our data, for instance our publicly posted photos across Facebook and Instagram, to train its AI models (more commonly known as Meta AI). But now, it seems Meta is taking things to another level: recent findings suggest it wants full access to your phone's camera roll, meaning even photos you haven't shared on Facebook (or Instagram).

As reported by TechCrunch, some Facebook users have recently come across a curious pop-up while attempting to upload a Story. The notification invites them to opt into a feature called 'cloud processing.' On the surface, it sounds fair and safe: Facebook says this setting will allow it to automatically scan your phone's camera roll and upload images to Meta's cloud 'on a regular basis.' In return, the company promises to offer 'creative ideas' such as photo collages, event recaps, AI-generated filters, and themed suggestions for birthdays, graduations, and other occasions.

But there's a catch. When you agree to the terms of use, you're also giving Meta the go-ahead to analyse the content of your unpublished and presumably private photos on an ongoing basis, as Meta AI looks at details such as facial features, objects in the frame, and even metadata like the date and location they were taken. There is little doubt that the idea is to make AI more helpful for you, the user, since AI needs all the data one can possibly fathom to make sense of the real world and respond accordingly to the questions and prompts you put to it. And Meta, for its part, says this is an opt-in feature, which is to say that users can choose to disable it as and when they want.

That's fair, but given that this is user data we're talking about, and given Facebook's history, some users (and privacy advocates) would be wary. The tech giant had earlier admitted it had scraped all public content uploaded by adults on Facebook and Instagram since 2007 to help train its generative AI models. However, Meta hasn't clearly defined what 'public' means or what age qualifies someone as an 'adult' in its dataset from 2007. That haziness leaves a lot of room for different interpretations, and even more room for concern. Moreover, its updated AI terms, active since June 23, 2024, don't mention whether these cloud-processed, unpublished photos are exempt from being used as training data. The Verge reached out to Meta, which said it is not currently training its AI models on those photos, but it would not answer questions about whether it might do so in future, or what rights it will hold over your camera roll images.

There is, thankfully, a way out. Facebook users can dive into their settings and disable the cloud processing feature. Once it is turned off, Meta promises it will begin deleting any unpublished images from the cloud within 30 days. Still, the very nature of this tool, pitched as a fun and helpful feature, raises questions about how users are nudged into handing over private data without fully realising the implications.

At a time when AI is reshaping how we interact with tech, companies like Meta are testing the limits of what data they can collect, analyse, and potentially monetise. This latest move blurs the lines between user assistance and data extraction. What used to be a conscious decision, posting a photo to share with the world, now risks being replaced with quiet uploads in the background and invisible AI eyes watching it all unfold. We'll see how things pan out.


Time of India
21 minutes ago
Meta will only make limited changes to pay-or-consent model, EU says
Meta Platforms may face daily fines if EU regulators decide the changes it has proposed to its pay-or-consent model fail to comply with an antitrust order issued in April, the regulators said on Friday.

The warning from the European Commission, which acts as the EU competition enforcer, came two months after it slapped a 200-million-euro ($234 million) fine on the U.S. social media giant for breaching the Digital Markets Act (DMA), which is aimed at curbing the power of Big Tech. The move shows the Commission's continuing crackdown against Big Tech and its push to create a level playing field for smaller rivals, despite US criticism that the bloc's rules mainly target its companies. Daily fines for not complying with the DMA can be as much as 5% of a company's average daily worldwide turnover.

The EU executive said Meta's pay-or-consent model, introduced in November 2023, breached the DMA in the period up to November 2024, when Meta tweaked it to use less personal data for targeted advertising. The Commission has been scrutinising the changes since then. The model gives Facebook and Instagram users who consent to be tracked a free service funded by advertising revenues; alternatively, they can pay for an ad-free service.

The EU competition watchdog said Meta will make only limited changes to the pay-or-consent model rolled out last November. "The Commission cannot confirm at this stage if these are sufficient to comply with the main parameters of compliance outlined in its non-compliance Decision," a spokesperson said. "With this in mind, we will consider the next steps, including recalling that continuous non-compliance could entail the application of periodic penalty payments running as of 27 June 2025, as indicated in the non-compliance decision."

Meta accused the Commission of discriminating against the company and of moving the goalposts during discussions over the last two months.
"A user choice between a subscription for no ads service or a free ad supported service remains a legitimate business model for every company in Europe - except Meta," a Meta spokesperson said. "We are confident that the range of choices we offer people in the EU doesn't just comply with what the EU's rules require - it goes well beyond them." The EU watchdog dismissed Meta's discrimination charges, saying the DMA applies equally to all large digital companies doing business in the EU regardless of where they are incorporated or who their controlling shareholders are. "We have always enforced and will continue to enforce our laws fairly and without discrimination towards all companies operating in the EU, in full compliance with global rules," the Commission spokesperson said.