Latest news with #AItraining
Yahoo
a day ago
- Business
- Yahoo
Scale AI's rivals say they're going hard to win its contractors and clients: 'Our servers are melting'
- Scale AI's competitors say they are seeing an uptick in client inquiries after Meta's investment.
- AI training companies like Appen and Prolific are pitching themselves as neutral platforms.
- Rival companies also told BI they have seen a rise in interest from contractors on their platforms.

Meta spent $14.3 billion to acquire nearly half of Scale AI and level up in the AI race — but the startup's rivals spy an opportunity as well. Executives at five Scale AI competitors told Business Insider that they have seen a big uptick in client inquiries and job interest since the Meta deal was announced on June 13.

Meta now holds a 49% stake in a company handling AI training data for many of its competitors, like Google, OpenAI, and xAI. In response, those three companies paused at least some of their work with Scale AI. Independence from Big Tech has now become a core part of the pitch for rival AI training companies vying for those contracts. In a blog post following the Meta deal, Scale AI reassured clients that it remains a "neutral, independent partner."

Ryan Kolln, the CEO of data annotation firm Appen, told BI the deal would "create a pretty big disruption to our industry and create huge opportunities for Appen and our peers to fill the hole that's going to be left by Scale." "The added pitch is, 'hey, we are a publicly listed company and we're really focused on data neutrality,'" added Kolln, whose company counts Amazon and Nvidia as clients. "Our customers are really evaluating their vendor ecosystem."

UK-based Prolific, which provides vetted freelancers for academic and commercial AI research, is also using neutrality as a selling point, its CEO, Phelim Bradley, told BI. "We don't build models. We don't compete with our customers. We don't have conflicting incentives," Bradley said. He added that clients are now reluctant to go all in on a single AI training provider. Big companies often spread their work among vendors, as they do with cloud providers.
"Scale benefited a lot from their awareness and being synonymous with data labeling for Big Tech," Bradley said. "Now, it's a much easier question to answer: 'How are you different from Scale?'" A Scale AI spokesperson told BI that "nothing has changed" about its customer data protection. "A lot of this confusion is being driven by smaller competitors who seek to gain from promoting false claims," they added. "Security and customer trust have always been core to our business, and we will continue to ensure the right protections are in place to help safeguard all of our work with customers." Meta did not respond to a request for comment. Jonathan Siddharth, the CEO of Turing, which trains models for major AI labs including Meta, Anthropic, and Google, said that discussions with customers have increased tenfold as frontier labs realize they need "top talent and impartial partners." "Labs increasingly want a Switzerland-like collaborator — someone model-agnostic — who can help them win the AGI race, rather than being tied to a single player," he said, referring to artificial general intelligence. He added that data annotation companies are often working on the exact capabilities that differentiate one AI model from another. None of the Scale AI rivals that BI spoke with quantified the number of inquiries they have received from Big Tech companies since Meta's investment. Scale AI's competitors are also moving to pick up its freelance workers, some of whom have had projects they are working on paused after clients like Google halted them. Scale has at least 240,00 gig workers globally who conduct AI training projects, such as flagging harmful chatbot responses. After some of Scale AI's projects were paused, the market became flooded with freelancers looking for work. Sapien AI CEO Rowan Stone told BI that his company had 40,000 new annotators join within 48 hours of Meta's Scale AI deal. "Our servers are currently melting," Stone said last week. 
"Our engineering team spent the entire weekend bolstering load balancers, spinning up new infrastructure, and getting us ready for the load that we're seeing." Many of these new sign-ups were from India and the Philippines — regions where Scale AI had long been a leader, Stone added. "The change in user signup pattern coincides pretty neatly with the Scale news," he said. Mercor AI's head of product, Osvald Nitski, said that the startup has received applications from full-time Scale employees, adding, "Our hiring bar is extremely high — we're only taking the best people." Mercor says it works with six of the "Magnificent Seven" tech companies and is picking up projects from clients leaving Scale. In terms of contractors, Nitski said the company is focused on recruiting elite-level annotators, like International Math Olympiad medalists, Rhodes Scholars, and Ph.D. students. Nitski said it's been a busy two weeks at Mercor, as it has seen a sharp increase in inbound interest from major tech clients. "There was simply no time for podcasts and blog posts these past few weeks with all of the demand to be fulfilled," Nitski said. Read the original article on Business Insider Sign in to access your portfolio


CNET
4 days ago
- Business
- CNET
Anthropic's AI Training on Books Is Fair Use, Judge Rules. Authors Are More Worried Than Ever
Claude maker Anthropic's use of copyright-protected books in its AI training process was "exceedingly transformative" and fair use, US senior district judge William Alsup ruled on Monday. It's the first time a judge has decided in favor of an AI company on the issue of fair use — a significant win for generative AI companies and a blow for creators. Two days later, Meta won part of its fair use case.

Fair use is a doctrine that's part of US copyright law. It's a four-part test that, when its criteria are met, lets people and companies use protected content without the rights holder's permission for specific purposes, like when writing a term paper. Tech companies say that fair use exceptions are essential for them to access the massive quantities of human-generated content they need to develop the most advanced AI systems. Writers, actors and many other kinds of creators have been equally clear in arguing that the use of their work to propel AI is not fair use.

On Friday, a group of famous authors signed an open letter to publishers urging the companies to pledge never to replace human writers, editors and audiobook narrators with AI and to avoid using AI throughout the publishing process. The signatories include Victoria Aveyard, Emily Henry, R.F. Kuang, Ali Hazelwood, Jasmine Guillory, Colleen Hoover and others. "[Our] stories were stolen from us and used to train machines that, if short-sighted capitalistic greed wins, could soon be generating the books that fill our bookstores," the letter reads. "Rather than paying writers a small percentage of the money our work makes for them, someone else will be paid for a technology built on our unpaid labor." The letter is just the latest in a series of battles between authors and AI companies.
Publishers, artists and content catalog owners have filed lawsuits alleging that AI companies like OpenAI, Meta and Midjourney are infringing on their protected intellectual property in an attempt to circumvent costly, but standard, licensing procedures. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

The authors suing Anthropic for copyright infringement say their books were also obtained illegally -- that is, the books were pirated. That leads to the second part of Alsup's ruling, based on his concerns about Anthropic's methods of obtaining the books. In the ruling, he writes that Anthropic co-founder Ben Mann knowingly downloaded unauthorized copies of 5 million books from LibGen and an additional 2 million from Pirate Library Mirror (PirLiMi).

The ruling also outlines how Anthropic deliberately obtained print copies of the books it previously pirated in order to create "its own catalog of bibliographic metadata." Anthropic vice president Tom Turvey, the ruling says, was "tasked with obtaining 'all the books in the world' while still avoiding as much 'legal/practice/business slog.'" That meant buying physical books from publishers to create a digital database. To prep the books for machine-readable scanning, the Anthropic team stripped millions of used copies from their bindings, cut the pages down to fit, and then destroyed and discarded them.

Anthropic's acquisition and digitization of the print books was fair use, the ruling says. But it adds: "Creating a permanent, general-purpose library was not itself a fair use excusing Anthropic's piracy." Alsup ordered a new trial regarding the pirated library. Anthropic is one of many AI companies facing copyright claims in court, so this week's ruling is likely to have massive ripple effects across the industry.
We'll have to see how the piracy claims resolve before we know how much money Anthropic may be ordered to pay in damages. But if the scales tip to grant multiple AI companies fair use exceptions, the creative industry and the people who work in it will certainly suffer damages, too. For more, check out our guide to understanding copyright in the age of AI.
Yahoo
4 days ago
- Business
- Yahoo
AI program helps train 911 dispatchers in Cobb County for high-pressure calls
The Cobb County 911 center is taking an innovative approach to training its dispatch recruits. Channel 2 Cobb County Bureau Chief Michele Newell reports that artificial intelligence creates each scenario.

"We're going to be helping our newest recruits gain the confidence they need before they take live calls," said Desmond Harris, community relations supervisor at the Cobb County 911 Center.

GovWorx CommsCoach is AI-powered training software. The technology simulates realistic high-pressure emergency calls, which lets training coordinators focus solely on coaching recruits. Example scenarios include a caller saying, "He's outside the bedroom door and he's pounding on it. I'm really scared," and "I hurt someone very badly, I need help."

"We're able to stand over the shoulder of our recruits while they practice these calls and help guide them," Harris said. "Previously we would be sitting across from them, coming up with scenarios and asking them questions."

The Cobb County 911 Center says it's the first emergency communications center to use the technology. "Cobb County 911 Center is always at the front of innovation," Harris said. "We are always looking for the best and newest things to help our team members."

Japan Times
6 days ago
- Business
- Japan Times
U.S. judge rules for Meta in AI training copyright case but stops short of calling the practice lawful
A U.S. judge on Wednesday handed Meta a victory over authors who accused the technology giant of violating copyright law by training its Llama artificial intelligence model on their creations without permission.

District Court Judge Vince Chhabria in San Francisco ruled that Meta's use of the works to train its AI model was "transformative" enough to constitute "fair use" under copyright law, in the second such courtroom triumph for AI firms this week. However, it came with a caveat: the authors could have pitched a winning argument — that by training powerful generative AI with copyrighted works, technology firms are creating a tool that could let a sea of users compete with them in the literary marketplace.

"No matter how transformative (generative AI) training may be, it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books," Chhabria said in his ruling.

Tremendous amounts of data are needed to train the large language models that power generative AI. Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment. AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation.

"We appreciate today's decision from the court," a Meta spokesperson said in response to an inquiry. "Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology."
In the case before Chhabria, a group of authors sued Meta for downloading pirated copies of their works and using them to train the open-source Llama generative AI, according to court documents. Books involved in the suit include Sarah Silverman's comic memoir "The Bedwetter" and Junot Diaz's Pulitzer Prize-winning novel "The Brief Wondrous Life of Oscar Wao," the documents showed.

"This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful," the judge stated. "It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one."

A different federal judge in San Francisco on Monday sided with AI firm Anthropic regarding training its models on copyrighted books without authors' permission. District Court Judge William Alsup ruled that the company's training of its Claude AI models with books bought or pirated was allowed under the "fair use" doctrine in the U.S. Copyright Act. "Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use," Alsup wrote in his decision. "The technology at issue was among the most transformative many of us will see in our lifetimes," Alsup added, comparing AI training to how humans learn by reading books.

The ruling stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train Claude, the company's ChatGPT rival. Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections.

Malay Mail
6 days ago
- Business
- Malay Mail
Meta wins copyright lawsuit as judge says authors made 'wrong arguments' but warns AI use may still be unlawful
- AI companies say training on copyrighted work is fair use
- Judge rules for Meta in dispute with authors
- Judge says 'plaintiffs made the wrong arguments'

SAN FRANCISCO, June 26 — A federal judge ruled yesterday for Meta Platforms against a group of authors who had argued that its use of their books without permission to train its artificial intelligence system infringed their copyrights.

US District Judge Vince Chhabria, in San Francisco, said in his decision that the authors had not presented enough evidence that Meta's AI would dilute the market for their work to show that the company's conduct was illegal under US copyright law. Chhabria also said, however, that using copyrighted work without permission to train AI would be unlawful in 'many circumstances,' splitting with another federal judge in San Francisco who found on Monday in a separate lawsuit that Anthropic's AI training made 'fair use' of copyrighted materials.

'This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful,' Chhabria said. 'It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one.'

A Meta spokesperson said the company appreciated the decision and called fair use a 'vital legal framework' for building 'transformative' AI technology. Attorneys for the authors did not immediately respond to a request for comment.

The authors sued Meta in 2023, arguing the company misused pirated versions of their books to train its AI system Llama without permission or compensation. The lawsuit is one of several copyright cases brought by writers, news outlets and other copyright owners against companies including OpenAI, Microsoft and Anthropic over their AI training. The legal doctrine of fair use allows the use of copyrighted works without the copyright owner's permission in some circumstances. It is a key defence for the tech companies.
Chhabria's decision is the second in the US to address fair use in the context of generative AI, following US District Judge William Alsup's ruling in the Anthropic case.

AI companies argue their systems make fair use of copyrighted material by studying it to learn to create new, transformative content, and that being forced to pay copyright holders for their work could hamstring the burgeoning AI industry. Copyright owners say AI companies unlawfully copy their work to generate competing content that threatens their livelihoods.

Chhabria expressed sympathy for that argument during a hearing in May, which he reiterated today. The judge said generative AI had the potential to flood the market with endless images, songs, articles and books using a tiny fraction of the time and creativity that would otherwise be required to create them. 'So by training generative AI models with copyrighted works, companies are creating something that often will dramatically undermine the market for those works, and thus dramatically undermine the incentive for human beings to create things the old-fashioned way,' Chhabria said. — Reuters