Latest news with #ex-OpenAI


Mint
3 days ago
- Business
- Mint
AI valuations are verging on the unhinged
Vibe coding, or the ability to spin up a piece of software using generative artificial intelligence (AI) rather than old-school programming skills, is all the rage in Silicon Valley. But it has a step-sibling. Call it vibe valuing. This is the ability of venture capitalists to conjure up vast valuations for AI startups with scant regard for old-school spreadsheet measures.

Exhibit A is Mira Murati, formerly the chief technologist of OpenAI, who has vaulted almost overnight into the plutocracy. Her AI startup, Thinking Machines Lab, has reportedly raised $2bn at a $10bn valuation in its first fundraising round, before it has much of a strategy, let alone revenue. Ms Murati's success can be explained by her firm's roster of ex-OpenAI researchers. Tech giants like Meta are offering megabucks for such AI superstars.

Yet venture-capital (VC) grandees say that even for less exalted startups, traditional valuation measures such as projected revenue growth, customer churn and cash burn are less sacrosanct than they used to be. This is partly because AI is advancing so quickly, making it hard to produce reliable forecasts. But it is also a result of the gusher of investment flowing into generative AI.

The once-reliable measure most at risk of debasement is annual recurring revenue (ARR), central to many startup valuations. For companies selling software as a service, as most AI firms do, it used to be easy to measure. Take a typical month of subscriptions, based on the number of users, and multiply by 12. It was complemented by strong retention rates. Churn among customers was often less than 5% a year. As marginal costs were low, startups could burn relatively little cash before profits started to roll in. It was, by and large, a stable foundation for valuations.

Not so for AI startups. The revenue growth of some has been unusually rapid. Anysphere, which owns Cursor, a hit coding tool, saw its ARR surge to $500m this month, five times the level in January. Windsurf, another software-writing tool, also saw blistering growth before OpenAI agreed to buy it in May for $3bn.

But how sustainable is such growth? Jamin Ball of Altimeter Capital, a VC firm, notes that companies experiment with many AI applications, which suggests they are enthusiastic but not committed to any one product. He quips that this 'easy-come, easy-go' approach from customers produces ERR, or 'experimental run rate', rather than ARR. Others say churn is often upwards of 20%. It doesn't help that, in some cases, AI startups are charging based on usage rather than users (or 'seats'), which is less predictable.

Add to this the fact that competition is ferocious, and getting more so. However fast an AI startup is growing, it has no guarantee of longevity. Many create applications on top of models built by big AI labs such as OpenAI or Anthropic. Yet these labs are increasingly offering applications of their own. Generative AI has also made it easier than ever to start a firm with just a few employees, meaning there are many more new entrants, says Max Alderman of FE International, an advisory firm.

Even well-known AI firms are far from turning a profit. Perplexity, which has sought to disrupt a search business long dominated by Google, reportedly generated revenue of $34m last year, but burned around $65m of cash. That has been no hurdle to a punchy valuation. Perplexity's latest fundraising round reportedly valued it at close to $14bn—a multiple of more than 400 times last year's revenue (compared with about 6.5 times for stocks traded on the Nasdaq exchange). OpenAI, which torched about $5bn of cash last year, is worth $300bn.
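For readers who want to see the arithmetic, here is a minimal back-of-envelope sketch in Python. The Perplexity and Nasdaq figures are the ones cited above; the monthly revenue, customer counts, and churn rates are purely illustrative.

```python
# Back-of-envelope arithmetic for the valuation measures discussed above.
# Figures cited in the article are used where given; the rest are illustrative.

def arr(monthly_subscription_revenue: float) -> float:
    """Annual recurring revenue: a typical month of subscriptions times 12."""
    return monthly_subscription_revenue * 12

def customers_retained(customers: int, annual_churn: float) -> float:
    """Paying customers left after one year at a given churn rate."""
    return customers * (1 - annual_churn)

print(arr(2_500_000))                   # $2.5m a month -> $30m ARR
print(customers_retained(1_000, 0.05))  # classic SaaS churn: ~950 remain
print(customers_retained(1_000, 0.20))  # 'experimental' AI churn: only ~800 remain

# Perplexity, as reported: a ~$14bn valuation on ~$34m of revenue.
multiple = 14_000_000_000 / 34_000_000
print(f"{multiple:.0f}x revenue")       # ~412x, against ~6.5x for the Nasdaq
```

The ERR quip is visible in the middle two lines: at 20% churn a startup must replace a fifth of its paying customers every year just to stand still, which makes a simple month-times-12 extrapolation a shaky basis for a valuation.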
The willingness of venture investors to look past the losses reflects their belief that the potential market for AI is enormous and that costs will continue to plummet. In Perplexity's case, the startup may be a takeover target, too.

In time, trusty old approaches to valuations may come back into vogue, and cooler heads prevail. 'I'm the old-fashioned person who still believes I need [traditional measures] to feel comfortable,' says Umesh Padval of Thomvest, another VC firm. For now, just feel the vibes.

© 2025, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence.

3 days ago
- Business
Anthropic wins ruling on AI training in copyright lawsuit but must face trial on pirated books
In a test case for the artificial intelligence industry, a federal judge has ruled that AI company Anthropic didn't break the law by training its chatbot Claude on millions of copyrighted books. But the company is still on the hook and must now go to trial over how it acquired those books by downloading them from online 'shadow libraries' of pirated copies.

U.S. District Judge William Alsup of San Francisco said in a ruling filed late Monday that the AI system's distilling from thousands of written works to be able to produce its own passages of text qualified as 'fair use' under U.S. copyright law because it was 'quintessentially transformative.'

'Like any reader aspiring to be a writer, Anthropic's (AI large language models) trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different,' Alsup wrote.

But while dismissing a key claim made by the group of authors who sued the company for copyright infringement last year, Alsup also said Anthropic must still go to trial in December over its alleged theft of their works. 'Anthropic had no entitlement to use pirated copies for its central library,' Alsup wrote.

A trio of writers — Andrea Bartz, Charles Graeber and Kirk Wallace Johnson — alleged in their lawsuit last summer that Anthropic's practices amounted to 'large-scale theft,' and that the San Francisco-based company 'seeks to profit from strip-mining the human expression and ingenuity behind each one of those works.'

Books are known to be important sources of the data — in essence, billions of words carefully strung together — that are needed to build large language models. In the race to outdo each other in developing the most advanced AI chatbots, a number of tech companies have turned to online repositories of stolen books that they can get for free.

Documents disclosed in San Francisco's federal court showed Anthropic employees' internal concerns about the legality of their use of pirate sites. The company later shifted its approach and hired Tom Turvey, the former Google executive in charge of Google Books, a searchable library of digitized books that successfully weathered years of copyright battles. With his help, Anthropic began buying books in bulk, tearing off the bindings and scanning each page before feeding the digitized versions into its AI model, according to court documents.

But that didn't undo the earlier piracy, according to the judge. 'That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft but it may affect the extent of statutory damages,' Alsup wrote.

The ruling could set a precedent for similar lawsuits that have piled up against Anthropic competitor OpenAI, maker of ChatGPT, as well as against Meta Platforms, the parent company of Facebook and Instagram.

Anthropic — founded by ex-OpenAI leaders in 2021 — has marketed itself as the more responsible and safety-focused developer of generative AI models that can compose emails, summarize documents and interact with people in a natural way. But the lawsuit filed last year alleged that Anthropic's actions 'have made a mockery of its lofty goals' by building its AI product on pirated writings.

Anthropic said Tuesday it was pleased that the judge recognized that AI training was transformative and consistent with 'copyright's purpose in enabling creativity and fostering scientific progress.' Its statement didn't address the piracy claims.
Yahoo
4 days ago
- Business
- Yahoo
Insurance companies are embracing AI. But they aren't talking much about ROI
Hello and welcome to Eye on AI. In this edition…a mega seed round for ex-OpenAI CTO Mira Murati's new startup…the impact of AI on cognitive skills…and why the effects of AI automation may vary so much across industries.

Insurance is not considered the most cutting-edge industry. But AI has been making slow, steady inroads in the sector for years. Many companies have begun using computer vision applications that automatically assess damage—whether that is to cars following a collision or to the roofs of houses following a major storm—to help claims adjusters work more efficiently. Companies are also using machine learning algorithms to help detect fraud and build risk models for underwriting. And, of course, like many other industries, insurance companies are using AI to boost productivity in many support functions, from chatbots that can answer customer queries to AI that can help design marketing materials to AI coding assistants to help internal tech teams.

Which insurance companies are doing it best? That's what the London-based research and analytics firm Evident Insights set out to discover with a new index assessing major insurance firms' AI prowess. Evident has become known in recent years for its detailed benchmarking of banks' AI capabilities. But this is the first time the research firm has moved beyond banking to look at another sector.

Like its banking index, Evident's assessment is based almost entirely on quantitative metrics derived mostly from public sources of information—management statements in financial disclosures, press releases, company websites, social media accounts, patent filings, LinkedIn profiles, and news articles. In all, Evident looked at 76 individual metrics, organized into four 'pillars' that the research firm believes are critical to deploying AI successfully: talent (which counts for 45% of the overall ranking), innovation (30%), leadership (15%), and transparency of responsible AI activity (10%). It used these to rank the 30 largest North American and European insurers when judged by total premiums underwritten or total assets under management.

Two insurers, Axa and Allianz, emerged as clear leaders in Evident's assessment. They were the only two to rank in the top five across all four pillars and had a substantial lead over third-place insurer USAA. Alexandra Mousavizadeh, the cofounder and co-CEO of Evident, tells me that the result is surprising, in part because both Axa and Allianz are based in Europe, where large companies have generally been seen as lagging their North American peers in AI adoption. (And in Evident's banking index, all of the highest-ranked firms are North American.)

But Mousavizadeh says that she thinks Axa and Allianz have a common corporate cultural trait that may explain their AI dominance. 'My theory on this is that it's embedded in an engineering culture,' she says. 'Axa and Allianz have been doing this for a very long time and if you look at their histories, there has been much more of an engineering leadership and engineering mindset.'

Mousavizadeh says that claims and underwriting automation are both big engineering challenges that require large teams of skilled developers and technology experts to make them work at scale. 'You have got to have more engineers,' she says. 'For that last mile of getting a use case into production, you have to have AI product managers, and you have to have AI software engineering.'
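Evident's composite is, at bottom, a weighted sum across the four pillars. A minimal sketch of that kind of calculation follows; the weights are the ones reported above, but the insurer names and pillar scores are invented placeholders, not Evident's data.

```python
# A sketch of a four-pillar weighted composite in the spirit of Evident's
# methodology. Weights come from the article; all scores below are made up.

WEIGHTS = {
    "talent": 0.45,
    "innovation": 0.30,
    "leadership": 0.15,
    "transparency": 0.10,
}

def composite(pillar_scores: dict) -> float:
    """Weighted sum of pillar scores (each normalized to 0-100)."""
    return sum(WEIGHTS[pillar] * score for pillar, score in pillar_scores.items())

# Hypothetical insurers, not Evident's actual data.
insurers = {
    "Insurer A": {"talent": 88, "innovation": 82, "leadership": 79, "transparency": 74},
    "Insurer B": {"talent": 95, "innovation": 70, "leadership": 35, "transparency": 30},
}

for name in sorted(insurers, key=lambda n: composite(insurers[n]), reverse=True):
    print(f"{name}: {composite(insurers[name]):.1f}")
```

Note how the hypothetical Insurer B tops the talent pillar yet still trails overall once its weak leadership and transparency scores are weighted in, the same pattern that, as described below, costs USAA its ranking.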
Companies that invest most heavily in human AI expertise are most likely to excel at using AI to run their businesses more efficiently, opening up an ever-widening gap between these companies and those that are AI laggards. (Of course, in Evident's methodology, it helps if management talks about what it's doing with AI and publicizes its AI governance policies too. USAA actually ranks first on Evident's talent pillar, but falls to third place because it ranks near the bottom of the pack on both 'leadership'—which is mostly about management's statements about how the company is using AI—and 'transparency of responsible AI policies.')

Still, as in many industries, there seems to be a substantial gap in the insurance sector between AI hype and actual ROI. Of the 30 insurers Evident evaluated, only 12 had disclosed at least one AI use case with 'a tangible business outcome.' Just three insurers—Intact Financial, Zurich Insurance Group, and Aviva—had publicly disclosed a monetary return from their AI efforts. That's a pretty short list.

The most transparent of this group was Canada-based Intact Financial, a property and casualty insurer that said publicly in 2024 that it had invested $500 million in technology (that's all tech, not just AI) across its business, had deployed 500 AI models, and had seen $150 million in benefit so far. One of its use cases was using AI models that transform speech to text, and then language models on top of those transcripts, to assess the quality of how its human customer service agents handled the up to 20,000 customer calls the company receives. But that is still a cost-savings example—a way of boosting the bottom line—and not one in which a company is using AI to grow its sales or move into new business areas.
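Intact's call-quality use case follows a common two-stage pattern: a speech-to-text model produces a transcript, and a language model then grades the transcript against a rubric. The sketch below shows only that shape, under stated assumptions: the function names, the rubric, and the keyword stand-in for the model call are all invented, not Intact's system.

```python
from dataclasses import dataclass

@dataclass
class CallReview:
    call_id: str
    score: int        # 1-5 quality rating
    rationale: str

def transcribe(audio_path: str) -> str:
    """Stand-in for the speech-to-text step; a real pipeline would send the
    audio file to an ASR service here."""
    return "thank you for calling, I'm sorry about the delay, it's resolved now"

def grade(transcript: str) -> tuple:
    """Stand-in for the language-model step. A crude keyword rubric keeps the
    sketch runnable; a production system would prompt an LLM with the rubric."""
    cues = ("thank", "sorry", "resolved")
    hits = sum(cue in transcript.lower() for cue in cues)
    return 2 + hits, f"matched {hits} of {len(cues)} rubric cues"

def review_call(call_id: str, audio_path: str) -> CallReview:
    score, rationale = grade(transcribe(audio_path))
    return CallReview(call_id, score, rationale)

print(review_call("call-0001", "calls/0001.wav"))
```

Run over tens of thousands of calls, a batch job like this replaces an expensive human review step; the hard part in practice is validating that the model's rubric scores actually track human judgments.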
Evident found that insurers were primarily applying AI this way—attacking the industry's largest cost centers, namely claims processing, customer service, and underwriting. As the research firm notes: 'Revenue-generating AI is yet to appear on our outside-in assessment.'

The story here isn't just about insurance—it's about every industry grappling with AI. Executives everywhere are still figuring out which AI investments will pay off, but the early winners share a common thread: they're not just buying AI tools, they're building AI teams. They're hiring engineers, experimenting relentlessly, measuring results—and then expanding the successful use cases everywhere they can. And benchmarking, like the kind Evident is doing, can play a vital role both in informing executives about what seems to be working and in pushing entire industries to adopt AI faster, as well as to be more transparent about how they're using AI and what policies they have in place around its responsible use. That's a lesson worth learning, whether you're insuring cars or building them.

With that, here's more AI news. And, before we get to the other sections, I want to flag this deep dive article from my colleagues Sharon Goldman and Allie Garfinkle into the background behind Meta's $14 billion investment into Scale AI and the hiring of Scale cofounder and CEO Alexandr Wang for a major new role at Meta. Their story is a must-read. Check it out here.

Jeremy

Want to know more about how to use AI to transform your business? Interested in what AI will mean for the fate of companies, and countries? Then join me at the Ritz-Carlton, Millenia in Singapore on July 22 and 23 for Fortune Brainstorm AI Singapore. This year's theme is The Age of Intelligence. We will be joined by leading executives from DBS Bank, Walmart, OpenAI, Arm, Qualcomm, Standard Chartered, Temasek, and our founding partner Accenture, plus many others, along with key government ministers from Singapore and the region, top academics, investors and analysts. We will dive deep into the latest on AI agents, examine the data-center build-out in Asia, explore how to create AI systems that produce business value, and talk about how to ensure AI is deployed responsibly and safely. You can apply to attend here and, because you're loyal Eye on AI readers, I'm able to offer complimentary tickets to the event. Just use the discount code BAI100JeremyK when you check out.


The Hill
4 days ago
- Business
- The Hill
Anthropic wins ruling on AI training in copyright lawsuit but must face trial on pirated books
In a test case for the artificial intelligence industry, a federal judge has ruled that AI company Anthropic didn't break the law by training its chatbot Claude on millions of copyrighted books. But the company is still on the hook and must now go to trial over how it acquired those books by downloading them from online 'shadow libraries' of pirated copies.

U.S. District Judge William Alsup of San Francisco said in a ruling filed late Monday that the AI system's distilling from thousands of written works to be able to produce its own passages of text qualified as 'fair use' under U.S. copyright law because it was 'quintessentially transformative.'

'Like any reader aspiring to be a writer, Anthropic's (AI large language models) trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different,' Alsup wrote.

But while dismissing a key claim made by the group of authors who sued the company for copyright infringement last year, Alsup also said Anthropic must still go to trial in December over its alleged theft of their works. 'Anthropic had no entitlement to use pirated copies for its central library,' Alsup wrote.

A trio of writers — Andrea Bartz, Charles Graeber and Kirk Wallace Johnson — alleged in their lawsuit last summer that Anthropic's practices amounted to 'large-scale theft,' and that the company 'seeks to profit from strip-mining the human expression and ingenuity behind each one of those works.'

As the case proceeded over the past year in San Francisco's federal court, documents disclosed in court showed Anthropic's internal concerns about the legality of their use of online repositories of pirated works. So the company later shifted its approach and attempted to purchase copies of digitized books. 'That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft but it may affect the extent of statutory damages,' Alsup wrote.

The ruling could set a precedent for similar lawsuits that have piled up against Anthropic competitor OpenAI, maker of ChatGPT, as well as against Meta Platforms, the parent company of Facebook and Instagram.

Anthropic — founded by ex-OpenAI leaders in 2021 — has marketed itself as the more responsible and safety-focused developer of generative AI models that can compose emails, summarize documents and interact with people in a natural way. But the lawsuit filed last year alleged that Anthropic's actions 'have made a mockery of its lofty goals' by tapping into repositories of pirated writings to build its AI product.

Anthropic said Tuesday it was pleased that the judge recognized that AI training was transformative and consistent with 'copyright's purpose in enabling creativity and fostering scientific progress.' Its statement didn't address the piracy claims. The authors' attorneys declined comment.


Winnipeg Free Press
4 days ago
- Business
- Winnipeg Free Press
Judge rules AI company Anthropic didn't break copyright law but must face trial over pirated books
In a test case for the artificial intelligence industry, a federal judge has ruled that AI company Anthropic didn't break the law by training its chatbot Claude on millions of copyrighted books. But the company is still on the hook and could now go to trial over how it acquired those books by downloading them from online 'shadow libraries' of pirated copies.

U.S. District Judge William Alsup of San Francisco said in a ruling filed late Monday that the AI system's distilling from thousands of written works to be able to produce its own passages of text qualified as 'fair use' under U.S. copyright law because it was 'quintessentially transformative.'

'Like any reader aspiring to be a writer, Anthropic's (AI large language models) trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different,' Alsup wrote.

But while dismissing the key copyright infringement claim made by the group of authors who sued the company last year, Alsup also said Anthropic must still go to trial over its alleged theft of their works. 'Anthropic had no entitlement to use pirated copies for its central library,' Alsup wrote.

A trio of writers — Andrea Bartz, Charles Graeber and Kirk Wallace Johnson — alleged in their lawsuit last summer that Anthropic committed 'large-scale theft' by allegedly training its popular chatbot Claude on pirated copies of copyrighted books, and that the company 'seeks to profit from strip-mining the human expression and ingenuity behind each one of those works.'

As the case proceeded over the past year in San Francisco's federal court, documents disclosed in court showed Anthropic's internal concerns about the legality of their use of online repositories of pirated works. So the company later shifted its approach and attempted to purchase copies of digitized books. 'That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft but it may affect the extent of statutory damages,' Alsup wrote.

The ruling could set a precedent for similar lawsuits that have piled up against Anthropic competitor OpenAI, maker of ChatGPT, as well as against Meta Platforms, the parent company of Facebook and Instagram.

Anthropic — founded by ex-OpenAI leaders in 2021 — has marketed itself as the more responsible and safety-focused developer of generative AI models that can compose emails, summarize documents and interact with people in a natural way. But the lawsuit filed last year alleged that Anthropic's actions 'have made a mockery of its lofty goals' by tapping into repositories of pirated writings to build its AI product.