
DeepSeek can undercut larger rivals like ChatGPT, ace investor Mary Meeker warns
Artificial intelligence (AI) forerunners like OpenAI could soon face serious competition from cheaper rivals such as China's DeepSeek, according to renowned Silicon Valley analyst and investor Mary Meeker.

Meeker, an early investor in companies like Meta, Spotify, and Airbnb, told the Financial Times that AI will create 'multiple companies worth $10 trillion', and that not all of them will be based in North America. 'The wealth creation will be extraordinary. We've never had a five-billion-user market that was this easy to reach,' she added.

In a recent report, Meeker and her co-authors point out that the US companies leading the development of large language models (LLMs), such as OpenAI with its GPT models and Google with Gemini, now face rising training costs even as competition from players like DeepSeek intensifies.

'The business model is in flux,' Meeker wrote. 'Smaller, cheaper models tailored for specific tasks are emerging, challenging the idea that one large, general-purpose LLM can do it all.'

While AI companies have enjoyed rising revenues and stock prices, they face growing threats. New, more powerful chips and improved algorithms are lowering the cost of running AI models, helping competitors like DeepSeek launch models that are more affordable and efficient.

Meeker goes on to underscore that, in the short term, these AI businesses are starting to look like commodity operations that burn through venture capital at a rapid pace. Despite the advances in the space, training the most advanced AI models is still extremely expensive: costs have increased 2,400 times in the past eight years, making it nearly impossible for smaller players to compete. Only a few companies can afford to keep up, and even those lack a clear path to profitability.

While lower prices and more model options benefit consumers, they create a tough environment for startups. To survive, these companies need deep funding and patient investors. Meeker compares their situation to that of Uber, Amazon, and Tesla, which all spent heavily for years before turning a profit.

ET reported earlier this week that several Indian startups may have to tap external funding to scale up their GenAI-based applications as AI companies such as OpenAI and Anthropic pause steep price cuts on their generative AI models.

Meeker rose to fame during her time at Morgan Stanley with bets like Google and Apple, earning the moniker "queen of the internet". She joined venture capital firm Kleiner Perkins in 2010 and later co-founded her own firm, Bond, in 2019.

Related Articles


Mint
21 minutes ago
Short-seller Viceroy accuses Vedanta promoters of hidden stake via welfare trust
The entity under scrutiny is PTC Cables Pvt. Ltd (PTCC), which holds a 1.91% stake in Vedanta Ltd, a company with a market capitalization of ₹1.75 trillion, according to BSE data. PTCC is owned by Bhadram Janhit Shalika Trust (BJST), which Viceroy alleges is controlled by the Agarwal family, founders of the Vedanta Group.

According to Viceroy, PTCC received ₹1,500 crore in dividend income from Vedanta over the past five years, and the capital was "upcycled" to promoter-linked entities. "PTCC exists for one purpose: to quietly recycle Vedanta's cash into promoter-controlled vehicles while maintaining the illusion of independence," the Viceroy report said.

Vedanta denied the allegations. "These assertions are baseless," a spokesperson for the company said, adding that the company was compliant with the disclosure norms as stipulated by the Securities and Exchange Board of India (Sebi) and the Companies Act, 2013. "Neither BJST nor PTCC are part of the promoter group as defined under applicable regulations, and their shareholding has been transparently disclosed in public filings," the spokesperson added.

A day after Viceroy released its report, JP Morgan had issued a note, telling investors not to get distracted by the allegations on corporate governance and financial management, and that the global brokerage had an Overweight rating on both Vedanta Resources Ltd and Vedanta Ltd.

Viceroy's claims are based on publicly available records. In a 2009 income-tax case, BJST's correspondence address was listed as Anil Agarwal's personal residence in Mumbai. In another case, the trust's address was that of Todarwal & Todarwal, a firm linked to Arun Todarwal, who currently serves as a director on the board of Sterlite Power Grid Ventures, a Vedanta subsidiary. Todarwal has also previously served as a director on the boards of Hindustan Zinc Ltd, Sterlite Technologies, MALCO, and BALCO.

The report acknowledged that no conclusive documentation of current control was available, noting that Indian trusts are subject to less stringent disclosure obligations compared to companies. Viceroy also cited unnamed former Vedanta employees who claimed that the Agarwal family's control over PTCC was an "open secret" within the company.

In addition to alleging hidden promoter ownership, the report flagged governance concerns at PTCC. The company was incorporated in 1993 with the Agarwal family as shareholders and was transferred to BJST in 2017. Its current directors are Todarwal and Kannan Ramamirthan. Ramamirthan is an independent director of Hindustan Zinc, Vedanta's most profitable subsidiary. He has also previously served on the boards of other Vedanta group firms, including Talwandi Sabo Power Plant, BALCO, Sterlite Energy, and Sterlite Interlinks.

Vedanta has not disclosed in its filings that PTCC, classified as a public shareholder, has directors with long-standing associations with the group. The company did not respond to a specific query on this issue. Calls and emails to Todarwal for a comment did not elicit a response. Mint could not reach Ramamirthan for a comment.

Concerns about the independence of BJST and PTCC are not new. In a 2020 note, proxy advisory firm Stakeholder Empowerment Services (SES) had said that BJST was previously known as the SIL Employee Welfare Trust and was linked to Sterlite Industries Ltd, which was later merged into Vedanta. The trust was subsequently renamed as BJST. "It is not clear as to who presently controls the BJST," SES had written.
However, if the trust was under the control of Vedanta, then PTCC should be classified as a promoter shareholder, it said.

Viceroy's first report on the Vedanta Group was published on 10 July, a day before Vedanta Ltd's annual general meeting (AGM). The initial report triggered a drop in the company's stock, though shares later recovered, and at the AGM shareholders reposed their faith in the company. Since the report's release, Vedanta shares have gained 2% to close at ₹449.75 on Tuesday.

Viceroy has disclosed a short position in the bonds of Vedanta Resources, the unlisted holding company of the group, but said it has no exposure to Vedanta Ltd or any other listed Vedanta entities in India.


Economic Times
25 minutes ago
Rogue bots? AI firms must pay up
When Elon Musk's xAI was forced to apologise this week after its Grok chatbot spewed antisemitic content and white nationalist talking points, the response felt depressingly familiar: suspend the service, issue an apology and promise to do better. Rinse and repeat.

This isn't the first time we've seen this playbook. Microsoft's Tay chatbot disaster in 2016 followed a similar pattern. The fact that we're here again, nearly a decade later, suggests the AI industry has learnt remarkably little from its mistakes. But the world is no longer willing to accept 'sorry' as sufficient. This is because AI has become a force multiplier for content generation and dissemination, and the time-to-impact has shrunk. Thus, liability and punitive actions are being discussed.

The Grok incident revealed a troubling aspect of how AI companies approach accountability. According to xAI, the problematic behaviour emerged after it tweaked its system to allow more 'politically incorrect' responses - a decision that seems reckless. When the inevitable happened, the company blamed deprecated code that should have been removed. If you're building systems capable of reaching millions of users, shouldn't you know what code is running in production?

The real problem isn't technical - it's philosophical. Too many AI companies treat bias and harmful content as unfortunate side effects to be addressed after deployment, rather than fundamental risks to be prevented beforehand. This reactive approach worked when the stakes were lower, but AI systems now operate at unprecedented scale and influence. When a chatbot generates hate speech, it's not merely embarrassing - it's dangerous, legitimising and amplifying extremist ideologies to vast audiences.

The legal landscape is shifting rapidly, and AI companies ignoring these changes do so at their peril. The EU's AI Act, which came into force in February, represents a shift from reactive regulation to proactive governance. Companies can no longer apologise their way out of AI failures - they must demonstrate they have implemented robust safeguards before deployment. California's AB 316, introduced last January, takes an even more direct approach by prohibiting the 'the AI did it' defence in civil cases. This legislation recognises what should be obvious: companies that develop and deploy AI systems bear responsibility for their outputs, regardless of whether those outputs were 'intended'.

India's approach may prove more punitive than the EU's regulatory framework and more immediate than the US litigation-based system, focusing on swift enforcement of existing criminal laws rather than waiting for new AI-specific legislation. India doesn't yet have AI-specific legislation, but if Grok's antisemitic incident had occurred with Indian users, the likely response would have included immediate blocking of the AI service, a criminal case against xAI under Section 153A of the Indian Penal Code, and a demand for content removal from the X platform.

The Grok incident may mark a turning point. Regulators worldwide are demanding proactive measures rather than reactive damage control, and courts are increasingly willing to hold companies directly liable for their systems' outputs. This shift is long overdue. AI systems aren't just software - they're powerful tools that shape public discourse, influence decision-making and can cause real-world harm.
The companies that build these systems must be held to higher standards than traditional software developers, with corresponding legal and ethical obligations. The question facing the AI industry isn't whether to embrace this new reality - it's whether to do so voluntarily or have it imposed by regulators and courts. Companies that continue to rely on the old playbook of post-incident apologies will find themselves increasingly isolated in a world demanding accountability.

The AI industry's true maturity will show not in flashy demos or sky-high valuations, but in its commitment to safety over speed, rigour over shortcuts, and real accountability over empty apologies. In this game, 'sorry' won't cut it - only responsibility will.

The writer is a commentator on digital policy issues.

(Disclaimer: The opinions expressed in this column are those of the writer. The facts and opinions expressed here do not reflect the views of this publication.)


Mint
31 minutes ago
Google's AI agent ‘Big Sleep' foils cyberattack in groundbreaking first, says Sundar Pichai
In a major breakthrough for cybersecurity, Google CEO Sundar Pichai announced on Tuesday (July 15) that the company's AI agent, Big Sleep, successfully identified and thwarted a cyber exploit before it could be deployed, a first-of-its-kind achievement for artificial intelligence in threat prevention.

'New from our security teams: Our AI agent Big Sleep helped us detect and foil an imminent exploit. We believe this is a first for an AI agent - definitely not the last - giving cybersecurity defenders new tools to stop threats before they're widespread,' Pichai posted on X (formerly Twitter).

A new era in cybersecurity?

This marks a potential inflection point in cybersecurity, as AI shifts from passive defense (identifying threats post-breach) to proactive interdiction.

What's next for 'Big Sleep'

Google has not disclosed when Big Sleep was deployed or how long it has been operational. However, Pichai's post suggests this is just the beginning of more AI-driven defense tools that will be used across Google's ecosystem and offered to cloud clients.

This incident also raises questions about how governments, enterprises, and cloud service providers will collaborate with AI to stay ahead of increasingly sophisticated threat actors. As cyberattacks grow more frequent and damaging, the use of advanced AI like Big Sleep may become standard across global IT defenses.