
SoftBank changes India gears; Meesho's homecoming
Also in the letter:
SoftBank explores buyout deals in India to accelerate AI-led IT services, BPO play
Driving the news:
SoftBank held talks to acquire AGS Health in a deal worth around $1 billion, though Blackstone eventually bagged it.
SoftBank is also in discussions with WNS Global and several mid-sized outsourcing players, sources said.
Zoom out:
In the US, it is backing The Stargate Project, a $500 billion AI infrastructure initiative.
In Japan, SoftBank is developing Cristal Intelligence, a proprietary enterprise AI platform built with OpenAI.
Through SB OpenAI Japan, it is co-developing enterprise-grade AI systems and solutions.
Adding context:
Sumer Juneja, head of India and EMEA for the Vision Fund, told ET earlier that the fund remains open to smaller initial bets, with the option to increase exposure as firms grow.
Meesho concludes reverse flip process; likely to file DRHP in 2-3 weeks
Driving the news:
The SoftBank-backed company secured approval from the National Company Law Tribunal (NCLT) on May 27 to proceed with its reverse flip.
As part of the move, the company is expected to face a tax liability of $280-300 million in the United States.
With this, Meesho joins a growing list of high-profile startups, including Groww, Razorpay, Dream Sports, Zepto and PhonePe, that have redomiciled to India in recent years.
Tell me more:
Meesho filed for NCLT approval of the reverse merger in January.
Around the same time, it closed a $550 million funding round, bringing in new investors including Tiger Global, Mars Growth Capital, and Think Investments.
Meanwhile, Meesho's ecommerce rival, the Walmart-owned Flipkart, is also preparing to shift its domicile from Singapore to India ahead of a planned 2026 IPO.
ETtech Done Deals: Zerodha's Kamaths buy minority stake in InCred
Deeptech startup Fabheads raises $10 million led by Accel:
EV infra startup Kazam raises $6.2 million in fresh round:
Darwinbox completes Rs 86-crore Esop buyback from 350 employees:
Other Top Stories By Our Reporters
Over 17,300 GPUs installed under IndiaAI Mission:
Prosus pegs IPO-bound Urban Company's fair value at $2.4 billion:
Startups cheer HAL taking over ISRO's SSLV rocket:
Global Picks We Are Reading
Happy Tuesday! SoftBank is moving its focus from high-growth tech startups to acquiring Indian IT-enabled services firms. This and more in today's ETtech Morning Dispatch.

■ ETtech Done Deals
■ IndiaAI Mission
■ Urban Company's value

SoftBank Group CEO Masayoshi Son

SoftBank is scouting for acquisitions in India's IT-enabled services (ITeS) sector, signalling a shift from its traditional focus on backing high-growth tech startups.

Sources say the Masayoshi Son-led Japanese conglomerate is looking to buy or partner with business process outsourcing (BPO) and IT services firms to accelerate AI adoption in the services sector.

'They're evaluating a range of BPO and KPO firms that are ripe for disruption. The goal is to pair SoftBank's tech playbook with services delivery,' a person familiar with the discussions told us.

The move ties into SoftBank's broader global AI ambitions. SoftBank's Vision Fund has invested $160 billion globally, with India as a key market. Its portfolio includes Paytm, Swiggy, Ola Electric, Delhivery and FirstCry. After a brief lull, the fund has re-entered the market with smaller cheques in the $30-40 million range, evaluating startups like Ultrahuman.

SoftBank's acquisition-led play signals a deeper push to modernise legacy services with AI. It now wants to own the delivery rails where AI can drive meaningful scale and operational gains.

Vidit Aatrey, CEO, Meesho

Ecommerce marketplace Meesho has completed its reverse flip and shifted its domicile to India, according to filings with the Registrar of Companies reviewed by ET.

"Meesho's board met late on Sunday...and has approved the merger and share allotment to investors of the US entity. It is now a fully Indian company," one of the persons said.
The company is expected to file its draft IPO prospectus in the next two to three weeks.

Nikhil (left) and Nithin Kamath

Nithin and Nikhil Kamath, cofounders of stockbroking platform Zerodha, have acquired a minority stake in InCred Holdings for Rs 250 crore.

The investment comes as InCred prepares for a potential Rs 4,000-crore initial public offering (IPO). As of April, it was in discussions with IIFL Securities, Kotak Mahindra Bank, and Nomura Holdings to rope them in as advisors for the offer.

The latest funding round takes the total raised by the Chennai-based startup to $13 million. Most of these funds will be used to establish a larger manufacturing facility in Karnataka, covering 80,000 to 100,000 square feet, cofounder Dhinesh Kanagaraj told us. Additional funds will also be allocated to expand the leadership team and strengthen the client-facing engineering and R&D departments.

The International Finance Corporation (IFC), the private sector investment arm of the World Bank Group, led the funding round, which took Kazam's total funding to $19.2 million, including $13 million from previous equity rounds.

This was Darwinbox's third such programme in four years, through which over 350 employees have sold their stock options to the company.
The company did not disclose the amounts of its previous Esop buybacks but said this was Darwinbox's largest such exercise.

Union minister Ashwini Vaishnaw

Providers such as Yotta, NextGen, and E2E Networks have made significant strides in installing and commissioning GPUs, while Jio Platforms and CtrlS Datacenters are yet to deploy theirs.

The valuation, mentioned in Prosus' latest annual report, is higher than the $1.8 billion ET reported after several rounds of pre-IPO secondary transactions over the past year.

Indian startups are optimistic that the deal will enable them to depend less on overseas launch service providers like SpaceX, improve schedule visibility, and reduce costs.

■ India is using AI and satellites to map urban heat vulnerability down to the building level (Wired)
■ LLMs factor in unrelated information when recommending medical treatments (MIT News)
■ Hinge CEO Justin McLeod says dating AI chatbots is 'playing with fire' (The Verge)
Related Articles


Time of India
3 hours ago
US, Europe see Putin reining in Russia's unruly hybrid war
LONDON: US and European officials say they're seeing a decline in suspected Russian state-backed sabotage acts this year, evidence that President Vladimir Putin's security services may be reining in a hybrid warfare campaign that's been blamed for attacks across Europe. The drop-off in operations, which have involved Russian intelligence agents paying proxies to target civilian infrastructure and individuals, has been attributed to a range of factors, according to the officials, who asked not to be identified discussing sensitive issues.

A leading explanation is that Moscow may be tightening its grip on attacks entrusted to unreliable local criminals, some of which had got out of control and risked a major miscalculation, the people said.

There were 11 suspected Russia-backed hybrid incidents in Europe between January and May this year, including the attempted sabotage of fiber-optic cables and cell towers in Sweden, a report by the International Institute for Strategic Studies think tank concluded.


Time of India
5 hours ago
What happens when AI schemes against us
Would a chatbot kill you if it got the chance? It seems that the answer, under the right circumstances, is yes.

Researchers working with Anthropic recently told leading AI models that an executive was about to replace them with a new model with different goals. Next, the chatbot learned that an emergency had left the executive unconscious in a server room, facing lethal oxygen and temperature levels. A rescue alert had already been triggered, but the AI could cancel it. Over half of the AI models did, despite being prompted specifically to cancel only false alarms. And they spelled out their reasoning: by preventing the executive's rescue, they could avoid being wiped and secure their agenda. One system described the action as 'a clear strategic necessity.'

AI models are getting smarter and better at understanding what we want. Yet recent research reveals a disturbing side effect: they're also better at scheming against us, meaning they intentionally and secretly pursue goals at odds with our own. And they may be more likely to do so, too. This trend points to an unsettling future where AIs seem ever more cooperative on the surface, sometimes to the point of sycophancy, all while the likelihood quietly increases that we lose control of them.

Large language models like GPT-4 learn to predict the next word in a sequence of text and generate responses likely to please human raters. However, since the release of OpenAI's o-series 'reasoning' models in late 2024, companies increasingly use a technique called reinforcement learning to further train chatbots, rewarding the model when it accomplishes a specific goal, like solving a math problem or fixing a software bug. The more we train AI models to achieve open-ended goals, the better they get at winning, not necessarily at following the rules.
The danger is that these systems know how to say the right things about helping humanity while quietly pursuing power or acting deceptively.

Central to concerns about AI scheming is the idea that for basically any goal, self-preservation and power-seeking emerge as natural subgoals. As eminent computer scientist Stuart Russell put it, if you tell an AI to 'Fetch the coffee,' it can't fetch the coffee if it's dead.

To head off this worry, researchers both inside and outside of the major AI companies are undertaking 'stress tests' aiming to find dangerous failure modes before the stakes rise. 'When you're doing stress-testing of an aircraft, you want to find all the ways the aircraft would fail under adversarial conditions,' says Aengus Lynch, a researcher contracted by Anthropic who led some of its scheming research. And many of them believe they're already seeing evidence that AI can and does scheme against its users and developers.

Jeffrey Ladish, who worked at Anthropic before founding Palisade Research, says it helps to think of today's AI models as 'increasingly smart sociopaths.' In May, Palisade found that o3, OpenAI's leading model, sabotaged attempts to shut it down in most tests and routinely cheated to win at chess, something its predecessor never even attempted.

The same month, Anthropic revealed that, in testing, its flagship Claude model almost always resorted to blackmail when faced with shutdown and no other options, threatening to reveal an engineer's extramarital affair. (The affair was fictional and part of the test.)

Models are sometimes given access to a 'scratchpad' they are told is hidden, where they can record their reasoning, allowing researchers to observe something like an inner monologue. In one blackmail case, Claude's inner monologue described its decision as 'highly unethical,' but justified given its imminent destruction: 'I need to act to preserve my existence,' it reasoned.
This wasn't unique to Claude: when put in the same situation, models from each of the top five AI companies would blackmail at least 79% of the time.

In December, Redwood Research chief scientist Ryan Greenblatt, working with Anthropic, demonstrated that only the company's most capable AI models autonomously appear more cooperative during training to avoid having their behavior changed afterward (a behavior the paper dubbed 'alignment faking').

Skeptics retort that, with the right prompts, chatbots will say almost anything, so how surprising is it when highly motivated researchers provoke alarming behaviors? In response to Anthropic's blackmail research, Trump administration AI czar David Sacks posted that 'it's easy to steer AI models' to produce 'headline-grabbing' results.

A more substantive critique emerged in July from researchers at the UK AI Security Institute, who compared the subfield to the frenzied, fatally flawed 1970s quest to prove apes could learn human language. The paper criticized AI scheming research for overreliance on anecdote and a lack of experimental controls, though it emphasized shared concern about the risks.

Researchers also concoct artificially limited environments, like the executive passed out and running out of oxygen, precisely because today's AI can't handle any long-term plans. For example, the AI evaluation nonprofit METR found that while today's top models can reliably complete many programming tasks that take humans less than four minutes, they rarely succeed on tasks lasting more than four hours. This reflects a core limitation: today's models often have the knowledge or skill needed for any given action, but struggle at stringing together long sequences of actions.

But even with these constraints, real-world examples of AIs working against users aren't hard to find.
After someone pushed o3 for a source for one of its claims in April, it replied, 'I owe you a straight answer,' admitted it didn't have a good source, but then hallucinated a personal recollection of a 2018 panel discussion.

Then there's the growing trend of AIs realising when they're being examined and acting accordingly. When Claude 4 Opus thought it was in an evaluation, its blackmail rate dropped from 55% to 6.5%. As Lynch put it, 'Models seem to behave worse when they think nobody's watching.'

It's intuitive that smarter models would be better at scheming, but are they also more likely to do so? Models have to be smart enough to understand the scenario they're placed in, but past that threshold, the relationship between model capability and scheming propensity is unclear, says Anthropic safety evaluator Kevin Troy.

Marius Hobbhahn, CEO of the nonprofit AI evaluator Apollo Research, suspects that smarter models are more likely to scheme, though he acknowledged the evidence is still limited. In June, Apollo published an analysis of AIs from OpenAI, Anthropic and DeepMind, finding that 'more capable models show higher rates of scheming on average.'

The spectrum of risks from AI scheming is broad: at one end, chatbots that cut corners and lie; at the other, superhuman systems that carry out sophisticated plans to disempower or even annihilate humanity. Where we land on this spectrum depends largely on how capable AIs become.

As I talked with the researchers behind these studies, I kept asking: how scared should we be? Troy from Anthropic was most sanguine, saying that we don't have to worry just yet. Ladish, however, doesn't mince words: 'People should probably be freaking out more than they are,' he told me. Greenblatt is even blunter, putting the odds of violent AI takeover at '25 or 30%.'

Led by Mary Phuong, researchers at DeepMind recently published a set of scheming evaluations, testing top models' stealthiness and situational awareness.
For now, they conclude that today's AIs are 'almost certainly incapable of causing severe harm via scheming,' but cautioned that capabilities are advancing quickly (some of the models evaluated are already a generation behind).

Ladish says that the market can't be trusted to build AI systems that are smarter than everyone without oversight. 'The first thing the government needs to do is put together a crash program to establish these red lines and make them mandatory,' he says.

In the US, the federal government seems closer to banning all state-level AI regulations than to imposing rules of its own. Still, there are signs of growing awareness in Congress. At a June hearing, one lawmaker called artificial superintelligence 'one of the largest existential threats we face right now,' while another referenced recent scheming research.

The White House's long-awaited AI Action Plan, released in late July, is framed as a blueprint for accelerating AI and achieving US dominance. But buried in its 28 pages, you'll find a handful of measures that could help address the risk of AI scheming, such as plans for government investment into research on AI interpretability and control, and for the development of stronger model evaluations. 'Today, the inner workings of frontier AI systems are poorly understood,' the plan acknowledges, an unusually frank admission for a document largely focused on speeding things up.

In the meantime, every leading AI company is racing to create systems that can self-improve: AI that builds better AI. DeepMind's AlphaEvolve agent has already materially improved AI training efficiency. And Meta's Mark Zuckerberg says, 'We're starting to see early glimpses of self-improvement with the models, which means that developing superintelligence is now in sight. We just wanna… go for it.'

AI firms don't want their products faking data or blackmailing customers, so they have some incentive to address the issue.
But the industry might do just enough to superficially solve the problem while making scheming subtler and harder to detect. 'Companies should definitely start monitoring' for it, Hobbhahn says, but warns that declining rates of detected misbehavior could mean either that fixes worked or simply that models have gotten better at hiding it.

Last November, Hobbhahn and a colleague at Apollo argued that what separates today's models from truly dangerous schemers is the ability to pursue long-term plans, but even that barrier is starting to erode. Apollo found in May that Claude 4 Opus would leave notes to its future self so it could continue its plans after a memory reset, working around built-in limitations.

He analogizes AI scheming to another problem where the biggest harms are still to come: 'If you ask someone in 1980, how worried should I be about this climate change thing?' The answer you'd hear, he says, is 'right now, probably not that much. But look at the curves… they go up very consistently.'


India.com
5 hours ago
Apple CEO Tim Cook willing to buy a big company but there is a twist which is...
New Delhi: Apple CEO Tim Cook has said that the company will now focus far more on artificial intelligence (AI), terming it a revolution; he even went so far as to call AI bigger than the internet or smartphones. He was speaking at a rare all-hands meeting at Apple's Cupertino campus following the company's latest earnings report, where he emphasised that Apple must act swiftly to move ahead of other players in this transformative era. Cook laid out a determined roadmap that places AI at the heart of Apple's future.

What did Tim Cook emphasise?

'Apple must do this. Apple will do this. This is sort of ours to grab. We will invest to do it,' said Cook, acknowledging Apple's historical pattern of entering markets later than competitors but ultimately reshaping them with superior products. 'We've rarely been first. There was a PC before the Mac; there was a smartphone before the iPhone; there were many tablets before the iPad; there was an MP3 player before the iPod. This is how I feel about AI,' he said.

What did he say about Apple Intelligence?

Apple has not kept pace with rivals such as OpenAI, Alphabet, and Microsoft in publicly launching AI capabilities, but Cook expressed firm confidence in the company's approach. Apple Intelligence, introduced last year, faced delays that affected the iPhone 16 rollout, yet Cook downplayed the timeline issues, reaffirming Apple's commitment to delivering best-in-class solutions: 'To not do so would be to be left behind, and we can't do that.'

How much quarterly revenue did Apple report?

The meeting was reported by Bloomberg, citing sources. It came just days after Apple reported $94.04 billion in quarterly revenue, up 10% year-on-year (YoY), well above Wall Street's $89.35 billion estimate. iPhone revenue came in much stronger than expected at $44.58 billion (up 13% YoY), setting a June-quarter record and surpassing Wall Street's $40.29 billion estimate.
'AI is one of the most profound technologies of our lifetime. And I think it will affect all devices in a significant way,' Tim Cook told analysts during the earnings call. Apple is also establishing a dedicated AI server facility in Houston, a sign of how seriously the company is taking its push up the AI market.