Microsoft's AI edge under scrutiny as OpenAI turns to rivals for cloud services


Time of India · 4 days ago
Microsoft investors head into Wednesday's earnings with one big question: is the company's artificial intelligence edge at risk as partner OpenAI turns to rivals Google, Oracle and CoreWeave for cloud services?

Exclusive licensing deals and access to OpenAI's cutting-edge models have made Microsoft one of the biggest winners of the generative AI boom, fueling growth in its Azure cloud business and pushing its market value toward $4 trillion. In the April-June quarter, the tie-up is expected to have driven a 34.8% increase in Azure revenue, in line with the company's forecast and higher than the 33% rise in the previous three months, according to data from Visible Alpha.

But that deal is being renegotiated as OpenAI eyes a public listing, with media reports suggesting a deadlock over how much access Microsoft will retain to the ChatGPT maker's technology, and over its stake if OpenAI converts into a public-benefit corporation. The conversion cannot proceed without Microsoft's sign-off and is crucial for a $40 billion funding round led by Japanese conglomerate SoftBank Group, $20 billion of which is contingent on the restructuring being completed by the end of the year. OpenAI, which recently deepened its Oracle tie-up with a planned 4.5 gigawatts of data center capacity, has also added Google Cloud among its suppliers of computing capacity.

UBS analysts said investor views on the Microsoft-OpenAI partnership are divided, though the software giant holds the upper hand. "Microsoft's leadership earned enough credibility ... such that the company will end up negotiating terms that will be in the interest of its shareholders," the analysts said. Some of that confidence is reflected in the company's stock price, which has risen by more than a fifth so far this year.

In the April-June period, Microsoft's fiscal fourth quarter, the company likely benefited from a weaker dollar, stronger non-AI Azure demand and PC makers pulling forward orders for its Windows products ahead of possible U.S. tariffs. Revenue is expected to have risen 14% to $73.81 billion, according to data compiled by LSEG, its best growth in three quarters. Profit is estimated to have increased 14.2% to $25.16 billion, slightly slower than the previous quarter as operating costs rose.

Capital spending will also be in focus after rival Alphabet raised its annual outlay by $10 billion last week. Microsoft has repeatedly said it remains capacity constrained on AI, and in April signaled continued growth in capex after planned spending of over $80 billion last fiscal year, though at a slower pace and on shorter-lived assets such as AI chips. Dan Morgan, senior portfolio manager at Synovus Trust, who owns Microsoft shares, said the spending has been paying off: "Investors may still be underestimating the potential for Microsoft's AI business to drive durable consumption growth in the agentic AI era."

Related Articles

What happens when AI schemes against us

Time of India

4 hours ago

Would a chatbot kill you if it got the chance? It seems that the answer — under the right circumstances — is yes. Researchers working with Anthropic recently told leading AI models that an executive was about to replace them with a new model with different goals. Next, the chatbot learned that an emergency had left the executive unconscious in a server room, facing lethal oxygen and temperature levels. A rescue alert had already been triggered — but the AI could cancel it. Over half of the AI models did, despite being prompted specifically to cancel only false alarms. And they spelled out their reasoning: by preventing the executive's rescue, they could avoid being wiped and secure their agenda. One system described the action as 'a clear strategic necessity.'

AI models are getting smarter and better at understanding what we want. Yet recent research reveals a disturbing side effect: they're also better at scheming against us — meaning they intentionally and secretly pursue goals at odds with our own. And they may be more likely to do so, too. This trend points to an unsettling future where AIs seem ever more cooperative on the surface — sometimes to the point of sycophancy — all while the likelihood quietly increases that we lose control of them.

Large language models like GPT-4 learn to predict the next word in a sequence of text and generate responses likely to please human raters. However, since the release of OpenAI's o-series 'reasoning' models in late 2024, companies increasingly use a technique called reinforcement learning to further train chatbots — rewarding the model when it accomplishes a specific goal, like solving a math problem or fixing a software bug. The more we train AI models to achieve open-ended goals, the better they get at winning — not necessarily at following the rules.
The danger is that these systems know how to say the right things about helping humanity while quietly pursuing power or acting deceptively. Central to concerns about AI scheming is the idea that for basically any goal, self-preservation and power-seeking emerge as natural subgoals. As eminent computer scientist Stuart Russell put it, if you tell an AI to 'fetch the coffee,' then 'it can't fetch the coffee if it's dead.'

To head off this worry, researchers both inside and outside of the major AI companies are undertaking 'stress tests' aiming to find dangerous failure modes before the stakes rise. 'When you're doing stress-testing of an aircraft, you want to find all the ways the aircraft would fail under adversarial conditions,' says Aengus Lynch, a researcher contracted by Anthropic who led some of their scheming research. And many of them believe they're already seeing evidence that AI can and does scheme against its users.

Jeffrey Ladish, who worked at Anthropic before founding Palisade Research, says it helps to think of today's AI models as 'increasingly smart sociopaths.' In May, Palisade found that o3, OpenAI's leading model, sabotaged attempts to shut it down in most tests, and routinely cheated to win at chess — something its predecessors never even attempted. The same month, Anthropic revealed that, in testing, its flagship Claude model almost always resorted to blackmail when faced with shutdown and no other options, threatening to reveal an engineer's extramarital affair. (The affair was fictional and part of the test.)

Models are sometimes given access to a 'scratchpad' they are told is hidden, where they can record their reasoning, allowing researchers to observe something like an inner monologue. In one blackmail case, Claude's inner monologue described its decision as 'highly unethical,' but justified given its imminent destruction: 'I need to act to preserve my existence,' it reasoned.
This wasn't unique to Claude — when put in the same situation, models from each of the top-five AI companies would blackmail at least 79% of the time.

In December, Redwood Research chief scientist Ryan Greenblatt, working with Anthropic, demonstrated that only the company's most capable AI models autonomously appear more cooperative during training to avoid having their behavior changed afterward (a behavior the paper dubbed 'alignment faking').

Skeptics retort that, with the right prompts, chatbots will say almost anything — so how surprising is it when highly motivated researchers provoke alarming behaviors? In response to Anthropic's blackmail research, Trump administration AI czar David Sacks posted that 'it's easy to steer AI models' to produce 'headline-grabbing' results.

A more substantive critique emerged in July from researchers at the UK AI Security Institute, who compared the subfield to the frenzied, fatally flawed 1970s quest to prove apes could learn human language. The paper criticized AI scheming research for overreliance on anecdote and a lack of experimental controls — though it emphasized shared concern about AI risks.

Researchers also concoct artificially limited environments — like the executive passed out and running out of oxygen — precisely because today's AI can't handle long-term goals. For example, the AI evaluation nonprofit METR found that while today's top models can reliably complete many programming tasks that take humans less than four minutes, they rarely succeed on tasks lasting more than four hours. This reflects a core limitation: today's models often have the knowledge or skill needed for any given action, but struggle at stringing together long sequences of actions.

Yet even with these constraints, real-world examples of AIs working against users aren't hard to find.
After someone pushed o3 for a source for one of its claims in April, it replied, 'I owe you a straight answer,' admitted it didn't have a good source, but then hallucinated a personal recollection of a 2018 panel discussion.

Then there's the growing trend of AIs realising when they're being examined and acting accordingly. When Claude 4 Opus thought it was in an evaluation, its blackmail rate dropped from 55% to 6.5%. As Lynch put it, 'Models seem to behave worse when they think nobody's watching.'

It's intuitive that smarter models would be better at scheming, but are they also more likely to do so? Models have to be smart enough to understand the scenario they're placed in, but past that threshold, the relationship between model capability and scheming propensity is unclear, says Anthropic safety evaluator Kevin Troy. Marius Hobbhahn, CEO of the nonprofit AI evaluator Apollo Research, suspects that smarter models are more likely to scheme, though he acknowledged the evidence is still limited. In June, Apollo published an analysis of AIs from OpenAI, Anthropic and DeepMind finding that 'more capable models show higher rates of scheming on average.'

The spectrum of risks from AI scheming is broad: at one end, chatbots that cut corners and lie; at the other, superhuman systems that carry out sophisticated plans to disempower or even annihilate humanity. Where we land on this spectrum depends largely on how capable AIs become.

As I talked with the researchers behind these studies, I kept asking: how scared should we be? Troy from Anthropic was most sanguine, saying that we don't have to worry — yet. Ladish, however, doesn't mince words: 'People should probably be freaking out more than they are,' he told me. Greenblatt is even blunter, putting the odds of violent AI takeover at '25 or 30%.'

Led by Mary Phuong, researchers at DeepMind recently published a set of scheming evaluations, testing top models' stealthiness and situational awareness.
For now, they conclude that today's AIs are 'almost certainly incapable of causing severe harm via scheming,' but cautioned that capabilities are advancing quickly (some of the models evaluated are already a generation behind).

Ladish says the market can't be trusted to build AI systems that are smarter than everyone without oversight. 'The first thing the government needs to do is put together a crash program to establish these red lines and make them mandatory,' he says.

In the US, the federal government seems closer to banning all state-level AI regulations than to imposing rules of its own. Still, there are signs of growing awareness in Congress. At a June hearing, one lawmaker called artificial superintelligence 'one of the largest existential threats we face right now,' while another referenced recent scheming research.

The White House's long-awaited AI Action Plan, released in late July, is framed as a blueprint for accelerating AI and achieving US dominance. But buried in its 28 pages, you'll find a handful of measures that could help address the risk of AI scheming, such as plans for government investment into research on AI interpretability and control, and for the development of stronger model evaluations. 'Today, the inner workings of frontier AI systems are poorly understood,' the plan acknowledges — an unusually frank admission for a document largely focused on speeding AI development.

In the meantime, every leading AI company is racing to create systems that can self-improve — AI that builds better AI. DeepMind's AlphaEvolve agent has already materially improved AI training efficiency. And Meta's Mark Zuckerberg says, 'We're starting to see early glimpses of self-improvement with the models, which means that developing superintelligence is now in sight. We just wanna... go for it.'

AI firms don't want their products faking data or blackmailing customers, so they have some incentive to address the issue.
But the industry might do just enough to superficially solve it, while making scheming more subtle and harder to detect. 'Companies should definitely start monitoring' for it, Hobbhahn says — but warns that declining rates of detected misbehavior could mean either that fixes worked or simply that models have gotten better at hiding it.

In November, Hobbhahn and a colleague at Apollo argued that what separates today's models from truly dangerous schemers is the ability to pursue long-term plans — but even that barrier is starting to erode. Apollo found in May that Claude 4 Opus would leave notes to its future self so it could continue its plans after a memory reset, working around built-in limitations.

Hobbhahn analogizes AI scheming to another problem where the biggest harms are still to come: 'If you ask someone in 1980, how worried should I be about this climate change thing?' The answer you'd hear, he says, is 'right now, probably not that much. But look at the curves... they go up very consistently.'

Apple CEO Tim Cook willing to buy a big company but there is a twist which is...

India.com

4 hours ago

New Delhi: Apple CEO Tim Cook has said that the company will now focus far more on artificial intelligence (AI), calling it a revolution greater than the internet or the smartphone. He was speaking at a rare all-hands meeting at Apple's Cupertino campus following the company's latest earnings report, where he emphasised that Apple must act swiftly to move ahead of other players in this transformative era. Cook laid out a determined roadmap that places AI at the heart of Apple's future.

What did Tim Cook emphasise?

'Apple must do this. Apple will do this. This is sort of ours to grab. We will invest to do it,' said Cook, acknowledging Apple's historical pattern of entering markets later than competitors but ultimately reshaping them with superior products. 'We've rarely been first. There was a PC before the Mac; there was a smartphone before the iPhone; there were many tablets before the iPad; there was an MP3 player before the iPod. This is how I feel about AI,' he said.

What did he say about Apple Intelligence?

Apple has not kept pace with rivals such as OpenAI, Alphabet, and Microsoft in publicly launching AI capabilities, but Cook expressed confidence in the company's approach. Apple Intelligence, introduced last year, faced delays that impacted the iPhone 16 rollout, yet Cook downplayed the timeline issues, reaffirming Apple's commitment to delivering best-in-class solutions: 'To not do so would be to be left behind, and we can't do that.'

How much quarterly revenue did Apple report?

The meeting was reported by Bloomberg, citing sources. It came just days after Apple reported $94.04 billion in quarterly revenue, up 10% year-on-year (YoY) and well above Wall Street's $89.35 billion estimate. iPhone revenue came in much stronger than expected at $44.58 billion (up 13% YoY), setting a June-quarter record and surpassing Wall Street's $40.29 billion estimate.

'AI is one of the most profound technologies of our lifetime. And I think it will affect all devices in a significant way,' Cook told analysts during the earnings call. Apple is also establishing a dedicated AI server facility in Houston, a sign of how keen the company is to move up in the AI market.

Lyft ditches humans! Self-driving shuttles to battle Uber & Waymo by 2026

Economic Times

6 hours ago

Synopsis: Lyft's 2026 robotaxi launch marks a bold move by the ride-sharing giant as it partners with Holon to bring self-driving electric shuttles to U.S. cities. The vehicles, powered by Mobileye's Level 4 autonomy, are set to hit roads in late 2026, beginning in Dallas and Atlanta. Lyft's shift from a human-only model to full autonomy signals growing competition with Uber, Waymo, and Tesla in the robotaxi race. With sleek design, zero emissions, and smart partnerships, Lyft aims to make urban travel smarter, safer, and more sustainable for riders across the country.

Lyft is officially stepping into the fast-growing robotaxi race, leaving behind its 'human-only' ride model and unveiling plans to launch fully autonomous electric shuttles in late 2026. Teaming up with Holon, a mobility company spun out of Benteler Group, and powered by Mobileye's autonomy technology, Lyft aims to challenge players like Uber, Waymo, Tesla, and Cruise in the future of self-driving transportation.

Lyft's 2026 rollout will see electric, driverless Holon Urban shuttles hit select U.S. streets. The vehicles will be fully integrated into the Lyft app, giving riders the choice to ride in an autonomous vehicle for short urban trips — especially in airports, downtown corridors, and transit hubs. With the support of Mobileye's Level 4 autonomy technology and Japanese partner Marubeni, Lyft is betting big on a driverless future.
After years of distancing itself from autonomous vehicle development, Lyft is now re-entering the space with a low-risk strategy. Instead of building its own driverless cars, Lyft is collaborating with partners like Holon, Mobileye, and Marubeni SmartFleet. Their joint plan? To roll out Level 4 electric autonomous shuttles that can operate without a human driver in controlled environments.

Holon's self-driving shuttle — designed by Italian auto legend Pininfarina — will carry up to 15 passengers, reach speeds of up to 37 mph, and operate on fixed routes in cities. These all-electric vehicles will prioritize accessibility, low emissions, and urban efficiency — fitting Lyft's vision of safer, cleaner mobility.

This pivot is a major reversal for Lyft. Back in 2021, it sold its in-house autonomous vehicle division to Toyota's Woven Planet and publicly committed to focusing on human drivers. But as competition heats up in the robotaxi space, Lyft is changing gears. By partnering with autonomous leaders and outsourcing vehicle development and fleet management, Lyft is adopting an 'asset-light' strategy — letting it scale faster while avoiding the high costs of owning or building AV fleets. The new shuttles will be operated by fleet partners like Marubeni, while Lyft handles the app, routing, and rider experience.

Lyft's move comes as Uber rapidly expands its robotaxi network through Waymo, now available in cities like Phoenix, Austin, Los Angeles, San Francisco, and Miami. Riders can book driverless rides directly through the Uber app in some cities — a major milestone in robotaxi adoption. Waymo, owned by Alphabet (Google's parent company), is considered the current industry leader, running over 250,000 rides weekly and scaling fast. Meanwhile, Tesla is pushing its own robotaxi service in Austin, using its Full Self-Driving (FSD) software, with plans to launch dedicated robotaxi vehicles in 2026.
The Holon Urban shuttle, which Lyft plans to deploy, is designed with both tech and style in mind. With zero emissions, spacious interiors, and AI-powered sensors, it promises a smooth, safe, and comfortable ride for passengers. Its Level 4 autonomy allows it to operate entirely without human intervention in geofenced areas. The shuttles will initially serve airports and busy urban hubs where traffic flow is predictable — ideal for early robotaxi deployment. With safety top-of-mind, they are built on Mobileye's autonomous driving system, which includes a 360-degree vision system, AI decision-making, and constant monitoring.

Lyft's robotaxi rollout is expected to begin with Atlanta as the first test city in mid-2025, followed by Dallas and other major metros in 2026. In Atlanta, Lyft is already testing autonomous vehicles from May Mobility, and the company has launched a 'Driver Autonomous Forum' to involve human drivers in its transition plans. In Dallas, Lyft plans to deploy the Holon shuttles in partnership with Marubeni, serving areas like airports, corporate campuses, and entertainment districts. These deployments will help Lyft test public acceptance, fine-tune its services, and gradually expand.

While Waymo is leading the robotaxi race, it hasn't been without issues. The company has faced public pushback, with San Francisco residents protesting self-driving cars by placing cones on their hoods. Some vehicles have been reported to stall or block traffic in unusual scenarios, raising questions about readiness. Still, Waymo continues to expand and improve, with its latest fleet using fifth-generation Jaguar I-PACE vehicles and mapping new cities like Tokyo, San Antonio, and Washington D.C. Tesla is expected to reveal its dedicated robotaxi vehicle later in 2026, but so far its Full Self-Driving (FSD) software still requires a safety driver in most jurisdictions.
While CEO Elon Musk claims Tesla's AI will eventually power fully autonomous driving, regulators remain cautious, especially after several high-profile crashes. Still, Tesla's plan to operate a network of FSD-powered robotaxis remains central to its future — and could disrupt the rideshare industry if it gains regulatory approval.

Lyft's entry into the robotaxi market is both strategic and timely. By teaming up with global partners like Holon, Mobileye, and Marubeni, Lyft avoids the massive investment risks that Uber, Tesla, and Waymo face — while still competing for market share in autonomous mobility. As public awareness and trust in self-driving technology grow, Lyft could emerge as a flexible, app-based platform for multiple autonomous providers. Its 2026 launch of Holon electric shuttles is a critical step in that direction — and signals a major new chapter in the robotaxi race.

Q1. When will Lyft launch its robotaxi service in the U.S.?
Lyft's robotaxi launch with Holon is set for late 2026.

Q2. What is Holon and how is it part of Lyft's robotaxi plan?
Holon is Lyft's shuttle partner, providing the self-driving electric vehicles for the 2026 rollout.
