Meta's year of bold ‘superintelligence’ bets unlikely to pump profits

Indian Express · 4 days ago
It's crunch time for Mark Zuckerberg as he pulls out all the stops to stay relevant in Silicon Valley's intensifying advanced artificial intelligence race.
The Meta CEO has sparked a billion-dollar talent war, aggressively poaching researchers from rivals including OpenAI. But as Meta's spending rises, so does the pressure it faces to deliver returns.
For the second quarter, though, Wall Street is bracing for disappointment: the company is expected to report its slowest profit growth in two years on Wednesday, with profit rising 11.5% to $15.01 billion as operating costs jump nearly 9%.
Revenue, too, likely grew at its slowest pace in seven quarters in that period, up an expected 14.7% to $44.80 billion, according to the average analyst estimate from LSEG.
While Zuckerberg is no stranger to high-stakes pursuits – Meta's augmented-reality unit has burned more than $60 billion since 2020 – his latest push comes with added urgency because of the underwhelming performance of the company's Llama 4 large language model.
He recently pledged hundreds of billions of dollars to build massive AI data centers and shelled out $14.3 billion for a stake in startup Scale AI, poaching its 28-year-old billionaire CEO Alexandr Wang, even as Meta continued to lay off people.
Investors have largely backed Zuckerberg's frenzied pursuit of superintelligence – a hypothetical concept where AI surpasses human intelligence in every possible way – pushing the company's stock up more than a fifth so far this year.
But they will watch if Meta further increases its capital expenditure for the year after boosting it in April. Alphabet also upped the ante last week, increasing its annual capex forecast by 13% to $85 billion due to surging demand for its AI-powered Google Cloud services.
'We view rising capex as positive given… Meta can become a one-stop shop for many marketing departments,' said Ben Barringer, head of technology research at Quilter Cheviot, which holds Meta shares.
Lagging behind efforts from Alphabet's Google DeepMind and OpenAI, Meta launched a Superintelligence Lab last month that will work in parallel with Meta AI, the company's established AI research division led by deep learning pioneer Yann LeCun.
To differentiate its efforts, Zuckerberg has promised to release Meta's AI work as open source and touted that superintelligence can become a mainstream consumer product through devices like Ray-Ban Meta smartglasses, rather than a purely enterprise-focused technology.
The strategy plays to Meta's strengths, analysts say, pointing to its more than 3-billion strong social media user base and engagement gains in recent years, driven by AI-enhanced content targeting.
Still, Meta's mainstay advertising market is under threat from advertisers pulling back spending in the face of President Donald Trump's tariffs, and tough competition from Chinese-owned TikTok, whose U.S. ban now seems unlikely.
Some advertisers may have leaned on proven platforms such as Meta amid the uncertainty, but that will not shield the company from questions over its superintelligence ambitions and how they fit into its broader business strategy, said Minda Smiley, senior analyst at eMarketer.
'While Meta has seen massive gains from incorporating AI into its ad platform and algorithms, its attempts to directly compete with the likes of OpenAI are proving to be more challenging while costing it billions of dollars.'
Questions remain about when superintelligence can be achieved, a timeline Zuckerberg admits is uncertain. Meta's LeCun is also a known skeptic of the large language model path to superintelligence.
'Meta's AI strategy today is more cohesive than in 2023, but there's still a sense the company is still searching for direction,' MoffettNathanson analysts said.

Related Articles

Saint, Satan, Sam: Chat about the ChatGPT Man

Indian Express · 8 minutes ago

For many people, AI (artificial intelligence) is almost synonymous with ChatGPT, a chatbot developed by OpenAI, which is the closest thing tech has had to a magic genie. You just tell ChatGPT what you want in information terms and it serves it up – from writing elaborate essays to advising you on how to clear up your table to even serving up images based on your descriptions. Such is its popularity that at one stage it even overtook the likes of Instagram and TikTok to become the most downloaded app in the world. While almost every major tech brand has its own AI tool (even the mighty Apple is working on one), AI for many still remains ChatGPT.

The man behind this phenomenon is Samuel Harris 'Sam' Altman, the 40-year-old CEO of OpenAI, and perhaps the most polarising figure in tech since Steve Jobs. To many, he is a visionary who is changing the world and taking humanity to a better place. To many others, he is a cunning, manipulative person who uses his marketing skills to raise money and is actually destroying the planet. The truth might be somewhere between those two extremes.

By some literary coincidence, two books have recently been released on Sam Altman, and both are shooting up the bestseller charts. Both are superbly written and researched (based on interviews with hundreds of people), and while they start at almost the same point, they not surprisingly come to rather different conclusions about the man and his work.

Those who tend to see Altman as a well-meaning, if occasionally odd, genius will love Keach Hagey's The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future. Hagey is a Wall Street Journal reporter, and while she does not put a halo around Altman, her take on the OpenAI CEO reflects the title of the book – she sees Altman as a visionary who is trying to change the world. The fact that Altman collaborated on the book (although he is believed to have thought he was too young for a biography) might have something to do with this, for the book does articulate Altman's vision on a variety of subjects, but most of all, on AI and where it is headed.

Although it begins with the events leading up to Altman's being dramatically sacked as the CEO of OpenAI in November 2023, and his equally dramatic reinstatement within days, Hagey's book is a classic biography. It walks us through Altman's childhood, his getting interested in coding and then his decision to drop out of Stanford, before getting into tech CEO mode by first founding social media app Loopt and then joining tech incubator Y Combinator (which was behind the likes of Stripe, Airbnb and Dropbox) after meeting its co-founder Paul Graham, who is believed to have had a profound impact on him (Hagey calls him 'his mentor').

Altman also gets in touch with a young billionaire who is very interested in AI and is worried that Google will come out with an AI tool that could ruin the world. Elon Musk in this book is very different from the eccentric character we have seen in the Trump administration, and is persuaded by Altman to invest in a 'Manhattan Project for AI,' which would be open source and ensure that AI is only used for human good. Musk even proposes a name for it: OpenAI. And that is when things get really interesting. The similarities with Jobs are uncanny.
Altman too is deeply influenced by his parents (his father was known for his kind and generous nature), and like Jobs, although he is a geek, Altman's rise in Silicon Valley owes more to his ability to network and communicate than to his tech knowledge. In perhaps the most succinct summary of Altman one can find, Hagey writes: 'Altman was not actually writing the code. He was, instead, the visionary, the evangelizer, and the dealmaker; in the nineteenth century, he would have been called 'the promoter.' His speciality, honed over years of advising and then running…Y Combinator, was to take the nearly impossible, convince others that it was in fact possible, and then raise so much money that it actually became possible.'

But his ability to sell himself as a visionary and raise funds for causes has also led to Altman being seen as a person who moulded himself to the needs of his audience. That, in turn, has led to him being seen as someone who indulges in doublespeak and exploits people for his own advantage (an accusation that was levelled at Jobs as well) – Musk ends up suing Altman and OpenAI for allegedly abandoning the non-profit mission it was set up with. While Hagey never accuses Altman of being selfish, it is clear that the board at OpenAI lost patience with what OpenAI co-founder Ilya Sutskever refers to as 'duplicity and calamitous aversion to conflict.' It eventually leads to his being sacked by the OpenAI board for not being 'consistently candid in his communications with the board.' Of course, his sacking triggered a near mutiny in OpenAI, with employees threatening to leave, which in turn led to his being reinstated within a few days, and all being seemingly forgotten, if not forgiven.

Hagey's book is a compelling read on Altman, his obsession with human progress (he has three hand axes used by hominids in his house), his relationships with those he came in touch with, and Silicon Valley politics in general. At about 380 pages, The Optimist is easily the single best book on Altman you can read, and Hagey's brisk narration keeps the pages turning.

A much more cynical perception of Altman and OpenAI comes in Karen Hao's much talked-about Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Currently a freelancer who writes for The Atlantic, Hao had previously worked at the Wall Street Journal and had covered OpenAI as far back as 2020, before ChatGPT made it a household name. As its name indicates, Hao's book is as much about Altman as it is about OpenAI, and the place both play in the artificial intelligence revolution that is currently enveloping the world. At close to 500 pages, it is a bigger book than Hagey's, but it reads almost like a thriller, and it begins with a bang: 'On Friday, November 17, 2023, around noon Pacific time, Sam Altman, CEO of OpenAI, Silicon Valley's golden boy, avatar of the generative AI revolution, logged on to a Google Meet to see four of his five board members staring at him. From his video square, board member Ilya Sutskever, OpenAI's chief scientist, was brief: Altman was being fired.'

While Hagey has focused more on Altman as a person, Hao looks at him as part of OpenAI, and the picture that emerges is not a pretty one. The first chapter begins with his meeting Elon Musk ('Everyone else had arrived, but Elon Musk was late as usual') in 2015 and discussing the future of AI and humanity with a group of leading engineers and researchers. This meeting would lead to the formation of OpenAI, a name given by Musk.
But many of those present would end up leaving the organisation because they did not agree with Altman's perception and vision of AI. Hao uses the incident to show how Altman switched sides on AI, going from being someone who was concerned about AI falling into the wrong hands to someone who pushed it as a tool for all.

Like Hagey, Hao also highlights Altman's skills as a negotiator and dealmaker. However, her take is much darker. Hagey's Altman is a visionary who prioritises human good and makes the seemingly impossible possible through sheer vision and effort. Hao's Altman is a power-hungry executive who uses and exploits people, and is almost an AI colonialist. 'Sam is extremely good at becoming powerful,' says Paul Graham, the man who was Altman's mentor. 'You could parachute him into an island full of cannibals and come back in 5 years and he would be the king.'

Hao's book is far more disturbing than Hagey's because it turns the highly rose-tinted view many have not just of Altman and OpenAI, but of AI in general, on its head. We get to see a very competitive industry with far too much stress and poor working conditions (OpenAI hires workers in Africa at very low wages), and little regard for the environment (AI uses large amounts of water and electricity). OpenAI in Hao's book emerges almost as a sort of modern East India Company, looking to expand influence, territory and profits by mercilessly exploiting both customers and employees. Some might call it too dark, but her research and interviews across different countries cannot be faulted.

It would be excessively naive to take either book as the absolute truth on Altman in particular and OpenAI and AI in general, but they are both must-reads for anyone who wants a complete picture of the AI revolution and its biggest brand and face. Mind you, it is a picture that is still in the process of being painted. AI is still in its infancy, and Altman turned forty in April. But as these two excellent books prove, neither is too young to be written about, and both are definitely relevant enough to be read about.

Sam Altman hypes new models, products, and features ahead of GPT-5 launch: Know what's coming

India Today · 8 minutes ago

In the era of artificial intelligence, tech giants are racing to become the best. The OG OpenAI is also putting in all the effort to stay ahead of the game. Having launched ChatGPT in 2022, OpenAI is now eyeing the release of its next-generation AI model, GPT-5, this month. Taking to X (formerly Twitter), CEO Sam Altman announced that the company has a packed schedule and plans to roll out updates one after another. He added that in the coming months, OpenAI will introduce new models, products, and features. The most significant details, however, are still being kept under wraps.

While Altman did not disclose what's coming in the next couple of months, he urged users to be a little patient. He added, "Please bear with us through some probable hiccups and capacity crunches. Although it may be slightly choppy, we think you'll really love what we've created for you!"

The hiccups and capacity crunches take us straight back to the time when OpenAI launched its image generation tool for GPT-4o and could not handle the frenzy around the Studio Ghibli trend. Just after the launch, Altman had to publish a post on X describing how the GPUs were melting due to the overload, and so was his team. This announcement comes just in time, as the company is set to launch its next GPT model. Here is everything we know about the upcoming GPT-5.

OpenAI GPT-5: Launch timeline and what to expect

OpenAI is gearing up to unveil its much-anticipated next-generation language model, GPT-5, this month, with an open-source version tipped to arrive slightly earlier. Speaking recently on a podcast, Altman confirmed that the company is 'releasing GPT-5 soon'. While careful not to give too much away, he hinted that the leap forward in reasoning is notable, recounting a moment when GPT-5 managed to crack a complex question that had left him stumped. Altman called the experience a 'here it is' moment, stoking excitement around the model's potential.

Those close to the company suggest an early August launch date, with GPT-5 forming part of OpenAI's plan to unify its GPT and o-series models into a single, more streamlined family. This integration is designed to make life simpler for developers and users alike, particularly when working on reasoning-based tasks.

While the company has kept official details under wraps, GPT-5 is expected to debut in three versions: a flagship model, a smaller 'mini' version and an ultra-compact 'nano' version. While the primary and mini models will be integrated into ChatGPT, the nano edition is expected to remain exclusive to API users.

The new system will also incorporate enhanced reasoning abilities developed and trialled with OpenAI's o3 model. By folding these features into GPT-5, the company hopes to offer a more rounded and capable toolset – one it sees as another step towards its longer-term ambition of Artificial General Intelligence, where machines can match or exceed human performance across a wide range of tasks.

‘I feel useless’: ChatGPT-5 is so smart, it has spooked Sam Altman, the man who started the AI boom

Economic Times · 8 minutes ago

OpenAI is on the verge of releasing GPT-5, the most powerful model it has ever built. But its CEO, Sam Altman, isn't celebrating just yet. Instead, he's sounding the alarm.

In a revealing podcast appearance on This Past Weekend with Theo Von, Altman admitted that testing the model left him shaken. 'It feels very fast,' he said. 'There are moments in the history of science, where you have a group of scientists look at their creation and just say, you know: 'What have we done?''

His words weren't about performance metrics. They were about consequences. Altman compared the development of GPT-5 to the Manhattan Project — the World War II effort that led to the first atomic bomb. The message was clear: speed and capability are growing faster than our ability to think through what they actually mean. He continued, 'Maybe it's great, maybe it's bad—but what have we done?'

This wasn't just about AI as a tool. Altman was questioning whether humanity is moving so fast that it can no longer understand — or control — what it builds. 'It feels like there are no adults in the room,' he added, suggesting that regulation is far behind the pace of development.

The specs for GPT-5 are still under wraps, but reports suggest significant leaps over GPT-4: better multi-step reasoning, longer memory, and sharper multimodal capabilities. Altman himself didn't hold back about the previous version, saying, 'GPT-4 is the dumbest model any of you will ever have to use again, by a lot.' For many users, GPT-4 was already advanced. If GPT-5 lives up to the internal hype, it could change how people work, create, and communicate.

In another recent conversation, Altman described a moment where GPT-5 answered a complex question he couldn't solve himself. 'I felt useless relative to the AI,' he admitted. 'It was really hard, but the AI just did it like that.'

OpenAI's long-term goal has always been Artificial General Intelligence (AGI). That's AI capable of understanding and reasoning across almost any task — human-like intelligence. Altman once downplayed its arrival, suggesting it would 'whoosh by with surprisingly little societal impact.' Now, he's sounding far less sure. If GPT-5 is a real step toward AGI, the absence of a global framework to govern it could be dangerous. AGI remains loosely defined. Some firms treat it as a technical milestone. Others see it as a $100 billion opportunity, as Microsoft's partnership contract with OpenAI implies. Either way, the next model may blur the line between AI that helps and AI that acts.

OpenAI isn't just facing ethical dilemmas. It's also under financial pressure. Investors are pushing for the firm to transition into a for-profit entity by the end of the year. Microsoft, which has invested $13.5 billion in OpenAI, reportedly wants more control. There are whispers that OpenAI could declare AGI early in order to exit its agreement with Microsoft — a move that would shift the power balance in the AI sector. Insiders have reportedly described their wait-and-watch approach as the 'nuclear option.' In response, OpenAI is said to be prepared to go to court, accusing Microsoft of anti-competitive behaviour. One rumoured trigger could be the release of an AI coding agent so capable it surpasses a human programmer — something GPT-5 might be edging towards.

Altman, meanwhile, has tried to lower expectations about rollout glitches. Posting on X, he said, 'We have a ton of stuff to launch over the next couple of months — new models, products, features, and more. Please bear with us through some probable hiccups and capacity crunches.'

While researchers and CEOs debate long-term AI impacts, one threat is already here: fraud. Haywood Talcove, CEO of the Government Group at LexisNexis Risk Solutions, works with over 9,000 public agencies. He says the AI fraud crisis is not approaching — it's already happening. 'Every week, AI-generated fraud is siphoning millions from public benefit systems, disaster relief funds, and unemployment programmes,' he warned. 'Criminal networks are using deepfakes, synthetic identities, and large language models to outpace outdated fraud defences — and they're winning.'

During the pandemic, fraudsters exploited weaknesses to steal hundreds of billions in unemployment benefits. That trend has only accelerated. Today's tools are more advanced and automated, capable of filing tens of thousands of fake claims in a day. Talcove believes the AI arms race between criminals and institutions is widening. 'We may soon recognise a similar principle for AI that I call 'Altman's Law': every 180 days, AI capabilities double.' His call to action is blunt. 'Right now, criminals are using it better than we are. Until that changes, our most vulnerable systems and the people who depend on them will remain exposed.'

Not everyone is convinced by Altman's remarks. Some see them as clever marketing. But his past record and unfiltered tone suggest genuine concern. GPT-5 might be OpenAI's most ambitious release yet. It could also be a signpost for the world to stop, look around, and ask itself what kind of intelligence it really wants to build — and how much control it's willing to give up.
