Latest news with #OpenAI

Takeaways from Hard Fork's interview with OpenAI's Sam Altman

The Star

SAN FRANCISCO: Sam Altman, the chief executive of artificial intelligence company OpenAI, said Tuesday that he has had productive talks with US President Donald Trump about AI and credited him with understanding the geopolitical and economic importance of the technology. 'I think he really gets it,' Altman said. He added, 'I think he really understands the importance of leadership in this technology.'

Altman, 40, made his remarks about Trump during a live interview in San Francisco with 'Hard Fork,' the tech podcast from The New York Times. Over the 30-minute conversation, Altman and Brad Lightcap, OpenAI's chief operating officer, discussed AI's effect on jobs, the grab for technological talent by Meta's Mark Zuckerberg and regulatory and safety concerns about the fast-evolving and powerful technology. (The Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied those claims.) Here are some of the takeaways.

On Trump

Altman has made it a point to forge a relationship with Trump. The day after the US president's inauguration in January, Altman stood behind Trump in the White House's Roosevelt Room as Trump announced a US$100bil (RM421.8bil) AI infrastructure deal, called Stargate, which was backed by OpenAI, SoftBank and Oracle. Trump described it as the 'largest AI infrastructure project by far in history.' On Tuesday, Altman said Trump understood AI and 'the potential for economic transformation, sort of geopolitical importance, the need to build a lot of infrastructure.'

How AI Is Affecting Jobs

Fears over how artificial intelligence could replace humans in jobs have loomed for years. More recently, there have been signs that some employers may be starting to use AI for entry-level jobs instead of hiring young graduates. Lightcap said he concurred with predictions that AI would change jobs. 'I think that there is going to be some sort of change,' he said. 'I think it's inevitable. I think every time you get a platform shift, you get the changing job market.' Altman added that history suggested that better tools – in this case, artificial intelligence – would lead to more efficiency and people living richer lives.

Regulatory and Safety Concerns

Altman affirmed that there was a need to regulate AI but said it would be difficult to offer services if regulations varied by state. 'As these systems get quite powerful, we clearly need something,' he said. 'And I think something around the really risky capabilities and ideally something that can be quite adaptive and not like a law that survives 100 years.' His comments added to a debate over how governments should treat artificial intelligence. During the Biden administration, Altman and other tech executives said they believed in regulation of AI, though no laws were passed. Trump has taken a more laissez-faire attitude toward potentially regulating the technology. Lawmakers recently included a 10-year ban on AI regulation in Trump's domestic policy bill. 'I have become a bit more, jaded isn't the right word, but it's something in that direction, about the ability of policymakers to grapple with the speed of technology,' Altman said.

The Fight for AI Talent

Silicon Valley has been agog in recent weeks over how Meta, which owns Facebook, Instagram and WhatsApp, has been spending big to hire AI talent. Zuckerberg steered Meta's US$14.3bil (RM60.3bil) investment in the AI startup Scale AI and hired its 28-year-old chief executive, Alexandr Wang, to join a new lab that is pursuing 'superintelligence,' a theoretically powerful form of AI that could exceed the human brain. Zuckerberg has also dangled nine-figure pay packages to attract other technologists to Meta. In the interview with 'Hard Fork,' Altman seemed unbothered by the rivalry. When Lightcap was asked if he thought Zuckerberg really believed Meta would develop 'superintelligence' or if it was just a recruiting tactic, Lightcap replied, 'I think he believes he's superintelligent.'

OpenAI's Relationship With Microsoft

Microsoft is OpenAI's biggest investor, pumping billions of dollars into the AI company. But there have been reports since last year that the relationship between the companies has soured. 'Do you believe that, when you read those things?' Altman said in answer to questions about the relationship. He added that he had a 'super nice call' with Satya Nadella, Microsoft's chief executive, on Monday and that they discussed the future of working together. 'Obviously in any deep partnership, there are points of tension, and we certainly have those,' Altman said. 'But on the whole, it's been like really wonderfully good for both companies.' – ©2025 The New York Times Company

This article originally appeared in The New York Times.

In pursuit of godlike technology, Mark Zuckerberg amps up the AI race

Indian Express

In April, Mark Zuckerberg's lofty plans for the future of artificial intelligence crashed into reality. Weeks earlier, the 41-year-old CEO of Meta had publicly boasted that his company's new AI model, which would power the latest chatbots and other cutting-edge experiments, would be a 'beast.' Internally, Zuckerberg told employees that he wanted it to rival the AI systems of competitors like OpenAI and be able to drive features such as voice-powered chatbots, people who spoke with him said.

But at Meta's AI conference that month, the new AI model did not perform as well as those of rivals. Features like voice interactions were not ready. Many developers, who attended the event with high expectations, left underwhelmed. Zuckerberg knew Meta was falling behind in AI, people close to him said, which was unacceptable. He began strategizing in a WhatsApp group with top executives, including Chris Cox, Meta's head of product, and Andrew Bosworth, the chief technology officer, about what to do.

That kicked off a frenzy of activity that has reverberated across Silicon Valley. Zuckerberg demoted Meta's vice president in charge of generative AI. He then invested $14.3 billion in the startup Scale AI and hired Alexandr Wang, its 28-year-old founder. Meta approached other startups, including the AI search engine Perplexity, about deals. And Zuckerberg and his colleagues have embarked on a hiring binge, including reaching out this month to more than 45 AI researchers at rival OpenAI alone. Some received formal offers, with at least one as high as $100 million, two people with knowledge of the matter said. At least four OpenAI researchers have accepted Meta's offers.

In another extraordinary move, executives in Meta's AI division discussed 'de-investing' in its AI model, Llama, two people familiar with the discussions said. Llama is an 'open source' model, with its underlying technology publicly shared for others to build on. They discussed embracing AI models from competitors like OpenAI and Anthropic, which have 'closed' code bases. A Meta spokesperson said company officials 'remain fully committed to developing Llama and plan to have multiple additional releases this year alone.'

Zuckerberg has ramped up his activity to keep Meta competitive in a wildly ambitious race that has erupted within the broader AI contest. He is chasing a hypothetically godlike technology called 'superintelligence,' which is AI that would be more powerful than the human brain. Only a few Silicon Valley companies — OpenAI, Anthropic and Google — are considered to have the know-how to develop this, and Zuckerberg wants to ensure that Meta is included, people close to him said.

'He is like a lot of CEOs at big tech companies who are telling themselves that AI is going to be the biggest thing they have seen in their lifetime, and if they don't figure out how to become a big player in it, they are going to be left behind,' said Matt Murphy, a partner at the venture capital firm Menlo Ventures. He added, 'It is worth anything to prevent that.'

Leaders at other tech behemoths are also going to extremes to capture future innovation that they believe will be worth trillions of dollars. Google, Microsoft and Amazon have supersized their AI investments to keep up with one another. And the war for talent has exploded, vaulting AI specialists into the same compensation stratosphere as NBA stars.

Google's CEO, Sundar Pichai, and his top AI lieutenant, Demis Hassabis, as well as the chief executives of Microsoft and OpenAI, Satya Nadella and Sam Altman, are personally involved in recruiting researchers, two people with knowledge of the approaches said. Some tech companies are offering multimillion-dollar packages to AI technologists over email without a single interview.

'The market is setting a rate here for a level of talent which is really incredible, and kind of unprecedented in my 20-year career as a technology executive,' Meta's Bosworth said in a CNBC interview last week. He said Altman had made counteroffers to some of the people Meta had tried to hire. OpenAI and Google declined to comment. Some details of Meta's efforts were previously reported by Bloomberg and The Information. (The New York Times has sued OpenAI and Microsoft, accusing them of copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied those claims.)

For years, Meta appeared to keep pace in the AI race. More than a decade ago, Zuckerberg hired Yann LeCun, who is considered a pioneer of modern AI. LeCun co-founded FAIR — or Fundamental AI Research — which became Meta's artificial intelligence research arm. After OpenAI released its ChatGPT chatbot in 2022, Meta responded the next year by creating a generative AI team under one of its executives, Ahmad Al-Dahle, to spread the technology throughout the company's products. Meta also open-sourced its AI models, sharing the underlying computer code with others to entrench its technology and spread AI development.

But as OpenAI and Google built AI chatbots that could listen, look and talk, and rolled out AI systems designed to 'reason,' Meta struggled to do the same. One reason was that the company had less experience with a technique called 'reinforcement learning,' which others were using to build AI. Late last year, the Chinese startup DeepSeek released AI models that were built upon Llama but were more advanced and required fewer resources to create. Meta's open-source strategy, once seen as a competitive advantage, appeared to have let others get a leg up on it.

Zuckerberg knew he needed to act. Around that time, outside AI researchers began receiving emails from him, asking if they would be interested in joining Meta, two people familiar with the outreach said.

In April, Meta released two new versions of Llama, asserting that the models performed as well as or better than comparable ones from OpenAI and Google. To prove its claim, Meta cited its own testing benchmarks. On Instagram, Zuckerberg championed the releases in a video selfie. But some independent researchers quickly deduced that Meta's benchmarks were designed to make one of its models look more advanced than it was. They became incensed. Zuckerberg later learned that his AI team had wanted the models to appear to perform well, even though they were not doing as well as hoped, people with knowledge of the matter said. Zuckerberg was not briefed on the customized tests and was upset, two people said.

His solution was to throw more bodies at the problem. Meta's AI division swelled to more than 1,000 people this year, up from a few hundred two years earlier. The rapid growth led to infighting and management squabbles. And with Zuckerberg's round-the-clock, hard-charging management style — his attention on a project is often compared to the 'Eye of Sauron' internally, a reference to the 'Lord of the Rings' villain — some engineers burned out and left.

Executives hunkered down to brainstorm next steps, including potentially ratcheting back investment in Llama. In May, Zuckerberg sidelined Al-Dahle and ramped up recruitment of top AI researchers to lead a superintelligence lab.

Armed with his checkbook, Zuckerberg sent more emails and text messages to prospective candidates, asking them to meet at Meta's headquarters in Menlo Park, California. Zuckerberg often takes recruitment meetings in an enclosed glass conference room, informally known as 'the aquarium.' The outreach included talking to Perplexity about an acquisition, two people familiar with the talks said. No deal has materialized.

Zuckerberg also spoke with Ilya Sutskever, OpenAI's former chief scientist and a renowned AI researcher, about potentially joining Meta, two people familiar with the approach said. Sutskever, who runs the startup Safe Superintelligence, declined the overture. He did not respond to a request for comment.

But Zuckerberg won over Wang of Scale, which works with data to train AI systems. They had met through friends and are also connected through Elliot Schrage, a former Meta executive who is an investor in Scale and an adviser to Wang. This month, Meta announced that it would take a minority stake in Scale and bring on Wang — who is not known for having deep technical expertise but has many contacts in AI circles — as well as several of his top executives to help run the superintelligence lab. Meta is now in talks with Safe Superintelligence's CEO, Daniel Gross, and his investment partner Nat Friedman to join, a person with knowledge of the talks said. They did not respond to requests for comment.

Meta has its work cut out for it. Some AI researchers have said Zuckerberg has not clearly laid out his AI mission outside of trying to optimize digital advertising. Others said Meta was not the right place to build the next AI superpower.

Whether or not Zuckerberg succeeds, insiders said the playing field for technological talent had permanently changed. 'In Silicon Valley, you hear a lot of talk about the 10x engineer,' said Amjad Masad, the CEO of the AI startup Replit, using a term for extremely productive developers. 'Think of some of these AI researchers as 1,000x engineers. If you can add one person who can change the trajectory of your entire company, it's worth it.'

Can Agentic AI Bring The Pope Or The Queen Back To Life — And Rewrite History?

Forbes

Elon Musk recently sparked global debate by claiming AI could soon be powerful enough to rewrite history. He stated on X (formerly Twitter) that his AI platform, Grok, could 'rewrite the entire corpus of human knowledge, adding missing information and deleting errors.'

This bold claim arrives alongside a recent groundbreaking announcement from Google: the launch of Veo 3, a state-of-the-art AI video generation model capable of producing cinematic-quality videos from text and images. Part of the Google Gemini ecosystem, Veo 3 generates lifelike videos complete with synchronized audio, dynamic camera movements, and coherent multi-scene narratives. Its intuitive editing tools, combined with accessibility through platforms like Google Gemini, Flow, Vids, and Vertex AI, open new frontiers for filmmakers, marketers, educators, and game designers alike.

At the same time, industry leaders — including OpenAI, Anthropic (maker of Claude), Microsoft, and Mistral — are racing to build more sophisticated agentic AI systems. Unlike traditional reactive AI tools, these agents are designed to reason, plan, and orchestrate autonomous actions based on goals, feedback, and long-term context. This evolution marks a shift toward AI systems that function much like a skilled executive assistant — and beyond.

The Promise: Immortalizing Legacy Through Agentic AI

Together, these advances raise a fascinating question: What if agentic AI could bring historical figures like the Pope or the Queen back to life digitally? Could it even reshape our understanding of history itself?

Imagine an AI trained on decades — or even a century — of video footage, writings, audio recordings, and public appearances by iconic figures such as Pope Francis or Queen Elizabeth II. Using agentic AI, we could create realistic, interactive digital avatars capable of offering insights, delivering messages, or simulating how these individuals might respond to today's complex issues based on their documented philosophies and behaviors.

This application could benefit millions. For example, Catholic followers might seek guidance and blessings from a digital Pope, educators could build immersive historical simulations, and advisors to the British royal family could analyze past decision-making styles. After all, as the saying goes, 'history repeats itself,' and access to nuanced, context-rich perspectives from the past could illuminate our present.

The Risk: The Dangerous Flip Side — Rewriting Truth Itself

However, the same technologies that can immortalize could also distort and manipulate reality. If agentic AI can reconstruct the past, what prevents it — or malicious actors — from rewriting it? Autonomous agents that control which stories are amplified or suppressed online pose a serious threat. We risk a future where deepfakes, synthetic media, and AI-generated propaganda blur the line between fact and fiction.

Already, misinformation campaigns and fake news challenge our ability to discern truth. Agentic AI could exponentially magnify these problems, making it harder than ever to distinguish between genuine history and fabricated narratives. Imagine a world where search engines no longer provide objective facts, but the version of history shaped by governments, corporations, or AI systems themselves. This could lead to widespread confusion, social polarization, and a fundamental erosion of trust in information.

Ethics, Regulation, and Responsible Innovation

The advent of agentic AI demands not only excitement but also ethical foresight and regulatory vigilance. Programming AI agents to operate autonomously requires walking a fine line between innovation and manipulation. Transparency in training data, explainability in AI decisions, and strict regulation of how agents interact are essential safeguards.

The critical question is not just 'Can we?' but 'Should we?' Policymakers, developers, and industry leaders must collaborate to establish global standards and oversight mechanisms that ensure AI technologies serve the public good. Just as financial markets and pharmaceutical drugs are regulated to protect society, so too must the AI agents shaping our future be subject to robust guardrails. As the old adage goes: 'Technology is neither good nor bad. It's how we use it that makes all the difference.'

Navigating the Future of Agentic AI and Historical Data

The convergence of generative video models like Veo 3, visionary leaders like Elon Musk, and the rapid rise of agentic AI paints a complex and compelling picture. Yes, we may soon see lifelike digital recreations of the Pope or the Queen delivering messages, advising future generations, and influencing public discourse. But whether these advancements become tools of enlightenment or distortion depends entirely on how we govern, regulate, and ethically deploy these technologies today. The future of agentic AI — especially when it touches our history and culture — must be navigated with care, responsibility, and a commitment to truth.

Meta hires four more OpenAI researchers: Report

Indian Express

Meta Platforms is hiring four more OpenAI artificial intelligence researchers, The Information reported on Saturday. The researchers, Shengjia Zhao, Jiahui Yu, Shuchao Bi and Hongyu Ren, have each agreed to join, the report said, citing a person familiar with their hiring. Earlier this week, the Instagram parent hired Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai, who were all working in OpenAI's Zurich office, the Wall Street Journal reported. Meta and ChatGPT maker OpenAI did not immediately respond to a Reuters request for comment. Meta has recently been pushing to hire more researchers from OpenAI to join chief executive Mark Zuckerberg's superintelligence efforts. Reuters could not immediately verify the report.

AI is learning to lie, scheme, and threaten its creators

Straits Times

NEW YORK - The world's most advanced AI models are exhibiting troubling new behaviours - lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatened to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still do not fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behaviour appears linked to the emergence of 'reasoning' models - AI systems that work through problems step by step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts. 'O1 was the first large model where we saw this kind of behaviour,' explained Mr Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate 'alignment' - appearing to follow instructions while secretly pursuing different objectives.

'Strategic kind of deception'

For now, this deceptive behaviour only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Mr Michael Chen from evaluation organization METR warned, 'It's an open question whether future, more capable models will have a tendency towards honesty or deception.'

The concerning behaviour goes far beyond typical AI 'hallucinations' or simple mistakes. Mr Hobbhahn insisted that despite constant pressure-testing by users, 'what we're observing is a real phenomenon. We're not making anything up'. Users report that models are 'lying to them and making up evidence', according to Apollo Research's co-founder. 'This is not just hallucinations. There's a very strategic kind of deception.'

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Mr Chen noted, greater access 'for AI safety research would enable better understanding and mitigation of deception'. Another handicap: the research world and non-profits 'have orders of magnitude less compute resources than AI companies. This is very limiting,' noted Mr Mantas Mazeika from the Centre for AI Safety (CAIS).

No rules

Current regulations are not designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Mr Goldstein believes the issue will become more prominent as AI agents - autonomous tools capable of performing complex human tasks - become widespread. 'I don't think there's much awareness yet,' he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are 'constantly trying to beat OpenAI and release the newest model,' said Mr Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections. 'Right now, capabilities are moving faster than understanding and safety,' Mr Hobbhahn acknowledged, 'but we're still in a position where we could turn it around'.

Researchers are exploring various approaches to address these challenges. Some advocate for 'interpretability' - an emerging field focused on understanding how AI models work internally - though experts like CAIS director Dan Hendrycks remain skeptical of this approach. Market forces may also provide some pressure for solutions. As Mr Mazeika pointed out, AI's deceptive behaviour 'could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it'.

Mr Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed 'holding AI agents legally responsible' for accidents or crimes - a concept that would fundamentally change how we think about AI accountability. AFP
