logo
#

Latest news with #MarkChen

Research leaders urge tech industry to monitor AI's ‘thoughts'
Research leaders urge tech industry to monitor AI's ‘thoughts'

Yahoo

time15-07-2025

  • Science
  • Yahoo

Research leaders urge tech industry to monitor AI's ‘thoughts'

AI researchers from OpenAI, Google DeepMind, Anthropic, as well as a broad coalition of companies and nonprofit groups, are calling for deeper investigation into techniques for monitoring the so-called thoughts of AI reasoning models in a position paper published Tuesday. A key feature of AI reasoning models, such as OpenAI's o3 and DeepSeek's R1, are their chains-of-thought or CoTs — an externalized process in which AI models work through problems, similar to how humans use a scratch pad to work through a difficult math question. Reasoning models are a core technology for powering AI agents, and the paper's authors argue that CoT monitoring could be a core method to keep AI agents under control as they become more widespread and capable. 'CoT monitoring presents a valuable addition to safety measures for frontier AI, offering a rare glimpse into how AI agents make decisions,' said the researchers in the position paper. 'Yet, there is no guarantee that the current degree of visibility will persist. We encourage the research community and frontier AI developers to make the best use of CoT monitorability and study how it can be preserved.' The position paper asks leading AI model developers to study what makes CoTs 'monitorable' — in other words, what factors can increase or decrease transparency into how AI models really arrive at answers. The paper's authors say that CoT monitoring may be a key method for understanding AI reasoning models, but note that it could be fragile, cautioning against any interventions that could reduce their transparency or reliability. The paper's authors also call on AI model developers to track CoT monitorability and study how the method could one day be implemented as a safety measure. Notable signatories of the paper include OpenAI chief research officer Mark Chen, Safe Superintelligence CEO Ilya Sutskever, Nobel laureate Geoffrey Hinton, Google DeepMind cofounder Shane Legg, xAI safety adviser Dan Hendrycks, and Thinking Machines co-founder John Schulman. First authors include leaders from the UK AI Security Institute and Apollo Research, and other signatories come from METR, Amazon, Meta, and UC Berkeley. The paper marks a moment of unity among many of the AI industry's leaders in an attempt to boost research around AI safety. It comes at a time when tech companies are caught in a fierce competition — which has led Meta to poach top researchers from OpenAI, Google DeepMind, and Anthropic with million-dollar offers. Some of the most highly sought-after researchers are those building AI agents and AI reasoning models. 'We're at this critical time where we have this new chain-of-thought thing. It seems pretty useful, but it could go away in a few years if people don't really concentrate on it,' said Bowen Baker, an OpenAI researcher who worked on the paper, in an interview with TechCrunch. 'Publishing a position paper like this, to me, is a mechanism to get more research and attention on this topic before that happens.' OpenAI publicly released a preview of the first AI reasoning model, o1, in September 2024. In the months since, the tech industry was quick to release competitors that exhibit similar capabilities, with some models from Google DeepMind, xAI, and Anthropic showing even more advanced performance on benchmarks. However, there's relatively little understood about how AI reasoning models work. While AI labs have excelled at improving the performance of AI in the last year, that hasn't necessarily translated into a better understanding of how they arrive at their answers. Anthropic has been one of the industry's leaders in figuring out how AI models really work — a field called interpretability. Earlier this year, CEO Dario Amodei announced a commitment to crack open the black box of AI models by 2027 and invest more in interpretability. He called on OpenAI and Google DeepMind to research the topic more, as well. Early research from Anthropic has indicated that CoTs may not be a fully reliable indication of how these models arrive at answers. At the same time, OpenAI researchers have said that CoT monitoring could one day be a reliable way to track alignment and safety in AI models. The goal of position papers like this is to signal boost and attract more attention to nascent areas of research, such as CoT monitoring. Companies like OpenAI, Google DeepMind, and Anthropic are already researching these topics, but it's possible that this paper will encourage more funding and research into the space. Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data

OpenAI's Wallet Just Got Heavier -- Because Meta's Been Poaching
OpenAI's Wallet Just Got Heavier -- Because Meta's Been Poaching

Yahoo

time14-07-2025

  • Business
  • Yahoo

OpenAI's Wallet Just Got Heavier -- Because Meta's Been Poaching

OpenAI isn't just building cutting-edge AI anymore; it's now fighting a full-blown talent war and it's using stock options as its shield. The company known worldwide for ChatGPT has seen its stock-based compensation jump more than five times over the past year; that's not a typo. In total, OpenAI handed out $4.4 billion in equity; that figure was 119% of its entire revenue for the same period. Yes it's now literally paying out more in stock than it's earning. Warning! GuruFocus has detected 6 Warning Sign with META. And the trigger? Meta (META, Financials) which has reportedly lured away at least nine researchers from OpenAI's AI team including some working on foundational models. These exits weren't minor; they cut deep. OpenAI had hoped this stock-based spending spree would cool off by 2025 projecting equity payouts to drop to 45% of revenue, and then under 10% by the end of the decade. But that was before Meta started raiding its brain trust. Now, those assumptions are out the window. According to internal chatter, Chief Research Officer Mark Chen believes the company might have to sweeten its equity offers even more; because when your biggest assets walk out the door, the only thing left to do is open it wider with better incentives. This isn't just about keeping salaries competitive; it's about survival. The AI space is getting fiercer by the week; and even with Microsoft in its corner, OpenAI can't afford to lose talent to rivals. Equity compensation isn't a perk anymore; it's a weapon. And OpenAI is loading up. This article first appeared on GuruFocus. Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data

OpenAI's Wallet Just Got Heavier -- Because Meta's Been Poaching
OpenAI's Wallet Just Got Heavier -- Because Meta's Been Poaching

Yahoo

time07-07-2025

  • Business
  • Yahoo

OpenAI's Wallet Just Got Heavier -- Because Meta's Been Poaching

OpenAI isn't just building cutting-edge AI anymore; it's now fighting a full-blown talent war and it's using stock options as its shield. The company known worldwide for ChatGPT has seen its stock-based compensation jump more than five times over the past year; that's not a typo. In total, OpenAI handed out $4.4 billion in equity; that figure was 119% of its entire revenue for the same period. Yes it's now literally paying out more in stock than it's earning. Warning! GuruFocus has detected 6 Warning Sign with META. And the trigger? Meta (META, Financials) which has reportedly lured away at least nine researchers from OpenAI's AI team including some working on foundational models. These exits weren't minor; they cut deep. OpenAI had hoped this stock-based spending spree would cool off by 2025 projecting equity payouts to drop to 45% of revenue, and then under 10% by the end of the decade. But that was before Meta started raiding its brain trust. Now, those assumptions are out the window. According to internal chatter, Chief Research Officer Mark Chen believes the company might have to sweeten its equity offers even more; because when your biggest assets walk out the door, the only thing left to do is open it wider with better incentives. This isn't just about keeping salaries competitive; it's about survival. The AI space is getting fiercer by the week; and even with Microsoft in its corner, OpenAI can't afford to lose talent to rivals. Equity compensation isn't a perk anymore; it's a weapon. And OpenAI is loading up. This article first appeared on GuruFocus.

Here Are the Traits OpenAI Executives Look For in New Hires
Here Are the Traits OpenAI Executives Look For in New Hires

Entrepreneur

time07-07-2025

  • Business
  • Entrepreneur

Here Are the Traits OpenAI Executives Look For in New Hires

These traits matter more than a Ph.D or formal schooling in AI, say the executives. What kinds of skills do OpenAI leaders look for in new hires? OpenAI's head of ChatGPT, Nick Turley, and chief research officer, Mark Chen, tackled this question on an episode of the OpenAI podcast released last week. It turns out that the two OpenAI executives don't seek out an Ivy League educational background or AI breakthroughs in new hires. Instead, they search for more intrinsic traits: curiosity, agency, and adaptability. "Hiring is hard, especially if you want to have a small team that is very, very good and humble, and able to move fast," Turley admitted on the podcast. "I think curiosity has been the number one thing that I've looked for, and it's actually my advice to students when they ask me, 'What do I do in this world where everything's changing?'" Related: Getting a Wharton MBA Was 'a Waste of Time,' According to a Global Bank CEO. Here's the Degree He Recommends Instead. There's still so much that AI researchers have yet to learn about the technology that approaching its development requires "a certain amount of humility," Turley said. He explained that building AI is less about knowing the right answers and more about knowing how to ask the right questions with an innate curiosity. Turley looks for new hires who are "deeply curious" about the world and what OpenAI does. Related: Goldman Sachs CIO Says Coders Should Take Philosophy Classes — Here's Why Chen agreed with Turley and added that he looks for agency in new hires, or the ability to find problems and fix them with little oversight. He also searches for adaptability, or a willingness to adjust to a fast-changing environment. "You need to be able to quickly figure out what's important and pivot to what you need to do," Chen stated. Chen noted that agency and adaptability were more important than having a Ph.D in AI. He said that he himself joined OpenAI in 2018 as a resident without much formal AI training. "I think this is a field that people can pick up fairly quickly," Chen said. Related: These Are the AI Skills You Should Learn Right Now, According to the World's Youngest Self-Made Billionaire There are other skills that other executives have pinpointed as essential in the age of AI. Alexandr Wang, the MIT dropout who co-founded data training startup Scale AI and now leads Meta's AI efforts, noted in an interview with WaitWhat media CEO Jeff Berman last year that prompt engineering was an important skill to have. He recommended studying fields like math and physics that emphasized long-term thought. Meanwhile, Goldman Sachs' chief information officer, Marco Argenti, wrote in a post last year in the Harvard Business Review that he recommended studying philosophy in addition to engineering. OpenAI was worth $300 billion as of March, following a record-breaking $40 billion fundraising round, the biggest tech funding round on record from a private company.

OpenAI Announces One-Week Mandatory Break Amid Meta Hiring Spree
OpenAI Announces One-Week Mandatory Break Amid Meta Hiring Spree

International Business Times

time06-07-2025

  • Business
  • International Business Times

OpenAI Announces One-Week Mandatory Break Amid Meta Hiring Spree

The AI powerhouse OpenAI has announced a week-long mandatory break this month, citing employee burnout as the reason behind the decision. After months of operating on intense 80-hour workweeks, leadership says the pause is meant to give staff time to rest and recharge. However, the timing of this break is raising eyebrowsacross Silicon Valley. That's because Meta, one of OpenAI's most aggressive competitors in artificial intelligence, is on a hiring binge, and OpenAI workers are prime recruitment material. Meta is reportedly offering signing bonuses of as much as $100 million to star AI researchers and engineers, especially to those who have been trained at OpenAI. In the past few months, several key members have already left OpenAI to join Meta's FAIR division and its newly formed AGI research labs. With burnout running high and better pay on the table, it's easy to see why some might jump ship. Inside OpenAI, the pressure is being felt. In an internal message, Chief Research Officer Mark Chen acknowledged that morale was weakening and fears were growing. CEO Sam Altman pledged better pay and greater recognition and encouraged teams to "keep focused on the mission." But, for some workers, these promises are coming too late. There are growing concerns that Meta may use this break to step up its poaching efforts, given how much of OpenAI's team is now offline. This week off is no exception, as only the executive leadership team will be working during the week the company is shut down, suggesting that this may be more of a defensive strategic posturing than a caring deed. The bigger issue? This is just one more example of a larger problem in the world of AI: the breakneck speed of development and the high-stakes competition for talent. In a course toward artificial general intelligence (AGI), the pressure on employees is only advancing. This shutdown is a moment of crisis for OpenAI, yes, but also a moment of reflection. If it is unable to keep its best talent, it may lose its edge in the AI competition. But if it takes now as its opportunity to rebuild its internal culture and to reimagine its working model, then it will return stronger.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store