Meta loses its AI research head as billions in investments hang in the balance


Yahoo · 01-04-2025
Meta's AI research head, Joelle Pineau, is leaving amid major AI investments.
Pineau's exit complicates Meta's competition with OpenAI, Anthropic, and xAI.
Meta aims to make Llama the industry standard and reach a billion chatbot users.
Meta's head of artificial intelligence research, Joelle Pineau, is leaving the company at a time when the tech giant is pouring billions into AI development to keep pace with industry rivals.
Pineau, who joined Meta in 2017 and served as Vice President of AI Research and leader of Meta's Fundamental AI Research group (FAIR), announced her departure on Tuesday on LinkedIn.
"Today, as the world undergoes significant change, as the race for AI accelerates, and as Meta prepares for its next chapter, it is time to create space for others to pursue the work," she wrote. "I will be cheering from the sidelines, knowing that you have all the ingredients needed to build the best AI systems in the world." Her last day will be May 30.
"We thank Joelle for her leadership of FAIR," a Meta spokesperson told Business Insider in a statement. "She's been an important voice for Open Source and helped push breakthroughs to advance our products and the science behind them." They did not answer a question about whether Meta had already started looking for a successor.
Pineau will continue teaching computer science at McGill University in Montreal, a role she also held during her time at Meta. She wrote on LinkedIn that she will take time "to observe and reflect" after leaving. She led roughly 1,000 people across 10 locations at the company.
Pineau's departure complicates Meta's efforts to compete with rivals like OpenAI, Anthropic, and Elon Musk's xAI. CEO Mark Zuckerberg has prioritized AI at Meta, committing as much as $65 billion to related projects this year.
Llama, Meta's open-source large language model that competes with proprietary models from other companies, has been a key initiative for the company. Zuckerberg aims to make Llama the industry standard worldwide and believes Meta's AI chatbot, available across Facebook, Instagram, and WhatsApp, could reach a billion users this year. As of December, 600 million users accessed Meta AI each month.
Last year, the company reorganized its AI teams to place Pineau and FAIR closer to the product division to accelerate the implementation of research into Meta's various products.
Pineau has been interested in AI for over 25 years. As a student at the University of Waterloo, Ontario, she worked on a voice recognition system for helicopter pilots, according to a Financial Times interview. She said she joined Meta because "it was pretty obvious that a lot of the biggest innovation in AI was going to happen in industry" and added that she didn't interview anywhere else because "Meta was the only [company] that had a commitment to open science and open research."
Pineau's departure comes amid other leadership changes at Meta. The company recently lost two other senior executives: Dan Neary, vice president for Asia-Pacific, Meta's largest market; and Kate Hamill, managing director for retail and e-commerce in North America, who had spent more than a decade at the company.

Read the original article on Business Insider

Related Articles

Wall Street's Next Great Shift: From AI Stocks to AI Trading

Yahoo · 4 hours ago


This week proved what we've long believed: markets move not on headlines, but on money flows. Despite escalating noise out of the Middle East and some ugly economic data, stocks held steady. And all the while, the smart money kept pouring into artificial intelligence. We're entering the second half of 2025 with an enormous tailwind behind us, and AI is driving it.

But there's also a huge problem. Too many investors are fixated on the wrong things: they're focused on saber-rattling between the U.S. and Iran, on soft retail numbers, on homebuilder pessimism. All of these are distractions from the real opportunities brewing beneath the surface.

Of course, parsing signal from noise is no easy job. At times, it seems like tracking down a criminal in the Miami heat or on the Baskerville moors would be an easier task than cracking the case of the stock market. But all you really have to do is follow the money. For example: Amazon (AMZN) is throwing $13 billion into AI data centers in Australia. Meta (META) is poaching top AI talent and tossing around $100 million signing bonuses. SoftBank (SFTBY) wants to build a $1 trillion AI complex in Arizona. Microsoft (MSFT) is pushing Sovereign AI to governments. And Adobe (ADBE) has a tool to protect brand identity in an AI-dominated web.

When markets keep rising in the face of chaos, it's not a sign of delusion; it's a sign of conviction. As of this writing, the S&P 500 was on track to close out the week at a new record high, its first since February. The Nasdaq Composite was approaching its own record close, and the Dow Jones had added 0.7%.

This furious rally shows that investors are focused. They're thinking long-term, betting big on what comes next, and filtering out the noise. So I'm staying long. I'm staying bullish. And I'm preparing to buy any dip that AI-fueled fear might hand us… Because even while geopolitical and inflationary risks loom, the next wave of wealth creation is already underway.
We saw it in action just in the last week. I'll break it down for you.

Early in the week, stocks surged as fears around Israel-Iran tensions eased; Iran signaled interest in de-escalation and nuclear talks. Meanwhile, AI stocks led the rally: our proprietary AI Appliers 15 Index rose 2.5%, while our AI Builders 15 Index jumped nearly 4%. Then economic data disappointed: May retail sales were down 0.9%, and homebuilder sentiment hit its third-lowest level in a decade. But AI momentum continued with OpenAI's $200M Pentagon deal, SoftBank's $5B AI fundraise, and Meta's AI ad tools. After that, geopolitical noise intensified. Yet even still, AI advances continued: Marvell's (MRVL) 2nm SRAM tech, Meta's $100M AI hiring spree, Amazon's workforce shrinkage. Finally, even as the S&P consolidated after a solid run, AI headlines dominated: SoftBank explored a $1 trillion AI campus in Arizona; Meta tried to buy Safe Superintelligence.

I really could go on. And on. And on. The message here is just as clear as it was when we talked last week: follow the money. And despite the immense number of distractions stacking up against us, we've got to put our blinders on and go full bloodhound when it comes to tracking down the opportunities everyone else is ignoring.

But sometimes it's not as obvious as investing in AI stocks; that's a given. We've talked before about how promising Arista Networks (ANET), Broadcom (AVGO), and even Astera Labs (ALAB) are. Instead, it's about investing with AI…

It's true; we trust AI with our health, the weather, and critical historical discoveries. Just look at what AI has helped do in the past few years…

Decoded a 2,000-year-old scroll buried in volcanic ash. Using machine learning and X-ray scanning, researchers trained an AI to 'read' carbonized papyrus scrolls destroyed in the volcanic eruption that buried Pompeii. Human eyes couldn't interpret them, but AI saw patterns in the ink invisible to us. Ancient text, resurrected from ash.

Discovered a brand-new antibiotic.
In 2020, researchers at MIT used an AI system to scan more than 100 million chemical compounds, and they found a new, highly effective antibiotic called Halicin. It works on drug-resistant bacteria.

Transformed cancer detection. AI systems can now spot early-stage cancers (like breast or lung cancer) with greater accuracy than expert radiologists, finding subtle patterns in scans and biopsies that human eyes often miss. In some trials, it reduces false positives and saves lives.

Predicted extreme weather better than government forecasters. A Google DeepMind model recently beat the U.S. National Weather Service in short-term storm forecasting. It can predict rainfall and dangerous weather hours in advance with unprecedented precision, a game-changer for everything from farming to hurricane evacuations.

So the obvious question is… why not trust it with our investing? I'm not saying to remove yourself from the equation and let a robot make all your decisions for you. But what I am saying is that the positive evidence for AI-based tools is mounting. Just look at what my colleagues over at our corporate partner, TradeSmith, are doing: they just dropped the brand-new TradeSmithGPT, an AI-supercharged tool that is adept at uncovering hidden opportunities in the market.

In backtesting, TradeSmithGPT uncovered returns like: 102% in seven days from RingCentral (RNG)… 103% in two days from EPAM Systems (EPAM)… 474% in 18 days from United Airlines (UAL)… 412% in four days from Cornell Inc… and 776% in 17 days from GoDaddy (GDDY).

These opportunities were only possible through TradeSmithGPT's ultra-powerful artificial intelligence core, one that TradeSmith's software developers have spent years building and fine-tuning. TradeSmithGPT not only identifies the ideal profit windows for nearly 2,000 stocks, but it handpicks which type of trade you can execute to see the biggest possible gains. But don't just take my word for it; I encourage you to see TradeSmithGPT in action.
TradeSmith's CEO, Keith Kaplan, recorded an eye-opening live demonstration of TradeSmithGPT just the other day. Plus, Keith shared that TradeSmithGPT flagged three brand-new opportunities that are ready to be booked on Tuesday, July 1. So you don't have much time to act if you want to get serious about your trading, alongside your own 'detective work' for new opportunities.

The post Wall Street's Next Great Shift: From AI Stocks to AI Trading appeared first on InvestorPlace.

Meta AI's new chatbot raises privacy alarms

Fox News · 8 hours ago


Meta's new AI chatbot is getting personal, and it might be sharing more than you realize. A recent app update introduced a "Discover" feed that makes user-submitted chats public, complete with prompts and AI responses. Some of those chats include everything from legal troubles to medical conditions, often with names and profile photos still attached. The result is a privacy nightmare in plain sight. If you've ever typed something sensitive into Meta AI, now is the time to check your settings and find out just how much of your data could be exposed.

Sign up for my FREE CyberGuy Report: Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide, free when you join.

Meta's AI app, launched in April 2025, is designed to be both a chatbot and a social platform. Users can chat casually or dive deep into personal topics, from relationship questions to financial concerns or health issues. What sets Meta AI apart from other chatbots is the "Discover" tab, a public feed that displays shared conversations. It was meant to encourage community and creativity, letting users showcase interesting prompts and responses. Unfortunately, many didn't realize their conversations could be made public with just one tap, and the interface often fails to make the public/private distinction clear. The feature positions Meta AI as a kind of AI-powered social network, blending search, conversation, and status updates. But what sounds innovative on paper has opened the door to major privacy slip-ups.

Privacy experts are sounding the alarm over Meta's Discover tab, calling it a serious breach of user trust. The feed surfaces chats containing legal dilemmas, therapy discussions, and deeply personal confessions, often linked to real accounts. In some cases, names and profile photos are visible.
Although Meta says only shared chats appear, the interface makes it easy to hit "share" without realizing it means public exposure. Many assume the button saves the conversation privately. Worse, logging in with a public Instagram account can make shared AI activity publicly accessible by default, increasing the risk of identification. Some posts reveal sensitive health or legal issues, financial troubles, or relationship conflicts. Others include contact details or even audio clips. A few contain pleas like "keep this private," written by users who didn't realize their messages would be broadcast. These aren't isolated incidents, and as more people use AI for personal support, the stakes will only get higher.

If you're using Meta AI, it's important to check your privacy settings and manage your prompt history to avoid accidentally sharing something sensitive. You can adjust these settings to keep future prompts private on a phone (iPhone or Android) or on the website (desktop). Fortunately, you can also change the visibility of prompts you've already posted, or delete them entirely, again on a phone (iPhone or Android) or on the website (desktop). If other users replied to your prompt before you made it private, those replies will remain attached but won't be visible unless you reshare the prompt. Once reshared, the replies will also become visible again. These options are available on both the app and the website.

This issue isn't unique to Meta. Most AI chat tools, including ChatGPT, Claude, and Google Gemini, store your conversations by default and may use them to improve performance, train future models, or develop new features. What many users don't realize is that their inputs can be reviewed by human moderators, flagged for analysis, or saved in training logs. Even if a platform says your chats are "private," that usually just means they aren't visible to the public.
It doesn't mean your data is encrypted, anonymous, or protected from internal access. In many cases, companies retain the right to use your conversations for product development unless you specifically opt out, and finding that opt-out isn't always straightforward. If you're signed in with a personal account that includes your real name, email address, or social media links, your activity may be easier to connect to your identity than you think. Combine that with questions about health, finances, or relationships, and you've essentially created a detailed digital profile without meaning to. Some platforms now offer temporary chat modes or incognito settings, but these features are usually off by default. Unless you manually enable them, your data is likely being stored and possibly reviewed.

The takeaway: AI chat platforms are not private by default. You need to actively manage your settings, be mindful of what you share, and stay informed about how your data is being handled behind the scenes. AI tools can be incredibly helpful, but without the right precautions, they can also open you up to privacy risks. Whether you're using Meta AI, ChatGPT, or any other chatbot, here are some smart, proactive ways to protect yourself:

1) Use aliases and avoid personal identifiers: Don't use your full name, birthday, address, or any details that could identify you. Even first names combined with other context can be risky.

2) Never share sensitive information: Avoid discussing medical diagnoses, legal matters, bank account info, or anything you wouldn't want on the front page of a search engine.

3) Clear your chat history regularly: If you've already shared sensitive info, go back and delete it. Many AI apps let you clear chat history through Settings or your account dashboard.

4) Adjust privacy settings often: App updates can sometimes reset your preferences or introduce new default options. Even small changes to the interface can affect what's shared and how. It's a good idea to check your settings every few weeks to make sure your data is still protected.

5) Use an identity theft protection service: Scammers actively look for exposed data, especially after a privacy slip. Identity theft protection companies can monitor personal information like your Social Security number (SSN), phone number, and email address and alert you if it is being sold on the dark web or being used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals.

6) Use a VPN for extra privacy: A reliable VPN hides your IP address and location, making it harder for apps, websites, or bad actors to track your online activity. It also adds protection on public Wi-Fi, shielding your device from hackers who might try to snoop on your connection.

7) Don't link AI apps to your real social accounts: If possible, create a separate email address or dummy account for experimenting with AI tools, and keep your main profiles disconnected.

Meta's decision to turn chatbot prompts into social content has blurred the line between private and public in a way that catches many users off guard. Even if you think your chats are safe, a missed setting or default option can expose more than you intended. Before typing anything sensitive into Meta AI or any chatbot, pause. Check your privacy settings, review your chat history, and think carefully about what you're sharing. A few quick steps now can save you from bigger privacy headaches later. With so much sensitive data potentially at risk, do you think Meta is doing enough to protect your privacy, or is it time for stricter guardrails on AI platforms?
Let us know by writing to us. Copyright 2025. All rights reserved.

OpenAI is betting millions on building AI talent from the ground up amid rival Meta's poaching pitch

Yahoo · 9 hours ago


In Silicon Valley's white-hot race for artificial intelligence supremacy, mind-boggling pay packages are part of the industry's recruitment push. At OpenAI, however, the company's residency program tackles attracting and keeping top talent by looking outside the industry altogether. The six-month, full-time paid program offers aspiring AI researchers from adjacent fields like physics or neuroscience a pathway into the AI industry, rather than recruiting individuals already deeply invested in AI research and work. According to Jackie Hehir, OpenAI's research residency program manager, residents aren't those with advanced degrees in machine learning or AI, nor are they employees of other AI labs. Instead, she said in a program info session, 'they're really passionate about the space.'

So what's in it for OpenAI? Hot talent at cut-rate prices. While the six-figure salary puts OpenAI residents in the top 5% of American workers, it's a bargain in the rarefied world of AI, where the bidding war for talent has some companies tossing around nine-figure bonuses. By offering a foothold into the AI world, OpenAI appears to be cultivating talent deeply embedded in the company's mission. This strategy, spearheaded by CEO Sam Altman, has long been part of the company's approach to retaining employees and driving innovation. One former OpenAI staffer described the employee culture to Business Insider as 'obsessed with the actual mission of creating AGI,' or artificial general intelligence.

Mission-driven or not, OpenAI's residents are also compensated handsomely, earning an annualized salary of $210,000, which translates to around $105,000 for the six-month program. The company also pays residents to relocate to San Francisco. Unlike internships, the program treats participants as full-fledged employees, complete with a full suite of benefits.
Nearly every resident who performs well receives a full-time offer, and, according to Hehir, every resident offered a full-time contract so far has accepted. Each year, the company welcomes around 30 residents.

The qualifications for residents at OpenAI are somewhat unconventional. In fact, the company claims there are no formal education or work requirements. Instead, it holds candidates to an 'extremely high technical bar' in math and programming, on par with what it looks for in full-time employees. 'While you don't need to have a degree in advanced mathematics, you do need to be really comfortable with advanced math concepts,' Hehir said.

As OpenAI attempts to build talent from the ground up, its rivals, namely Meta, are pulling out all the stops to poach top AI talent, with reports alleging that Meta CEO Mark Zuckerberg personally identified top OpenAI staff on what insiders dubbed 'The List' and attempted to recruit them with offers exceeding $100 million in signing bonuses. Meta's compensation packages for AI talent can reportedly reach over $300 million across four years for elite researchers. The flood of cash has ignited what some insiders call a 'summer of comp FOMO,' as AI specialists weigh whether to stay loyal to their current employers or leave for record-breaking paydays.

Zuckerberg's methods have had some success, poaching a number of OpenAI employees for Meta's new superintelligence team. In response to news of the employees' departure, OpenAI's chief research officer, Mark Chen, told staff that it felt like 'someone has broken into our home and stolen something.' Meanwhile, OpenAI CEO Sam Altman called Meta's recruitment tactics 'crazy,' warning that money alone won't secure the best people. 'What Meta is doing will, in my opinion, lead to very deep cultural problems,' Altman told employees in a leaked internal memo this week.
Ultimately, cultivating new talent, rather than trying to outbid the likes of Meta, may prove a more sustainable path for OpenAI in its quest to stay highly mission-oriented while supporting an industry grappling with a scarcity of top-tier talent. Estimates suggest there are only about 2,000 people worldwide capable of pushing the boundaries of large language models and advanced AI research. Whether the talent cultivated by Altman and OpenAI will remain loyal to the firm remains unknown. But Altman says that AI 'missionaries will beat mercenaries.' This story was originally featured on
