
Meta Names Chief Scientist for New AI Unit


Business Insider
an hour ago
AI-Powered Ads Set to Catalyze Yet Another META Earnings Beat
I've been bullish on Meta Platforms (META) for years, and since it is now my largest holding by far, I am particularly excited about its Q2 results, scheduled for release after tomorrow's market close. After a fantastic Q1 that crushed expectations in late April, Meta's stock has climbed above $700 per share; yet I believe the stock remains a bargain, given its AI-fueled growth and its broader investments to secure dominance in AI. For its upcoming results, investors will be eager to see whether Meta can maintain its momentum, and given the company's relentless focus on maximizing monetization and advertising efficiency, I expect another blockbuster quarter. The stock also still appears reasonably valued despite the recent share-price gains. Thus, I remain firmly Bullish on the stock.

Q1 Recap: AI and User Engagement Power Record Results

To get a sense of where Meta is coming from heading into its Q2 results, keep in mind that Q1 was nothing short of spectacular: revenue soared to $42.3 billion, up 16% year-over-year, beating estimates by nearly $1 billion. The company's Family Daily Active People (DAP) hit 3.43 billion, up 6%, showcasing sticky user engagement across Facebook, Instagram, and WhatsApp. AI-driven content recommendations fueled a 5% rise in ad impressions and a 10% increase in average ad prices, with Instagram Reels alone posting 20% year-over-year growth. Meanwhile, Meta AI, approaching 1 billion monthly active users and over 3 billion across the app suite, has become a cornerstone of personalized content delivery, enhancing engagement and ad performance.
Profitability was equally impressive, with Meta's operating margin expanding to 41% from 38% a year earlier, driven by cost discipline and economies of scale within the Family of Apps segment. Despite Reality Labs posting a $4.2 billion operating loss, the core ad business generated $21.8 billion in operating income, powering a 35% surge in net income to $16.6 billion and a 37% jump in EPS to $6.43, well ahead of Wall Street's $5.25 forecast. One notable contributor was Meta's heavy investment in AI infrastructure, including models like Llama, which continues to optimize ad delivery and user retention, setting the stage for sustained growth without compromising gross margins.

What Investors Should Watch Out for in Q2

As Meta heads into its Q2 earnings, Wall Street appears filled with optimism, as evidenced by the share price; yet I would argue that expectations are tempered, given the rather conservative estimates. Specifically, consensus projects Q2 revenue of $44.79 billion, only a 14.6% year-over-year increase, while EPS is forecast at $5.86, reflecting 13.5% growth over Q2 2024. These figures do align with Meta's guidance of $42.5-$45.5 billion in revenue, supported by a 1% foreign-currency tailwind. However, they look quite conservative to me, given Meta's ongoing momentum and the fact that the company has consistently beaten its outlook. In fact, Meta has beaten EPS and revenue estimates nine times in a row and is odds-on to make it ten this week. Regardless, I will be looking for progress in several key areas. First, the impact of AI on ad performance, primarily through tools like Advantage+, and the subsequent effect on conversions. Second, engagement metrics, especially time spent on Instagram and Facebook, will signal whether Meta's recommendation systems are keeping users increasingly engaged.
Third, I will be checking for updates on WhatsApp monetization, with its 100 million business users that could unlock significant revenue potential. Finally, capital-expenditure guidance, expected at $64-$72 billion for 2025, will be scrutinized as Meta ramps up AI infrastructure investments.

Valuation: Still a Bargain Despite the Run-Up

While entering an earnings report on the heels of a rally can warrant caution, I believe Meta's valuation still presents a compelling opportunity. At approximately 28x Wall Street's FY2025 EPS estimate of $25.73, the stock looks attractively priced for a company with a track record of 35%+ annual EPS growth, including 37% growth in Q1 alone. According to TipRanks data, META's profit margin has climbed consistently from just above 12% in Q4 2022 to over 36% today. My own forecast places 2025 EPS in the $29-$30 range, supported by continued ad strength, AI-driven efficiencies, and expanding margins. Even based on the Street's more conservative $25.47 estimate, Meta's forward P/E remains below that of peers like Microsoft and Amazon, despite outpacing Apple and Alphabet in earnings growth.

Is META a Good Stock to Buy Now?

Wall Street remains quite optimistic on Meta, with the stock carrying a Strong Buy consensus rating based on 41 Buy and four Hold recommendations over the past three months. Notably, not a single analyst rates the stock a Sell. However, META's average stock price target of $761.55 suggests a somewhat constrained 6.12% upside from current levels.

Meta's AI-Powered Dominance Set to Continue

All things considered, Meta continues to execute at an elite level, with strong fundamentals, accelerating AI tailwinds, and a clear path to monetization across its core platforms. While expectations for Q2 are modest, I see plenty of room for upside given the company's track record of consistent outperformance.
Between robust engagement, ad-efficiency gains, and a compelling valuation, I view Meta as one of the best opportunities in large-cap tech today. I'll be watching closely on Wednesday, and my conviction remains Bullish heading into tomorrow afternoon's big announcement.
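As a back-of-the-envelope sanity check (my own sketch, not part of the TipRanks analysis), the figures quoted above are internally consistent: the $761.55 average price target and the stated 6.12% upside imply a current share price around $717, which against the $25.73 FY2025 EPS estimate gives roughly the 28x forward multiple cited.

```python
# Back-of-the-envelope check of the valuation figures quoted in the article.
# Inputs come from the text; the current price is backed out from the
# price target and the stated upside.

avg_price_target = 761.55   # Street's average price target, per the article
implied_upside = 0.0612     # 6.12% upside, per the article
fy2025_eps = 25.73          # Wall Street FY2025 EPS estimate, per the article

# Current price implied by the target and the stated upside
current_price = avg_price_target / (1 + implied_upside)

# Forward P/E on the Street's FY2025 EPS estimate
forward_pe = current_price / fy2025_eps

print(f"Implied current price: ${current_price:.2f}")   # about $717.63
print(f"Forward P/E: {forward_pe:.1f}x")                # about 27.9x
```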


Tom's Guide
2 hours ago
I put 5 of the best AI image generators to the test using NightCafe — this one took the top spot
Competition in the AI image generator space is intense, with companies like Ideogram, Midjourney, and OpenAI all hoping to convince you to use their offerings. That is why I'm a fan of NightCafe and have been using it for a few years. It has all the major models in one place, including DALL-E 3, Flux, Google Imagen, and Ideogram.

I've created a lot of AI images over the years, and every model brings something different. For example, Flux is a great general-purpose model available in several versions, Imagen 4 is incredible for realism, and Ideogram does text better than anything but GPT-4o. With NightCafe you can try the same prompt across multiple models to see which you prefer, or even create a realistic image of, say, a train station using Google Imagen, then use that as a starter image for an Ideogram project to overlay a caption or stylized logo. NightCafe also offers most of the major video models, including Kling, Runway Gen-4, Luma Dream Machine, and Wan 2.1, but for this test we're focusing on image models.

Having all those models to hand is a great way to test each of them and find the one that best matches your personal aesthetic, and they're more different from one another than you might think. As well as the 'headline' models like Flux and Imagen, there are also community models that are fine-tuned versions of Flux and Stable Diffusion. For this test I focused on the core models: OpenAI's GPT Image-1, Recraft V3, Google Imagen 4, Ideogram 3, and Flux Kontext. I came up with a single prompt to try on each model: it requires a degree of photorealism, presents a complex scene, and includes a subtle text requirement.

Google's Imagen 4 is the model you'll use if you ask the Gemini app to create an image of something for you. It's also the model used in Google Slides when you create images.
This was the first image of the test, and while it captured the smoke rising, it emphasized it a little too heavily. It did create a visually compelling scene and followed the requirement for the two people in it. It captured the correct vehicle, but there's no sign of the text.

Black Forest Labs' Flux models are among the most versatile and are open source. With the arrival of the Kontext variant, we got image models that also understand natural language better. This means that, a bit like OpenAI's native image generation in GPT-4o, it gives much more accurate results, especially when rendering text or complex scenes. Flux Kontext captured the 'Cafe Matin' sign perfectly, got the woman right, and somehow feels more French than Imagen, but I don't think it's as photographically accurate.

GPT Image-1, not to be confused with the 2018 original GPT-1 model, is a multimodal model from OpenAI designed for improved rendering accuracy; it is used by Adobe, Figma, Canva, and NightCafe. Like Kontext, it has a better understanding of natural-language prompts. One downside is that it can't do 9:16 or 16:9 images, only variants of square. It captured the truck and the name, but I don't think the scene is as good. It also randomly generated a second umbrella, and the placement of hands feels unnatural.

Ideogram has been one of my favorite AI image models since it launched. Always able to generate legible text, it is also more flexible in terms of style than the other models. The Ideogram website includes a well-designed canvas and a built-in upscaler. The result isn't perfect (the barista leans at an odd angle), but the lighting is more realistic, and the scene is more believable, with the truck on the sidewalk instead of the road. It also feels more modern, and the text is both legible and well designed.

Recraft is more of a design model, perfect for both rendered text and illustration, but that doesn't mean it can't create a stunning image.
When it hit the market, it shook things up, beating other models to the top of leaderboards. Still, I wasn't overly impressed with the output. Yes, it's the most visually striking, in part thanks to the space given to the scene, but it overemphasizes the smoke, and where is the barista? Also, for a model geared around text, there's no sign writing at all.

While Flux had a number of visual issues, it was the most consistent, and it included legible sign writing. If I were using this commercially, as a stock image, I'd go with the Google Imagen 4 image, but from a purely visual perspective, Flux wins.

What you also get with Flux Kontext is easy adaptation. You could make a secondary prompt to change the truck color or replace the old lady with a businessman. You can do that in Gemini, but not with Imagen; you'd need to use native image generation from Gemini 2+. If you want to make a change to any image using Kontext, even if it wasn't a Kontext image originally, just click on the image in NightCafe and select "Prompt to Edit". It costs about 2.5 credits and is just a simple descriptive text prompt away.

I used the most expensive version of each model for this test, the one that takes the most processing time per image, which allowed for the fairest comparison. What surprises me is just how differently each model interprets the same descriptive prompt. What doesn't surprise me is how much better they've all become at following that description.

What I love about NightCafe, though, is that it's a one-stop shop for AI content. It isn't just a place to use all the leading image and video models; it also hosts a large community with a range of games, activities, and groups centered around content creation. You can also edit, enhance, fix faces, upscale, and expand any image you create within the app.

Business Insider
3 hours ago
Elon Musk explains why xAI is calling its staff engineers, not researchers
Elon Musk says he has banished the job title "researcher" from his AI startup, xAI. The distinction between researcher and engineer is a "thinly masked way of describing a two-tier engineering system," Musk wrote in an X post on Tuesday. "There are only engineers," Musk said. "Researcher is a relic term from academia."

Musk then drew a comparison to his rocket company, SpaceX, which he said did more "meaningful, cutting-edge" research on rockets and satellites than "all the academic university labs on earth combined." "But we don't use the pretentious, low-accountability term 'researcher,'" Musk said. xAI did not respond to requests for comment from Business Insider.

OpenAI, which Musk co-founded with Sam Altman and Greg Brockman in 2015, uses a similar naming approach for its technical hires. Brockman, the president of OpenAI, wrote in an X post in February 2023 that they did not want to "bucket people into researchers and engineers" and "thought hard about what job titles to use." OpenAI later decided to use the term "Member of Technical Staff." Brockman said the term was first used by Xerox PARC, a research laboratory known for pioneering innovations such as the mouse and the graphical user interface.

Anthropic, a company founded by former OpenAI employees, said on its website that its "research and engineering hires all share a single title — 'Member of Technical Staff.'" The company said it listed its engineers as authors on their research papers, "often as first author." "While there's historically been a division between engineering and research in machine learning, we think that boundary has dissolved with the advent of large models," Anthropic wrote on its career page.