In pursuit of Godlike technology, Mark Zuckerberg amps up the AI race

Miami Herald · 15 hours ago

SAN FRANCISCO -- In April, Mark Zuckerberg's lofty plans for the future of artificial intelligence crashed into reality.
Weeks earlier, the 41-year-old CEO of Meta had publicly boasted that his company's new AI model, which would power the latest chatbots and other cutting-edge experiments, would be a 'beast.' Internally, Zuckerberg told employees that he wanted it to rival the AI systems of competitors like OpenAI and be able to drive features such as voice-powered chatbots, people who spoke with him said.
But at Meta's AI conference that month, the new AI model did not perform as well as those of rivals. Features like voice interactions were not ready. Many developers, who attended the event with high expectations, left underwhelmed.
Zuckerberg knew Meta was falling behind in AI, people close to him said, which was unacceptable. He began strategizing in a WhatsApp group with top executives, including Chris Cox, Meta's head of product, and Andrew Bosworth, the chief technology officer, about what to do.
That kicked off a frenzy of activity that has reverberated across Silicon Valley. Zuckerberg demoted Meta's vice president in charge of generative AI. He then invested $14.3 billion in the startup Scale AI and hired Alexandr Wang, its 28-year-old founder. Meta approached other startups, including the AI search engine Perplexity, about deals.
And Zuckerberg and his colleagues have embarked on a hiring binge, including reaching out this month to more than 45 AI researchers at rival OpenAI alone. Some received formal offers, with at least one as high as $100 million, two people with knowledge of the matter said. At least four OpenAI researchers have accepted Meta's offers.
In another extraordinary move, executives in Meta's AI division discussed 'de-investing' in its AI model, Llama, two people familiar with the discussions said. Llama is an 'open source' model, with its underlying technology publicly shared for others to build on. They discussed embracing AI models from competitors like OpenAI and Anthropic, which have 'closed' code bases.
A Meta spokesperson said company officials 'remain fully committed to developing Llama and plan to have multiple additional releases this year alone.'
Zuckerberg has ramped up his activity to keep Meta competitive in a wildly ambitious race that has erupted within the broader AI contest. He is chasing a hypothetically godlike technology called 'superintelligence,' which is AI that would be more powerful than the human brain. Only a few Silicon Valley companies -- OpenAI, Anthropic and Google -- are considered to have the know-how to develop this, and Zuckerberg wants to ensure that Meta is included, people close to him said.
'He is like a lot of CEOs at big tech companies who are telling themselves that AI is going to be the biggest thing they have seen in their lifetime, and if they don't figure out how to become a big player in it, they are going to be left behind,' said Matt Murphy, a partner at the venture capital firm Menlo Ventures. He added, 'It is worth anything to prevent that.'
Leaders at other tech behemoths are also going to extremes to capture future innovation that they believe will be worth trillions of dollars. Google, Microsoft and Amazon have supersized their AI investments to keep up with one another. And the war for talent has exploded, vaulting AI specialists into the same compensation stratosphere as NBA stars.
Google's CEO, Sundar Pichai, and his top AI lieutenant, Demis Hassabis, as well as the chief executives of Microsoft and OpenAI, Satya Nadella and Sam Altman, are personally involved in recruiting researchers, two people with knowledge of the approaches said. Some tech companies are offering multimillion-dollar packages to AI technologists over email without a single interview.
'The market is setting a rate here for a level of talent which is really incredible, and kind of unprecedented in my 20-year career as a technology executive,' Meta's Bosworth said in a CNBC interview last week. He said Altman had made counteroffers to some of the people Meta had tried to hire.
OpenAI and Google declined to comment. Some details of Meta's efforts were previously reported by Bloomberg and The Information.
(The New York Times has sued OpenAI and Microsoft, accusing them of copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied those claims.)
For years, Meta appeared to keep pace in the AI race. More than a decade ago, Zuckerberg hired Yann LeCun, who is considered a pioneer of modern AI. LeCun co-founded FAIR -- or Fundamental AI Research -- which became Meta's artificial intelligence research arm.
After OpenAI released its ChatGPT chatbot in 2022, Meta responded the next year by creating a generative AI team under one of its executives, Ahmad Al-Dahle, to spread the technology throughout the company's products. Meta also open-sourced its AI models, sharing the underlying computer code with others to entrench its technology and spread AI development.
But as OpenAI and Google built AI chatbots that could listen, look and talk, and rolled out AI systems designed to 'reason,' Meta struggled to do the same. One reason was that the company had less experience with a technique called 'reinforcement learning,' which others were using to build AI.
Late last year, the Chinese startup DeepSeek released AI models that were built upon Llama but were more advanced and required fewer resources to create. Meta's open-source strategy, once seen as a competitive advantage, appeared to have let others get a leg up on it.
Zuckerberg knew he needed to act. Around that time, outside AI researchers began receiving emails from him, asking if they would be interested in joining Meta, two people familiar with the outreach said.
In April, Meta released two new versions of Llama, asserting that the models performed as well as or better than comparable ones from OpenAI and Google. To prove its claim, Meta cited its own testing benchmarks. On Instagram, Zuckerberg championed the releases in a video selfie.
But some independent researchers quickly deduced that Meta's benchmarks were designed to make one of its models look more advanced than it was. They became incensed.
Zuckerberg later learned that his AI team had wanted the models to appear to perform well, even though they were not doing as well as hoped, people with knowledge of the matter said. Zuckerberg was not briefed on the customized tests and was upset, two people said.
His solution was to throw more bodies at the problem. Meta's AI division swelled to more than 1,000 people this year, up from a few hundred two years earlier.
The rapid growth led to infighting and management squabbles. And with Zuckerberg's round-the-clock, hard-charging management style -- his attention on a project is often compared to the 'Eye of Sauron' internally, a reference to the 'Lord of the Rings' villain -- some engineers burned out and left. Executives hunkered down to brainstorm next steps, including potentially ratcheting back investment in Llama.
In May, Zuckerberg sidelined Al-Dahle and ramped up recruitment of top AI researchers to lead a superintelligence lab. Armed with his checkbook, Zuckerberg sent more emails and text messages to prospective candidates, asking them to meet at Meta's headquarters in Menlo Park, California. Zuckerberg often takes recruitment meetings in an enclosed glass conference room, informally known as 'the aquarium.'
The outreach included talking to Perplexity about an acquisition, two people familiar with the talks said. No deal has materialized. Zuckerberg also spoke with Ilya Sutskever, OpenAI's former chief scientist and a renowned AI researcher, about potentially joining Meta, two people familiar with the approach said. Sutskever, who runs the startup Safe Superintelligence, declined the overture. He did not respond to a request for comment.
But Zuckerberg won over Wang of Scale, which works with data to train AI systems. They had met through friends and are also connected through Elliot Schrage, a former Meta executive who is an investor in Scale and adviser to Wang.
This month, Meta announced that it would take a minority stake in Scale and bring on Wang -- who is not known for having deep technical expertise but has many contacts in AI circles -- as well as several of his top executives to help run the superintelligence lab.
Meta is now in talks with Safe Superintelligence's CEO, Daniel Gross, and his investment partner Nat Friedman to join, a person with knowledge of the talks said. They did not respond to requests for comment.
Meta has its work cut out for it. Some AI researchers have said Zuckerberg has not clearly laid out his AI mission outside of trying to optimize digital advertising. Others said Meta was not the right place to build the next AI superpower.
Whether or not Zuckerberg succeeds, insiders said the playing field for technological talent had permanently changed.
'In Silicon Valley, you hear a lot of talk about the 10x engineer,' said Amjad Masad, the CEO of the AI startup Replit, using a term for extremely productive developers. 'Think of some of these AI researchers as 1,000x engineers. If you can add one person who can change the trajectory of your entire company, it's worth it.'
This article originally appeared in The New York Times.
Copyright 2025


Related Articles

Meta's Xbox-Branded Quest 3S Just Sold Out for All the Wrong Reasons

Gizmodo · an hour ago

Everyone loves limited-edition stuff. There's Sony's 30th anniversary PS5, or Analogue's many limited-edition Pocket handhelds, or—I don't know—the Shamrock f***ing Shake. But there's one type of person who loves limited-edition stuff more than your average consumer, and it's a scalper. For proof of that, see Meta's recently released Xbox-branded Quest 3S.

See Meta Quest Xbox Edition at Best Buy

In case you missed it, Meta's new limited-edition Quest 3S bundle just recently sold out, which on the surface sounds like a great thing for VR and XR. You may be tempted to say, 'Oh, wow! People really like XR headsets, huh?' But before you do that, it may also be worth taking a short gander at eBay, because the resale market over there paints a slightly more cynical picture. It's full of Xbox-branded Quest 3S bundles, folks—and they ain't just giving them away.

This bundle, for reference, retails at $399, and the average price I'm seeing on eBay is about $600, though sometimes a little more or a little less. That is the sad state of affairs on eBay as of the time of typing these words. The list goes on and on, unfortunately, which tells me one thing: the scalpers had a field day with this thing.

And that's just kind of sad. It's not sad that someone would want to make money from reselling a limited-edition gadget—as annoying as scalpers are, I can't blame anyone for having a side hustle in this economy. But it is sad that Meta seemingly didn't do much to preserve its limited-edition Quest 3S for XR nerds who unequivocally deserve first dibs. It's also maybe a little sad—as someone who borders on said XR nerd identity—that the race to being out of stock may not actually be driven by real demand.

XR headsets, while not the most crucial gadget in the world, are pretty cool and deserve more shine than they get, in my humble opinion. It would have been nice to see them really break through with a little help from an Xbox marketing gimmick.
But as always, the almighty aftermarket prevails. To be fair, I'm sure not all of the sales were scalpers trying to make a buck off the XR headset's rarity. Some people, I presume, bought it because it's a pretty good deal for getting into XR—you get a sleek black headset with Xbox green details, Meta's Elite Strap for your head, and a limited-edition Xbox controller to top it off. Based on the retail price of all of that, this bundle saves you somewhere in the ballpark of $95.

Some people bought this bundle because of Xbox, too. Here's one instance in which someone seems to have pulled the trigger on this bundle just for the controller. Honestly… respect. That's much more pure than trying to spin the whole thing around for $200.

"The idea that someone buys the new Quest colorway just to nab the limited edition Xbox Controller and sell the rest is really comical." — SadlyItsBradley (@SadlyItsBradley) June 27, 2025

Listen, scalpers are an inevitable fact of life nowadays when you're buying any gadget that's even slightly in demand. Like it or not, that's just the world we live in—one colored by bots and dropshipping. But I'd be lying if I said that it wouldn't have been nice to see a little effort on Meta's part to prevent that. It can be done! Just look at the Switch 2 launch. People have been resorting to cartoonish levels of robbery to get their hands on it—that's how in-demand this thing is—but Nintendo, with a little bit of forethought, has kept the scourge of scalpers to a dull roar.

I guess Meta probably doesn't care that much either way, though. A sale is a sale, whether it ends up on eBay or on your dorky XR- and Xbox-loving head. Sadly, if you're in the latter camp, it looks like the aftermarket is your only option right now. Thanks, Zuckerberg. Just because you look like a dropshipper doesn't mean you have to act like one.

See Meta Quest Xbox Edition at Best Buy

Did AI companies win a fight with authors? Technically

The Verge · 2 hours ago

In the past week, big AI companies have — in theory — chalked up two big legal wins. But things are not quite as straightforward as they may seem, and copyright law hasn't been this exciting since last month's showdown at the Library of Congress. First, Judge William Alsup ruled it was fair use for Anthropic to train on a series of authors' books. Then, Judge Vince Chhabria dismissed another group of authors' complaint against Meta for training on their books. Yet far from settling the legal conundrums around modern AI, these rulings might have just made things even more complicated.

Both cases are indeed qualified victories for Meta and Anthropic. And at least one judge — Alsup — seems sympathetic to some of the AI industry's core arguments about copyright. But that same ruling railed against the startup's use of pirated media, leaving it potentially on the hook for massive financial damage. (Anthropic even admitted it did not initially purchase a copy of every book it used.) Meanwhile, the Meta ruling asserted that because a flood of AI content could crowd out human artists, the entire field of AI system training might be fundamentally at odds with fair use. And neither case addressed one of the biggest questions about generative AI: when does its output infringe copyright, and who's on the hook if it does?

Alsup and Chhabria (incidentally both in the Northern District of California) were ruling on relatively similar sets of facts. Meta and Anthropic both pirated huge collections of copyright-protected books to build a training dataset for their large language models Llama and Claude. Anthropic later did an about-face and started legally purchasing books, tearing the covers off to 'destroy' the original copy, and scanning the text. The authors argued that, in addition to the initial piracy, the training process constituted an unlawful and unauthorized use of their work.
Meta and Anthropic countered that this database-building and LLM-training constituted fair use. Both judges basically agreed that LLMs meet one central requirement for fair use: they transform the source material into something new. Alsup called using books to train Claude 'exceedingly transformative,' and Chhabria concluded 'there's no disputing' the transformative value of Llama. Another big consideration for fair use is the new work's impact on a market for the old one. Both judges also agreed that based on the arguments made by the authors, the impact wasn't serious enough to tip the scale.

Add those things together, and the conclusions were obvious… but only in the context of these cases, and in Meta's case, because the authors pushed a legal strategy that their judge found totally inept. Put it this way: when a judge says his ruling 'does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful' and 'stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one' — as Chhabria did — AI companies' prospects in future lawsuits with him don't look great.

Both rulings dealt specifically with training — or media getting fed into the models — and didn't reach the question of LLM output, or the stuff models produce in response to user prompts. But output is, in fact, extremely pertinent. A huge legal fight between The New York Times and OpenAI began partly with a claim that ChatGPT could verbatim regurgitate large sections of Times stories. Disney recently sued Midjourney on the premise that it 'will generate, publicly display, and distribute videos featuring Disney's and Universal's copyrighted characters' with a newly launched video tool. Even in pending cases that weren't output-focused, plaintiffs can adapt their strategies if they now think it's a better bet.
The authors in the Anthropic case didn't allege Claude was producing directly infringing output. The authors in the Meta case argued Llama was, but they failed to convince the judge — who found it wouldn't spit out more than around 50 words of any given work. As Alsup noted, dealing purely with inputs changed the calculations dramatically. 'If the outputs seen by users had been infringing, Authors would have a different case,' wrote Alsup. 'And, if the outputs were ever to become infringing, Authors could bring such a case. But that is not this case.'

In their current form, major generative AI products are basically useless without output. And we don't have a good picture of the law around it, especially because fair use is an idiosyncratic, case-by-case defense that can apply differently to mediums like music, visual art, and text. Anthropic being able to scan authors' books tells us very little about whether Midjourney can legally help people produce Minions memes.

Minions and New York Times articles are both examples of direct copying in output. But Chhabria's ruling is particularly interesting because it makes the output question much, much broader. Though he may have ruled in favor of Meta, Chhabria's entire opening argues that AI systems are so damaging to artists and writers that their harm outweighs any possible transformative value — basically, because they're spam machines. It's worth reading:

Generative AI has the potential to flood the market with endless amounts of images, songs, articles, books, and more. People can prompt generative AI models to produce these outputs using a tiny fraction of the time and creativity that would otherwise be required. So by training generative AI models with copyrighted works, companies are creating something that often will dramatically undermine the market for those works, and thus dramatically undermine the incentive for human beings to create things the old-fashioned way.
… As the Supreme Court has emphasized, the fair use inquiry is highly fact dependent, and there are few bright-line rules. There is certainly no rule that when your use of a protected work is 'transformative,' this automatically inoculates you from a claim of copyright infringement. And here, copying the protected works, however transformative, involves the creation of a product with the ability to severely harm the market for the works being copied, and thus severely undermine the incentive for human beings to create.

… The upshot is that in many circumstances it will be illegal to copy copyright-protected works to train generative AI models without permission. Which means that the companies, to avoid liability for copyright infringement, will generally need to pay copyright holders for the right to use their materials.

And boy, it sure would be interesting if somebody would sue and make that case. After saying that 'in the grand scheme of things, the consequences of this ruling are limited,' Chhabria helpfully noted this ruling affects only 13 authors, not the 'countless others' whose work Meta used. A written court opinion is unfortunately incapable of physically conveying a wink and a nod.

Those lawsuits might be far in the future. And Alsup, though he wasn't faced with the kind of argument Chhabria suggested, seemed potentially unsympathetic to it. 'Authors' complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works,' he wrote of the authors who sued Anthropic. 'This is not the kind of competitive or creative displacement that concerns the Copyright Act. The Act seeks to advance original works of authorship, not to protect authors against competition.' He was similarly dismissive of the claim that authors were being deprived of licensing fees for training: 'such a market,' he wrote, 'is not one the Copyright Act entitles Authors to exploit.'
But even Alsup's seemingly positive ruling has a poison pill for AI companies. Training on legally acquired material, he ruled, is classic protected fair use. Training on pirated material is a different story, and Alsup absolutely excoriates any attempt to say it's not. 'This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use,' he wrote. There were plenty of ways to scan or copy legally acquired books (including Anthropic's own scanning system), but 'Anthropic did not do those things — instead it stole the works for its central library by downloading them from pirated libraries.' Eventually switching to book scanning doesn't erase the original sin, and in some ways it actually compounds it, because it demonstrates Anthropic could have done things legally from the start.

If new AI companies adopt this perspective, they'll have to build in extra but not necessarily ruinous startup costs. There's the up-front price of buying what Anthropic at one point described as 'all the books in the world,' plus any media needed for things like images or video. And in Anthropic's case these were physical works, because hard copies of media dodge the kinds of DRM and licensing agreements publishers can put on digital ones — so add some extra cost for the labor of scanning them in.

But just about any big AI player currently operating is either known or suspected to have trained on illegally downloaded books and other media. Anthropic and the authors will be going to trial to hash out the direct piracy accusations, and depending on what happens, a lot of companies could be hypothetically at risk of almost inestimable financial damages — not just from authors, but from anyone that demonstrates their work was illegally acquired.
As legal expert Blake Reid vividly puts it, 'if there's evidence that an engineer was torrenting a bunch of stuff with C-suite blessing it turns the company into a money piñata.'

And on top of all that, the many unsettled details can make it easy to miss the bigger mystery: how this legal wrangling will affect both the AI industry and the arts. Echoing a common argument among AI proponents, former Meta executive Nick Clegg said recently that getting artists' permission for training data would 'basically kill the AI industry.' That's an extreme claim, and given all the licensing deals companies are already striking (including with Vox Media, the parent company of The Verge), it's looking increasingly dubious. Even if they're faced with piracy penalties thanks to Alsup's ruling, the biggest AI companies have billions of dollars in investment — they can weather a lot. But smaller, particularly open source players might be much more vulnerable, and many of them are also almost certainly trained on pirated works.

Meanwhile, if Chhabria's theory is right, artists could reap a reward for providing training data to AI giants. But it's highly unlikely the fees would shut these services down. That would still leave us in a spam-filled landscape with no room for future artists. Can money in the pockets of this generation's artists compensate for the blighting of the next? Is copyright law the right tool to protect the future? And what role should the courts be playing in all this? These two rulings handed partial wins to the AI industry, but they leave many more, much bigger questions unanswered.

At 20 years old, Reddit is defending its data and fighting AI with AI

CNBC · 2 hours ago

For 20 years, Reddit has pitched itself as "the front page of the internet." AI threatens to change that.

As social media has changed over the past two decades with the shift to mobile and the more recent focus on short-form video, peers like MySpace, Digg and Flickr have faded into oblivion. Reddit, meanwhile, has refused to die, chugging along and gaining an audience of over 108 million daily users who congregate in more than 100,000 subreddit communities. There, Reddit users keep it old school and leave simple text comments to one another about their favorite hobbies, pastimes and interests.

Those user-generated text comments are a treasure trove that, in the age of artificial intelligence, Reddit is fighting to defend. The emergence of AI chatbots like OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini threatens to inhale vast swaths of data from services like Reddit. As more people turn to chatbots for information they previously went to websites for, Reddit faces a gargantuan challenge gaining new users, particularly if the flood of traffic from Google search dries up.

CEO Steve Huffman explained Reddit's situation to analysts in May, saying that challenges like the one AI poses can also create opportunities. While the "search ecosystem is under heavy construction," Huffman said he's betting that the voices of Reddit's users will help it stand out amid the "annotated sterile answers from AI."

Huffman doubled down on that notion last week, saying on a podcast that the reality is AI is still in its infancy. "There will always be a need, a desire for people to talk to people about stuff," Huffman said. "That is where we are going to be focused."

Huffman may be correct about Reddit's loyal user base, but in the age of AI, many users simply "go the easiest possible way," said Ann Smarty, a marketing and reputation management consultant who helps brands monitor consumer perception on Reddit.
And there may be no simpler way of finding answers on the internet than simply asking ChatGPT a question, Smarty said. "People do not want to click," she said. "They just want those quick answers."

In a sign that the company believes so deeply in the value of its data, Reddit sued Anthropic earlier this month, alleging that the AI startup "engaged in unlawful and unfair business acts" by scraping subreddits for information to improve its large language models. While book authors have taken companies like Meta and Anthropic to court alleging that their AI models break copyright law, and have suffered recent losses, Reddit is basing its lawsuit on the argument of unfair business practices.

Reddit's case appears to center on Anthropic's "commercial exploitation of the data which they don't own," said Randy McCarthy, head of the IP law group at Hall Estill. Reddit is defending its platform of user-generated content, said Jason Bloom, IP litigation chair at the law firm Haynes Boone. The social media company's repository of "detailed and informative discussions" is particularly useful for "training an AI bot or an AI platform," Bloom said. As many AI researchers have noted, Reddit's large volume of moderated conversations can help AI chatbots produce more natural-sounding responses to questions covering countless topics than, say, a university textbook can.

Although Reddit has AI-related data-licensing agreements with OpenAI and Google, the company alleged in its lawsuit that Anthropic has been covertly siphoning its data without obtaining permission. Reddit alleges that Anthropic's data-hoovering actions are "interfering with Reddit's contractual relationships with Reddit's users," the legal filing said. This lack of clarity regarding what is permitted when it comes to the use of data scraping for AI is what Reddit's case and other similar lawsuits are all about, legal and AI experts said.
"Commercial use requires commercial terms," Huffman said on The Best One Yet podcast. "When you use something — content or data or some resource — in business, you pay for it." Anthropic disagrees "with Reddit's claims and will defend ourselves vigorously," a company spokesperson told CNBC.

Reddit's decision to sue over claims of unfair business practices instead of copyright infringement underscores the differences between traditional publishers and platforms like Reddit that host user-generated content, McCarthy said. Bloom said that Reddit could have a valid case against Anthropic because social media platforms have many different revenue streams. One such revenue stream is selling access to their data, Bloom said. That "enables them to sell and license that data for legitimate uses while still protecting their consumers' privacy and whatnot," Bloom said.

Reddit isn't just fending off AI. It launched its own Reddit Answers AI service in December, using technology from OpenAI and Google. Unlike general-purpose chatbots that summarize others' web pages, the Reddit Answers chatbot generates responses based purely on the social media service, and it redirects people to the source conversations so they can see the specific user comments. A Reddit spokesperson said that over 1 million people are using Reddit Answers each week.

Huffman has been pitching Reddit Answers as a best-of-both-worlds tool, gluing together the simplicity of AI chatbots with Reddit's corpus of commentary. He used the feature after seeing electronic music group Justice play recently in San Francisco. "I was like, how long is this set? And Reddit could tell me it's 90 minutes 'cause somebody had already asked that question on Reddit," Huffman said on the podcast. Though investors are concerned about AI negatively impacting Reddit's user growth, Seaport senior internet analyst Aaron Kessler said he agrees with Huffman's sentiment that the site's original content gives it staying power.
People who visit Reddit often search for information about things or places they may be interested in, like tennis rackets or ski resorts, Kessler said. This user data indicates "commercial intent," which means advertisers are increasingly considering Reddit as a place to run online ads, he said. "You can tell by which page you're on within Reddit what the consumer is interested in," Kessler said. "You could probably even argue there's stronger signals on Reddit versus a Facebook or Instagram, where people may just be browsing videos."
