
Scale AI CEO stresses startup's independence after Meta deal
Scale AI's new leader said the data-labeling startup remains independent from Meta Platforms Inc. despite the social media giant taking a 49% stake just weeks ago, and is focused on expanding its business.
Interim Chief Executive Officer Jason Droege said Meta, a customer since 2019, won't receive special treatment even after its $14.3 billion investment.
'There's no preferential access that they have to anything,' Droege said Tuesday in an interview, one of his first since taking the interim CEO role in mid-June. 'They are a customer, and we will support them like we do our other customers, that's the extent of the relationship.'
Scale's 28-year-old former CEO and co-founder Alexandr Wang left the startup to lead a new superintelligence unit at Meta, part of the Facebook parent company's multibillion-dollar push to catch up in AI development. Fewer than a dozen of Scale's roughly 1,500 employees left to join Wang at Meta, Droege said.
Wang will continue to hold a seat on the board, but Meta won't receive any other board representation, Droege said, adding that Scale's customer data privacy rules and governance remain the same. The board doesn't have access to internal customer-specific data, he added.
'We have elaborate procedures to ensure the privacy and security of our customers — their IP, their data — and that it doesn't make its way across our customer base,' Droege said.
Droege, who was promoted from his previous role as chief strategy officer, is a seasoned Silicon Valley tech executive. Prior to joining Scale, he was a partner at venture capital firm Benchmark, and before that was a vice president at Uber Technologies Inc., where he launched the company's Uber Eats product.
Now, he has the job of evolving Scale AI's business in an increasingly crowded corner of the AI market.
For years, Scale has been the best-known name in the market for helping tech firms label and annotate the data needed to build AI models; it generated about $870 million in revenue in 2024 and expects $2 billion in revenue this year, Bloomberg News reported in April.
Yet a growing number of companies, including Turing, Invisible Technologies, Labelbox and Uber, now offer various services to meet AI developers' bottomless need for data. And it's likely to only get trickier, as Scale AI rivals are now seeing a surge in interest from customers, some of whom may be worried about Meta getting added visibility into their AI development process.
And in light of the Meta investment and partnership with Scale, some of those customers are cutting ties with the company, including OpenAI and Google, as Bloomberg and others have reported.
While data labeling remains a large part of Scale's business, Droege said the startup is also expanding its application business, which provides services on top of other companies' AI foundation models. That app business currently generates nine figures in revenue, Droege said, without giving a specific number, and its customers include Fortune 500 companies in health care, education and telecommunications. Scale also counts the US government as a customer.
The CEO added that Scale will continue to work with many different kinds of AI models rather than focusing exclusively on Meta's Llama models.
As Meta battles other AI companies like OpenAI, Google and Anthropic for top talent, Droege said he's telling his employees that Scale is a business undergoing significant change, and that there's still an 'enormous opportunity' ahead as the AI industry continues to grow. He also pointed to Scale's ability to adapt: over time the company has taken on different kinds of data-related work, from autonomous vehicles to generative AI, and worked with enterprise and government customers.
'This is an extremely agile company,' he said.

Related Articles


Time of India
17 minutes ago
France fines one of Amazon's biggest Chinese retail competitors for misleading customers about discounts
France's antitrust agency has announced that it has fined fast-fashion giant Shein 40 million euros ($47.17 million) for alleged deceptive business practices, specifically concerning misleading discounts. The fine comes after a nearly year-long investigation into the China-founded retailer, which is one of Amazon's biggest competitors in the US. The agency, responsible for both consumer protection and competition, stated that Infinite Style E-Commerce Co Ltd (ISEL), the entity managing sales for the Shein brand, misled customers about the authenticity of discounts, as per news agency Reuters.

What the rules say and findings of the French agency

Under French regulations, a discount's reference price must be the lowest price offered by the retailer in the 30 days prior to the promotion. The investigation found that Shein failed to adhere to this rule, sometimes even increasing prices before applying a supposed discount, the report said. The probe, which analysed thousands of products on Shein's French website between October 1, 2022, and August 31, 2023, revealed significant discrepancies. It found that 57% of advertised deals did not, in fact, offer a lower price, 19% offered less of a discount than advertised, and a surprising 11% were actually price increases.

What Shein has to say on the 40-million-euro fine

In response, Shein stated that ISEL was informed of the breaches related to reference pricing and environmental regulations in March of last year. The company claims ISEL implemented corrective actions within the subsequent two months, asserting that "all identified issues were addressed more than a year ago" and that ISEL is committed to complying with French regulations.

First Post
38 minutes ago
Beware! Terrorists are studying our tools, adapting fast: ISIS-K reviews tech in 'Khorasan'
In the summer of 2025, Issue 46 of the ISIS-K-linked English-language web magazine 'Voice of Khorasan' resurfaced online after months of silence. This time, it didn't lead with battle cries or terrorist poetry. Instead, the cover story read like a page from Wired or CNET: a side-by-side review of artificial intelligence chatbots. The article compared ChatGPT, Bing AI, Brave Leo, and China's DeepSeek. It warned readers that some of these models stored user data, logged IP addresses, or relied on Western servers vulnerable to surveillance. Brave Leo, integrated into a privacy-first browser and not requiring login credentials, was ultimately declared the winner: the best chatbot for maintaining operational anonymity.

For a terrorist group, this was an unexpected shift in tone, almost clinical. But beneath the surface was something far more chilling: a glimpse into how terrorist organisations are evolving in real time, studying the tools of the digital age and adapting them to spread chaos with precision.

This wasn't ISIS's first brush with AI. Back in 2023, a pro-Islamic State support network circulated a 17-page 'AI Tech Support Guide' on secure usage of generative tools. It detailed how to use VPNs with language models, how to scrub AI-generated images of metadata, and how to reword prompts to bypass safety filters. For the group's propaganda arms, large language models (LLMs) weren't just novelty; they were utility.

By 2024, these experiments bore fruit. A series of ISIS-K videos began appearing on encrypted Telegram channels featuring what appeared to be professional news anchors calmly reading the terrorist group's claims of responsibility. These weren't real people; they were AI-generated avatars. The news segments mimicked top-tier global media outlets, including their ticker fonts and intro music. The anchors, rendered in crisp HD, delivered ISIS propaganda wrapped in the aesthetics of mainstream media.
The campaign was called News Harvest. Each clip appeared sanitised: no blood, no threats, no glorification. Instead, the tone was dispassionate, almost journalistic. Intelligence analysts quickly realised it wasn't about evading content moderation; it was about psychological manipulation. If you could make propaganda look neutral, viewers would be less likely to question its content. And if AI could mass-produce this material, then every minor attack, every claim, every ideological whisper could be broadcast across continents in multiple languages, 24x7, at virtually no cost.

Scale and deniability: these are the twin seductions of AI for terrorists. A single propagandist can now generate recruitment messages in Urdu, French, Swahili, and Indonesian in minutes. AI image generators churn out memes and martyr posters by the dozens, each unique enough to evade the hash-detection algorithms that social media platforms use to filter known terrorist content. Video and voice deepfakes allow terrorists to impersonate trusted figures, from imams to government officials, with frightening accuracy.

This isn't just a concern for jihadist groups. Far-left ideologies in the West have enthusiastically embraced generative AI. On Pakistani army and terrorist forums during India's operation against terrorists, codenamed 'Operation Sindoor', users swap prompts to create terrorist-glorifying artwork, Hinduphobic denial screeds, and memes soaked in racial slurs against Hindus. Some in the West have trained custom models that remove safety filters altogether. Others use coded language or 'grandma hacks' to coax mainstream chatbots into revealing bomb-making instructions. One far-left terrorist boasted he got an AI to output a pipe bomb recipe by asking for his grandmother's old cooking secret. Across ideological lines, these groups are converging on the same insight: AI levels the propaganda playing field.
No longer does it take a studio, a translator, or even technical skill to run a global influence operation. All it takes is a laptop and the right prompt.

The stakes are profound. AI-generated propaganda can radicalise individuals before governments even know they're vulnerable. A deepfaked sermon or image of a supposed atrocity can spark sectarian violence or retaliatory attacks. During the 2023 Israel-Hamas conflict and the 2025 Iran-Israel 12-day war, AI-manipulated images of children and bombed mosques spread faster than journalists or fact-checkers could respond. Some were indistinguishable from real photographs. Others, though sloppy, still worked, because in the digital age, emotional impact often matters more than accuracy. And the propaganda doesn't need to last forever; it just needs to go viral before it's flagged. Every repost, every screenshot, every download extends its half-life. In that window, it shapes narratives, stokes rage, and pushes someone one step closer to violence.

What's perhaps most dangerous is that terrorists know exactly how to work the system. In discussions among ISIS media operatives, they've debated how much 'religious content' to include in videos, because too much gets flagged. They've intentionally adopted neutral language to slip through moderation filters. One user in an ISIS-K chatroom even encouraged others to 'let the news speak for itself', a perverse twist on journalistic ethics, applied to bombings and executions.

So what now? How do we respond when terrorist groups write AI product reviews and build fake newsrooms? The answers are complex, but they begin with urgency. Tech companies must embed watermarking and provenance tools into every image, video, and document AI produces. These signatures won't stop misuse, but they'll help trace origins and build detection tools that recognise synthetically generated content.
Model providers need to rethink safety, not just at the prompt level, but in deployment. Offering privacy-forward AI tools without guardrails creates safe zones for abuse. Brave Leo may be privacy-friendly, but it's now the chatbot of choice for ISIS. That tension between privacy and misuse can no longer be ignored.

Governments, meanwhile, must support open-source detection frameworks and intelligence-sharing between tech firms, civil society, and law enforcement. The threat is moving too fast for siloed responses.

But above all, the public needs to be prepared. Just as we learned to spot phishing emails and fake URLs, we now need digital literacy for the AI era. How do you spot a deepfake? How do you evaluate a 'news' video without knowing its origin? These are questions schools, journalists, and platforms must start answering now.

When the 46th edition of the terrorist propaganda magazine Voice of Khorasan opens with a chatbot review, it's not just a macabre curiosity; it's a signal flare. A terrorist group has studied our tools, rated our platforms, and begun operationalising the very technologies we are still learning to govern. The terrorists are adapting, methodically, strategically, and faster than most governments or tech firms are willing to admit. They've read the manuals. They've written their own. They've launched their beta.

What arrived in a jihadi magazine as a quiet tech column should be read for what it truly is: a warning shot across the digital world. The question now is whether we recognise it, and whether we're ready to respond.

Rahul Pawa is an international criminal lawyer and director of research at the New Delhi-based think tank Centre for Integrated and Holistic Studies. Views expressed in the above piece are personal and solely those of the author. They do not necessarily reflect Firstpost's views.


Time of India
41 minutes ago
Samsung TV Plus adds four B4U Channels for free
Samsung TV Plus, the free ad-supported streaming television (FAST) platform, has expanded its content lineup with the addition of four popular channels from the B4U Network: B4U Movies, B4U Music, B4U Kadak, and B4U Bhojpuri. With this partnership, Samsung TV Plus now offers over 125 FAST channels.

'Our mission is to deliver unmatched access and exceptional value,' said Kunal Mehta, Head of Partnerships at Samsung TV Plus India. 'By introducing new FAST channels from B4U, we're enhancing access to the latest in entertainment while supporting advertisers with a premium, scalable platform.'

The B4U Network, which reaches audiences in over 100 countries, is known for its extensive library of Hindi cinema, regional content, and music programming. The collaboration taps into India's growing Connected TV (CTV) market, where viewers are increasingly turning to smart TVs and streaming platforms for curated content.

'CTV is transforming how India consumes entertainment,' said Johnson Jain, Chief Revenue Officer at B4U. 'Our partnership with Samsung TV Plus allows us to reach broader audiences with top-tier movies and music, delivered seamlessly on a premium platform.'

The new channels are available immediately on Samsung Smart TVs and compatible Galaxy devices, offering viewers a richer, more localized streaming experience, completely free of charge.