Cloudflare will now block AI bots by default on new sites

Cloudflare, which protects around 20% of all websites globally, is changing how AI companies interact with the open web. The company will now block AI crawlers by default on all newly onboarded websites, forcing AI firms to get explicit consent before accessing site content for training or search.
The move is Cloudflare's latest effort to curb unauthorised content scraping, in which automated bots extract data from websites without the owner's permission and, in most cases, without attribution or compensation.
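In practice, "blocking by default" amounts to refusing requests from known AI crawler user agents unless the site owner has opted in. The snippet below is a minimal illustrative sketch of that idea, not Cloudflare's actual implementation; the bot names and the allow_request helper are examples only.

```python
# Minimal sketch (not Cloudflare's implementation): refuse requests whose
# User-Agent matches a list of known AI crawlers unless the site owner
# has explicitly opted in. Bot names here are illustrative examples.
KNOWN_AI_CRAWLERS = {"GPTBot", "ClaudeBot", "CCBot", "PerplexityBot"}

def allow_request(user_agent: str, owner_opted_in: bool) -> bool:
    """Return True if the request should reach the origin site."""
    is_ai_crawler = any(bot.lower() in user_agent.lower() for bot in KNOWN_AI_CRAWLERS)
    if is_ai_crawler and not owner_opted_in:
        return False  # blocked by default on newly onboarded sites
    return True

print(allow_request("Mozilla/5.0 (compatible; GPTBot/1.1)", owner_opted_in=False))  # False
print(allow_request("Mozilla/5.0 (Windows NT 10.0; Win64; x64)", owner_opted_in=False))  # True
```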
This change comes alongside the launch of a new product called Pay per Crawl, a marketplace that lets website owners and media publishers charge AI companies for crawling their content.
Publishers can set custom pricing depending on the use case – whether the data will be used to train large language models, power real-time search results, or support autonomous AI agents. The product aims to create more transparency and fairness in how AI firms obtain data.
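Cloudflare has described Pay per Crawl as reviving the long-dormant HTTP 402 "Payment Required" status code. The sketch below shows, in purely illustrative terms, how a per-use-case price check could work; the PRICE_TABLE values, use-case labels, and handle_crawl helper are invented for this example and are not Cloudflare's API.

```python
# Hypothetical sketch of a Pay per Crawl-style exchange. Header semantics,
# prices, and helper names are illustrative, not Cloudflare's actual API.
PRICE_TABLE = {          # per-request prices set by the publisher (USD)
    "training": 0.010,   # crawling to train large language models
    "search":   0.002,   # crawling to power real-time AI search answers
    "agent":    0.005,   # crawling on behalf of autonomous AI agents
}

def handle_crawl(use_case: str, crawler_agrees_to_pay: bool):
    """Return an (HTTP status, detail) pair for an incoming AI crawl request."""
    price = PRICE_TABLE.get(use_case)
    if price is None:
        return 403, "unknown or undeclared use case - blocked"
    if not crawler_agrees_to_pay:
        # 402 Payment Required: quote the publisher's price for this use case
        return 402, f"payment required: ${price:.3f} per request for '{use_case}'"
    return 200, f"content served; ${price:.3f} billed to the crawler's account"

print(handle_crawl("training", crawler_agrees_to_pay=False))
print(handle_crawl("search", crawler_agrees_to_pay=True))
```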
Leading media outlets such as Condé Nast, TIME, and The Atlantic have already joined the program; they blame declining referral traffic on AI-generated answers that increasingly remove the need for users to click through to original sources.
These publishers, among others, argue that without fair compensation, AI scraping undermines their ability to sustain journalism and content creation.
Cloudflare also released comparative crawl-to-referral ratios that highlight the imbalance between the traffic AI crawlers send back to sites and the data they consume. OpenAI's crawlers reportedly make about 1,700 requests for every referral they send, while Anthropic's ratio reaches a staggering 73,000 to 1. In contrast, Google's crawl-to-referral ratio sits at a more balanced 14 to 1.
The move redefines Cloudflare's role from passive infrastructure to active negotiator in the AI data economy. As AI assistants become more independent, and as AI-generated answers increasingly replace traditional web traffic, tools like Pay per Crawl could determine which sites get fairly compensated.
(This article has been curated by Arfan Jeelany, who is an intern with The Indian Express)

Related Articles

France fines one of Amazon's biggest Chinese retail competitors for misleading customers about discounts

Time of India | 9 minutes ago

France's antitrust agency has announced that it has fined fast-fashion giant Shein 40 million euros ($47.17 million) for alleged deceptive business practices, specifically concerning misleading discounts. The fine comes after a nearly year-long investigation into the China-founded retailer, which is one of Amazon's biggest competitors in the US. The agency, responsible for both consumer protection and competition, stated that Infinite Style E-Commerce Co Ltd (ISEL), the entity managing sales for the Shein brand, misled customers about the authenticity of discounts, as per news agency Reuters.

What the rules say and the findings of the French agency

Under French regulations, a discount's reference price must be the lowest price offered by the retailer in the 30 days prior to the promotion. The investigation found that Shein failed to adhere to this rule, sometimes even increasing prices before applying a supposed discount, the report said. The probe, which analysed thousands of products on Shein's French website between October 1, 2022, and August 31, 2023, revealed significant discrepancies: 57% of advertised deals did not, in fact, offer a lower price, 19% had less of a discount than advertised, and a surprising 11% were actually price increases.

What Shein has to say on the 40-million-euro fine

In response, Shein stated that ISEL was informed of the breaches related to reference pricing and environmental regulations in March of last year. The company claims ISEL implemented corrective actions within the subsequent two months, asserting that "all identified issues were addressed more than a year ago" and that ISEL is committed to complying with French regulations.
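The 30-day reference-price rule lends itself to a simple check. The function below is a hypothetical illustration of that rule, not the agency's methodology; the discount_is_misleading helper and the example figures are invented.

```python
# Illustrative sketch of the French reference-price rule described above:
# an advertised discount must be measured against the lowest price the
# retailer offered in the 30 days before the promotion started.
def discount_is_misleading(advertised_reference: float,
                           promo_price: float,
                           prices_last_30_days: list[float]) -> bool:
    """Return True if the advertised discount overstates the real saving."""
    true_reference = min(prices_last_30_days)  # lowest price in the prior 30 days
    return advertised_reference > true_reference or promo_price >= true_reference

# Example (invented figures): an item sold at 20 euros for most of the month,
# raised to 30 just before a "now 15" promotion. The real reference price is
# 20, not 30, so the advertised saving is overstated and the deal is flagged.
print(discount_is_misleading(advertised_reference=30.0,
                             promo_price=15.0,
                             prices_last_30_days=[20.0, 20.0, 30.0]))  # True
```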

Beware! Terrorists are studying our tools, adapting fast: ISIS-K reviews tech in 'Khorasan'

First Post | 30 minutes ago

In the summer of 2025, Issue 46 of the ISIS-K-linked English-language web magazine 'Voice of Khorasan' resurfaced online after months of silence. This time, it didn't lead with battle cries or terrorist poetry. Instead, the cover story read like a page from Wired or CNET: a side-by-side review of artificial intelligence chatbots. The article compared ChatGPT, Bing AI, Brave Leo, and China's DeepSeek. It warned readers that some of these models stored user data, logged IP addresses, or relied on Western servers vulnerable to surveillance. Brave Leo, integrated into a privacy-first browser and not requiring login credentials, was ultimately declared the winner: the best chatbot for maintaining operational anonymity.

For a terrorist group, this was an unexpected shift in tone, almost clinical. But beneath the surface was something far more chilling: a glimpse into how terrorist organisations are evolving in real time, studying the tools of the digital age and adapting them to spread chaos with precision.

This wasn't ISIS's first brush with AI. Back in 2023, a pro-Islamic State support network circulated a 17-page 'AI Tech Support Guide' on secure usage of generative tools. It detailed how to use VPNs with language models, how to scrub AI-generated images of metadata, and how to reword prompts to bypass safety filters. For the group's propaganda arms, large language models (LLMs) weren't just novelty, they were utility.

By 2024, these experiments bore fruit. A series of ISIS-K videos began appearing on encrypted Telegram channels featuring what appeared to be professional news anchors calmly reading the terrorist group's claims of responsibility. These weren't real people; they were AI-generated avatars. The news segments mimicked top-tier global media outlets, including their ticker fonts and intro music. The anchors, rendered in crisp HD, delivered ISIS propaganda wrapped in the aesthetics of mainstream media. The campaign was called News Harvest.

Each clip appeared sanitised: no blood, no threats, no glorification. Instead, the tone was dispassionate, almost journalistic. Intelligence analysts quickly realised it wasn't about evading content moderation; it was about psychological manipulation. If you could make propaganda look neutral, viewers would be less likely to question its content. And if AI could mass-produce this material, then every minor attack, every claim, every ideological whisper could be broadcast across continents in multiple languages, 24x7, at virtually no cost.

Scale and deniability: these are the twin seductions of AI for terrorists. A single propagandist can now generate recruitment messages in Urdu, French, Swahili, and Indonesian in minutes. AI image generators churn out memes and martyr posters by the dozens, each unique enough to evade the hash-detection algorithms that social media platforms use to filter known terrorist content. Video and voice deepfakes allow terrorists to impersonate trusted figures, from imams to government officials, with frightening accuracy.

This isn't just a concern for jihadist groups. Far-left ideologies in the West have enthusiastically embraced generative AI. On Pakistani army and terrorist forums during India's operation against terrorists, codenamed 'Operation Sindoor', users swapped prompts to create terrorist-glorifying artwork, Hinduphobia-denial screeds, and memes soaked in racial slurs against Hindus. Some in the West have trained custom models that remove safety filters altogether. Others use coded language or 'grandma hacks' to coax mainstream chatbots into revealing bomb-making instructions. One far-left terrorist boasted he got an AI to output a pipe bomb recipe by asking for his grandmother's old cooking secret.

Across ideological lines, these groups are converging on the same insight: AI levels the propaganda playing field. No longer does it take a studio, a translator, or even technical skill to run a global influence operation. All it takes is a laptop and the right prompt.

The stakes are profound. AI-generated propaganda can radicalise individuals before governments even know they're vulnerable. A deepfaked sermon or image of a supposed atrocity can spark sectarian violence or retaliatory attacks. During the 2023 Israel-Hamas conflict and the 2025 Iran-Israel 12-day war, AI-manipulated images of children and bombed mosques spread faster than journalists or fact-checkers could respond. Some were indistinguishable from real photographs. Others, though sloppy, still worked, because in the digital age emotional impact often matters more than accuracy. And the propaganda doesn't need to last forever; it just needs to go viral before it's flagged. Every repost, every screenshot, every download extends its half-life. In that window, it shapes narratives, stokes rage, and pushes someone one step closer to violence.

What's perhaps most dangerous is that terrorists know exactly how to work the system. In discussions among ISIS media operatives, they've debated how much 'religious content' to include in videos, because too much gets flagged. They've intentionally adopted neutral language to slip through moderation filters. One user in an ISIS-K chatroom even encouraged others to 'let the news speak for itself', a perverse twist on journalistic ethics applied to bombings and executions.

So what now? How do we respond when terrorist groups write AI product reviews and build fake newsrooms? The answers are complex, but they begin with urgency. Tech companies must embed watermarking and provenance tools into every image, video, and document AI produces. These signatures won't stop misuse, but they'll help trace origins and build detection tools that recognise synthetically generated content. Model providers need to rethink safety, not just at the prompt level but in deployment. Offering privacy-forward AI tools without guardrails creates safe zones for abuse. Brave Leo may be privacy-friendly, but it's now the chatbot of choice for ISIS. That tension between privacy and misuse can no longer be ignored.

Governments, meanwhile, must support open-source detection frameworks and intelligence-sharing between tech firms, civil society, and law enforcement. The threat is moving too fast for siloed responses. But above all, the public needs to be prepared. Just as we learned to spot phishing emails and fake URLs, we now need digital literacy for the AI era. How do you spot a deepfake? How do you evaluate a 'news' video without knowing its origin? These are questions schools, journalists, and platforms must start answering now.

When the 46th edition of the terrorist propaganda magazine Voice of Khorasan opens with a chatbot review, it's not just a macabre curiosity; it's a signal flare. A terrorist group has studied our tools, rated our platforms, and begun operationalising the very technologies we are still learning to govern.

The terrorists are adapting, methodically, strategically, and faster than most governments or tech firms are willing to admit. They've read the manuals. They've written their own. They've launched their beta.

What arrived in a jihadi magazine as a quiet tech column should be read for what it truly is: a warning shot across the digital world. The question now is whether we recognise it, and whether we're ready to respond.

Rahul Pawa is an international criminal lawyer and director of research at the New Delhi-based think tank Centre for Integrated and Holistic Studies. Views expressed in the above piece are personal and solely those of the author. They do not necessarily reflect Firstpost's views.

Samsung TV Plus adds four B4U Channels for free

Time of India | 33 minutes ago

Samsung TV Plus, the free ad-supported streaming television (FAST) platform, has expanded its content lineup with the addition of four popular channels from the B4U Network: B4U Movies, B4U Music, B4U Kadak, and B4U Bhojpuri. With this partnership, Samsung TV Plus now offers over 125 FAST channels.

'Our mission is to deliver unmatched access and exceptional value,' said Kunal Mehta, Head of Partnerships at Samsung TV Plus India. 'By introducing new FAST channels from B4U, we're enhancing access to the latest in entertainment while supporting advertisers with a premium, scalable platform.'

The B4U Network, which reaches audiences in over 100 countries, is known for its extensive library of Hindi cinema, regional content, and music programming. The collaboration taps into India's growing Connected TV (CTV) market, where viewers are increasingly turning to smart TVs and streaming platforms for curated content.

'CTV is transforming how India consumes entertainment,' said Johnson Jain, Chief Revenue Officer at B4U. 'Our partnership with Samsung TV Plus allows us to reach broader audiences with top-tier movies and music, delivered seamlessly on a premium platform.'

The new channels are available immediately on Samsung Smart TVs and compatible Galaxy devices, offering viewers a richer, more localized streaming experience, completely free of charge.
