
AI slop is killing search results — here's how to stop it
Whether you're looking for the best travel gear, a banana bread recipe or how to fix your Wi-Fi, you're likely wading through a sea of vague, repetitive, AI-generated content.
Welcome to the era of AI slop that's quietly polluting the internet.
AI slop refers to low-quality, mass-produced content generated by artificial intelligence. Unlike Claude's AI-generated blog, AI slop is often published with minimal or no human editing.
These posts are typically filled with robotic phrasing, recycled ideas and surface-level information that clogs the search results you actually need. You may be seeing AI slop without even realizing it: blog articles that feel oddly stiff, reviews that don't offer any real insight, or listicles that read like they were written by a machine (because they were).
In many cases, these posts cite ChatGPT or other AI models as sources, or worse, they cite each other in a loop of low-value content.
Because it's optimized to game search algorithms, AI slop floods the top of search results. And it's spreading fast.
There are a few reasons AI slop is taking over your search results. First, it's profitable. Even if only a small percentage of people click on these articles or purchase something from affiliate links, the sheer volume makes the strategy worth it for publishers.
Second, it scales infinitely. One person using AI tools can generate hundreds of articles in a single day; no editorial team required.
Finally, search engines can't keep up. Platforms like Google are constantly adjusting their algorithms to detect low-quality content, but AI is evolving faster.
As models like ChatGPT, Claude and Gemini become more capable, the line between passable and polished gets blurrier, and the slop gets harder to detect.
This surge in AI-generated content is actively undermining how we make decisions, solve problems and learn online.
Trust is eroding as search results increasingly feel generic or suspicious. Even basic questions now return shallow answers that leave users more confused than informed.
Human creators are also paying the price. Valuable, original content from real writers often gets buried beneath a flood of AI-generated filler.
And for users, the cost is time: having to click through five or six unhelpful pages just to find one good answer is frustrating, and frankly exhausting.
Until search engines catch up, there are simple tricks you can use to find better, more human-written information.
One of the most effective methods is to use search operators, which are special phrases you can add to your Google search to narrow the results.
For example, if you're looking for genuine product reviews or personal experiences, you can add site:reddit.com to your search. This tells Google to only show results from Reddit, where real people share firsthand opinions.
Let's say you want recommendations for running shoes. Instead of typing:
best running shoes
Try:
best running shoes site:reddit.com
This will give you Reddit threads and comments, real conversations rather than generic blog posts written by an AI or SEO team. You can do the same with other trusted sites, too.
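If you run the same kind of search often, the operator trick is easy to script. Below is a minimal sketch in Python (standard library only; the sample query and the reddit.com default are just illustrative placeholders, not something prescribed by this tip) that builds a site-restricted Google search URL and opens it in your browser.

```python
# Minimal sketch: build a Google search URL restricted to a single site.
# Assumes Python 3; "reddit.com" and the sample query are just examples.
from urllib.parse import quote_plus
import webbrowser

def site_search(query: str, site: str = "reddit.com") -> str:
    """Return a Google search URL limited to one site via the site: operator."""
    return "https://www.google.com/search?q=" + quote_plus(f"{query} site:{site}")

if __name__ == "__main__":
    url = site_search("best running shoes")
    print(url)            # copy the printed URL into any browser...
    webbrowser.open(url)  # ...or open it directly
```

Swap in any site you trust for the default, or drop the webbrowser call and simply copy the printed URL.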
These small tweaks help you bypass the 'AI slop' and surface content that's more likely to be helpful, opinionated, and written by actual humans.
It also helps to filter results by date. Since many AI content farms push out evergreen content with no timestamps, limiting your search to the 'Past Month' or 'Past Year' can surface more relevant, fresher content.
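If you'd rather type the filter than click through Google's Tools menu, Google also supports before: and after: date operators directly in the query (dates in YYYY-MM-DD form), and they stack with site:. A hypothetical example, reusing the running-shoe query from above:

```
best running shoes site:reddit.com after:2024-01-01
```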
You can also try alternative search engines like DuckDuckGo or Perplexity.ai, which often emphasize transparency and quality over quantity.
Search, which once felt like a reliable source of useful information, is starting to feel more like a junk drawer. If we want to fix the problem at scale, platforms need to act. That starts with penalizing low-effort, AI-generated content and rewarding original reporting and expert insight.
AI-generated articles should be clearly labeled, and search engines should elevate content created by verified humans, especially when it comes to advice, reviews and news.
As it stands, platforms are still playing catch-up. But if they want users to keep trusting them, they need to stop rewarding volume and start rewarding value.
Use site-specific queries, filter by date and stick to trusted sources. Ironically, some of the best AI tools like ChatGPT or Claude may actually be more helpful than what search engines serve up. These chatbots can summarize research, answer specific questions, or help you cut through noise. Just be sure to verify anything they generate; AI slop can happen inside a chatbot, too.
The cleanest, most helpful information is still out there; you just have to be more intentional about how you search.
