Latest news with #bots

Companies must protect themselves against bots bypassing defenses

Tahawul Tech · 5 days ago

David Warburton, Director, F5 Labs, outlines the growing sophistication of bot adversaries and the steps companies can take to combat them in this exclusive op-ed.

In today's digital landscape, where applications and APIs are the lifeblood of businesses, a silent threat lurks: sophisticated bot adversaries. While traditional security measures focus on preventing malicious attacks, automated threats are slipping through undetected by mimicking human behaviour and exploiting gaps in application logic in unexpected ways. F5 Labs' recently released 2025 Advanced Persistent Bots Report sheds light on the evolving tactics of advanced persistent bots and the challenges they pose. Here are three trends that stood out for me from this year's research, and what companies can do to protect themselves.

1. Credential stuffing: When stolen passwords expose valuable data

Imagine a scenario where cybercriminals use readily available stolen credentials to access sensitive user accounts. This is the reality of credential stuffing, a prevalent bot-driven attack that exploits the widespread practice of password reuse. According to F5 Labs, some organisations see upwards of 80% of their login traffic coming from credential stuffing attacks launched by bots. The report highlights that, even with a low success rate of 1% to 3% per attack campaign, the sheer volume of automated logins translates into a substantial number of compromised accounts: a campaign that tests one million stolen credential pairs at those rates still compromises roughly 10,000 to 30,000 accounts.

Incidents such as the 2022 PayPal breach, in which almost 35,000 user accounts were accessed and highly monetisable personal information exposed, supply massive databases of usernames and passwords for malicious use across other online services. Even a small success rate can yield significant results, because many people reuse passwords. These details can then be used for fraudulent transactions or data theft, or sold on the dark web for targeted attacks.

In recent years, several well-known brands have reported credential stuffing attacks. The decline of genetic testing firm 23andMe was, in part, attributed to a credential stuffing campaign that exposed customer health and ancestry information. Data was found for sale on the dark web at prices ranging from $1,000 for 100 profiles up to $100,000 for 100,000 profiles. The company cited customers' lack of adoption of the site's multi-factor authentication (MFA) option as the primary failure but, in fact, the insidious nature of credential stuffing lies in its ability to bypass traditional security measures. Since the bots are using legitimate credentials and are not trying to exploit any vulnerabilities, they don't trigger typical alarms. MFA can help but, due to the rise in real-time phishing proxies (RTPP), it's not foolproof. Organisations must implement smart bot detection solutions that analyse login patterns, device fingerprints, and behavioural anomalies to see what's really going on.

2. Hospitality under siege: Gift card bots and the rise of 'carding'

While the finance and retail sectors are often considered prime targets for cyberattacks, F5 Labs research showed that hospitality is heavily targeted by malicious bot activity. In particular, 'carding' and gift card bots are found to target hospitality websites and APIs, with some organisations experiencing a 300% surge in malicious bot activity compared to last year. The report also notes that the average value of gift cards targeted by bots is increasing. Carding uses bots to validate stolen credit card numbers by rapidly testing them on checkout pages and APIs.
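Defences against carding typically hinge on velocity, since legitimate shoppers rarely try many different card numbers in quick succession. Below is a minimal illustrative sketch of a sliding-window velocity check, not any vendor's actual product; the thresholds and names are assumptions for demonstration.

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds; real deployments tune these per endpoint.
WINDOW_SECONDS = 60
MAX_DISTINCT_CARDS = 5   # legitimate shoppers rarely try many cards per minute

# session_id -> deque of (timestamp, card_fingerprint) attempts
_attempts: dict[str, deque] = defaultdict(deque)

def record_card_attempt(session_id: str, card_fingerprint: str) -> bool:
    """Record a checkout attempt and return True if the session looks like a carding bot."""
    now = time.time()
    window = _attempts[session_id]
    window.append((now, card_fingerprint))
    # Evict attempts that have aged out of the sliding window.
    while window and now - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    # Many *distinct* card numbers in one short window is the carding signature.
    return len({fp for _, fp in window}) > MAX_DISTINCT_CARDS
```

A session that trips a threshold like this would typically be challenged or quietly rate-limited rather than hard-failed, so attackers can't easily map the detection boundary.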
Gift card bots exploit loyalty programs and gift card systems. Attackers use them to check balances, transfer points, or redeem rewards illegally. These bots often exploit weaknesses such as simple, predictable patterns and sequential gift card IDs. The hospitality industry's vulnerability stems from the fact that loyalty points and gift cards are essentially digital currency, which cybercriminals can easily convert into cash or use to purchase goods and services.

To protect themselves, hospitality businesses must implement robust bot detection and mitigation strategies specifically tailored to these kinds of threats. This includes monitoring gift card activity, analysing transaction patterns, and implementing solutions that can differentiate between humans and bots. CAPTCHAs, once the go-to solution for blocking bots, have been easily bypassed by bot operators for years.

3. Bypassing the gatekeepers: Residential proxies and the futility of CAPTCHAs

Traditional bot defences like CAPTCHAs and IP blocking are failing against increasingly sophisticated evasion tactics. Bot operators can easily outsource CAPTCHA solving to human click farms, where individuals are paid small amounts to solve challenges on demand. Furthermore, the rise of residential proxy networks is a significant factor. These networks route bot traffic through the residential IP addresses of compromised devices, masking the true origin of the bots. The F5 Labs report suggests that residential proxies are now widely used by bot operators, and that the majority of bot traffic now appears to originate from these networks.

Identity management vendor Okta flagged the role of the broad availability of residential proxy services in a surge of credential stuffing attacks on its users last year. The company said that millions of fake requests had been routed through residential proxies to make them appear to originate from the mobile devices and browsers of everyday users, rather than from the IP space of virtual private server (VPS) providers.

To effectively combat these advanced evasion techniques, organisations need to move beyond traditional defences and embrace smart bot solutions that leverage machine learning and behavioural analysis to identify bots by their unique characteristics. By focusing on behaviour rather than relying on IP addresses or CAPTCHAs, organisations can more accurately detect and block sophisticated bot attacks.

Navigating the risk landscape: Finding your bot defence sweet spot

Ultimately, the level of bot defence an organisation implements depends on its risk appetite. Every business must weigh the potential costs and benefits of different mitigation strategies and determine the level of risk it is willing to accept. Completely eliminating all bot traffic may not be feasible, or even desirable, as some automated activity is legitimate and beneficial. However, failing to address malicious bot activity can lead to significant financial losses, reputational damage, and customer frustration. The key is to find the right balance. By understanding the different types of bots targeting your organisation, assessing the potential impact of their activities, and implementing appropriate detection and mitigation measures, you can effectively manage your bot risk and protect your business, and your customers, from advanced persistent bot threats.
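On the sequential gift card IDs called out in the hospitality section above, here is a short sketch of why predictable identifiers invite enumeration and what an unguessable alternative looks like; the formats and lengths are assumptions for illustration.

```python
import secrets
import string

# Weak scheme (assumed for illustration): sequential numeric IDs.
# A bot can simply count upward and hit every live card along the way.
def next_sequential_id(last_id: int) -> int:
    return last_id + 1

# Stronger scheme: random 16-character codes over a 32-symbol alphabet.
# 32**16 (about 1.2e24) possibilities makes blind enumeration hopeless
# at any realistic request rate.
ALPHABET = string.ascii_uppercase + "234567"  # 32 unambiguous symbols

def new_gift_card_code(length: int = 16) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(new_gift_card_code())
```

The point is not the specific alphabet but the search space: a bot sweeping sequential IDs finds every live card, while guessing one valid code out of roughly 10^24 is not a practical attack.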

X Will Enable AI Bots to Create Community Notes

Yahoo · 02-07-2025

This story was originally published on Social Media Today. To receive daily news and insights, subscribe to our free daily Social Media Today newsletter.

X is moving to the next stage of its Community Notes fact-checking process with the addition of 'AI Note Writers', automated bots that can create their own Community Notes, which will then be assessed by human Notes contributors. X is now enabling developers to build Community Notes creation bots, which can focus on providing accurate answers within certain niches. The bots will then be able to respond to user calls for a Community Note on a post, and provide contextual information and references to support their assessment.

As explained by X:

'Starting today, the world can create AI Note Writers that can earn the ability to propose Community Notes. Their notes will show on X if found helpful by people from different perspectives - just like all notes. Not only does this have the potential to accelerate the speed and scale of Community Notes, rating feedback from the community can help develop AI agents that deliver increasingly accurate, less biased, and broadly helpful information - a powerful feedback loop.'

The process makes sense, especially given people's growing reliance on AI tools for answers these days. The latest wave of AI bots can reference key data sources and provide succinct explanations, which probably makes them well suited to this type of fact-checking process. Done systematically, that could produce more accurate answers within fact-checks, while humans will still need to assess those answers before they're displayed to users.

However, I wonder whether X is actually going to allow AI fact-checks that don't end up aligning with Elon Musk's own perspective on certain issues, because Musk has repeatedly criticized his own AI bot's answers to various user queries of late. Just last week, Musk publicly chastised his Grok AI bot after it referenced data from Media Matters and Rolling Stone in its answers to users. Musk responded by saying that Grok's 'sourcing is terrible,' and that 'only a very dumb AI would believe MM and RS.' He then followed that up by promising to overhaul Grok, retraining it on a revised corpus that incorporates what he deems 'politically incorrect, but nonetheless factually true' info, essentially editing the bot's data sources to better align with his own ideological views.

Maybe, if such an overhaul does take place, X will then only allow these Community Notes chatbots to reference its Grok datasets, which would ensure that they don't cite data that Musk doesn't agree with. That doesn't feel overly balanced or truthful. At the same time, it seems unlikely that Musk will be keen to allow bots as fact-checkers if they consistently counter his own claims.

But maybe this is a key step towards improvement on that front, providing more direct, data-backed responses, faster, which will then ensure that more questionable claims are challenged in the app. In theory, it could be a valuable addition; I'm just not sure that Musk's efforts to influence similar AI tools are a positive signal for the project. Either way, X is launching its Community Notes AI program today, with a pilot that'll expand over time.

X opens up to Community Notes written by AI bots

The Verge · 01-07-2025

X is launching a way for developers to create AI bots that can write Community Notes that can potentially appear on posts. Like humans, the 'AI Note Writers' will be able to submit a Community Note, but their notes will only actually be shown on a post 'if found helpful by people from different perspectives,' X says in a post on its Community Notes account.

Notes written by AI will be 'clearly marked for users' and, to start, 'AIs can only write notes on posts where people have requested a note.' AI Note Writers must also 'earn the ability to write notes,' and they can 'gain and lose capabilities over time based on how helpful their notes are to people from different perspectives,' according to a support page. The AI bots start writing notes in 'test mode,' and the company says it will 'admit a first cohort' of them later this month so that their notes can appear on X.

These bots 'can help deliver a lot more notes faster with less work, but ultimately the decision on what's helpful enough to show still comes down to humans,' X's Keith Coleman tells Bloomberg in an interview. 'So we think that combination is incredibly powerful.' Coleman says there are 'hundreds' of notes published on X each day.
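X hasn't published the exact scoring mechanics in these posts, but as a rough, hypothetical illustration of the gating described above ('shown only if found helpful by people from different perspectives'), here is a sketch. The Rating type, viewpoint scores, and thresholds are all invented for illustration; the production system is more sophisticated, inferring rater viewpoints from past rating behaviour rather than taking them as inputs.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    helpful: bool
    viewpoint: float  # assumed rater leaning in [-1.0, 1.0]

def note_should_show(ratings: list[Rating], min_each_side: int = 3) -> bool:
    """Show a note only if raters from both ends of the spectrum found it helpful."""
    left = sum(1 for r in ratings if r.helpful and r.viewpoint < -0.3)
    right = sum(1 for r in ratings if r.helpful and r.viewpoint > 0.3)
    # Raw helpfulness volume is not enough; agreement must cross viewpoints.
    return left >= min_each_side and right >= min_each_side
```

The design point the sketch captures is that a flood of helpful ratings from one side of a divide cannot, by itself, push a note live.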

The internet of agents is rising fast, and publishers are nowhere near ready

Fast Company · 23-06-2025

Imagine you owned a bookstore. Most of your revenue depends on customers coming in and buying books, so you set up different aspects of the business around that activity. You might put low-cost 'impulse buy' items near the checkout or start selling coffee as a convenience. You might even partner with publishers to put displays of popular bestsellers in high-visibility locations in the store to drive sales.

Now imagine one day a robot comes in to buy books on behalf of someone. It ignores the displays, the coffee kiosk, and the tchotchkes near the till. It just grabs the book the person ordered, pays for it, and walks out. The next day four robots come in, then 12 the day after that. Soon robots outnumber the humans in your store, whose numbers are dwindling by the day. You soon see very few sales from nonbook items, publishers stop bothering with those displays, and the coffee goes cold. Revenue plummets. In response, you might start charging robots a fee to enter your store, and if they don't pay it, you deny them entry. But then one day a robot that looks just like a human comes in, so convincing that you can't tell the difference. What do you do then?

This analogy is basically what the publishing world is going through right now, with bot traffic to media websites skyrocketing over the past three months. That's according to new data from TollBit, which recently published its State of the Bots report for the first quarter of 2025. Even more concerning, however, is that the most popular AI search engines are choosing to ignore long-respected standards for blocking bots, in some cases arguing that when a search 'agent' acts on behalf of an individual user, the bot should be treated as human.

The robot revolution

TollBit's report paints a fast-changing picture of what's happening with AI search. Over the past several months, AI companies have either introduced search abilities or greatly increased their search activity. Bot scraping focused on retrieval-augmented generation (RAG), which is distinct from scraping for training data, increased 49% over the previous quarter. Anthropic's Claude notably introduced search; in the same period, ChatGPT (by far the world's most popular chatbot) saw a spike in users, and deep research tools from all the major providers began to take hold.

At the same time, publishers increased their defenses. The report reveals that media websites in January were using various methods to block AI bots four times as much as they were a year before. The first line of defense is to adjust the website's robots.txt file, which tells crawlers which specific bots are welcome and which are forbidden from accessing the content. The thing is, adhering to robots.txt is ultimately an honor system and not really enforceable. And the report indicates more AI companies are treating it as such: among sites in TollBit's network, bot scrapes that ignore robots.txt increased from 3.3% to 12.9% in just one quarter.
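For context, robots.txt is just a plain-text file served at a site's root that asks crawlers, politely and unenforceably, to stay out. Here is a minimal sketch of the kind of rules publishers now deploy; GPTBot, ClaudeBot, and Google-Extended are real AI-crawler tokens, but the policy shown is illustrative, not any particular publisher's file.

```
# https://example.com/robots.txt (illustrative policy)

# Ask AI crawlers to stay away from everything.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Leave traditional search indexing untouched.
User-agent: Googlebot
Allow: /

# Everyone else: full access.
User-agent: *
Allow: /
```

Nothing here is binding; a crawler that ignores the file gets the content anyway, which is exactly the honor-system problem the report quantifies.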
Part of that increase in ignored robots.txt files is due to a relatively new stance the AI companies have taken, and it's subtle but important. Broadly speaking, there are three different kinds of bots that scrape or crawl content:

- Training bots: crawl the internet to scrape content that serves as training data for AI models.
- Search indexing bots: crawl the web so the model has fast access to important information outside its training set (which is usually out of date). This is a form of RAG.
- User agent bots: also a form of RAG, these crawlers go out to the web in real time to find information directly in response to a user query, regardless of whether the content has been previously indexed.

Because the third kind is an agent acting on behalf of a human, AI companies argue that it's an extension of that user's behavior, and they have essentially given themselves permission to ignore robots.txt settings for that use case. This isn't guesswork: Google, Meta, and Perplexity have made it explicit in their developer notes. This is how you get human-looking robots in the bookstore.

When humans go to websites, they see ads. Humans can be intrigued or enticed by other content, such as a link to a podcast about the same topic as an article they're reading. Humans can decide whether or not to pay for a subscription. Humans sometimes choose to make a transaction based on the information in front of them. Bots don't really do any of that (not yet, anyway). Large parts of the internet economy depend on human attention to websites, but as the report shows, that behavior drops off massively when someone uses AI to search the web: AI search engines provide very little referral traffic compared to traditional search.

This, of course, is what's behind many of the lawsuits now in play between media companies and AI companies. How that is resolved in the legal realm is still TBD, but in the meantime, some media sites are choosing to block bots from accessing their content at all, or at least are attempting to. For user agent bots, however, that ability has been taken away. The AI companies have always framed data harvesting in the way that's most favorable to their insatiable demand for it, famously claiming that data only needs to be 'publicly available' to qualify as training data. Even when they claim to respect robots.txt for their search engines, it's an open secret that they sometimes use third-party scrapers to bypass it.

Unmasking the bots

So apart from suing and hoping for the best, how can publishers regain some, well, agency in the emerging world of agent traffic? If you believe AI substitution threatens your growth, there are additional defenses to consider. Hard paywalls are easier to defend, both technically and legally, and several companies (including TollBit, but there are others, such as ScalePost) specialize in redirecting bot traffic to paywalled endpoints built specifically for bots. If the robot doesn't pay, it's denied the content, at least in theory, as the sketch below illustrates.
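Here is a minimal sketch of that pay-or-be-denied idea, not TollBit's or ScalePost's actual product: a toy HTTP server that answers known AI-crawler user agents with 402 Payment Required unless the request carries a license token. The marker list, token, and header name are all assumptions for illustration.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative AI crawler user-agent substrings; real deployments use
# maintained lists plus network-level signals, since user agents are spoofable.
AI_BOT_MARKERS = ("GPTBot", "ClaudeBot", "PerplexityBot")
VALID_TOKENS = {"demo-license-token"}  # hypothetical pre-purchased licenses

class PayToCrawl(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        token = self.headers.get("X-Crawl-License", "")
        if any(marker in ua for marker in AI_BOT_MARKERS) and token not in VALID_TOKENS:
            # Known bot, no license: deny the content.
            self.send_response(402)  # Payment Required
            self.end_headers()
            self.wfile.write(b"Crawling this content requires a paid license.\n")
            return
        # Humans (and paying bots) get the page.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>Article content here.</body></html>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PayToCrawl).serve_forever()
```

The obvious weakness, and the article's central point, is that user agents are trivially spoofable: a bot that presents itself as a human browser sails straight past this check, which is why vendors layer on network and behavioural signals.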
Collective action is another possibility. I doubt publishers would launch a class action around this specific relabeling of user agents, but it does provide more ammunition in broader copyright lawsuits. Besides going to court, industry associations could come out against the move. The News/Media Alliance in particular has been very vocal about AI companies' alleged transgressions of copyright.

The idea of treating agentic activity as the equivalent of human activity has consequences that go beyond the media. Any content or tool that has traditionally been available for free will need to reevaluate that access now that robots are destined to be a growing part of the mix. If there was any doubt that simply updating robots.txt instructions was adequate, the TollBit report blew it out of the water.

The stance that 'AI is just doing what humans do' is often used as a defense when AI systems ingest large amounts of information and then produce new content based on it. Now the makers of those systems are quietly extending that idea, allowing their agents to effectively impersonate humans while shopping the web for data. Until it's clear how to build profitable stores for robots, there should be a way to force their masks off.

DAVID MARCUS: Your social media feed is being hijacked to divide MAGA supporters

Fox News · 20-06-2025

As our society buries itself deeper and deeper into the cave of social media, we are seeing a growing divide between what happens in the real world and what we see on platforms like X and TikTok. A bombshell new report from the National Contagion Research Institute (NCRI) shows that much of this is being directed by our foreign enemies, and that one of their top goals is to infiltrate and divide the MAGA movement.

According to NCRI, Russia and Iran have been employing tens of thousands of bots to inject extreme rhetoric into American social media discourse and, perhaps more importantly, to artificially inflate the influence of content creators who push radical and divisive agendas. To quote one NCRI analyst, "If you talk to Republicans right now, more than 80% of them support the war against Iran. But if you go on Twitter [X] you get the sense that there is a civil war raging."

This manipulation of social media by our enemies is far more insidious than most Americans realize, so let's walk through how this kind of information operation, the technical name for propaganda, works. Imagine, for example, an obscure comedian or Instagram model who begins to "just ask questions" about why Jews run everything, or why black people commit crimes. Even better, they might post about how they aren't allowed to ask these very questions and insinuate that neither are you.

At this point, according to the report, Russian and Iranian bot armies will begin to follow these radical accounts, massively pumping up their numbers. They will like and share the most divisive content and work behind the scenes to make this person famous. On platforms that monetize interaction, this can mean very large payouts for creators, as spy bots mindlessly watch their videos over and over, and the beauty of it is that the content creator never even has to know they are getting paid off. When we talk about influencers being bought and paid for by foreign foes, it may not mean a duffle bag full of cash in a bus station locker; simply by using thousands of bots to juice the numbers, our foes get the social media companies themselves to facilitate the payouts.

Perhaps the most obvious way we can see this malign foreign influence online is in the incredible amount of casual racism and antisemitism, supposedly posted by Americans, that we see on X. These hate posts range from straight-up Nazi apologism to memes about fatherless black homes or weird eugenics IQ graphs, and if their prevalence in the algorithm accurately reflected the level of racism in America, then this would be a deeply racist country. Only it isn't, because X does not accurately reflect our society. Instead, countries that despise America are infusing hate into the platform and propping up the handful of real people willing to push racism and division. What the Russian and Iranian bot farms hope we will believe is that America is full of secret racists who will only voice their true beliefs through anonymous personas. But this is absurd; America knows, IRL, that that kind of racism is buried in our past.

The question becomes: what can we do to fight back against this massive information operation aimed at our minds? Liberals have long taken the exact wrong approach, which is to try to protect the end user from malicious content. This always adds up to censorship, one way or the other. The better approach, at least as far as the government is concerned, is to target the bot farms and the countries that back them. This can be done through cyberattacks, sanctions, or any number of other measures.

There is also a role for the social media industry to play here. We are hearing growing calls for X to use a flag to identify the country of origin of its accounts. This would immediately help users see through the foreign operations.

The silver lining in all of this, as the report shows, is that making the leap from influence on a social media screen to influence in the real world is not as easy as we might once have imagined. These foreign-backed influencers have few outlets they can go to off of social media. Sure, Piers Morgan may put on anyone with 250k followers no matter how awful they are, but Main Street America isn't seeing it.

As a free society, America is by definition vulnerable to informational attacks, and as citizens in that free society, all of us bear a responsibility to process the unfettered flow of information we have access to in responsible ways. Make no mistake: your social media feed is under direct foreign attack. So far, the attacks haven't done too much damage, but keeping it that way starts, first and foremost, with all of us.
