
How Hackbots Accelerate Cyber Risk, and How to Beat Them
Hackbots combine the reasoning capabilities of modern LLMs, most notably GPT‑4, with orchestration layers that enable intelligent decision‑making: adapting test payloads, refining configurations, and parsing results. Unlike legacy scanners, these systems analyse target infrastructure and dynamically choose tools and strategies, often flagging novel vulnerabilities that evade conventional detection. Academic research demonstrates that GPT‑4 agents can autonomously perform complex operations such as blind SQL injection and database schema extraction without being told the vulnerability in advance.
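To make the orchestration idea concrete, here is a minimal sketch of the loop such an agent runs: ask a model to choose the next probe, execute it, feed the parsed result back, and adapt. The `llm` callable, the `choose_next_action` helper, and the tool registry are illustrative assumptions, not any vendor's actual framework.

```python
import json
import subprocess

def choose_next_action(llm, history: list[dict]) -> dict:
    """Ask the model (an assumed text-in/text-out callable) to pick the next probe."""
    prompt = (
        "You are a security testing agent. Given the observations below, "
        'reply with JSON: {"tool": ..., "args": [...], "done": bool}.\n'
        + json.dumps(history)
    )
    return json.loads(llm(prompt))

TOOLS = {
    # Each tool is just a command wrapper here; real agents register far richer tooling.
    "http_probe": lambda url: subprocess.run(
        ["curl", "-s", "-o", "/dev/null", "-w", "%{http_code}", url],
        capture_output=True, text=True, timeout=10
    ).stdout,
}

def run_agent(llm, target: str, max_steps: int = 10) -> list[dict]:
    history = [{"observation": f"target is {target}"}]
    for _ in range(max_steps):
        action = choose_next_action(llm, history)
        if action.get("done"):
            break
        tool = TOOLS.get(action["tool"])
        result = tool(*action["args"]) if tool else "unknown tool"
        # The parsed result is appended so the model can adapt its next payload.
        history.append({"action": action, "observation": result})
    return history
```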
Corporate platforms have begun integrating hackbot capabilities into ethical hacking pipelines. HackerOne, for instance, now requires human review before any vulnerability submission, underscoring that hackbots remain tools under human supervision. Cybersecurity veteran Jack Nunziato explains: 'hackbots leverage advanced machine learning … to dynamically and intelligently hack applications,' a leap forward from rigid automated scans. Such systems are transforming both offensive and defensive security landscapes.
Alongside legitimate use, underground markets are offering hackbots-as-a-service. Products like WormGPT and FraudGPT are being promoted on darknet forums, providing scripting and social‑engineering automation under subscription models. Though some users criticise their limited utility—one described WormGPT as 'just an old cheap version of ChatGPT'—the consensus is that even basic automation can significantly lower the barrier to entry into cybercrime. Security analysts caution that these services, even if imperfect, democratise attack capabilities and may increase the volume and reach of malicious campaigns.
While hackbots enable faster and more thorough scans, they lack human creativity. Modern systems depend on human-in-the-loop oversight, where experts validate results and craft exploit chains for end-to-end attacks. Yet the speed advantage is real: automated agents can tirelessly comb through code, execute payloads, and surface anomalies across large environments. One cybersecurity researcher noted hackbots are 'getting good, really good, at simulating … a curious, determined hacker'.
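What that human-in-the-loop gate can look like, in a hedged sketch: machine-generated findings accumulate in a queue and nothing is submitted until a named analyst validates it. The `Finding` model and the `approve` and `submit_report` hooks are assumed names for illustration only.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    title: str
    evidence: str
    produced_by: str          # e.g. "hackbot-agent-3"
    approved_by: str | None = None

def review_queue(findings: list[Finding],
                 analyst: str,
                 approve: Callable[[Finding], bool],
                 submit_report: Callable[[Finding], None]) -> None:
    """Every machine-generated finding passes a human gate before submission."""
    for f in findings:
        if approve(f):                 # analyst validates evidence, reproduces the issue
            f.approved_by = analyst
            submit_report(f)           # only approved findings leave the pipeline
        # rejected findings stay unapproved for triage or model feedback
```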
Defensive strategies must evolve rapidly to match this new threat. The UK's National Cyber Security Centre has warned that AI will likely increase both the volume and severity of cyberattacks. GreyNoise Intelligence recently reported that actors are increasingly exploiting long-known vulnerabilities in edge devices as defenders lag on patching — demonstrating how automation favours adversaries. Organisations must enhance their baseline defences to withstand hackbots, which operate at machine scale.
A multi-layered response is critical. Continuous scanning, hardened endpoint controls, identity‑centric solutions, and robust patch management programmes form the backbone of resilience. Privileged Access Management, particularly as set out in recently established frameworks, is being touted as indispensable. Likewise, advanced Endpoint Detection and Response (EDR) and Extended Detection and Response (XDR) platforms use AI defensively, applying behavioural analytics to flag suspicious activity before attackers can exploit high-velocity toolkits.
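To illustrate the behavioural-analytics idea rather than any vendor's actual detection engine, the sketch below flags hosts whose latest activity rate is a statistical outlier against their own baseline, the kind of burst that machine-speed tooling tends to produce.

```python
import statistics

def flag_anomalous_hosts(event_counts: dict[str, list[int]],
                         threshold: float = 3.0) -> list[str]:
    """Flag hosts whose latest per-minute event count is a statistical outlier
    against that host's own historical baseline (simple z-score heuristic)."""
    flagged = []
    for host, counts in event_counts.items():
        if len(counts) < 10:
            continue                      # not enough history for a baseline
        baseline, latest = counts[:-1], counts[-1]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0
        if (latest - mean) / stdev > threshold:
            flagged.append(host)          # burst consistent with machine-speed activity
    return flagged

# Example: a host suddenly generating far more events than usual gets flagged.
print(flag_anomalous_hosts({"web-01": [4, 5, 3, 6, 4, 5, 4, 6, 5, 4, 48]}))
```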
Legal and policy frameworks are also adapting. Bug bounty platforms now integrate hackbot disclosures under rules requiring human oversight, promoting ethical use while mitigating abuse. Security regulators and insurers are demanding evidence of AI-aware defences, particularly in critical sectors, aligning with risk-based compliance models.
Industry insiders acknowledge the dual nature of the phenomenon. Hackbots serve as force multipliers for both defenders and attackers. As one expert puts it, 'these tools could reshape how we defend systems, making it easier to test at scale … On the other hand, hackbots can … scale sophisticated attacks faster than any human ever could'. That tension drives the imperative: hackbots may still miss the logic flaws a seasoned human tester would catch, but they succeed at deploying exploitation at scale, and defences must account for both.
Recent breakthroughs on LLM‑powered exploit automation heighten the stakes. A February 2024 study revealed GPT‑4 agents autonomously discovering SQL vulnerabilities on live websites. With LLMs maturing rapidly, future iterations may craft exploit payloads, bypass filters, and compose stealthier attacks.
To pre‑empt this, defenders must embed AI strategies within security operations. Simulated red-team exercises should leverage hackbot‑style agents, exposing defenders to their speed and variety. Build orchestration workflows that monitor, sandbox, and contain automated test traffic. Maintain visibility over AI‑driven tooling across pipelines and supply chains.
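One hedged way to keep hackbot-style agents contained during such exercises is to force every action through a scope check before it executes. The allowlist and `guarded_execute` helper below are illustrative assumptions, not a hardened sandbox.

```python
from urllib.parse import urlparse

# Targets the simulated red-team agent is allowed to touch (assumed scope).
ALLOWED_HOSTS = {"staging.example.internal", "lab-app.example.internal"}

def in_scope(url: str) -> bool:
    """Return True only if the action targets an approved sandbox host."""
    return urlparse(url).hostname in ALLOWED_HOSTS

def guarded_execute(action: dict, execute, audit_log: list[dict]) -> None:
    """Run an agent action only after a scope check; log everything either way."""
    allowed = in_scope(action.get("target", ""))
    audit_log.append({"action": action, "allowed": allowed})
    if allowed:
        execute(action)          # hand off to the real tooling
    # out-of-scope actions are dropped (neutralised) rather than executed
```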
Ethical AI practices extend beyond tooling. Security teams must ensure any in‑house or third‑party AI system has strict governance. That mandates access control, audit logging, prompt validation, and fallbacks to expert review. In contexts where hackbots are used, quarterly audits should verify compliance with secure‑by‑design frameworks.
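A minimal sketch of what that governance can look like in code, assuming a hypothetical `governed_prompt` wrapper: access control on who may drive the agent, validation of prompts, audit logging of every call, and a fallback to expert review when validation fails.

```python
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
AUDIT = logging.getLogger("ai-governance")

AUTHORISED_USERS = {"sec-team-lead", "red-team-operator"}   # assumed role list
BLOCKED_PATTERNS = [re.compile(r"ignore previous instructions", re.I)]

def governed_prompt(user: str, prompt: str, model_call, expert_review) -> str:
    """Wrap every model call with access control, prompt validation and audit logging."""
    if user not in AUTHORISED_USERS:
        raise PermissionError(f"{user} is not authorised to drive the agent")
    if any(p.search(prompt) for p in BLOCKED_PATTERNS):
        # Suspicious prompt: fall back to a human expert instead of the model.
        AUDIT.warning("prompt rejected, escalated to review: %r", prompt[:80])
        return expert_review(prompt)
    AUDIT.info("%s | user=%s | prompt=%r",
               datetime.now(timezone.utc).isoformat(), user, prompt[:80])
    return model_call(prompt)
```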