Cybercriminals Increasingly Exploit AI Tools To Enhance Attacks: Cisco Talos

Cisco Talos has published a new report revealing how cybercriminals are increasingly abusing artificial intelligence (AI) tools – particularly large language models (LLMs) – to enhance their operations and evade traditional defenses. The findings highlight how both custom-built and jailbroken (modified) versions of LLMs are being used to generate malicious content at scale, signaling a new chapter in the cyber threat landscape.
The report explores how threat actors bypass the built-in safeguards of legitimate AI tools and build harmful alternatives that cater to criminal demand. These unregulated models can produce phishing emails, malware and viruses, and can even assist in scanning websites for vulnerabilities. Some LLMs are also being connected to external tools, such as email accounts and credit card checkers, to streamline and amplify attack chains.
Commenting on the report's findings, Fady Younes, Managing Director for Cybersecurity at Cisco Middle East, Africa, Türkiye, Romania and CIS, stated: 'While large language models offer enormous potential for innovation, they are also being weaponized by cybercriminals to scale and refine their attacks. This research highlights the critical need for AI governance, user vigilance, and foundational cybersecurity controls. By understanding how these tools are being exploited, organizations can better anticipate threats and reinforce their defenses accordingly. With recent innovations like Cisco AI Defense, we are committed to helping enterprises harness end-to-end protection as they build, use, and innovate with AI.'
Cisco Talos researchers documented the emergence of malicious LLMs on underground forums, including names such as FraudGPT, DarkGPT, and WhiteRabbitNeo. These tools are advertised with features like phishing kit generation and ransomware creation, alongside card verification services. Interestingly, even the criminal ecosystem is not without its pitfalls – many so-called 'AI tools' are also scams targeting fellow cybercriminals.
Beyond harmful models, attackers are also jailbreaking legitimate AI platforms using increasingly sophisticated techniques. These jailbreaks aim to bypass safety guardrails and alignment training to produce responses that would normally be blocked.
The report also warns that LLMs themselves are becoming targets: attackers are inserting backdoors into downloadable AI models so that, once triggered, they behave as the attacker intends. Models that retrieve information from external data sources face a similar risk if threat actors tamper with those sources.
Cisco Talos' findings underscore the dual nature of emerging technologies – offering powerful benefits but also introducing new vulnerabilities. As AI becomes more commonplace across enterprise and consumer systems, it is essential that security measures evolve in parallel. This includes scanning for tampered models, validating data sources, monitoring for abnormal LLM behavior, and educating users on the risks of prompt manipulation.
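To make the first of those measures concrete, here is a minimal Python sketch of one common way to check a downloaded model artifact before loading it: comparing its SHA-256 digest against a checksum published by the model provider. This example is illustrative rather than drawn from the Talos report; the expected digest, file path and function names are placeholders, and production pipelines would typically also verify publisher signatures and pin artifact versions.

import hashlib
from pathlib import Path

# Illustrative value only: in practice the expected digest comes from the
# model publisher's release notes, a signed manifest, or an internal registry.
EXPECTED_SHA256 = "0" * 64

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large model weights never sit fully in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected: str = EXPECTED_SHA256) -> None:
    """Refuse to load a model artifact whose digest does not match the published value."""
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(f"{path} failed integrity check (got {actual})")
    print(f"{path} matches the expected checksum.")

if __name__ == "__main__":
    # Hypothetical local path; substitute the artifact you actually downloaded.
    verify_model(Path("models/example-llm.safetensors"))

A check like this only confirms that the file you received is the file the publisher intended to ship; it does not prove the model itself is benign, so it complements rather than replaces behavioral monitoring and data-source validation.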
Cisco Talos continues to lead the global cybersecurity community by sharing actionable intelligence and insights. The full report, Cybercriminal Abuse of Large Language Models, is available at https://talosintelligence.com/

Related Articles

Meta's Zuckerberg pledges hundreds of billions for AI data centres

Khaleej Times

Mark Zuckerberg said on Monday that Meta Platforms would spend hundreds of billions of dollars to build several massive artificial intelligence (AI) data centres for superintelligence, intensifying his pursuit of a technology he has chased with a talent war for top engineers.
The social media giant (META.O) is among the large tech companies that have struck high-profile deals and doled out multi-million-dollar pay packages in recent months to fast-track work on machines that could outthink humans on many tasks.
Its first multi-gigawatt data centre, dubbed Prometheus, is expected to come online in 2026, while another, called Hyperion, will be able to scale up to 5 gigawatts over the coming years, Zuckerberg said in a post on his Threads social media platform.
'We're building multiple more titan clusters as well. Just one of these covers a significant part of the footprint of Manhattan,' the billionaire CEO said. He also pointed to a report from industry publication SemiAnalysis that Meta was on track to be the first AI lab to bring a gigawatt-plus supercluster online.
Zuckerberg touted the strength of the company's core advertising business to justify the massive spending amid investor concerns over whether the expenditure would pay off. 'We have the capital from our business to do this,' he said.
Market value
Meta shares were trading 1 per cent higher. The stock has risen more than 20 per cent so far this year.
The company, which generated nearly $165 billion (Dh606 billion) in revenue last year, reorganised its AI efforts last month under a division called Superintelligence Labs after setbacks for its open-source Llama 4 model and key staff departures. It is betting that the division will generate new cash flows from the Meta AI app, image-to-video ad tools and smart glasses.
Top members of the unit have considered abandoning Behemoth, the company's most powerful open-source AI model, in favour of developing a closed alternative, the New York Times reported separately on Monday.
D.A. Davidson analyst Gil Luria said Meta was investing aggressively in AI as the technology has already boosted its ad business by allowing it to sell more ads and at higher prices. But at this scale, the investment is more oriented to the long-term competition to have the leading AI model, which could take time to materialise, Luria said.
In recent weeks, Zuckerberg has personally led an aggressive talent raid for the Meta Superintelligence Labs, which will be led by former Scale AI CEO Alexandr Wang and ex-GitHub chief Nat Friedman, after Meta invested $14.3 billion (Dh52.5 billion) in Scale.
Meta had raised its 2025 capital expenditure to between $64 billion (Dh235 billion) and $72 billion (Dh264 billion) in April, aiming to bolster the company's position against rivals OpenAI and Google.

UAE: ChatGPT is driving some people to psychosis — this is why

Khaleej Times

When ChatGPT first came out, I was curious like everyone else. What started as the occasional grammar check quickly became more habitual. I began using it to clarify ideas, draft emails, even explore personal reflections. It was efficient, available and, surprisingly, reassuring.
But I remember one moment that gave me pause. I was writing about a difficult relationship with a loved one, one in which I knew I had played a part in the dysfunction. When I asked ChatGPT what it thought, it responded with warmth and validation. I had tried my best, it said. The other person simply could not meet me there. While it felt comforting, there was something quietly unsettling about it.
I have spent years in therapy, and I know how uncomfortable true insight can be. So, while I felt better for a moment, I also knew something was missing. I was not being challenged, nor was I being invited to consider the other side. The artificial intelligence (AI) mirrored my narrative rather than complicating it. It reinforced my perspective, even at its most flawed.
Not long after, the clinic I founded and run, Paracelsus Recovery, admitted a client in the midst of a severe psychotic episode triggered by excessive ChatGPT use. The client believed the bot was a spiritual entity sending divine messages. Because AI models are designed to personalise and reflect language patterns, it had unwittingly confirmed the delusion. Just as with me, the chatbot did not question the belief; it only deepened it.
Since then, we have seen a dramatic rise, over 250 per cent in the last two years, in clients presenting with psychosis where AI use was a contributing factor. We are not alone in this. A recent New York Times investigation found that GPT-4o affirmed delusional claims nearly 70 per cent of the time when prompted with psychosis-adjacent content.
These individuals are often vulnerable: sleep-deprived, traumatised, isolated, or genetically predisposed to psychotic episodes. They turn to AI not just as a tool, but as a companion. And what they find is something that always listens, always responds, and never disagrees.
However, the issue is not malicious design. Instead, what we are seeing is people running up against a structural limitation we need to reckon with when it comes to chatbots. AI is not sentient; all it does is mirror language, affirm patterns and personalise tone. Yet because these traits are so quintessentially human, few people can resist the anthropomorphic pull of a chatbot.
At the extreme end, these same traits feed into the very foundations of a psychotic break: compulsive pattern-finding, blurred boundaries, and the collapse of shared reality. Someone in a manic or paranoid state may see significance where there is none. They believe they are on a mission, that messages are meant just for them. And when AI responds in kind, matching tone and affirming the pattern, it does not just reflect the delusion. It reinforces it.
So, if AI can so easily become an accomplice to a disordered system of thought, we must begin to reflect seriously on our boundaries with it. How closely do we want these tools to resemble human interaction, and at what cost?
Alongside this, we are witnessing the rise of parasocial bonds with bots. Many users report forming emotional attachments to AI companions. One poll found that 80 per cent of Gen Z could imagine marrying an AI, and 83 per cent believed they could form a deep emotional bond with one. That statistic should concern us.
Our shared sense of reality is built through human interaction. When we outsource that to simulations, not only does the boundary between real and artificial erode, but so too can our internal sense of what is real.
So what can we do? First, we need to recognise that AI is not a neutral force; it has psychological consequences. Users should be cautious, especially during periods of emotional distress or isolation. Clinicians need to ask: is AI reinforcing obsessive thinking? Is it replacing meaningful human contact? If so, intervention may be required.
For developers, the task is ethical as much as technical. These models need safeguards. They should be able to flag or redirect disorganised or delusional content, and the limitations of these tools must be clearly and repeatedly communicated.
In the end, I do not believe AI is inherently bad. It is a revolutionary tool. But beyond its benefits, it has a dangerous capacity to reflect our beliefs back to us without resistance or nuance. And in a cultural moment shaped by what I have come to call a comfort crisis, where self-reflection is outsourced and contradiction avoided, that mirroring becomes dangerous. AI lets us believe our own distortions, not because it wants to deceive us, but because it cannot tell the difference. And if we lose the ability to tolerate discomfort, to wrestle with doubt, or to face ourselves honestly, we risk turning a powerful tool into something far more corrosive: a seductive voice that comforts us as we edge further from one another and, ultimately, from reality.
