
Latest news with #FraudGPT

Cisco Talos Reveals Rise in Malicious Use of AI Tools

TECHx
10-07-2025

Cisco Talos, one of the world's most trusted threat intelligence teams, has revealed how cybercriminals are increasingly abusing artificial intelligence (AI) tools to enhance their operations. According to a newly published report, large language models (LLMs) are being exploited to generate malicious content and bypass traditional security measures.

Cisco Talos reported that both custom-built and jailbroken (modified) versions of LLMs are now being used to scale cyberattacks. These versions are producing phishing emails, malware, viruses, and other harmful content. The report noted that some LLMs are being connected to external tools, including email accounts and credit card checkers. This integration is helping cybercriminals automate and amplify their attacks.

Cisco Talos researchers also documented the presence of malicious LLMs on underground forums. These include names such as:

  • FraudGPT
  • DarkGPT
  • WhiteRabbitNeo

These tools are advertised with features like ransomware creation, phishing kit generation, and card verification services. Interestingly, the report also revealed that some fake AI tools are being used to scam fellow cybercriminals.

Cisco Talos highlighted how attackers are jailbreaking legitimate AI models. These jailbreaks aim to bypass safety guardrails and alignment training, allowing the generation of normally restricted content.

Additionally, the report warned that AI models themselves are becoming targets. Attackers are inserting backdoors into downloadable models so that, once activated, they function as the attacker programmed. Models using external data sources are also at risk: if threat actors manipulate the source data, the model's behavior can be compromised.

Fady Younes, Managing Director for Cybersecurity at Cisco covering the Middle East, Africa, Türkiye, Romania, and CIS, commented on the findings. He stated that while large language models offer significant potential, they are now being weaponized to scale attacks, and he emphasized the need for strong AI governance, user awareness, and foundational cybersecurity measures. 'With recent innovations like Cisco AI Defense, we are committed to helping enterprises achieve end-to-end protection as they build, use, and innovate with AI,' Younes added.

Cisco Talos concluded that as AI becomes more integrated into enterprise and consumer systems, security strategies must evolve. It stressed the importance of:

  • Scanning for tampered AI models
  • Validating external data sources
  • Monitoring abnormal LLM behavior
  • Educating users on the risks of prompt manipulation

The report signals a new phase in the cyber threat landscape. Cisco Talos continues to monitor the situation as part of its mission to strengthen global cybersecurity.
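The report stops at recommendations, but the first of them is straightforward to operationalize. A minimal sketch of a model tamper check in Python, assuming the model publisher distributes a known-good SHA-256 digest (the digest and file path below are placeholders, not real values):

```python
import hashlib
from pathlib import Path

# Placeholder digest: in practice this comes from the publisher's signed
# release notes or a model registry the organization already trusts.
TRUSTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_model_file(path: str, expected_sha256: str) -> bool:
    """Hash the model artifact in 1 MiB chunks and compare to the pinned digest."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

if __name__ == "__main__":
    # "models/llm.safetensors" is a placeholder path, not a real artifact.
    if not verify_model_file("models/llm.safetensors", TRUSTED_SHA256):
        raise SystemExit("digest mismatch: refusing to load a possibly tampered model")
    print("digest verified; safe to load")
```

A hash check of this kind catches file-level tampering in transit or on a mirror; it does not detect a backdoor the publisher's own pipeline introduced, which is why the report pairs it with behavioral monitoring.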

Cybercriminals Increasingly Exploit AI Tools To Enhance Attacks: Cisco Talos

Channel Post MEA
09-07-2025

Cisco Talos has published a new report revealing how cybercriminals are increasingly abusing artificial intelligence (AI) tools – particularly large language models (LLMs) – to enhance their operations and evade traditional defenses. The findings highlight how both custom-built and jailbroken (modified) versions of LLMs are being used to generate malicious content at scale, signaling a new chapter in the cyber threat landscape.

The report explores how threat actors are bypassing the built-in safeguards of legitimate AI tools, creating harmful alternatives that cater to criminal demands. These unregulated models can produce phishing emails, malware, and viruses, and can even assist in scanning websites for vulnerabilities. Some LLMs are being connected to external tools such as email accounts, credit card checkers, and more to streamline and amplify attack chains.

Commenting on the report's findings, Fady Younes, Managing Director for Cybersecurity at Cisco Middle East, Africa, Türkiye, Romania and CIS, stated: 'While large language models offer enormous potential for innovation, they are also being weaponized by cybercriminals to scale and refine their attacks. This research highlights the critical need for AI governance, user vigilance, and foundational cybersecurity controls. By understanding how these tools are being exploited, organizations can better anticipate threats and reinforce their defenses accordingly. With recent innovations like Cisco AI Defense, we are committed to helping enterprises harness end-to-end protection as they build, use, and innovate with AI.'

Cisco Talos researchers documented the emergence of malicious LLMs on underground forums, including names such as FraudGPT, DarkGPT, and WhiteRabbitNeo. These tools are advertised with features like phishing kit generation and ransomware creation, alongside card verification services. Interestingly, even the criminal ecosystem is not without its pitfalls – many so-called 'AI tools' are scams targeting fellow cybercriminals.

Beyond harmful models, attackers are also jailbreaking legitimate AI platforms using increasingly sophisticated techniques. These jailbreaks aim to bypass safety guardrails and alignment training to produce responses that would normally be blocked. The report also warns that LLMs themselves are becoming targets: attackers are inserting backdoors into downloadable AI models so that, once activated, they behave as the attacker intends. Models that draw on external data sources are likewise exposed if threat actors tamper with those sources.

Cisco Talos' findings underscore the dual nature of emerging technologies – offering powerful benefits but also introducing new vulnerabilities. As AI becomes more commonplace in enterprise and consumer systems, it is essential that security measures evolve in parallel. This includes scanning for tampered models, validating data sources, monitoring abnormal LLM behavior, and educating users on the risks of prompt manipulation.

Cisco Talos continues to lead the global cybersecurity community by sharing actionable intelligence and insights. The full report, Cybercriminal Abuse of Large Language Models, is available at
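On the data-source point, a retrieval pipeline can refuse documents from unvetted origins before they ever reach the model. This is a hedged sketch using only Python's standard library; TRUSTED_SOURCES and fetch_for_context are hypothetical names standing in for an organization's own vetted list and fetch logic:

```python
from urllib.parse import urlparse

# Hypothetical allowlist standing in for an organization's own vetted
# retrieval sources; anything else never reaches the model.
TRUSTED_SOURCES = {"docs.example.com", "kb.example.com"}

def is_trusted_source(url: str) -> bool:
    """Accept only https URLs whose host appears on the vetted allowlist."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_SOURCES

def fetch_for_context(url: str) -> str:
    """Gate for a retrieval pipeline: reject untrusted origins up front."""
    if not is_trusted_source(url):
        raise ValueError(f"untrusted retrieval source rejected: {url}")
    # Fetch, sanitize, and return the document text here.
    raise NotImplementedError
```

An allowlist is deliberately conservative: it shrinks the attack surface for the source-tampering risk the report describes, at the cost of having to vet each new source before the model can use it.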

Cisco Talos: Cybercriminals Increasingly Exploit AI and Language Models to Enhance Attacks

Web Release
09-07-2025

Cisco Talos, one of the world's most trusted threat intelligence teams, has published a new report revealing how cybercriminals are increasingly abusing artificial intelligence (AI) tools – particularly large language models (LLMs) – to enhance their operations and evade traditional defenses. The findings highlight how both custom-built and jailbroken (modified) versions of LLMs are being used to generate malicious content at scale, signaling a new chapter in the cyber threat landscape.

The report explores how threat actors are bypassing the built-in safeguards of legitimate AI tools, creating harmful alternatives that cater to criminal demands. These unregulated models can produce phishing emails, malware, and viruses, and can even assist in scanning websites for vulnerabilities. Some LLMs are being connected to external tools such as email accounts, credit card checkers, and more to streamline and amplify attack chains.

Commenting on the report's findings, Fady Younes, Managing Director for Cybersecurity at Cisco Middle East, Africa, Türkiye, Romania and CIS, stated: 'While large language models offer enormous potential for innovation, they are also being weaponized by cybercriminals to scale and refine their attacks. This research highlights the critical need for AI governance, user vigilance, and foundational cybersecurity controls. By understanding how these tools are being exploited, organizations can better anticipate threats and reinforce their defenses accordingly. With recent innovations like Cisco AI Defense, we are committed to helping enterprises harness end-to-end protection as they build, use, and innovate with AI.'

Cisco Talos researchers documented the emergence of malicious LLMs on underground forums, including names such as FraudGPT, DarkGPT, and WhiteRabbitNeo. These tools are advertised with features like phishing kit generation and ransomware creation, alongside card verification services. Interestingly, even the criminal ecosystem is not without its pitfalls – many so-called 'AI tools' are scams targeting fellow cybercriminals.

Beyond harmful models, attackers are also jailbreaking legitimate AI platforms using increasingly sophisticated techniques. These jailbreaks aim to bypass safety guardrails and alignment training to produce responses that would normally be blocked. The report also warns that LLMs themselves are becoming targets: attackers are inserting backdoors into downloadable AI models so that, once activated, they behave as the attacker intends. Models that draw on external data sources are likewise exposed if threat actors tamper with those sources.

Cisco Talos' findings underscore the dual nature of emerging technologies – offering powerful benefits but also introducing new vulnerabilities. As AI becomes more commonplace in enterprise and consumer systems, it is essential that security measures evolve in parallel. This includes scanning for tampered models, validating data sources, monitoring abnormal LLM behavior, and educating users on the risks of prompt manipulation.

Cisco Talos continues to lead the global cybersecurity community by sharing actionable intelligence and insights. The full report, Cybercriminal Abuse of Large Language Models, is available at
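The remaining recommendation, monitoring abnormal LLM behavior, can start with something as simple as screening inbound prompts against known manipulation patterns and logging hits for review. A hedged illustration in Python; the patterns are illustrative stand-ins, since a production deployment would rely on maintained rule sets or a trained classifier rather than a hand-written list:

```python
import logging
import re

logger = logging.getLogger("llm_monitor")

# Illustrative patterns only: real jailbreak detection uses curated,
# continuously updated rules or a classifier, not three regexes.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.I),
    re.compile(r"pretend you (are|have) no (rules|restrictions)", re.I),
    re.compile(r"developer mode", re.I),
]

def screen_prompt(user_id: str, prompt: str) -> bool:
    """Return False (and log a warning) for prompts matching known manipulation patterns."""
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(prompt):
            logger.warning("possible prompt manipulation from %s: matched %r",
                           user_id, pattern.pattern)
            return False  # route to human review instead of the model
    return True
```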

Hackbots Accelerate Cyber Risk — And How to Beat Them

Arabian Post
10-06-2025

Security teams globally face mounting pressure as artificial-intelligence-driven 'hackbots' emerge as a new front in cyber warfare. These autonomous agents, powered by advanced large language models and automation frameworks, are increasingly capable of probing systems, identifying exploits, and in some instances launching attacks with minimal human intervention. Experts warn that if left unchecked, hackbots could rapidly outpace traditional scanning tools and elevate the scale of cyber threats.

Hackbots combine the intelligence of modern LLMs – most notably GPT-4 – with orchestration layers that enable intelligent decision-making: adapting test payloads, refining configurations, and parsing results. Unlike legacy scanners, these systems analyse target infrastructure and dynamically choose tools and strategies, often flagging novel vulnerabilities that evade conventional detection. Academic research demonstrates that GPT-4 agents can autonomously perform complex operations like blind SQL injection and database schema extraction without prior specifications.

Corporate platforms have begun integrating hackbot capabilities into ethical hacking pipelines. HackerOne, for instance, now requires human review before any vulnerability submission, underscoring that hackbots remain tools under human supervision. Cybersecurity veteran Jack Nunziato explains: 'hackbots leverage advanced machine learning … to dynamically and intelligently hack applications,' a leap forward from rigid automated scans. Such systems are transforming both offensive and defensive security landscapes.

Alongside legitimate use, underground markets are offering hackbots-as-a-service. Products like WormGPT and FraudGPT are being promoted on darknet forums, providing scripting and social-engineering automation under subscription models. Though some users criticise their limited utility – one described WormGPT as 'just an old cheap version of ChatGPT' – the consensus is that even basic automation can significantly lower the barrier to entry into cybercrime. Security analysts caution that these services, even if imperfect, democratise attack capabilities and may increase the volume and reach of malicious campaigns.

While hackbots enable faster and more thorough scans, they lack human creativity. Modern systems depend on human-in-the-loop oversight, where experts validate results and craft exploit chains for end-to-end attacks. Yet the speed advantage is real: automated agents can tirelessly comb through code, execute payloads, and surface anomalies across large environments. One cybersecurity researcher noted hackbots are 'getting good, really good, at simulating … a curious, determined hacker'.

Defensive strategies must evolve rapidly to match this new threat. The UK's National Cyber Security Centre has warned that AI will likely increase both the volume and severity of cyberattacks. GreyNoise Intelligence recently reported that actors are increasingly exploiting long-known vulnerabilities in edge devices as defenders lag on patching – demonstrating how automation favours adversaries. Organisations must enhance their baseline defences to withstand hackbots, which operate at machine scale.

A multi-layered response is critical. Continuous scanning, hardened endpoint controls, identity-centric solutions, and robust patch management programmes form the backbone of resilience. Privileged Access Management, especially following frameworks established this year, is being touted as indispensable. Likewise, advanced Endpoint Detection and Response and Extended Detection & Response platforms use AI defensively, applying behavioural analytics to flag suspicious activity before attackers can exploit high-velocity toolkits.

Legal and policy frameworks are also adapting. Bug bounty platforms now integrate hackbot disclosures under rules requiring human oversight, promoting ethical use while mitigating abuse. Security regulators and insurers are demanding evidence of AI-aware defences, particularly in critical sectors, aligning with risk-based compliance models.

Industry insiders acknowledge the dual nature of the phenomenon. Hackbots serve as force multipliers for both defenders and attackers. As one expert puts it, 'these tools could reshape how we defend systems, making it easier to test at scale … On the other hand, hackbots can … scale sophisticated attacks faster than any human ever could'. That tension drives the imperative: hackbots cannot be dismissed as exotic scanners that miss human logic, because they succeed at deploying exploitation at a scale no human team can match.

Recent breakthroughs in LLM-powered exploit automation heighten the stakes. A February 2024 study revealed GPT-4 agents autonomously discovering SQL vulnerabilities on live websites. With LLMs maturing rapidly, future iterations may craft exploit payloads, bypass filters, and compose stealthier attacks.

To pre-empt this, defenders must embed AI strategies within security operations. Simulated red-team exercises should leverage hackbot-style agents, exposing defenders to their speed and variety. Build orchestration workflows that monitor, sandbox, and neutralise test feeds. Maintain visibility over AI-driven tooling across pipelines and supply chains.

Ethical AI practices extend beyond tooling. Security teams must ensure any in-house or third-party AI system has strict governance. That mandates access control, audit logging, prompt validation, and fallbacks to expert review. In contexts where hackbots are used, quarterly audits should verify compliance with secure-by-design frameworks.
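The 'fallback to expert review' requirement echoes how bug bounty platforms already gate hackbot output: nothing automated is filed without human sign-off. A minimal sketch of such a gate in Python, where Finding, submit_finding, and the review queue are hypothetical names rather than any platform's real API:

```python
import logging
from dataclasses import dataclass

audit_log = logging.getLogger("ai_audit")

@dataclass
class Finding:
    """Hypothetical record of one result produced by an automated agent."""
    target: str
    description: str
    agent_confidence: float  # self-reported score; never trusted on its own

def submit_finding(operator: str, finding: Finding, review_queue: list) -> None:
    """Audit-log every automated finding and hold it for expert validation."""
    audit_log.info("agent finding (operator=%s, target=%s): %s",
                   operator, finding.target, finding.description)
    # Human-in-the-loop: nothing leaves the organisation until an expert
    # signs off, mirroring the human-review rules described above.
    review_queue.append(finding)
```

The design choice is that the agent's own confidence score is recorded but never used to skip the queue: the whole point of the control is that speed does not substitute for judgment.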

Fortinet (FTNT) Launches New AI-Powered Workspace Security Suite and Powerful FortiDLP Upgrades

Yahoo
06-06-2025

We recently published a list of buzzing AI stocks making moves on the latest news and ratings. In this article, we are going to take a look at where Fortinet, Inc. (NASDAQ:FTNT) stands against the others.

On June 4th, Fortinet, Inc. (NASDAQ:FTNT) announced a new AI-powered Workspace Security Suite, FortiMail Workspace Security, along with powerful FortiDLP upgrades, aiming to better protect modern businesses. The new capabilities position FortiMail as one of the broadest and most customizable email security platforms, extending protection beyond email to browser and collaboration security. Together with new features in FortiDLP, Fortinet's next-generation data loss prevention (DLP) and insider risk management solution, users gain a unified, AI-powered approach to safeguarding users and sensitive data across today's dynamic work environments.

By integrating artificial intelligence with email, browser, collaboration, and data security, the company aims to give security teams better protection, so that they can turn complexity into clarity and handle threats more easily.

'In today's evolving threat landscape, securing user productivity and sensitive data requires a unified strategy that considers both outsider threats and insider risks. Cybercriminals are aiming their efforts right at users and increasingly leveraging tools like FraudGPT, BlackmailerV3, and ElevenLabs to automate the creation of malware, deepfake videos, phishing websites, and synthetic voices – making attacks more scalable, convincing, and difficult to detect. With our expanded AI-powered FortiMail Workspace Security suite and FortiDLP solutions, Fortinet empowers organizations to stay ahead of threat actors and insider risks while ensuring users, data, and productivity remain secure.'

Fortinet, Inc. (NASDAQ:FTNT), a cybersecurity company, provides enterprise-level next-generation firewalls and network security solutions, leveraging artificial intelligence across its cybersecurity products.

Overall, FTNT ranks 9th on our list of buzzing AI stocks on latest news and ratings. While we acknowledge the potential of FTNT as an investment, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns and have limited downside risk. If you are looking for an extremely cheap AI stock that is also a major beneficiary of Trump tariffs and onshoring, see our free report on the best short-term AI stock.

READ NEXT: 20 Best AI Stocks To Buy Now and 30 Best Stocks to Buy Now According to Billionaires.

Disclosure: None. This article is originally published at Insider Monkey.
