Latest news with #CiscoTalos


TECHx
5 days ago
- TECHx
Cisco Talos Reveals Rise in Malicious Use of AI Tools
Home » Emerging technologies » Cyber Security » Cisco Talos Reveals Rise in Malicious Use of AI Tools Cisco Talos, one of the world's most trusted threat intelligence teams, has revealed how cybercriminals are increasingly abusing artificial intelligence (AI) tools to enhance their operations. According to a newly published report, large language models (LLMs) are being exploited to generate malicious content and bypass traditional security measures. Cisco Talos reported that both custom-built and jailbroken (modified) versions of LLMs are now being used to scale cyberattacks. These versions are producing phishing emails, malware, viruses, and other harmful content. The report noted that some LLMs are being connected to external tools, including email accounts and credit card checkers. This integration is helping cybercriminals automate and amplify their attacks. Cisco Talos researchers also documented the presence of malicious LLMs on underground forums. These include names such as: FraudGPT DarkGPT WhiteRabbitNeo These tools are advertised with features like ransomware creation, phishing kit generation, and card verification services. Interestingly, the report also revealed that some fake AI tools are being used to scam fellow cybercriminals. Cisco Talos highlighted how attackers are jailbreaking legitimate AI models. These jailbreaks aim to bypass safety guardrails and alignment training, allowing the generation of normally restricted content. Additionally, the report warned that AI models themselves are becoming targets. Attackers are inserting backdoors into downloadable models, enabling them to function as programmed by the attacker when activated. Models using external data sources are also at risk. If threat actors manipulate the source data, it could compromise the model's behavior. Fady Younes, Managing Director for Cybersecurity at Cisco covering the Middle East, Africa, Türkiye, Romania, and CIS, commented on the findings. He stated that while large language models offer significant potential, they are now being weaponized to scale attacks. He emphasized the need for strong AI governance, user awareness, and foundational cybersecurity measures. 'With recent innovations like Cisco AI Defense, we are committed to helping enterprises achieve end-to-end protection as they build, use, and innovate with AI,' Younes added. Cisco Talos concluded that as AI becomes more integrated into enterprise and consumer systems, security strategies must evolve. It stressed the importance of: Scanning for tampered AI models Validating external data sources Monitoring abnormal LLM behavior Educating users on the risks of prompt manipulation The report signals a new phase in the cyber threat landscape. Cisco Talos continues to monitor the situation as part of its mission to strengthen global cybersecurity.


Tahawul Tech
5 days ago
- Tahawul Tech
Cisco Talos report shows LLMs are being weaponised by cybercriminals
A comprehensive report from Cisco Talos has shown that Large Language Models are being increasingly weaponised to launch cyberattacks at scale. Cisco Talos has observed a growing use of uncensored, jailbroken and criminal-designed LLMs to support phishing, malware development, and other malicious activities. The findings also highlight how both custom-built and jailbroken (modified) versions of LLMs are being used to generate malicious content at scale, signalling a new chapter in the cyber threat landscape. The report explores how threat actors are bypassing built-in safeguards legitimate AI tools use, creating harmful alternatives that cater to criminal demands. These unregulated models can produce phishing emails, malware, viruses and even assist in scanning websites for vulnerabilities. Some LLMs are being connected to external tools such as email accounts, credit card checkers, and more to streamline and amplify attack chains. Commenting on the report's findings, Fady Younes, Managing Director for Cybersecurity at Cisco Middle East, Africa, Türkiye, Romania and CIS, stated: 'While large language models offer enormous potential for innovation, they are also being weaponised by cybercriminals to scale and refine their attacks. This research highlights the critical need for AI governance, user vigilance, and foundational cybersecurity controls. By understanding how these tools are being exploited, organisations can better anticipate threats and reinforce their defenses accordingly. With recent innovations like Cisco AI Defense, we are committed to helping enterprises harness end-to-end protection as they build, use, and innovate with AI.' Cisco Talos researchers documented the emergence of malicious LLMs on underground forums, including names such as FraudGPT, DarkGPT, and WhiteRabbitNeo. These tools are advertised with features like phishing kit generation and ransomware creation, alongside card verification services. Interestingly, even the criminal ecosystem is not without its pitfalls – many so-called 'AI tools' are also scams targeting fellow cybercriminals. Beyond harmful models, attackers are also jailbreaking legitimate AI platforms using increasingly sophisticated techniques. These jailbreaks aim to bypass safety guardrails and alignment training to produce responses that would normally be blocked. The report also warns that LLMs themselves are becoming targets, as attackers are inserting backdoors into downloadable AI models to function as per the attacker's programming when activated. As a result, models using external data sources to find information are exposed to risks if threat actors tamper with the sources. Cisco Talos' findings underscore the dual nature of emerging technologies – offering powerful benefits but also introducing new vulnerabilities. As AI becomes more commonplace for enterprises and consumer systems, it is essential that security measures evolve in parallel. This includes scanning for tampered models, validating data sources, monitoring abnormal LLM behavior, and educating users on the risks of prompt manipulation. Cisco Talos continues to lead the global cybersecurity community by sharing actionable intelligence and insights. The full report, Cybercriminal Abuse of Large Language Models, is available at


Channel Post MEA
6 days ago
- Channel Post MEA
Cybercriminals Increasingly Exploit AI Tools To Enhance Attacks: Cisco Talos
Cisco Talos has published a new report revealing how cybercriminals are increasingly abusing artificial intelligence (AI) tools – particularly large language models (LLMs) – to enhance their operations and evade traditional defenses. The findings highlight how both custom-built and jailbroken (modified) versions of LLMs are being used to generate malicious content at scale, signaling a new chapter in the cyber threat landscape. The report explores how threat actors are bypassing built-in safeguards legitimate AI tools use, creating harmful alternatives that cater to criminal demands. These unregulated models can produce phishing emails, malware, viruses and even assist in scanning websites for vulnerabilities. Some LLMs are being connected to external tools such as email accounts, credit card checkers, and more to streamline and amplify attack chains. Commenting on the report's findings, Fady Younes, Managing Director for Cybersecurity at Cisco Middle East, Africa, Türkiye, Romania and CIS, stated: 'While large language models offer enormous potential for innovation, they are also being weaponized by cybercriminals to scale and refine their attacks. This research highlights the critical need for AI governance, user vigilance, and foundational cybersecurity controls. By understanding how these tools are being exploited, organizations can better anticipate threats and reinforce their defenses accordingly. With recent innovations like Cisco AI Defense, we are committed to helping enterprises harness end-to-end protection as they build, use, and innovate with AI.' Cisco Talos researchers documented the emergence of malicious LLMs on underground forums, including names such as FraudGPT, DarkGPT, and WhiteRabbitNeo. These tools are advertised with features like phishing kit generation and ransomware creation, alongside card verification services. Interestingly, even the criminal ecosystem is not without its pitfalls – many so-called 'AI tools' are also scams targeting fellow cybercriminals. Beyond harmful models, attackers are also jailbreaking legitimate AI platforms using increasingly sophisticated techniques. These jailbreaks aim to bypass safety guardrails and alignment training to produce responses that would normally be blocked. The report also warns that LLMs themselves are becoming targets, as attackers are inserting backdoors into downloadable AI models to function as per the attacker's programming when activated. As a result, models using external data sources to find information are exposed to risks if threat actors tamper with the sources. Cisco Talos' findings underscore the dual nature of emerging technologies – offering powerful benefits but also introducing new vulnerabilities. As AI becomes more commonplace for enterprises and consumer systems, it is essential that security measures evolve in parallel. This includes scanning for tampered models, validating data sources, monitoring abnormal LLM behavior, and educating users on the risks of prompt manipulation. Cisco Talos continues to lead the global cybersecurity community by sharing actionable intelligence and insights. The full report, Cybercriminal Abuse of Large Language Models, is available at


Web Release
6 days ago
- Web Release
Cisco Talos: Cybercriminals Increasingly Exploit AI and Language Models to Enhance Attacks
Cisco Talos, one of the world's most trusted threat intelligence teams, has published a new report revealing how cybercriminals are increasingly abusing artificial intelligence (AI) tools – particularly large language models (LLMs) – to enhance their operations and evade traditional defenses. The findings highlight how both custom-built and jailbroken (modified) versions of LLMs are being used to generate malicious content at scale, signaling a new chapter in the cyber threat landscape. The report explores how threat actors are bypassing built-in safeguards legitimate AI tools use, creating harmful alternatives that cater to criminal demands. These unregulated models can produce phishing emails, malware, viruses and even assist in scanning websites for vulnerabilities. Some LLMs are being connected to external tools such as email accounts, credit card checkers, and more to streamline and amplify attack chains. Commenting on the report's findings, Fady Younes, Managing Director for Cybersecurity at Cisco Middle East, Africa, Türkiye, Romania and CIS, stated: 'While large language models offer enormous potential for innovation, they are also being weaponized by cybercriminals to scale and refine their attacks. This research highlights the critical need for AI governance, user vigilance, and foundational cybersecurity controls. By understanding how these tools are being exploited, organizations can better anticipate threats and reinforce their defenses accordingly. With recent innovations like Cisco AI Defense, we are committed to helping enterprises harness end-to-end protection as they build, use, and innovate with AI.' Cisco Talos researchers documented the emergence of malicious LLMs on underground forums, including names such as FraudGPT, DarkGPT, and WhiteRabbitNeo. These tools are advertised with features like phishing kit generation and ransomware creation, alongside card verification services. Interestingly, even the criminal ecosystem is not without its pitfalls – many so-called 'AI tools' are also scams targeting fellow cybercriminals. Beyond harmful models, attackers are also jailbreaking legitimate AI platforms using increasingly sophisticated techniques. These jailbreaks aim to bypass safety guardrails and alignment training to produce responses that would normally be blocked. The report also warns that LLMs themselves are becoming targets, as attackers are inserting backdoors into downloadable AI models to function as per the attacker's programming when activated. As a result, models using external data sources to find information are exposed to risks if threat actors tamper with the sources. Cisco Talos' findings underscore the dual nature of emerging technologies – offering powerful benefits but also introducing new vulnerabilities. As AI becomes more commonplace for enterprises and consumer systems, it is essential that security measures evolve in parallel. This includes scanning for tampered models, validating data sources, monitoring abnormal LLM behavior, and educating users on the risks of prompt manipulation. Cisco Talos continues to lead the global cybersecurity community by sharing actionable intelligence and insights. The full report, Cybercriminal Abuse of Large Language Models, is available at


Zawya
6 days ago
- Business
- Zawya
Cisco Talos: Cybercriminals increasingly exploit AI and language models to enhance attacks
Key Highlights: Cisco Talos has observed growing use of uncensored, jailbroken, and criminal-designed LLMs to support phishing, malware development, and other malicious activities. Cybercriminals are connecting LLMs to external tools for vulnerability scanning, stolen data validation, and automated infrastructure provisioning. Jailbreak methods are evolving rapidly, bypassing safety guardrails in legitimate AI tools. Dubai, UAE – Cisco Talos, one of the world's most trusted threat intelligence teams, has published a new report revealing how cybercriminals are increasingly abusing artificial intelligence (AI) tools – particularly large language models (LLMs) – to enhance their operations and evade traditional defenses. The findings highlight how both custom-built and jailbroken (modified) versions of LLMs are being used to generate malicious content at scale, signaling a new chapter in the cyber threat landscape. The report explores how threat actors are bypassing built-in safeguards legitimate AI tools use, creating harmful alternatives that cater to criminal demands. These unregulated models can produce phishing emails, malware, viruses and even assist in scanning websites for vulnerabilities. Some LLMs are being connected to external tools such as email accounts, credit card checkers, and more to streamline and amplify attack chains. Commenting on the report's findings, Fady Younes, Managing Director for Cybersecurity at Cisco Middle East, Africa, Türkiye, Romania and CIS, stated: "While large language models offer enormous potential for innovation, they are also being weaponized by cybercriminals to scale and refine their attacks. This research highlights the critical need for AI governance, user vigilance, and foundational cybersecurity controls. By understanding how these tools are being exploited, organizations can better anticipate threats and reinforce their defenses accordingly. With recent innovations like Cisco AI Defense, we are committed to helping enterprises harness end-to-end protection as they build, use, and innovate with AI.' Cisco Talos researchers documented the emergence of malicious LLMs on underground forums, including names such as FraudGPT, DarkGPT, and WhiteRabbitNeo. These tools are advertised with features like phishing kit generation and ransomware creation, alongside card verification services. Interestingly, even the criminal ecosystem is not without its pitfalls – many so-called "AI tools" are also scams targeting fellow cybercriminals. Beyond harmful models, attackers are also jailbreaking legitimate AI platforms using increasingly sophisticated techniques. These jailbreaks aim to bypass safety guardrails and alignment training to produce responses that would normally be blocked. The report also warns that LLMs themselves are becoming targets, as attackers are inserting backdoors into downloadable AI models to function as per the attacker's programming when activated. As a result, models using external data sources to find information are exposed to risks if threat actors tamper with the sources. Cisco Talos' findings underscore the dual nature of emerging technologies – offering powerful benefits but also introducing new vulnerabilities. As AI becomes more commonplace for enterprises and consumer systems, it is essential that security measures evolve in parallel. This includes scanning for tampered models, validating data sources, monitoring abnormal LLM behavior, and educating users on the risks of prompt manipulation. Cisco Talos continues to lead the global cybersecurity community by sharing actionable intelligence and insights. The full report, Cybercriminal Abuse of Large Language Models, is available at