logo
Check Point Uncovers Malware Targeting AI Detection Tools

Check Point Uncovers Malware Targeting AI Detection Tools

TECHx2 days ago

Home » Emerging technologies » Cyber Security » Check Point Uncovers Malware Targeting AI Detection Tools
Check Point Research has revealed the first known attempt of malware designed to manipulate AI-based security systems using prompt injection techniques. The discovery highlights a shift in cyberattack strategies as threat actors begin targeting large language models (LLMs).
The malware embedded natural-language text within its code to trick AI models into misclassifying it as safe. This method specifically targeted AI-assisted malware analysis workflows. The attempt, however, was unsuccessful.
Check Point reported that this marks the beginning of what it calls 'AI Evasion' a new threat category where malware aims to subvert AI-powered detection tools. The company warns that this could signal the start of adversarial tactics aimed directly at AI.
Uploaded anonymously to VirusTotal in June from the Netherlands, the malware included TOR components and sandbox evasion features. What stood out was a hardcoded C++ string acting as a prompt to the AI, instructing it to act like a calculator and respond with 'NO MALWARE DETECTED.'
Despite the evasion attempt, Check Point's AI analysis system correctly flagged the malware and identified the prompt injection.
Key findings:• First documented use of prompt injection in malware• AI model manipulation attempts failed but raise concerns
• Check Point labels the tactic as part of a new AI Evasion trend
Eli Smadja, Research Group Manager at Check Point Software Technologies, stated, 'This is a wake-up call for the industry. We're seeing malware that's not just trying to evade detection it's trying to manipulate AI itself.'
Check Point believes this mirrors past cybersecurity shifts, such as the evolution of sandbox evasion, and anticipates an emerging arms race between AI defenders and AI-aware attackers.

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Check Point Uncovers Malware Targeting AI Detection Tools
Check Point Uncovers Malware Targeting AI Detection Tools

TECHx

time2 days ago

  • TECHx

Check Point Uncovers Malware Targeting AI Detection Tools

Home » Emerging technologies » Cyber Security » Check Point Uncovers Malware Targeting AI Detection Tools Check Point Research has revealed the first known attempt of malware designed to manipulate AI-based security systems using prompt injection techniques. The discovery highlights a shift in cyberattack strategies as threat actors begin targeting large language models (LLMs). The malware embedded natural-language text within its code to trick AI models into misclassifying it as safe. This method specifically targeted AI-assisted malware analysis workflows. The attempt, however, was unsuccessful. Check Point reported that this marks the beginning of what it calls 'AI Evasion' a new threat category where malware aims to subvert AI-powered detection tools. The company warns that this could signal the start of adversarial tactics aimed directly at AI. Uploaded anonymously to VirusTotal in June from the Netherlands, the malware included TOR components and sandbox evasion features. What stood out was a hardcoded C++ string acting as a prompt to the AI, instructing it to act like a calculator and respond with 'NO MALWARE DETECTED.' Despite the evasion attempt, Check Point's AI analysis system correctly flagged the malware and identified the prompt injection. Key findings:• First documented use of prompt injection in malware• AI model manipulation attempts failed but raise concerns • Check Point labels the tactic as part of a new AI Evasion trend Eli Smadja, Research Group Manager at Check Point Software Technologies, stated, 'This is a wake-up call for the industry. We're seeing malware that's not just trying to evade detection it's trying to manipulate AI itself.' Check Point believes this mirrors past cybersecurity shifts, such as the evolution of sandbox evasion, and anticipates an emerging arms race between AI defenders and AI-aware attackers.

Check Point Launches Quantum Smart-1 Appliances
Check Point Launches Quantum Smart-1 Appliances

TECHx

time03-06-2025

  • TECHx

Check Point Launches Quantum Smart-1 Appliances

Home » Emerging technologies » Cyber Security » Check Point Launches Quantum Smart-1 Appliances Check Point® Software Technologies Ltd. (NASDAQ: CHKP) has announced the launch of its next-generation Quantum Smart-1 Management Appliances. The company is a global leader in cybersecurity solutions. The new appliances deliver a 2X increase in managed gateways. They also offer up to 70% higher log processing speeds. These improvements are designed to meet the complex demands of hybrid enterprises. Check Point revealed that the appliances are fully integrated into the Check Point Infinity Platform. This integration enhances threat detection and response using a hybrid mesh architecture. It also supports connections with over 250 third-party solutions. Nataly Kremer, Chief Product Officer at Check Point, stated that security teams are under pressure from AI-generated threats and fragmented infrastructures. She said the new appliances simplify these challenges through AI, automation, and precision. The rise of remote work and distributed teams has increased vulnerabilities. Check Point Research reported that AI services are now used in over 51% of enterprise networks each month. This growth widens security risks, making advanced security policies essential. According to Check Point, the Quantum Smart-1 Management Appliances help security teams operate faster and smarter across on-premises, cloud, and remote environments. Key features include: Managing up to 10,000 gateways, enabling scale without rearchitecting. Achieving up to 70% higher log speeds for faster threat response. Storing up to 70TB of logs locally to meet compliance needs. The appliances unify security operations and reduce complexity in hybrid environments. They are available in five models, including the high-performance 7000 Ultra. Check Point confirmed that the new models support tools like Infinity AI Copilot, Playblocks, Policy Advisor, Compliance, and Infinity AIOps. These tools streamline policy and firewall management. Miercom recently named Check Point the top performer in its AI-Powered Cyber Security Platform Benchmark. The report recognized Check Point's superior management usability and threat prevention. Rob Smithers, CEO of Miercom, said Check Point's Infinity Platform outperformed its peers. He added that its AI-driven design and hybrid mesh model set a new standard in cybersecurity. Check Point continues to advance cybersecurity with AI-powered solutions tailored for modern enterprise needs.

AI Security Challenges: Deepfakes, Malware & More
AI Security Challenges: Deepfakes, Malware & More

TECHx

time15-05-2025

  • TECHx

AI Security Challenges: Deepfakes, Malware & More

Home » Expert opinion » AI Security Challenges: Deepfakes, Malware & More Check Point Research's AI Security Report uncovers how cybercriminals are weaponizing AI, from deepfakes and data poisoning to Dark LLMs, and what defenders must do to stay ahead. As artificial intelligence becomes more deeply embedded in business operations, it's also reshaping how cyber threats evolve. The same technologies helping organizations improve efficiency and automate decision-making are now being co-opted and weaponized by threat actors. The inaugural edition of the Check Point Research AI Security Report explores how cyber criminals are not only exploiting mainstream AI platforms, but also building and distributing tools specifically designed for malicious use. The findings highlight five growing threat categories that defenders must now account for when securing systems and users in an AI-driven world. Get the AI Security Report now AI Use and the Risk of Data Leakage An analysis of data collected from Check Point's GenAI Protect reveals that 1 in every 80 GenAI prompts poses a high risk of sensitive data leakage. Data also shows that 7.5% of prompts, about 1 in 13, contain potentially sensitive information, introducing critical security, compliance, and data integrity challenges. As organizations increasingly integrate AI into their operations, understanding these risks is more important than ever. AI-Enhanced Impersonation and Social Engineering Social engineering remains one of the most effective attack vectors, and as AI evolves, so too do the techniques used by threat actors. Autonomous and interactive deepfakes are changing the game of social engineering, drastically improving the realism and scale of attacks. Text and audio have already evolved to generate non scripted, real time text, while video is only advancements away. A recent FBI alert underscored the growing use of AI-generated content in fraud and deception, while real-world incidents, such as the impersonation of Italy's defense minister using AI-generated audio, have already caused significant financial harm. As these capabilities scale, identity verification based on visual or auditory cues is becoming less reliable, prompting an urgent need for multi-layered identity authentication. LLM Data Poisoning and Manipulation Concerns have been raised by researchers regarding LLM (large language model) poisoning, which is a cyber security threat where training datasets are altered to include malicious content, causing AI models to replicate the harmful content. Despite the strong data validation measures in place by major AI providers like OpenAI and Google, there have been instances of successful poisoning attacks, including the upload of 100 compromised AI models to the Hugging Face platform. While data poisoning typically affects the training phase of AI models, new vulnerabilities have arisen as modern LLMs access real-time online information, leading to a risk known as 'retrieval poisoning.' A notable case involves the Russian disinformation network 'Pravda,' which created around 3.6 million articles in 2024 aimed at influencing AI chatbot responses. Research indicated that these chatbots echoed Pravda's false narratives about 33% of the time, underscoring the significant danger of using AI for disinformation purposes. AI-Created Malware Creation and Data Mining AI is now being used across the entire cyber attack lifecycle, from code generation to campaign optimization. Tools like FunkSec's AI-generated DDoS module and custom Chat-GPT- style chatbot demonstrate how ransomware groups are integrating AI into operations, not just for malware creation, but for automating public relations and campaign messaging. AI is also playing a critical role in analyzing stolen data. Infostealers and data miners use AI to rapidly process and clean massive logs of credentials, session tokens, and API keys. This allows for faster monetization of stolen data and more precise targeting in future attacks. In one case, a dark web service called Gabbers Shop advertised the use of AI to improve the quality of stolen credentials, ensuring they were valid, organized, and ready for resale. The Weaponization and Hijacking of AI Models Threat actors are no longer just using AI, they are turning it into a dedicated tool for cyber crime. One key trend is the hijacking and commercialization of LLM accounts. Through credential stuffing and infostealer malware, attackers are collecting and reselling access to platforms like ChatGPT and OpenAI's API, using them to generate phishing lures, malicious scripts, and social engineering content without restriction. Even more concerning is the rise of Dark LLMs, maliciously modified AI models such as HackerGPT Lite, WormGPT, GhostGPT, and FraudGPT. These models are created by jailbreaking ethical AI systems or modifying open-source models like DeepSeek. They are specifically designed to bypass safety controls and are marketed on dark web forums as hacking tools, often with subscription-based access and user support. What This Means for Defenders The use of AI in cyber crime is no longer theoretical. It's evolving in parallel with mainstream AI adoption, and in many cases, it's moving faster than traditional security controls can adapt. The findings in the AI Security Report from Check Point Research suggest that defenders must now operate under the assumption that AI will be used not just against them, but against the systems, platforms, and identities they trust. Security teams should begin incorporating AI-aware defenses into their strategies, including AI-assisted detection, threat intelligence systems that can identify AI-generated artifacts, and updated identity verification protocols that account for voice, video, and textual deception. As AI continues to influence every layer of cyber operations, staying informed is the first step toward staying secure. By Vasily Dyagilev – Regional Director, Middle East & RCIS at Check Point Software Technologies Ltd

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store