
Latest news with #AISecurityReport

Exclusive: Why cyber leaders must think like business leaders in APAC

Techday NZ

12-06-2025



Cybersecurity leaders can no longer afford to speak only in technical terms. That was the key message from Jayant Dave, Chief Information Security Officer (CISO) for Asia Pacific and Japan at Check Point, during a recent interview. He says the job now demands a blend of "technical acumen and business insight."

"If a critical application or infrastructure is down for an hour, what is the dollar value of that loss?" he said. "You must connect technical risk to business loss. That's how the business understands it."

Dave believes aligning cyber risk with broader enterprise risk frameworks is one of the biggest challenges facing CISOs today. The key to overcoming this, he says, is developing a "shared common language" between cybersecurity and enterprise risk teams. "In my banking experience, cybersecurity is the first line of defence," he explained. "Then you have operational risk, internal audit, and even regulators. All these must be aligned when designing your cybersecurity risk appetite."

This team-of-teams approach goes beyond the technical. It involves legal, compliance, and crisis management teams working closely with defenders. "When the bad day happens, it's not just defenders. Legal teams are better equipped to respond to stakeholder obligations that cyber professionals may not be aware of," he added.

Boards and senior leaders are also more involved than ever. According to Dave, today's boards, particularly in heavily regulated industries like banking and healthcare, are now "custodians of risk appetites". "They understand cyber risk now. They expect clear roles and responsibilities and they review risk appetite statements quarterly," he said. "If you're out of the appetite, that means you need to invest. You need to act. You need to report."

In Dave's view, true cyber resilience involves more than just prevention. "Yes, prevent if you can. But you also need to anticipate threats, enhance controls, and be able to respond and recover fast," he said.

Check Point's recent AI Security Report highlights the double-edged nature of AI in this context. While it enables defenders to act quickly, it also allows attackers to move faster and more cheaply than ever before. "If generating malware used to take days, it now takes minutes. AI has made phishing, DDoS, and social engineering attacks far more effective," he said. "But defenders have the same tools. It's about using them smartly." He described AI as "a weapon of destruction" but also a powerful defensive tool - if used responsibly. "When electricity was invented, we stopped saying we were using it. Everything became electrical. The same is happening with AI," he added.

For companies operating in the Asia Pacific region, Dave warned against assuming regulatory uniformity. "Some people assume APAC is one country, one regulator. It's not. I dealt with 17 markets in my last role - each with different rules," he said. He stressed the need for businesses to understand local data residency laws, especially when outsourcing. "Countries like China, India, and Indonesia have strict laws that don't allow sensitive data to be moved out. If your cloud provider isn't in-country, you'll face tough regulatory oversight."

Supply chain risk is another growing concern, exacerbated by geopolitical tensions and the recent memory of COVID-19. "It's not just about buying a cool tool," he said. "You need strategic partners embedded in the region who can provide support long-term. Some suppliers with great services vanished during the pandemic. That's a real risk."

On talent shortages, Dave said he doesn't believe AI will cost jobs in cybersecurity. In fact, quite the opposite. "We need more people. Skills in AI and quantum are in demand. Upskilling is essential," he said. "My advice? Train continuously. In some banks, you must complete certain credits each year to stay current."

Internships and real-world experience are part of that continuous learning journey, even if Dave himself didn't follow that path. "Every year, I've upskilled," he said. "In a modern security operations centre, you now have separate teams for threats, fraud and insider threats - all AI-powered. Analysts must train to keep up."

Frameworks like the Cyber Risk Institute (CRI) are vital tools for aligning technical and business risk, Dave explained. "CRI consolidates policies like ISO, NIST and emerging tech standards. It helps you develop cybersecurity risk appetite statements in a language the business understands," he said. He pointed out that in countries like Australia and Singapore, governance structures now mandate board approval of such statements. "Once approved by the board, there's no turning back. Regulators want evidence that senior leaders are involved."

Crisis preparedness is a major theme too. Dave advocates for including board members in cyber exercises. "If a critical third-party provider is compromised, who decides to disconnect them? Business leaders do," he said. "So they must be involved in those scenarios."

According to Dave, the role of the CISO has transformed and must continue to evolve. "CISOs must think like business leaders now," he concluded. "If they don't understand the business dynamics, it can be a total disaster."
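Dave's dollar-value question maps directly onto the annualized loss expectancy (ALE) calculation used in standard quantitative risk analysis: multiply the cost of one incident by how often it happens per year. A minimal sketch, with hypothetical figures that are not from the interview:

```python
def annualized_loss_expectancy(outage_cost_per_hour: float,
                               hours_per_incident: float,
                               incidents_per_year: float) -> float:
    """ALE = single-loss expectancy (SLE) x annual rate of occurrence (ARO)."""
    sle = outage_cost_per_hour * hours_per_incident  # cost of one incident
    return sle * incidents_per_year

# Hypothetical example: a payment system losing $50,000 per hour of downtime,
# with 4-hour outages occurring twice a year.
ale = annualized_loss_expectancy(50_000, 4, 2)  # → 400000
```

Framing an out-of-appetite risk this way ("this gap costs us roughly $400k a year") is exactly the "shared common language" Dave describes: it lets a board weigh a security investment against a quantified business loss.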

AI Security Challenges: Deepfakes, Malware & More

TECHx

15-05-2025



Check Point Research's AI Security Report uncovers how cybercriminals are weaponizing AI, from deepfakes and data poisoning to Dark LLMs, and what defenders must do to stay ahead.

As artificial intelligence becomes more deeply embedded in business operations, it's also reshaping how cyber threats evolve. The same technologies helping organizations improve efficiency and automate decision-making are now being co-opted and weaponized by threat actors. The inaugural edition of the Check Point Research AI Security Report explores how cyber criminals are not only exploiting mainstream AI platforms, but also building and distributing tools specifically designed for malicious use. The findings highlight five growing threat categories that defenders must now account for when securing systems and users in an AI-driven world.

AI Use and the Risk of Data Leakage

An analysis of data collected from Check Point's GenAI Protect reveals that 1 in every 80 GenAI prompts poses a high risk of sensitive data leakage. The data also shows that 7.5% of prompts, about 1 in 13, contain potentially sensitive information, introducing critical security, compliance, and data integrity challenges. As organizations increasingly integrate AI into their operations, understanding these risks is more important than ever.

AI-Enhanced Impersonation and Social Engineering

Social engineering remains one of the most effective attack vectors, and as AI evolves, so too do the techniques used by threat actors. Autonomous and interactive deepfakes are changing the game of social engineering, drastically improving the realism and scale of attacks. Text and audio generation have already evolved to produce non-scripted, real-time interactions, while equally convincing real-time video is only a few advances away.

A recent FBI alert underscored the growing use of AI-generated content in fraud and deception, while real-world incidents, such as the impersonation of Italy's defense minister using AI-generated audio, have already caused significant financial harm. As these capabilities scale, identity verification based on visual or auditory cues is becoming less reliable, prompting an urgent need for multi-layered identity authentication.

LLM Data Poisoning and Manipulation

Researchers have raised concerns about LLM (large language model) poisoning, a cyber security threat in which training datasets are altered to include malicious content, causing AI models to replicate the harmful material. Despite the strong data validation measures in place at major AI providers like OpenAI and Google, there have been instances of successful poisoning attacks, including the upload of 100 compromised AI models to the Hugging Face platform. While data poisoning typically affects the training phase of AI models, new vulnerabilities have arisen as modern LLMs access real-time online information, leading to a risk known as 'retrieval poisoning.' A notable case involves the Russian disinformation network 'Pravda,' which created around 3.6 million articles in 2024 aimed at influencing AI chatbot responses. Research indicated that these chatbots echoed Pravda's false narratives about 33% of the time, underscoring the significant danger of using AI for disinformation purposes.

AI-Created Malware and Data Mining

AI is now being used across the entire cyber attack lifecycle, from code generation to campaign optimization. Tools like FunkSec's AI-generated DDoS module and custom ChatGPT-style chatbot demonstrate how ransomware groups are integrating AI into operations, not just for malware creation, but for automating public relations and campaign messaging. AI is also playing a critical role in analyzing stolen data. Infostealers and data miners use AI to rapidly process and clean massive logs of credentials, session tokens, and API keys. This allows for faster monetization of stolen data and more precise targeting in future attacks. In one case, a dark web service called Gabbers Shop advertised the use of AI to improve the quality of stolen credentials, ensuring they were valid, organized, and ready for resale.

The Weaponization and Hijacking of AI Models

Threat actors are no longer just using AI; they are turning it into a dedicated tool for cyber crime. One key trend is the hijacking and commercialization of LLM accounts. Through credential stuffing and infostealer malware, attackers are collecting and reselling access to platforms like ChatGPT and OpenAI's API, using them to generate phishing lures, malicious scripts, and social engineering content without restriction. Even more concerning is the rise of Dark LLMs: maliciously modified AI models such as HackerGPT Lite, WormGPT, GhostGPT, and FraudGPT. These models are created by jailbreaking ethical AI systems or modifying open-source models like DeepSeek. They are specifically designed to bypass safety controls and are marketed on dark web forums as hacking tools, often with subscription-based access and user support.

What This Means for Defenders

The use of AI in cyber crime is no longer theoretical. It's evolving in parallel with mainstream AI adoption, and in many cases, it's moving faster than traditional security controls can adapt. The findings in the AI Security Report from Check Point Research suggest that defenders must now operate under the assumption that AI will be used not just against them, but against the systems, platforms, and identities they trust. Security teams should begin incorporating AI-aware defenses into their strategies, including AI-assisted detection, threat intelligence systems that can identify AI-generated artifacts, and updated identity verification protocols that account for voice, video, and textual deception. As AI continues to influence every layer of cyber operations, staying informed is the first step toward staying secure.

By Vasily Dyagilev, Regional Director, Middle East & RCIS at Check Point Software Technologies Ltd
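The prompt-leakage figures above (1 in 80 high-risk, roughly 1 in 13 potentially sensitive) come from inspecting prompts before they leave the organization. Check Point does not publish GenAI Protect's internals; as a rough illustration of the general idea only, a minimal DLP-style scanner might match prompts against a few sensitive-data patterns. The categories and regexes below are illustrative assumptions, not the product's rules:

```python
import re

# Illustrative patterns only - a real DLP engine uses far richer detection.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data categories found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

prompt = "Summarise this: contact jane.doe@example.com, key sk-a1b2c3d4e5f6g7h8i9"
hits = scan_prompt(prompt)  # hits == ["email", "api_key"]
```

A production system would add many more detectors (secret-format databases, named-entity models, context rules) and act on hits by blocking, redacting, or logging the prompt rather than merely reporting it.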

AI security report warns of rising deepfakes & Dark LLM threat

Techday NZ

01-05-2025



Check Point Research has released its inaugural AI Security Report, detailing how artificial intelligence is affecting the cyber threat landscape, from deepfake attacks to generative AI-driven cybercrime and defences. The report explores four main areas where AI is reshaping both offensive and defensive actions in cyber security.

According to Check Point Research, one in 80 generative AI prompts poses a high risk of sensitive data leakage, with one in 13 containing potentially sensitive information that could be exploited by threat actors. The study also highlights incidents of AI data poisoning linked to disinformation campaigns, as well as the proliferation of so-called 'Dark LLMs' such as FraudGPT and WormGPT. These large language models are being weaponised for cybercrime, enabling attackers to bypass existing security protocols and carry out malicious activities at scale.

Lotem Finkelstein, Director of Check Point Research, commented on the rapid transformation underway, stating, "The swift adoption of AI by cyber criminals is already reshaping the threat landscape. While some underground services have become more advanced, all signs point toward an imminent shift - the rise of digital twins. These aren't just lookalikes or soundalikes, but AI-driven replicas capable of mimicking human thought and behaviour. It's not a distant future - it's just around the corner."

The report examines how AI is enabling attackers to impersonate and manipulate digital identities, diminishing the boundary between what is authentic and fake online.

The first threat identified is AI-enhanced impersonation and social engineering. Threat actors are now using AI to generate convincing phishing emails, audio impersonations, and deepfake videos. In one case, attackers successfully mimicked Italy's defence minister with AI-generated audio, demonstrating the sophistication of current techniques and the difficulty in verifying online identities.

Another prominent risk is large language model (LLM) data poisoning and disinformation. The study refers to an example involving Russia's disinformation network Pravda, where AI chatbots were found to repeat false narratives 33% of the time. This trend underscores the growing risk of manipulated data feeding back into public discourse and highlights the challenge of maintaining data integrity in AI systems.

The report also documents the use of AI for malware development and data mining. Criminal groups are reportedly harnessing AI to automate the creation of tailored malware, conduct distributed denial-of-service (DDoS) campaigns, and process stolen credentials. Notably, services like Gabbers Shop are using AI to validate and clean stolen data, boosting its resale value and targeting efficiency on illicit marketplaces.

A further area of risk is the weaponisation and hijacking of AI models themselves. Attackers have stolen LLM accounts or constructed custom Dark LLMs, such as FraudGPT and WormGPT. These advanced models allow actors to circumvent standard safety mechanisms and commercialise AI as a tool for hacking and fraud, accessible through darknet platforms.

On the defensive side, the report makes it clear that organisations must now presume that AI capabilities are embedded within most adversarial campaigns. This shift in assumption underlines the necessity for a revised approach to cyber defence. Check Point Research outlines several strategies for defending against AI-driven threats. These include using AI-assisted detection and threat hunting to spot synthetic phishing content and deepfakes, and adopting enhanced identity verification techniques that go beyond traditional methods. Organisations are encouraged to implement multi-layered checks encompassing text, voice, and video, recognising that trust in digital identity can no longer be presumed.

The report also stresses the importance of integrating AI context into threat intelligence, allowing cyber security teams to better recognise and respond to AI-driven tactics. Lotem Finkelstein added, "In this AI-driven era, cyber security teams need to match the pace of attackers by integrating AI into their defences. This report not only highlights the risks but provides the roadmap for securing AI environments safely and responsibly."

Check Point Research Launches AI Security Report: Exposing the Rise of AI-Powered Cybercrime and Defenses

Yahoo

30-04-2025



New report unveils four key AI-driven cyber threats and how organizations can outsmart attackers in an AI-driven world

SAN FRANCISCO, April 30, 2025 (GLOBE NEWSWIRE) -- RSA CONFERENCE -- Check Point Software Technologies Ltd. (NASDAQ: CHKP), a pioneer and global leader of cyber security solutions, today launched its inaugural AI Security Report at RSA Conference 2025. This report offers an in-depth exploration of how cyber criminals are weaponizing artificial intelligence (AI), alongside strategic insights for defenders to stay ahead.

As AI reshapes industries, it has also erased the lines between truth and deception in the digital world. Cyber criminals now wield generative AI and large language models (LLMs) to obliterate trust in digital identity. In today's landscape, what you see, hear, or read online can no longer be believed at face value. AI-powered impersonation bypasses even the most sophisticated identity verification systems, making anyone a potential victim of deception at scale.

"The swift adoption of AI by cyber criminals is already reshaping the threat landscape," said Lotem Finkelstein, Director of Check Point Research. "While some underground services have become more advanced, all signs point toward an imminent shift - the rise of digital twins. These aren't just lookalikes or soundalikes, but AI-driven replicas capable of mimicking human thought and behavior. It's not a distant future - it's just around the corner."

Key Threat Insights from the AI Security Report:

At the heart of these developments is AI's ability to convincingly impersonate and manipulate digital identities, dissolving the boundary between authentic and fake. The report uncovers four core areas where this erosion of trust is most visible:

  • AI-Enhanced Impersonation and Social Engineering: Threat actors use AI to generate realistic, real-time phishing emails, audio impersonations, and deepfake videos. Notably, attackers recently mimicked Italy's defense minister using AI-generated audio, demonstrating that no voice, face, or written word online is safe from fabrication.
  • LLM Data Poisoning and Disinformation: Malicious actors manipulate AI training data to skew outputs. A case involving Russia's disinformation network Pravda showed AI chatbots repeating false narratives 33% of the time, underscoring the need for robust data integrity in AI systems.
  • AI-Created Malware and Data Mining: Cyber criminals harness AI to craft and optimize malware, automate DDoS campaigns, and refine stolen credentials. Services like Gabbers Shop use AI to validate and clean stolen data, enhancing its resale value and targeting efficiency.
  • Weaponization and Hijacking of AI Models: From stolen LLM accounts to custom-built Dark LLMs like FraudGPT and WormGPT, attackers are bypassing safety mechanisms and commercializing AI as a tool for hacking and fraud on the dark web.

Defensive Strategies:

The report emphasizes that defenders must now assume AI is embedded within adversarial campaigns. To counter this, organizations should adopt AI-aware cyber security frameworks, including:

  • AI-Assisted Detection and Threat Hunting: Leverage AI to detect AI-generated threats and artifacts, such as synthetic phishing content and deepfakes.
  • Enhanced Identity Verification: Move beyond traditional methods and implement multi-layered identity checks that account for AI-powered impersonation across text, voice, and video, recognizing that trust in digital identity is no longer guaranteed.
  • Threat Intelligence with AI Context: Equip security teams with the tools to recognize and respond to AI-driven tactics.

"In this AI-driven era, cyber security teams need to match the pace of attackers by integrating AI into their defenses," added Finkelstein. "This report not only highlights the risks but provides the roadmap for securing AI environments safely and responsibly."
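The "multi-layered identity checks" recommendation can be made concrete: rather than trusting any single channel, require several independent verifiers to clear a threshold before accepting an identity. A minimal sketch under stated assumptions: the channel names, score ranges, and thresholds below are illustrative, not Check Point's implementation:

```python
from dataclasses import dataclass

# Hypothetical per-channel confidence scores in [0, 1] from separate verifiers
# (e.g. an out-of-band knowledge/OTP check, a voice-print match, a video
# liveness/deepfake detector).
@dataclass
class VerificationSignals:
    text_challenge: float
    voice_match: float
    video_liveness: float

def verify_identity(sig: VerificationSignals,
                    threshold: float = 0.8,
                    required_channels: int = 2) -> bool:
    """Accept only if enough independent channels clear the bar.

    A convincing deepfake may defeat one channel; requiring several
    independent ones raises the attacker's cost, which is the report's
    core advice on identity verification.
    """
    scores = [sig.text_challenge, sig.voice_match, sig.video_liveness]
    return sum(s >= threshold for s in scores) >= required_channels

# Strong voice and text signals pass despite a failed video check:
verify_identity(VerificationSignals(0.9, 0.95, 0.2))   # True
# A lone high voice score (a possible voice clone) is not enough:
verify_identity(VerificationSignals(0.3, 0.99, 0.4))   # False
```

The design point is independence: the channels should fail for different reasons, so a single AI-generated artifact (a cloned voice, a deepfake video) cannot satisfy the policy by itself.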
The full AI Security Report 2025 is available for download; join the April 30 livestream for more insights about the report.

About Check Point Software Technologies Ltd.

Check Point Software Technologies Ltd. is a leading protector of digital trust, utilizing AI-powered cyber security solutions to safeguard over 100,000 organizations globally. Through its Infinity Platform and an open garden ecosystem, Check Point's prevention-first approach delivers industry-leading security efficacy while reducing risk. Employing a hybrid mesh network architecture with SASE at its core, the Infinity Platform unifies the management of on-premises, cloud, and workspace environments to offer flexibility, simplicity and scale for enterprises and service providers.

Legal Notice Regarding Forward-Looking Statements

This press release contains forward-looking statements. Forward-looking statements generally relate to future events or our future financial or operating performance. Forward-looking statements in this press release include, but are not limited to, statements related to our expectations regarding future growth, the expansion of Check Point's industry leadership, the enhancement of shareholder value and the delivery of an industry-leading cyber security platform to customers worldwide. Our expectations and beliefs regarding these matters may not materialize, and actual results or events in the future are subject to risks and uncertainties that could cause actual results or events to differ materially from those projected. The forward-looking statements contained in this press release are also subject to other risks and uncertainties, including those more fully described in our filings with the Securities and Exchange Commission, including our Annual Report on Form 20-F filed with the Securities and Exchange Commission on April 2, 2024. The forward-looking statements in this press release are based on information available to Check Point as of the date hereof, and Check Point disclaims any obligation to update any forward-looking statements, except as required by law.

MEDIA CONTACT: Liz Wu, Check Point Software Technologies, press@
INVESTOR CONTACT: Kip E. Meintzer, Check Point Software Technologies, ir@
