Latest news with #Lucky_Gh0$t
Yahoo
12-06-2025
- Business
- Yahoo
Ransomware Groups Use AI to Level Up
A new wave of AI-powered threats is on the loose. A recent Cisco Talos report found that ransomware gangs are leveraging AI hype, luring enterprises with fake AI business-to-business software while pressuring victims with psychological manipulation. Ransomware groups CyberLock and Lucky_Gh0$t, along with a newly discovered malware family dubbed 'Numero,' are all impersonating legitimate AI software, such as Novaleads, the multinational lead-monetization platform.

Kiran Chinnagangannagari, co-founder and chief product and technology officer at global cybersecurity firm Securin, told CIO Upside that this new tactic is not niche. 'It is part of a growing trend where cybercriminals often use malicious social media ads or SEO poisoning to push these fake tools, targeting businesses eager to adopt AI but unaware of the risks,' Chinnagangannagari said. Mandiant, the cybersecurity arm of Google, recently reported a similar campaign running malicious ads on Facebook and LinkedIn that redirected users to fake AI video-generator tools imitating Luma AI, Canva Dream Lab and Kling AI.

Ransomware gangs are also using psychological manipulation to increase the success rate of their attacks. For example, CyberLock is leaving victims notes asking them to pay $50,000, an unusually low ransom demand compared with the industry average. The notes claim that the ransom payment will be used for 'humanitarian aid' in various regions, including Palestine, Ukraine, Africa and Asia. The $50,000 demand pressures smaller businesses into paying quickly while avoiding the scrutiny that comes with multimillion-dollar ransoms, Chinnagangannagari said.

Organizations should never pay the ransom, as payment offers no guarantee of results, Chinnagangannagari said. 'Companies should focus on robust backups and incident response plans to recover without negotiating,' he added.
Security leaders also need to prepare their teams for psychological manipulation, not just technical defenses, said Mike Logan, CEO of C2 Data Technology. 'These ransomware attacks are not just technical threats but psychological weapons.'

In certain industries, these smaller-scale ransomware attacks can have more serious impacts. 'There are edge cases, healthcare for example, where human lives are at stake,' Logan said. Even in those cases, however, the goal should be to have preventive controls in place so that paying never becomes the only option, he said. Companies should report the incident, work with authorities, and treat the breach as a catalyst to modernize their security posture, he said.

This new wave of ransomware targeting AI-hungry businesses demands a shift in defense strategies. Cybersecurity experts now consider AI tools high-risk assets, Chinnagangannagari said. Training staff to spot fake, malicious and suspicious online activity, especially when downloading unverified AI apps, is essential.

This post first appeared on The Daily Upside.
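One concrete way to act on that training advice is to verify a downloaded installer against the checksum the vendor publishes on its official site before running it. A minimal sketch, assuming the vendor publishes a SHA-256 hash (the file path and hash value here are hypothetical, not from the report):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, published_hash: str) -> bool:
    """Return True only if the file's digest matches the vendor-published hash."""
    return sha256_of(path) == published_hash.strip().lower()
```

A mismatch does not prove the file is malware, but it does mean the download is not the file the vendor published and should not be executed.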


Techday NZ
04-06-2025
- Business
- Techday NZ
Cybercriminals harness AI to boost phishing & malware attacks
New research has brought to light the growing use of artificial intelligence tools by cybercriminals behind lesser-known ransomware and malware attacks, highlighting a swiftly evolving threat landscape. The investigations indicate that small cybercriminal groups, including CyberLock, Lucky_Gh0$t, and Numero, have harnessed AI capabilities both to develop more persistent malware and to trick users into downloading malicious payloads.

The study outlines how these criminal organisations are adopting AI-driven lures to infect unsuspecting victims, departing from traditional manual techniques in favour of automated, highly convincing fraud. The proliferation of new, seemingly innovative AI services has created opportunities for attackers to blend fraudulent tools with legitimate ones, making it more difficult for individuals and organisations to distinguish between benign and malicious actors online.

Steve Wilson, Chief AI and Product Officer at Exabeam, explained the nuances of these new threats. "While AI delivers massive benefits to security teams, we must stay open-eyed about the risks in today's rapidly evolving threat landscape. The recent wave of cybercriminals exploiting AI hype underscores the importance of vigilance," Wilson said. He added, "In some ways, these incidents are classic phishing scams repackaged, but AI puts a concerning new spin on them."

Wilson pointed to two significant risk factors. "First, the sheer excitement and constant emergence of new AI tools mean users are increasingly comfortable trying services from unknown vendors, blurring the lines between legitimate new solutions and malicious impostors. Second, AI technology itself makes it alarmingly easy to craft high-quality counterfeit websites and sophisticated phishing campaigns. Attackers can now mimic authentic brands with unprecedented realism, greatly increasing their chances of success." For users, this evolving threat means that caution is more critical than ever.
Wilson cautioned: "Both individuals and organizations must ramp up their vigilance. Users should approach new AI services with scepticism and heightened awareness, carefully verifying legitimacy before engaging. Meanwhile, corporate defenders need to proactively adopt advanced detection tools and modern techniques tailored to counter these AI-enhanced threats. Staying ahead demands constant vigilance and aggressive adaptation."

Mike Mitchell, National Cyber Security Consultant at AltTab, echoed these concerns while highlighting the double-edged sword AI presents for the sector. "AI is transforming the world of cyber security, acting as both an ally and a rising threat. On defence teams, AI helps detect and respond to attacks faster by automating tasks like threat hunting, alert triage, and incident response. But attackers are also using AI to launch smarter, more sophisticated phishing campaigns, automate attacks, and bypass traditional defences," he said. "This has created a constant race between offensive and defensive innovation."

Mitchell emphasised the importance of responsible use and adaptation. "As AI agents become more advanced, the focus must shift to ethical use, responsible adoption and strengthening human-AI collaboration. One thing is certain: the future of cyber security is intrinsically linked with the evolution of AI, and staying ahead means we must continue to adapt quickly."

The findings reflect broader concerns within the cybersecurity community regarding the unpredictable consequences of fast-moving innovation in AI. As both attackers and defenders race to leverage the latest tools, organisations of every size are being urged to educate their users, refine their detection and response protocols, and remain vigilant when navigating the crowded field of AI-enabled products and services. Industry leaders recommend a cautious, informed approach to all new digital tools, particularly those involving AI.
By staying alert to the latest tactics employed by cybercriminals, and investing in advanced defence strategies, businesses and individuals can help to reduce their exposure to the next wave of AI-powered threats.
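One lightweight detection technique along the lines experts describe, since attackers mimic authentic brands, is flagging domains that sit within a small edit distance of trusted names. A minimal sketch, not drawn from the research itself; the brand list and domains are purely illustrative:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

# Illustrative allow-list of brand name labels defenders want to protect.
TRUSTED_BRANDS = ["lumalabs", "canva", "klingai"]

def flag_lookalikes(domain: str, max_distance: int = 2) -> list:
    """Return the trusted brands a domain label suspiciously resembles.

    Exact matches (distance 0) are excluded: they are the brand itself,
    not a lookalike.
    """
    name = domain.split(".")[0].lower()
    return [b for b in TRUSTED_BRANDS
            if 0 < edit_distance(name, b) <= max_distance]
```

Such a check is a coarse filter, a way to surface candidates for human review from proxy or DNS logs, not a verdict on its own.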