
Latest news with #threatactors

The Rise—And Risk—Of AI In Offensive Security

Forbes

18 hours ago


Gunter Ollmann is a global cybersecurity innovator with decades of experience, patented technology and leadership across 80+ countries.

Offensive security tools, which are designed to proactively identify threats and vulnerable attack vectors before they can be abused, have long been exploited by threat actors. AI is, unfortunately, perpetuating the issue. In particular, it is making social engineering easier, empowering criminals with native-language capabilities and supercharging their effectiveness. But AI is also working to the defender's advantage by shaking up the traditional penetration testing sector. A field that once centered on "breadth," i.e., identifying as many vulnerabilities as possible with scanners and automated tools, has now evolved into full-scale attack and breach simulation. This capability effectively puts defenders in the attacker's "shoes," letting them replicate the tactics of threat actors to help organizations understand how far an attacker could infiltrate their systems.

How AI Enhances Offensive Security While Introducing New Risks

As with most things AI-related, innovation is a double-edged sword: as tools improve, they benefit not only defenders but also attackers. For defenders, tools that once required manual triage are now equipped with AI that can scan, correlate and validate vulnerabilities. For instance, when different scanners return conflicting information, AI can determine which findings are likely false positives, saving human analysts hours of triage. Instead of sifting through lengthy lists of potential issues, testers can focus on what truly matters: issues that are exploitable and impactful.

Attackers who once relied heavily on manual effort to gather intelligence on targets can now use AI to mine the internet, analyze social networks, access data dumps and even build virtual personas that infiltrate private online communities.
These personas can be tailored to a specific user's interests; we have seen personas built around train hobbyists' interests used to establish trust before delivering a targeted phishing link or malware payload. AI-generated personas may join relevant forums, interact with the target over time and build credibility in a way that was previously too labor-intensive to execute.

AI also plays a major role in passive reconnaissance. Often, attackers don't even need to touch a target system: they can use AI to collect extensive intelligence about an organization from public and semi-private sources, for example, determining which individuals have administrative access, which systems are publicly exposed and which historical vulnerabilities exist. This reduces the need for noisy scans and increases the chances of a successful, undetected breach. Of course, defenders can use these capabilities too, hence the ongoing game of "cat and mouse" between red teamers and threat actors.

Evaluating Offensive Security Vendors

AI without human expertise generates "noise," particularly hallucinations, which throw false positives and negatives into the mix, so it needs highly skilled experts who know how to interpret the findings and use the tools effectively. That talent pool exists because the discipline has evolved from an "art" into a "science," with a global community of elite testers working to the same standardized methodologies and regulatory standards. This has streamlined the logistics of launching high-quality tests quickly, enabling better remediation, retesting and translation of findings into business-relevant language for developers and executives. With consistency of process assured, it's up to vendors to differentiate on their ability to simulate modern threats, collaborate closely with internal teams and provide testing agility. Features such as retesting, contextual reporting and access to global talent pools are also critical.
Humans Versus AI

Pentesting has evolved from a niche security function into a broad organizational priority. Reports no longer go just to security teams; they are reviewed by engineering leaders, product owners and other business stakeholders. Findings are now written in context for the end audience, and AI helps facilitate this translation, ensuring that vulnerabilities are understood and fixed by the right teams. That means not only faster resolution but also development teams that stay focused on delivering secure code from the outset.

The biggest question facing the industry is whether AI will replace pentesters. The answer is "yes" for routine, traditional pentesting and "no" at the top end. AI excels at automating routine tasks, but red teaming at the highest level remains a human endeavor. Elite testers bring knowledge of the best tools to use and experience that can't be replicated by algorithms. Currently, the best results come from hybrid teams in which AI handles repetitive, data-intensive tasks while human experts focus on strategy, interpretation and innovation. This continues a long-term trend: so-called "tier one" security analysts were largely automated away some ten years ago. It means smaller teams can achieve more, with routine tasks such as scanning, correlation and log analysis handled by AI while expert humans focus on complex and strategic areas.

Cybersecurity Is About People

AI is revolutionizing offensive security, bringing with it both immense promise and considerable peril. The tools of the trade have evolved, and so too must the people and processes that govern them. As the attacker-defender arms race accelerates, the role of AI will only grow. But in the end, cybersecurity is still about people. Penetration testing and red teaming are driven by highly skilled individuals who understand how adversaries think, and they leverage AI as a tool to sharpen their edge.
The adversaries are human, and so too must be the defenders. To truly stay ahead, organizations need to blend elite research talent with smart technology and never lose sight of the human element that defines success in security.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
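The cross-scanner triage described above can be sketched as a simple correlation pass: findings corroborated by multiple independent scanners earn more confidence than single-scanner findings, which get routed to human review. This is an illustrative sketch, not any vendor's actual pipeline; the field names, scanner names and thresholds are my own assumptions.

```python
from collections import defaultdict

def triage(findings):
    """Correlate normalized scanner findings and flag likely false positives.

    Each finding is a dict: {"scanner": str, "host": str, "cve": str}.
    A finding corroborated by two or more scanners is queued for validation;
    a single-scanner finding is a weaker signal and goes to manual review.
    """
    grouped = defaultdict(set)
    for f in findings:
        grouped[(f["host"], f["cve"])].add(f["scanner"])

    all_scanners = {f["scanner"] for f in findings}
    triaged = []
    for (host, cve), reporters in grouped.items():
        triaged.append({
            "host": host,
            "cve": cve,
            # Fraction of deployed scanners that agree on this finding.
            "confidence": round(len(reporters) / len(all_scanners), 2),
            "status": "validate" if len(reporters) >= 2 else "review",
        })
    # Highest-confidence findings first, so testers see what matters most.
    return sorted(triaged, key=lambda t: -t["confidence"])
```

A hybrid team would feed the "validate" bucket to exploit-verification tooling and leave the "review" bucket, the likely false positives, to an analyst.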

Google Chrome Security Warning — Update Your Browser Before July 23

Forbes

2 days ago


Update your Google Chrome browser before July 23, CISA says.

When Google confirmed that a high-severity security vulnerability impacting its Chrome web browser had not only been disclosed but was already under attack from threat actors in the wild, the recommendation that users update their browsers as soon as possible could not have been stressed strongly enough. That recommendation has now become mandatory for certain U.S. federal departments, and the Cybersecurity and Infrastructure Security Agency has warned that all users should apply the necessary update before July 23.

You Have 9 Days To Update Chrome, CISA Says

With Google releasing Chrome security updates on a regular basis, often addressing critical issues including zero-days, it would be easy to be numbed by update fatigue. That, dear reader, would be a mistake. Not only are Chrome security exploits all too common, but they can also have catastrophic impacts on victims. Thankfully, the update process for consumers is easy enough, as updates are applied automatically; all the user needs to do is restart the browser to activate the protections on offer. Some users choose not to do this, however, and for the enterprise it can be a completely different kettle of web-based fish.

CISA has urged all organizations to 'reduce their exposure to cyberattacks by prioritizing timely remediation of KEV Catalog vulnerabilities as part of their vulnerability management practice.' The latest addition to the Known Exploited Vulnerabilities catalog is CVE-2025-6554, a type confusion issue in Chrome's V8 JavaScript engine. CISA has given relevant federal agencies until July 23 to update, but that should also be seen as an 'act before' date for all enterprises, organizations and individual users.
With just nine days to go, I would join CISA in strongly urging everyone to ensure that Chrome has been updated to the latest version, which, as I write, is 138.0.7204.100/.101 for Windows and Mac, and 138.0.7204.100 for Linux. What are you waiting for? The clock is ticking.
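For enterprises auditing a fleet against that 'act before' date, the check reduces to comparing dotted version strings numerically rather than lexically (so "138.0.7204.100" correctly sorts above "138.0.7204.96"). The sketch below is a minimal illustration of that comparison against the patched build cited above; the function names are my own.

```python
def version_tuple(v: str):
    """Parse a dotted Chrome version string into a numerically comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def needs_update(installed: str, minimum: str) -> bool:
    """True if the installed build is older than the minimum patched build."""
    return version_tuple(installed) < version_tuple(minimum)

# Patched Linux build cited in the article for CVE-2025-6554.
MINIMUM = "138.0.7204.100"
```

A naive string comparison would get this wrong: `"138.0.7204.96" > "138.0.7204.100"` lexically, which is exactly why the tuple conversion matters.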

From Vibe Coding To Vibe Hacking — AI In A Hoodie

Forbes

3 days ago


Is vibe hacking the next big cyber thing?

Artificial intelligence, or at least the AI we know from large language models and, in particular, the various generative pre-trained transformer services we have become so accustomed to, has already been weaponized by threat actors. We've seen attack after attack against Gmail users employing AI-powered phone calls, 51% of all spam is now reported to be generated by AI, and deepfakes have been a cybersecurity issue for the longest time. But just how advanced is the AI cyberattack threat and, more importantly, how close are we to fully autonomous attacks and vibe hacking emerging from the vibe coding phenomenon?

From Vibe Coding To Vibe Hacking — The Reality Of AI In Cyberattacks

Vibe coding isn't what a lot of people seem to think it is. I've seen numerous folk, many of whom should know better, describe it as a method of letting AI generate code from nothing and develop an application from scratch, without requiring coding input from the 'programmer' directing it to do so. This is, of course, nonsense seeded with more than a little reality. Vibe coding makes the life of a developer much easier, delegating some of the programming to AI based on desired outcomes, but it doesn't remove the need to provide direction and demonstrate a high level of understanding. That said, LLMs and vibe coding are making leaps and bounds in producing surprisingly efficient code.

But what about hackers using the same techniques, vibe hacking, if you will, to do the same with cyberattacks: using LLMs to discover and exploit vulnerabilities, reliably and with malicious impact? According to Michele Campobasso, a senior security researcher at Forescout, there is 'no clear evidence of real threat actors' doing this.
Rather, Campobasso said, 'most reports link LLM use to tasks where language matters more than code, such as phishing, influence operations, contextualizing vulnerabilities, or generating boilerplate malware components.' Vibe hacking has a long way to go to catch up with vibe coding, it would seem, according to Campobasso's latest analysis. 'Between February and April 2025,' Campobasso said, 'we tested over 50 AI models against four test cases drawn from industry-standard datasets and cybersecurity wargames.'

The results were, to say the least, informative. 'Attackers still cannot rely on one tool to cover the full exploitation pipeline,' Campobasso said. LLMs produced inconsistent results, with high failure rates. 'Even when models completed exploit development tasks,' Campobasso said, 'they required substantial user guidance.' Campobasso concluded that we are 'still far from LLMs that can autonomously generate fully functional exploits,' while the 'confident tone' of the models, when incorrect, will mislead the inexperienced attackers most likely to rely upon them.

The age of vibe hacking is approaching, although not as fast as the vibe coding phenomenon would imply, and defenders should start preparing now. Luckily, this isn't too difficult, according to Campobasso: 'The fundamentals of cybersecurity remain unchanged: An AI-generated exploit is still just an exploit, and it can be detected, blocked, or mitigated by patching.'

Microsoft Windows Secure Boot Bypass Confirmed — Update Now

Forbes

11-06-2025


Update now, as a Windows Secure Boot bypass has been confirmed.

The second Tuesday of every month is always a busy one for users of the Microsoft Windows operating system, for that is when the monthly security rollout happens. Truth be told, Patch Tuesday is less important than Exploit Wednesday, when threat actors are aware of the confirmed vulnerabilities and the race between attackers and those who would defend against them is on. We've already seen reports of a zero-day threat to all Windows users, where the attacks started some months ago, and while there are no known exploits of CVE-2025-3052 in the wild, that's no reason to take it any less seriously. Why so? Because this is a Secure Boot bypass that could open up your system to further attacks and compromise.

I always get a bit jittery whenever I hear of a new vulnerability that can enable a bypass of the Windows Secure Boot protections. I don't really need to explain why, do I? Suffice it to say, Secure Boot is what stops your Windows device from loading insecure operating system images during boot-up. You know, the kind of backdoors that cybercriminals and surveillance states would just love to drop in there. Anyhoo. Please excuse my jitters, then, as I reveal that security researchers at Binarly Research uncovered just such a vulnerability impacting the Secure Boot process. Classified in the Common Vulnerabilities and Exposures database as CVE-2025-3052, this one's a doozy: it is capable of turning the protections off and allowing malware to be installed on Windows PCs and servers.

CVE-2025-3052 would appear to impact most devices that support the Unified Extensible Firmware Interface. It is a memory corruption issue that sits within a module signed with Microsoft's third-party UEFI certificate and can be exploited to run unsigned code during the boot process.
'Because the attacker's code executes before the operating system even loads,' the Binarly Research report said, 'it opens the door for attackers to install bootkits and undermine OS-level security defenses.'
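For readers who want to verify that Secure Boot is actually enforcing on a Linux host (it cannot detect CVE-2025-3052 itself, only confirm the protection is switched on), here is a hedged sketch that reads the standard UEFI SecureBoot variable exposed under efivars. The GUID is the EFI global-variable namespace defined by the UEFI specification; the helper names are my own.

```python
from pathlib import Path

# SecureBoot variable under the EFI global-variable GUID, as exposed by
# the Linux efivars filesystem on UEFI systems.
EFIVAR = Path(
    "/sys/firmware/efi/efivars/"
    "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)

def secure_boot_enabled(raw: bytes) -> bool:
    """Interpret an efivars read: 4 attribute bytes, then 1 data byte (0 or 1)."""
    if len(raw) < 5:
        raise ValueError("unexpected SecureBoot variable length")
    return raw[4] == 1

def check() -> str:
    """Report Secure Boot state, or explain why it cannot be read."""
    if not EFIVAR.exists():
        return "no efivars entry (non-UEFI boot, or not a Linux host)"
    return "enabled" if secure_boot_enabled(EFIVAR.read_bytes()) else "DISABLED"
```

On Windows, the equivalent one-liner is the built-in PowerShell cmdlet `Confirm-SecureBootUEFI`. Either way, an enabled state is only meaningful once the relevant patches and revocations for CVE-2025-3052 have been applied.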

Ensuring Financial Data Security In The Quantum Era

Forbes

22-05-2025


Financial market organizations are used to the idea of speculation. Buying undervalued assets to realize value from them at a later point is a well-established strategy. But, on the other side of a very dark mirror, another kind of speculation is stalking even well-established financial players. Bad actors are already exploring the next horizon of cyberattacks with the goal of harvesting encrypted data. Today, the encryption is safe and the data is useless to the thieves. But they speculate that, armed at some point in the future with a 'cryptographically relevant quantum computer' (CRQC), a quantum computer equipped with the right software, they will be able to break the encryption and gain access to the data, with devastating consequences. This point in time is often referred to as Q-Day.

While quantum threats will target industries ranging from power utilities to transportation providers, we believe the financial sector will be near the top of the list of targets for threat actors. The potential gains from stealing money or creating mayhem in the markets will be too appealing to pass up. The good news is that Q-Day is not yet upon us, and there are actions that banking, financial services and insurance (BFSI) companies can take now to prepare for the quantum security threats of the future. It is possible that the quantum era could simply look like business as we know it, with full continuity of operations and management of risks.

For BFSI companies, data security is a significant challenge that gets more difficult with every passing day, even without the looming threat of a CRQC. Today's financial institutions are using more connected devices than ever, and more devices mean more potential backdoors and other vulnerabilities that can be exploited. Additionally, a recent report by the US Department of the Treasury found financial institutions are seeing an increase in more sophisticated, AI-powered phishing and social engineering attacks.
Where sensitive financial data is stored and managed, and how it is transported for transactions, is also cause for concern. Years ago, financial institutions would have hosted their data workloads in their own on-premises data centers. Now, in a highly digitalized financial world, workloads are often distributed across multiple public and private cloud networks, meaning financial institutions have less visibility and control over the security of their data once it leaves their premises. While financial institutions can (and do) encrypt data to protect it as it travels between clouds, they must trust that their service agreements will hold true and that cloud-based data repositories are fully secured to their specifications.

Adding to this challenge are the increasingly stringent (but fragmented) regulatory requirements around data sovereignty and privacy, especially for enterprises with operations in multiple countries. It's not easy to determine the best way to comply with DORA, NIS2, OSFI B-13, CPS 230, NIST CSF 2.0 and the many other standards that are all very similar yet different in their own ways. Even if an enterprise's headquarters isn't subject to a specific standard, that doesn't necessarily mean its satellite operations or its globally distributed customer base aren't affected. Moreover, in today's rapidly evolving geopolitical climate, many BFSI companies want to avoid having their data travel through certain countries, preferring to keep everything, including the people and systems managing their data and devices, within their own jurisdictions.

That's not always easy to do, especially given the industry's high levels of AI usage. The financial services sector is among the most mature when it comes to AI adoption, using the technology for a broad range of applications. For example, AI is being used for fraud detection to identify anomalies and suspicious activities in financial transactions.
According to Mastercard, AI software can boost a bank's fraud detection rates by an average of 20%, and in some instances by up to 300%. Additionally, AI-powered transaction monitoring can help cut 'false positives' (legitimate transactions mistakenly flagged as fraudulent) by more than 85%. But given that most banks don't have the infrastructure in place to build and train their own AI models, where is that AI analysis actually happening? How much sensitive customer data is now being stored and processed off-premises in an AI cloud? How secure are the AI models themselves? Questions like these are enough to keep CTOs, CISOs and risk management teams up at night right now. When AI and quantum computing eventually converge, we can assume BFSI companies will need to adapt even faster to an ever-evolving threat landscape.

With Q-Day looming, the worst thing banks and other financial institutions can do is nothing. Companies can and must take action now, starting today, to protect their financial applications, systems and digitalization investments. Of course, conducting infrastructure assessments and upgrading networks takes time, as well as the right expertise and skills. But even though the word 'quantum' on its own can feel like a major technology leap, companies don't need quantum engineers on staff to set up the systems required to defend against future quantum cybersecurity threats.

They also don't need to do this alone. Experienced IT network partners who have successfully deployed quantum-safe networks for financial institutions have the expertise to guide them through the process of architecting their networking and security technology evolution, every step of the way. The result?
A solid, secure foundation to protect BFSI companies from today's threats and mitigate the risk of the threats still to come, so these institutions can thrive in the era of quantum computing and AI.
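One concrete first step toward that foundation is a cryptographic inventory that flags which primitives a CRQC would actually break: Shor's algorithm defeats today's public-key schemes outright, while Grover's algorithm merely halves the effective strength of symmetric keys and hashes. The sketch below is an illustrative triage of such an inventory, not a compliance tool; the category sets, labels and function names are my own assumptions.

```python
# Public-key schemes fully broken by Shor's algorithm on a CRQC; ciphertext
# harvested today under these is at risk on Q-Day.
SHOR_BROKEN = {"RSA", "ECDSA", "ECDH", "DSA", "DH"}
# Symmetric/hash primitives weakened (security margin roughly halved) by Grover.
GROVER_WEAKENED = {"AES-128", "SHA-256"}
# Larger key/digest sizes, plus NIST's standardized post-quantum algorithms.
QUANTUM_SAFE = {"AES-256", "SHA-384", "ML-KEM", "ML-DSA"}

def classify(algorithm: str) -> str:
    """Return a migration priority for one inventoried algorithm."""
    name = algorithm.upper()
    if name in SHOR_BROKEN:
        return "migrate"   # replace before Q-Day
    if name in GROVER_WEAKENED:
        return "upgrade"   # move to a longer key or digest size
    if name in QUANTUM_SAFE:
        return "keep"
    return "review"        # unknown primitive: assess manually

def inventory_report(algorithms):
    """Summarize an inventory as {algorithm: priority}."""
    return {a: classify(a) for a in algorithms}
```

The "migrate" bucket is where the harvest-now, decrypt-later speculation described above bites, which is why key exchange and signatures, not symmetric ciphers, head most quantum-readiness roadmaps.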
