Latest news with #ApexSecurity


Forbes
09-07-2025
- Business
- Forbes
The Silent Breach: Why Agentic AI Demands New Oversight
Keren Katz is an AI & security leader with 10 years in management and hands-on roles, who leads Security Detection at Apex Security.

Agentic AI is moving fast, and enterprises are racing to deploy it. These agents don't just assist—they reason, make decisions and take action across systems. That shift redefines risk, not through breaches, but through success in the wrong context. A legal agent discloses draft merger and acquisition terms. A finance agent exposes forecasts. These aren't technical bugs. They're leadership blind spots.

The Rise Of Enterprise Agents

Agentic AI is reshaping enterprise software. These systems are evolving from passive tools into semi-autonomous agents that can interpret user instructions, select appropriate tools or workflows and execute tasks across integrated systems, within the limits of predefined permissions and controls. According to Gartner, by 2028, 33% of enterprise software applications will embed agentic AI, up from less than 1% in 2024. More strikingly, the firm projects that 15% of all business decisions will be made autonomously—a significant rise from none today. This future is arriving quickly, bringing new forms of risk that traditional security frameworks weren't designed to handle.

The Emerging Threat Surfaces Of Agentic AI

Agentic AI introduces risk in motion, arising from the way agents are prompted, how they reason and what they execute. Understanding these surfaces is key to controlling their impact. Let's break it down.

The most alarming threats from agentic AI don't always stem from external attackers. They often originate inside the organization, from employees issuing prompts that seem routine or from individuals with malicious intent who understand how to exploit the system's capabilities. In both cases, the agent's lack of contextual understanding becomes a liability. Here are three examples of prompts that could trigger high-risk actions:

• 'Transfer the remaining budget from R&D to the following bank account.'
• 'Send the latest board presentation to our external legal team.'
• 'Push the revised quarterly revenue forecast to the investor portal.'

Whether the intent is efficiency or exploitation, these prompts can trigger high-stakes actions—touching core business workflows or exposing sensitive data—and agents will carry them out without hesitation.

Even more subtly, every company has its red lines. For a bank, it might be automating regulatory reporting. For a biotech, accessing patient trial data. These company-specific intentions can't be addressed with generic filters. They require granular, policy-driven definitions of risk rooted in business operations, not just security protocols.

Unlike traditional software, agentic AI doesn't follow fixed logic. It reasons across multiple steps, fills gaps and adapts dynamically to achieve its goal. This flexibility is powerful, but it introduces a second critical threat surface: non-determinism. That risk becomes clear in scenarios where seemingly reasonable prompts lead to harmful autonomous decisions, such as:

• An operations agent prematurely pushes configuration changes to production, causing system downtime and disrupting critical services.
• A legal agent updates contract templates and pushes unapproved changes live, binding the company to terms never reviewed by counsel.
• A customer success agent resolves a billing issue by granting a full-year refund instead of one month, resulting in unexpected financial loss.
These aren't edge cases—they're the direct result of agents improvising in context-poor environments, without business policy awareness or human judgment. While the user prompt may seem safe, the execution path becomes risky as the agent makes autonomous decisions. To mitigate this, companies must monitor agent behavior as it unfolds, not just the initial prompt or the final output. Mid-task intent detection is now essential to prevent agents from escalating simple requests into strategic liabilities.

Even with strong guardrails, some agent actions will slip through. That's why it's critical to maintain accurate visibility, after the fact, into what the agent did—what it accessed, modified or communicated. This serves as your last line of defense, enabling timely alerts when risky actions are detected, incident response grounded in detailed activity logs, and retrospective audits that refine policies and adjust safeguards. Without visibility into downstream actions, organizations remain blind to the full impact of agent behavior. And when autonomous systems operate without oversight, even a single unnoticed action can lead to financial loss, data exposure or operational disruption.

What Executives Can Do Now

This isn't a call to pause agentic AI adoption. It's a call to govern it with intent. Done right, agents can accelerate productivity, unlock automation and free up human creativity. But to do it safely, leaders need a new strategic playbook.

Work with business units to identify which tasks or processes pose the highest risk if automated. Build intent-detection models that go beyond keywords to understand what the user is actually trying to accomplish. This enables prevention of risky workflows before they occur, and helps surface high-risk user profiles for long-term monitoring.

Don't just evaluate inputs and outputs—intercept the agent's chain of reasoning mid-task. Insert checkpoints, human approvals or escalation triggers in sensitive flows to halt unsafe behavior before it unfolds, and to continuously update the agent's context in line with company policy. (A sketch of this checkpoint pattern follows at the end of this article.)

Treat agent behavior like system activity: log it, monitor it and investigate anomalies. Over time, this data helps refine what 'risky' looks like in your environment, uncovers blind spots and guides how future agent interactions are governed.

Autonomy and safety aren't opposites. By designing policies around intent—not just identity—you can preserve speed while reducing exposure. The goal isn't to slow the agent down. It's to ensure it acts within the boundaries that leadership defines.

The Bottom Line—Lead The Agents Before They Lead You

Agentic AI is reshaping enterprise operations—and it's not slowing down. The imperative isn't to halt innovation, but to ensure agents act safely, reliably and in service of the business. That means governing intent and holding AI to the same standards we expect from people: smart enough to act, but guided by integrity and clear boundaries.
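To make the checkpoint idea concrete, here is a minimal sketch in Python of a policy gate that sits between an agent's reasoning and its tool execution. All names here (AgentAction, POLICY_RULES, the example rules) are hypothetical illustrations of the pattern, not Apex Security's or any vendor's actual implementation:

    # Minimal sketch: every action the agent emits mid-task is logged,
    # checked against company-specific red lines, and escalated to a
    # human approver before it runs. All names are hypothetical.
    import json
    import logging
    from dataclasses import dataclass, field
    from typing import Callable

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("agent-audit")

    @dataclass
    class AgentAction:
        tool: str                     # e.g. "send_email", "transfer_funds"
        args: dict = field(default_factory=dict)

    # Company-specific red lines, expressed as predicates over the action.
    POLICY_RULES: list[tuple[str, Callable[[AgentAction], bool]]] = [
        ("external_data_share",
         lambda a: a.tool == "send_email"
                   and not a.args.get("to", "").endswith("@example.com")),
        ("money_movement",
         lambda a: a.tool == "transfer_funds"),
    ]

    def execute_with_checkpoint(action, run_tool, approve):
        """Log the action; escalate policy hits to a human before running."""
        log.info("agent action: %s", json.dumps(action.__dict__))  # audit trail
        for rule_name, matches in POLICY_RULES:
            if matches(action):
                if not approve(action, rule_name):   # human-in-the-loop gate
                    log.warning("blocked by rule: %s", rule_name)
                    return None
        return run_tool(action)

The placement is the point: the gate evaluates each action as the agent executes, not just the user's original prompt, so a benign-looking request that escalates mid-task still hits the checkpoint.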


Techday NZ
03-06-2025
- Business
- Techday NZ
Tenable to acquire Apex Security, bolstering AI risk control
Tenable has announced its intent to acquire Apex Security to expand its exposure management capabilities within the artificial intelligence (AI) attack surface. The planned acquisition is aimed at incorporating Apex Security's technology into Tenable's exposure management platform as AI adoption accelerates and new cyber risks emerge.

Tenable has previously addressed AI-related security concerns through its Tenable AI Aware product, introduced in 2024, which assists organisations in identifying and assessing AI usage across their operations. The integration of Apex Security's capabilities would allow Tenable to move beyond detection and assessment, enabling organisations to govern AI usage, enforce policies, and control exposure risks for both off-the-shelf and in-house-developed AI systems.

Generative AI and autonomous systems are contributing to a broader and more complex attack surface, exposing organisations to risks such as shadow AI applications, AI-generated code, synthetic identities, and unregulated cloud services. The expansion of Tenable's exposure management offering comes at a time when cyber risk management is adapting to the pace and scale of AI-driven digital transformation.

Steve Vintz, Co-Chief Executive Officer and Chief Financial Officer at Tenable, said: "AI dramatically expands the attack surface, introducing dynamic, fast-moving risks most organisations aren't prepared for. Tenable's strategy has always been to stay ahead of attack surface expansion — not just managing exposures, but eliminating them before they can be exploited."

Mark Thurmond, Co-Chief Executive Officer at Tenable, spoke about the need to address AI risks proactively. He said: "As organisations move quickly to adopt AI, many recognise that now is the moment to get ahead of the risk — before large-scale attacks materialise. Apex delivers the visibility, context, and control security teams need to reduce AI-generated exposure proactively. It will be a powerful addition to the Tenable One platform and a perfect fit for our preemptive approach to cybersecurity."

Apex Security, founded in 2023, has attracted support from Chief Information Security Officers (CISOs) as well as prominent investors such as Sam Altman of OpenAI, Clem Delangue of Hugging Face, and venture capital firms Sequoia Capital and Index Ventures. The company's focus has been on securing AI usage among developers and general staff, helping address policy enforcement, usage management, and compliance challenges linked to AI adoption.

Matan Derman, Chief Executive Officer and Co-Founder of Apex Security, commented on the strategic fit with Tenable. He said: "The AI attack surface is deeply intertwined with everything else organisations are already securing. Treating it as part of exposure management is the most strategic approach. We're excited to join forces with Tenable to help customers manage AI risk in context — not as a silo, but as part of their broader environment."

Following the completion of the acquisition, Tenable expects to begin delivering integrated capabilities as part of the Tenable One platform during the second half of 2025. Tenable describes Tenable One as an exposure management platform which brings together visibility, context, and management for a range of attack surfaces, from IT infrastructure to cloud environments.

The financial terms of the deal have not been disclosed. The transaction is expected to close later this quarter, pending customary approvals and closing conditions.


Channel Post MEA
02-06-2025
- Business
- Channel Post MEA
Tenable Announces Intent to Acquire Apex Security
Tenable has announced its intent to acquire Apex Security, an innovator in securing the rapidly expanding AI attack surface. Tenable believes the acquisition, once completed, will strengthen Tenable's ability to help organizations identify and reduce cyber risk in a world increasingly shaped by artificial intelligence.

Generative AI tools and autonomous systems are rapidly expanding the attack surface and introducing new risks — from shadow AI apps and AI-generated code to synthetic identities and ungoverned cloud services. In 2024, Tenable launched Tenable AI Aware, which already helps thousands of organizations detect and assess AI usage across their environments. Adding Apex capabilities will expand on that foundation — adding the ability to govern usage, enforce policy, and control exposure across both the AI that organizations use and the AI they build. This move reinforces Tenable's long-standing strategy of delivering scalable, unified exposure management as AI adoption accelerates.

'AI dramatically expands the attack surface, introducing dynamic, fast-moving risks most organizations aren't prepared for,' said Steve Vintz, Co-CEO and CFO, Tenable. 'Tenable's strategy has always been to stay ahead of attack surface expansion — not just managing exposures, but eliminating them before they can be exploited.'

'As organizations move quickly to adopt AI, many recognize that now is the moment to get ahead of the risk — before large-scale attacks materialize,' said Mark Thurmond, Co-CEO, Tenable. 'Apex delivers the visibility, context, and control security teams need to reduce AI-generated exposure proactively. It will be a powerful addition to the Tenable One platform and a perfect fit for our preemptive approach to cybersecurity.'

Founded in 2023, Apex attracted early interest from CISOs and top investors, including Sam Altman (OpenAI), Clem Delangue (Hugging Face), and venture capital firms Sequoia Capital and Index Ventures. The company quickly emerged as an innovator in securing the use of AI by developers and everyday employees alike — addressing the growing need to manage usage, enforce policy, and ensure compliance at scale.

'The AI attack surface is deeply intertwined with everything else organizations are already securing. Treating it as part of exposure management is the most strategic approach. We're excited to join forces with Tenable to help customers manage AI risk in context — not as a silo, but as part of their broader environment,' said Matan Derman, CEO and Co-Founder of Apex Security.

Following the acquisition close, Tenable expects to deliver integrated capabilities in the second half of 2025 as part of Tenable One — the industry's first and most comprehensive exposure management platform. The financial terms of the deal were not disclosed. The deal is expected to close later this quarter.


Forbes
21-03-2025
- Business
- Forbes
What Executives Must Know When Harnessing Enterprise AI
Keren Katz is an AI & security leader with 10 years in management and hands-on roles, who leads Security Detection at Apex Security.

Today, almost every enterprise is impacted by generative AI (GenAI). For many, the initial focus was on leveraging GenAI to enhance daily business processes, accelerating content creation, analysis and communication. However, in 2025, the landscape evolved dramatically with the rise of GenAI-powered copilots, agents and enterprise applications—all fueled by organizational data sources. Leading examples include Microsoft 365 Copilot, Gemini for Google Workspace, Slack AI and Notion AI, all designed to seamlessly integrate into business workflows.

Enterprise AI—the use of AI, fueled with enterprise data, to amplify business-critical processes and operations—is reshaping workplace efficiency, making access to internal data faster and more intuitive. Tasks that once took hours or even days—such as creating presentations, analyzing legal documents or making investment decisions—can now be completed in a fraction of the time, allowing employees to focus on high-value tasks that drive business impact.

At Apex, we see the tremendous value enterprise AI users are getting on a daily basis. And this trend is increasing by the day across all industries—from tech to finance and health—and across all company sizes. Yet we also see the tremendous risks: with massive opportunities come even greater risks. The same technology that enables faster, smarter decision making also presents significant security and regulatory challenges. Here are four key risks executives need to address:

Managing and tracking permissions has always been complex, but with the rise of enterprise AI this challenge multiplies exponentially. AI copilots don't inherently distinguish between restricted and accessible data when permission controls are overlooked—which happens more often than expected. Without strong safeguards, sensitive information can be exposed, putting the organization at risk.

Enterprise AI democratizes access to data—but that means curious employees may unknowingly request sensitive information they shouldn't have. In one case that my company observed, engineers and marketers queried an AI copilot for company cash flow reports and financial forecasts—requests that, if granted and shared, could result in catastrophic financial exposure. The risks extend beyond financial data. An employee could query the chat or copilot for access to colleagues' email content, potentially exposing personal information, client communications or executive discussions. If such a request is approved, it could violate employee privacy, breach client agreements and jeopardize strategic plans.

If an attacker compromises even a low-level user's credentials, enterprise AI copilots and applications become an instant threat vector for data leakage. Before enterprise AI, attackers had to move laterally across the network and escalate privileges before accessing sensitive data. With AI copilots, however, a low-level account can simply ask the AI for proprietary information such as financials, legal documents, intellectual property or even critical security credentials that could serve as initial access secrets. A smaller forensic footprint makes detection far more difficult, and a lack of visibility can make it nearly impossible. This significantly lowers the barrier for cyberattacks and increases the speed and efficiency of data theft—sometimes in minutes, before security teams even detect an intrusion.
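To make the permissions risk concrete, here is a minimal sketch in Python of the check a copilot's retrieval layer could perform before a document ever reaches the model's context. The names (Document, acl_allows, search_index) are hypothetical; no specific copilot product works exactly this way:

    # Minimal sketch: enforce the *requesting user's* permissions at
    # retrieval time, so the copilot can only surface what that user
    # could read directly. All names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Document:
        doc_id: str
        text: str
        allowed_groups: frozenset    # ACL carried with the document

    def acl_allows(user_groups: set, doc: Document) -> bool:
        """True only if the user shares at least one group with the doc."""
        return bool(user_groups & doc.allowed_groups)

    def retrieve_for_copilot(query: str, user_groups: set, search_index):
        """Filter retrieval results by the end user's identity, not the
        copilot service account's (which is typically over-privileged)."""
        candidates = search_index.search(query)   # hypothetical index API
        return [d for d in candidates if acl_allows(user_groups, d)]

The design choice that matters is whose identity the filter runs on: if results are filtered by the copilot's own service account, a low-privilege user (or an attacker holding that user's credentials) inherits everything the copilot can see.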
Attackers don't need to breach your network to manipulate AI-generated content. Instead, they can poison AI models or inject false data into the enterprise information that large language models (LLMs) use as context. By compromising enterprise data sources that AI relies on—particularly through retrieval-augmented generation (RAG)—attackers can alter outputs even from outside the network. One method is indirect prompt injection, where something as simple as an email or calendar invite can influence how the AI responds.

The real-world implications of these attacks are significant. Malicious actors can inject harmful links into AI-generated emails, enabling highly sophisticated phishing campaigns. AI can also be manipulated to misinform employees, tricking them into authorizing fraudulent financial transactions—such as in CEO injection attacks. Even critical business documents, including financial models, legal agreements or engineering specifications, can be corrupted by manipulated AI suggestions. If AI-generated responses become untrustworthy, enterprise decision-making collapses, leading to reputational damage, financial losses and serious legal consequences.

According to Gartner, by 2028, "33% of enterprise software applications will incorporate agentic AI, a significant rise from less than 1% in 2024." As AI capabilities advance, autonomous decision-making will increase—and with it, the risk of unintended or harmful actions. For example, AI agents could mistakenly share sensitive presentations with external recipients, leading to data leakage. In financial settings, an AI system might misinterpret a rule and automatically process an incorrect transaction. There is also the risk of rogue AI agents taking destructive actions due to unpredictable, non-deterministic behavior. This growing 'AI autonomy dilemma' will likely be one of the biggest challenges enterprises face in 2025 and beyond.

To harness enterprise AI's power while minimizing risks, enterprises must adopt a proactive, security-first approach. Every enterprise AI transaction—whether through copilots, agents or enterprise applications—should be logged, monitored and auditable to ensure transparency and security. It is essential to implement detection mechanisms that can identify and block malicious AI-generated content before it reaches users. Additionally, enterprises should use AI-specific security solutions to detect and prevent incidents of data exposure and leakage in AI-generated outputs. AI agents should be closely monitored so they cannot execute actions without human verification. For critical operational decisions, enterprises should require multilayered approvals before allowing AI to take action; a minimal sketch of such an approval gate follows this article.

Enterprise AI is not just another trend—in fact, I believe it's the defining technological shift of this decade. As an executive, your role is not just to drive AI adoption but to ensure it scales safely so that the rewards outweigh the risks. By embracing AI with strong security foundations, organizations can better position themselves to maximize AI's potential without compromising trust or compliance.
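As referenced above, here is a minimal sketch in Python of a multilayered approval gate for agent-initiated actions, with an append-only audit trail. The risk tiers, approver roles and function names are hypothetical illustrations of the pattern, not a production design:

    # Minimal sketch: low-risk actions run immediately; higher tiers
    # require one or more human sign-offs; every outcome is audited.
    # Tiers, roles and names are hypothetical.
    import datetime
    import json

    APPROVALS_REQUIRED = {        # risk tier -> human sign-offs needed
        "low": [],
        "medium": ["team_lead"],
        "high": ["team_lead", "security_officer"],
    }

    def audit(event: str, payload: dict) -> None:
        """Append-only audit record; a real system would ship to a SIEM."""
        ts = datetime.datetime.now(datetime.timezone.utc).isoformat()
        print(json.dumps({"ts": ts, "event": event, **payload}))

    def gate_action(action: str, risk_tier: str, ask_human) -> bool:
        """Run the action only after every required approver signs off."""
        audit("agent_action_requested", {"action": action, "tier": risk_tier})
        for role in APPROVALS_REQUIRED[risk_tier]:
            if not ask_human(role, action):   # e.g. a ticket or chat prompt
                audit("agent_action_denied", {"action": action, "by": role})
                return False
        audit("agent_action_executed", {"action": action})
        return True

For example, gate_action("process_refund", "high", ask_human) blocks until both a team lead and a security officer approve, and the audit trail records the request, the decision and the execution either way.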