Latest news with #Faitelson


Techday NZ
10-06-2025
Varonis unveils MCP Server for AI-driven data security tasks
Varonis has announced an MCP (Model Context Protocol) Server, enabling customers to integrate AI tools such as ChatGPT, Claude, and GitHub Copilot with its data security platform.

AI integration capability

The release of the Varonis MCP Server allows users to access and orchestrate the Varonis Data Security Platform using artificial intelligence (AI) clients. Through this capability, customers can extract insights and automate data security tasks by issuing natural language prompts through their preferred AI tools and development environments.

According to Varonis, the MCP Server is designed to function as an AI-agnostic engine, translating simple user instructions into actionable, automated outcomes within the platform. The system can accommodate prompts such as retrieving high-severity security alerts, automating remediation of stale guest accounts, or compiling compliance reports on databases containing sensitive employee information across cloud platforms.

Automation focus

Yaki Faitelson, Co-Founder and Chief Executive Officer of Varonis, emphasised the centrality of automation to the company's approach. Faitelson stated, "Automation is at the heart of everything we do. The Varonis MCP Server marks another leap forward in our agentic AI vision — giving our customers access to Varonis' real-time data security insights and automated remediation from their own AI tools, IDEs, agent builders, and terminals."

With this offering, Varonis aims to give customers the flexibility to use their AI technologies of choice while leveraging the central data security capabilities of its platform. Compatibility with various AI clients is intended to allow integration into diverse workflows and organisational environments.
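The pattern the article describes, an AI client turning a natural-language request into a named tool call that a server executes against the platform, can be sketched in miniature. The tool names, alert data, and request shape below are invented for illustration; they are not the Varonis API or the actual MCP wire format, which uses JSON-RPC and a formal tool schema.

```python
import json

# Hypothetical alert store, standing in for the kind of data the article
# says the server can surface (high-severity alerts, stale accounts, etc.).
ALERTS = [
    {"id": 1, "severity": "high", "summary": "Unusual mass file access"},
    {"id": 2, "severity": "low", "summary": "New device sign-in"},
]

def get_alerts(min_severity: str) -> list:
    """Return alerts at or above the requested severity."""
    rank = {"low": 0, "medium": 1, "high": 2}
    return [a for a in ALERTS if rank[a["severity"]] >= rank[min_severity]]

# Registry mapping tool names to handlers; an MCP-style server advertises
# such tools to the AI client, which decides when to call them.
TOOLS = {"get_alerts": get_alerts}

def handle_tool_call(request_json: str) -> str:
    """Dispatch a JSON tool-call request to the matching handler."""
    request = json.loads(request_json)
    tool = TOOLS[request["tool"]]
    result = tool(**request["arguments"])
    return json.dumps({"result": result})

# An AI client that parsed the prompt "show me high-severity alerts"
# might emit a call like this:
response = handle_tool_call(
    json.dumps({"tool": "get_alerts", "arguments": {"min_severity": "high"}})
)
print(response)
```

The key design point the sketch tries to capture is that the language model never touches the data store directly: it only selects from a fixed menu of typed tools, which is what lets the same server remain "AI-agnostic" across ChatGPT, Claude, Copilot, and other clients.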
Supporting features and vision

Varonis has previously embedded Athena AI within its user interface, and has incorporated agentic AI across automated features of its Data Security Platform — strategies seen by the company as key to advancing AI-powered data protection. These capabilities, the company suggests, improve the ability of organisations to counter data breaches and manage compliance more efficiently. The company indicates that the MCP Server encapsulates the next stage of development for artificial intelligence within its ecosystem, enhancing the precision and automation of security outcomes through accessible, user-driven prompts.

Platform uses and outcomes

The Varonis Data Security Platform is deployed by thousands of organisations worldwide, according to company statements. Clients utilise the system for tasks such as data security posture management, data classification, data access governance, data detection and response, data loss prevention, AI security, identity protection, and insider risk management.

Varonis reports that the combination of MCP Server capabilities and its AI-driven features is designed to strengthen its customers' capacity to protect sensitive information across environments ranging from software-as-a-service (SaaS) and infrastructure-as-a-service (IaaS) to hybrid cloud implementations.

No pricing or detailed rollout information for the MCP Server was provided. Customers can now access the service, though specifics on the trial process or availability in various markets were not mentioned.


Techday NZ
20-05-2025
AI tools expose sensitive data at 99% of organisations
A report from Varonis has found that 99% of organisations have sensitive data exposed to artificial intelligence tools due to security shortcomings. The State of Data Security Report: Quantifying AI's Impact on Data Risk examined the data risk landscape in 1,000 real-world IT environments, focusing on how AI-driven technology may amplify the vulnerability of sensitive information. The findings suggest that widespread issues such as misconfigurations, overly permissive access, and other data security gaps are contributing to the exposure of confidential data.

"The productivity gains of AI are real — and so is the data security risk," said Varonis Chief Executive, President, and Co-Founder Yaki Faitelson. "CIOs and CISOs face enormous pressure to adopt AI at warp speed, which is driving the adoption of data security platforms."

"AI runs on data, and taking a data-centric approach to security is critical to avoid an AI-related data breach," Faitelson continued.

Varonis conducted its analysis by assessing data from nearly 10 billion cloud resources, spanning more than 20 petabytes, across commonly used infrastructure-as-a-service and software-as-a-service applications. These included AWS, Microsoft Azure, Google Cloud, Box, Salesforce, Microsoft 365, Okta, Databricks, Slack, Snowflake, and Zoom, among others.

The report found that 99% of organisations surveyed had sensitive data unnecessarily exposed to AI tools. Moreover, 90% of sensitive cloud data, including data used for AI training, was open and accessible to AI-powered tools, raising concerns about the potential for unintended data leakage. The report also revealed that 98% of organisations had unverified applications, including instances of so-called shadow AI, within their environments. This means that unauthorised or unmanaged AI applications are operating in the background, potentially increasing the risk of data breaches and compliance failures.
Another key finding highlighted that one in seven organisations did not enforce multi-factor authentication across their SaaS and multi-cloud environments. Without multi-factor authentication, organisations may be more susceptible to unauthorised access and related risks.

The analysis further noted that 88% of organisations had ghost users—accounts that are no longer in active use but have not been de-provisioned—lingering in their environments. If left unchecked, such accounts can provide an entry point for cybercriminals.

Varonis stated that the study's empirical approach sets it apart: it was based on the analysis of active organisational environments rather than self-reported surveys about AI readiness. This method provides a more accurate reflection of the current state of cloud and data security risks associated with AI adoption.

The increasing drive for AI-enabled productivity is evident in IT environments, but the report points out that many organisations may not have implemented the necessary controls for safeguarding sensitive information. The findings suggest that a technical and policy focus on closing security gaps and reducing unnecessary data exposure is required to mitigate the potential risks.
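The ghost-user check the report describes is straightforward to express concretely: flag accounts that are still enabled but show no activity inside some idle window. A minimal sketch follows; the account records and 90-day threshold are invented for illustration, and a real audit would pull this data from an identity provider or directory service rather than an in-memory list.

```python
from datetime import date, timedelta

# Hypothetical account records; fields mirror what a directory audit
# would typically expose (enabled flag plus last-activity timestamp).
ACCOUNTS = [
    {"user": "alice", "enabled": True, "last_login": date(2025, 5, 1)},
    {"user": "contractor-old", "enabled": True, "last_login": date(2024, 1, 15)},
    {"user": "bob-departed", "enabled": False, "last_login": date(2023, 6, 2)},
]

def find_ghost_users(accounts, today, max_idle_days=90):
    """Flag enabled accounts with no activity inside the idle window."""
    cutoff = today - timedelta(days=max_idle_days)
    return [
        a["user"]
        for a in accounts
        if a["enabled"] and a["last_login"] < cutoff
    ]

ghosts = find_ghost_users(ACCOUNTS, today=date(2025, 6, 10))
print(ghosts)  # flags contractor-old: still enabled, idle for over a year
```

Note that the disabled account is not flagged: the risk the report highlights comes specifically from accounts that remain enabled, and therefore usable as an entry point, after their activity has stopped.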