Latest news with #Varonis
Yahoo
4 hours ago
- Business
- Yahoo
Varonis (VRNS) Unlocks Real-Time Data Security with AI-Powered MCP Server
Varonis Systems, Inc. (NASDAQ:VRNS), a leader in data security and threat detection, has introduced the Varonis Model Context Protocol (MCP) Server, a new interface that allows customers to connect their preferred AI tools directly to the Varonis Data Security Platform. The launch marks a significant step in enabling real-time, AI-driven access to enterprise data security operations.

With the MCP Server, customers can use natural language prompts through AI clients such as ChatGPT, Claude, and GitHub Copilot to query data posture, trigger remediation, and streamline compliance tasks. The server enables users to carry out complex operations with simple instructions: for example, retrieving recent high-severity alerts, updating ServiceNow tickets, or running cleanup scripts to remove inactive guest accounts.

'Automation is at the heart of everything we do,' said Yaki Faitelson, Co-Founder and CEO of Varonis. 'The Varonis MCP Server marks another leap forward in our agentic AI vision—giving customers access to Varonis' real-time data security insights and automated remediation from their own AI tools, IDEs, agent builders, and terminals.'

By embedding Athena AI in its platform and supporting cross-platform automation, Varonis continues to expand its role in modern data protection. The MCP Server furthers the company's mission to deliver secure, intelligent infrastructure that helps organizations proactively defend sensitive information and reduce compliance burdens in complex cloud environments. While we acknowledge the potential of VRNS to grow, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns with more limited downside risk.
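Under the hood, natural-language requests like "show me recent high-severity alerts" resolve to tool calls over the Model Context Protocol, which is JSON-RPC 2.0 based. The sketch below shows the generic shape of an MCP `tools/call` request; the tool name `get_alerts` and its arguments are hypothetical illustrations, not Varonis' published tool schema.

```python
import json

def build_mcp_tool_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 `tools/call` request, the message type an
    MCP client sends to invoke a server-side tool. The envelope shape
    follows the MCP specification; the tool itself is illustrative."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical example: an AI client asking a security MCP server
# for high-severity alerts from the last seven days.
request = build_mcp_tool_call("get_alerts", {"severity": "high", "days": 7})
print(json.dumps(request, indent=2))
```

In practice the AI client (ChatGPT, Claude, Copilot, etc.) generates such calls automatically from the user's prompt; the value of the MCP layer is that any compliant client can drive any compliant server without bespoke integration code.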
If you are looking for an AI stock that is more promising than VRNS and has 100x upside potential, check out our report about the cheapest AI stock. Disclosure: None.
Yahoo
a day ago
- Business
- Yahoo
The Top 5 Analyst Questions From Varonis's Q1 Earnings Call
Varonis delivered first quarter results that exceeded Wall Street's expectations, with management attributing the performance to accelerating adoption of its SaaS-based data security platform and continued expansion among both new and existing customers. CEO Yaki Faitelson noted that the company's SaaS transition was 'well on track to complete by the end of the year,' and highlighted strong demand for automated data protection solutions as a key growth driver. Management also credited ongoing success in converting legacy customers to SaaS and the launch of offerings tailored for new use cases in cloud and hybrid environments. Is now the time to buy VRNS? Find out in our full research report (it's free).

- Revenue: $136.4 million vs. analyst estimates of $133.4 million (19.6% year-on-year growth, 2.3% beat)
- Adjusted EPS: $0.01 vs. analyst estimates of -$0.05 (significant beat)
- Adjusted Operating Income: -$6.46 million vs. analyst estimates of -$11.98 million (-4.7% margin, 46.1% beat)
- Full-year revenue guidance: reconfirmed at $617.5 million at the midpoint
- Full-year Adjusted EPS guidance: raised to $0.16 at the midpoint, a 3.3% increase
- Operating Margin: -32.1%, up from -41.8% in the same quarter last year
- Annual Recurring Revenue: $664.3 million at quarter end, up 18.6% year on year
- Billings: $140.4 million at quarter end, up 19.4% year on year
- Market Capitalization: $5.7 billion

While we enjoy listening to management's commentary, our favorite part of earnings calls is the analyst questions. Those are unscripted and can often highlight topics that management teams would rather avoid, or where the answer is complicated. Here is what caught our attention.

Matt Hedberg (RBC) asked about confidence in achieving over 20% ARR growth. CFO Guy Melamed pointed to strong SaaS net retention rates, emphasizing that SaaS customers are 'coming back and buying more,' driving renewed growth.
Joel Fishbein (Truist Securities) inquired about MDDR adoption and competitive positioning. CEO Yaki Faitelson described MDDR as essential for automated data breach prevention, highlighting rapid customer adoption and its role in expanding upsell opportunities.

Saket Kalia (Barclays) questioned when margins would normalize as the SaaS transition progresses. Melamed explained that the operating margin trough is expected this year, with gradual normalization as revenue recognition stabilizes post-transition.

Roger Boyd (UBS) asked about new customer adoption of cloud and AI-focused products. Faitelson noted increasing demand for data security in SaaS and cloud repositories, especially as customers deploy AI tools that amplify data exposure risks.

Jason Ader (William Blair) sought details on the gross margin outlook and the wide non-GAAP operating income range. Melamed explained that the SaaS transition creates short-term volatility, but cost structure improvements are ahead of expectations, with margin normalization expected after the transition completes.

In coming quarters, the StockStory team will monitor (1) the pace of the SaaS transition and mix progression toward 80% of recurring revenue, (2) adoption and upsell of new solutions like database activity monitoring and Agentic AI protection, and (3) improvement in operating margins as the revenue base stabilizes post-transition. Continued customer expansion into cloud and AI use cases will also be an important indicator of execution.

Varonis currently trades at $50.94, up from $44.27 just before the earnings. Is there an opportunity in the stock? Find out in our full research report (it's free).

Donald Trump's victory in the 2024 U.S. Presidential Election sent major indices to all-time highs, but stocks have retraced as investors debate the health of the economy and the potential impact of tariffs.
While this leaves much uncertainty around 2025, a few companies are poised for long-term gains regardless of the political or macroeconomic climate, like our Top 6 Stocks for this week. This is a curated list of our High Quality stocks that have generated a market-beating return of 183% over the last five years (as of March 31st, 2025). Stocks that made our list in 2020 include now-familiar names such as Nvidia (+1,545% between March 2020 and March 2025) as well as under-the-radar businesses like the once-small-cap company Exlservice (+354% five-year return). Find your next big winner with StockStory today.


Forbes
a day ago
- Forbes
Windows Warning Issued As Printers Used In New Hack Attacks
Hackers are using printers to attack Windows devices. Nobody should be surprised by now at the ingenuity of threat actors looking to hack your accounts and devices. I have recently reported on how SMS attackers can strike without knowing your phone number using the SMS Blaster machine, how a smartwatch can be used to hack even highly secure air-gapped networks, and how even Windows secure boot protections can be bypassed. What might come as a surprise, however, is the news that a new and ongoing attack campaign is enlisting the help of your printer to hack your Windows systems. Here's what you need to know.

Windows Users Warned As Microsoft 365 Direct Send Hackers Deploy Printers To Attack

A new report by the Varonis Managed Data Detection and Response Forensics team has confirmed an ongoing threat campaign that has already targeted at least 70 organizations, the vast majority of them based in the U.S. The attackers use on-premises devices such as printers to exploit a little-known Microsoft 365 feature called Direct Send, which allows devices such as printers and scanners to send email without any authentication. I mean, what could possibly go wrong? Quite a lot, as it happens.

'Threat actors are abusing the feature to spoof internal users and deliver phishing emails without ever needing to compromise an account,' Tom Barnea, a forensics specialist at Varonis, said. The as-yet-unnamed hackers used the Direct Send function to target predominantly U.S. organizations with malicious messages that are 'subject to less scrutiny compared to standard inbound email,' according to Barnea. The Varonis investigation concluded that the campaign appears to have started in May 2025, with 'consistent activity over the past two months.'
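Because Direct Send messages spoof an internal sender while actually originating outside the tenant, they tend to leave telltale authentication failures in the message headers. The sketch below is a minimal, illustrative triage check using Python's standard `email` module; the domain and sample headers are made up, and real-world detection would also weigh DKIM/DMARC results and source IP reputation, not SPF alone.

```python
from email import message_from_string
from email.utils import parseaddr

ORG_DOMAIN = "example.com"  # assumed internal domain for illustration

def looks_like_direct_send_spoof(raw_message: str) -> bool:
    """Flag a message whose From address claims the internal domain
    while SPF validation failed -- a pattern consistent with the
    unauthenticated Direct Send abuse described in the article."""
    msg = message_from_string(raw_message)
    _, sender = parseaddr(msg.get("From", ""))
    from_internal = sender.lower().endswith("@" + ORG_DOMAIN)
    auth = msg.get("Authentication-Results", "").lower()
    spf_failed = "spf=fail" in auth or "spf=softfail" in auth
    return from_internal and spf_failed

# Fabricated sample headers resembling a spoofed "internal" phish.
sample = (
    "From: ceo@example.com\n"
    "To: finance@example.com\n"
    "Authentication-Results: spf=fail (sender IP is 203.0.113.7)\n"
    "Subject: Urgent wire transfer\n\n"
    "Please review the attached invoice.\n"
)
print(looks_like_direct_send_spoof(sample))  # prints True
```

The point of the sketch is the asymmetry the attackers rely on: the From header is free to lie, but the receiving infrastructure's authentication verdicts are recorded in headers the attacker does not control.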
Mitigating The Windows Printer Attack

To mitigate the Microsoft 365 Direct Send attacks, Varonis recommends that organizations adopt a set of hardening measures. Microsoft, meanwhile, said that most Microsoft 365 and Windows customers don't need to use the Direct Send feature, and that it is working on an option to disable it by default to protect customers. 'We recommend Direct Send only for advanced customers willing to take on the responsibilities of email server admins,' Microsoft concluded.


Techday NZ
17-06-2025
- Business
- Techday NZ
Varonis boosts ChatGPT Enterprise security with compliance tools
Varonis has announced the integration of its Data Security Platform with the OpenAI ChatGPT Enterprise Compliance API, aiming to provide enhanced data protection and compliance monitoring for enterprise users of ChatGPT. The integration is designed to help organisations using ChatGPT Enterprise automatically identify sensitive data uploads, monitor the content of prompts and responses, and mitigate the risks of data breaches and compliance violations.

ChatGPT Enterprise currently serves over 3 million business users, offering productivity tools that are enhanced by access to organisational data. As these AI models become more embedded in daily workflows, maintaining strict data governance becomes increasingly important for companies managing sensitive or regulated information.

Expanded security measures

The Varonis integration is intended to offer added protection against risks such as compromised accounts, insider threats, and accidental misuse, all of which can result in data security problems or regulatory penalties. The platform supports ongoing adjustment of user permissions and continuously monitors interactions within ChatGPT to limit unnecessary data flows and alert security teams to potentially risky or abnormal behaviours.

"ChatGPT is becoming a critical part of how modern teams work. With Varonis, security teams can embrace this shift without losing visibility or control over their sensitive data," said Varonis EVP of Engineering and Chief Technology Officer David Bass.

Through its partnership with OpenAI, Varonis delivers both automated security protocols and 24/7 data monitoring, allowing organisations to adopt artificial intelligence-based solutions while maintaining their obligations around privacy and data protection.

Key functions

The new offering brings several technical capabilities with a focus on automation and real-time oversight.
Automated data classification allows Varonis to detect and label sensitive materials that are either uploaded to or generated by ChatGPT Enterprise. Continuous session monitoring ensures that any prompt or response within the ChatGPT environment is reviewed for compliance, preventing inappropriate or risky data from being uploaded or shared inadvertently. The platform also uses behaviour-based threat detection to flag unusual activity, such as large-scale file uploads or unauthorised changes to administrative access, which could indicate a potential breach.

Focus on compliance and privacy

The integration is positioned to offer both preventative and detective controls for AI-powered environments. These measures aim to ensure that users maximise the operational value of AI tools, such as ChatGPT, while minimising the risks associated with data exposure. The Varonis solution is described as complementing existing OpenAI security and privacy controls, rather than replacing them. This approach enables organisations to deploy generative AI models more confidently, even in regulated sectors or areas handling highly confidential information.

Availability and assessment

Customers will have access to Varonis for ChatGPT Enterprise in a private preview phase. As part of this launch, organisations can request a Varonis Data Risk Assessment, which reviews current practices and assesses an organisation's readiness for adopting AI in a secure and compliant way.

Varonis continues to develop its portfolio of integrations and security tools as part of its core offering. The Data Security Platform sees application across numerous cloud environments, with a focus on automating security outcomes, data detection and response, data loss prevention, and insider risk management.
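To make the classification step above concrete, here is a deliberately simple sketch of screening prompt text for sensitive patterns before it reaches an AI service. The patterns and labels are assumptions for demonstration only; production classifiers (Varonis' included) rely on far richer detection such as checksums, context, and trained models rather than bare regexes.

```python
import re

# Illustrative patterns only -- not a real classifier's rule set.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def classify_prompt(text: str) -> list[str]:
    """Return labels of sensitive patterns found in a prompt, loosely
    mimicking the kind of upload screening described above."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]

print(classify_prompt("Customer SSN is 123-45-6789, card 4111 1111 1111 1111"))
```

A real deployment would act on these labels (block the upload, redact the match, or alert a security team) rather than merely reporting them.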
Yahoo
11-06-2025
- Business
- Yahoo
Everyone's using AI at work. Here's how companies can keep data safe
Companies across industries are encouraging their employees to use AI tools at work. Their workers, meanwhile, are often all too eager to make the most of generative AI chatbots like ChatGPT. So far, everyone is on the same page, right? There's just one hitch: How do companies protect sensitive company data from being hoovered up by the same tools that are supposed to boost productivity and ROI? After all, it's all too tempting to upload financial information, client data, proprietary code, or internal documents into your favorite chatbot or AI coding tool, in order to get the quick results you want (or that your boss or colleague might be demanding).

In fact, a new study from data security company Varonis found that shadow AI—unsanctioned generative AI applications—poses a significant threat to data security, with tools that can bypass corporate governance and IT oversight, leading to potential data leaks. The study found that nearly all companies have employees using unsanctioned apps, and nearly half have employees using AI applications considered high-risk.

For information security leaders, one of the key challenges is educating workers about what the risks are and what the company requires. They must ensure that employees understand the types of data the organization handles—ranging from corporate data like internal documents, strategic plans, and financial records, to customer data such as names, email addresses, payment details, and usage patterns. It's also critical to communicate how each type of data is classified—for example, whether it is public, internal-only, confidential, or highly restricted. Once this foundation is in place, clear policies and access boundaries must be established to protect that data accordingly.

'What we have is not a technology problem, but a user challenge,' said James Robinson, chief information security officer at data security company Netskope.
The goal, he explained, is to ensure that employees use generative AI tools safely—without discouraging them from adopting approved technologies. 'We need to understand what the business is trying to achieve,' he added. Rather than simply telling employees they're doing something wrong, security teams should work to understand how people are using the tools, to make sure the policies are the right fit—or whether they need to be adjusted to allow employees to share information appropriately.

Jacob DePriest, chief information security officer at password protection provider 1Password, agreed, saying that his company is trying to strike a balance with its policies: to encourage AI usage while educating so that the right guardrails are in place. Sometimes that means making adjustments. For example, the company released a policy on the acceptable use of AI last year as part of its annual security training. 'Generally, it's this theme of "Please use AI responsibly; please focus on approved tools; and here are some unacceptable areas of usage."' But the way it was written caused many employees to be overly cautious, he said.

'It's a good problem to have, but CISOs can't just focus exclusively on security,' he said. 'We have to understand business goals and then help the company achieve both business goals and security outcomes as well. I think AI technology in the last decade has highlighted the need for that balance. And so we've really tried to approach this hand in hand between security and enabling productivity.'

But companies that think banning certain tools is a solution should think again. Brooke Johnson, SVP of HR and security at Ivanti, said her company found that among people who use generative AI at work, nearly a third keep their AI use completely hidden from management.
'They're sharing company data with systems nobody vetted, running requests through platforms with unclear data policies, and potentially exposing sensitive information,' she said in a message. The instinct to ban certain tools is understandable but misguided, she said. 'You don't want employees to get better at hiding AI use; you want them to be transparent so it can be monitored and regulated,' she explained. That means accepting the reality that AI use is happening regardless of policy, and conducting a proper assessment of which AI platforms meet your security standards. 'Educate teams about specific risks without vague warnings,' she said. Help them understand why certain guardrails exist, she suggested, while emphasizing that the approach is not punitive. 'It's about ensuring they can do their jobs efficiently, effectively, and safely.'

Think securing data in the age of AI is complicated now? AI agents will up the ante, said DePriest. 'To operate effectively, these agents need access to credentials, tokens, and identities, and they can act on behalf of an individual—maybe they have their own identity,' he said. 'For instance, we don't want to facilitate a situation where an employee might cede decision-making authority over to an AI agent, where it could impact a human.' Organizations want tools that help them learn faster and synthesize data more quickly, but ultimately humans need to make the critical decisions, he explained.

Whether it is the AI agents of the future or the generative AI tools of today, striking the right balance between enabling productivity gains and doing so in a secure, responsible way may be tricky. But experts say every company is facing the same challenge, and meeting it is going to be the best way to ride the AI wave. The risks are real, but with the right mix of education, transparency, and oversight, companies can harness AI's power without handing over the keys to their kingdom.
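The classification tiers security leaders are urged to communicate (public, internal-only, confidential, highly restricted) translate naturally into a machine-enforceable upload policy. The sketch below is a minimal illustration; the tier ordering and the policy ceiling are assumptions for demonstration, not any vendor's actual rules.

```python
# Hypothetical policy: which classification tiers may be sent to an
# external generative AI tool. The tier names come from the article;
# the ranking and ceiling are illustrative assumptions.
TIER_RANK = {
    "public": 0,
    "internal-only": 1,
    "confidential": 2,
    "highly restricted": 3,
}
MAX_TIER_FOR_AI = "internal-only"  # assumed policy ceiling

def upload_allowed(doc_tier: str) -> bool:
    """Allow an upload only if the document's classification tier is
    at or below the policy ceiling for external AI tools."""
    return TIER_RANK[doc_tier] <= TIER_RANK[MAX_TIER_FOR_AI]

print(upload_allowed("public"))        # prints True
print(upload_allowed("confidential"))  # prints False
```

Encoding the policy this way makes the guardrail auditable and adjustable: raising or lowering the ceiling is a one-line change rather than a retraining exercise for employees.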