Securing SaaS In The Age Of AI: What CISOs Need To Know
Galit Lubetzky Sharon was Head of the Strategic Center of the IDF's Cyber Defense Division and is now the Co-Founder & CEO of Wing Security.
AI is everywhere. It's driving productivity, accelerating workflows and powering SaaS tools for every department. But while AI tools make life easier for teams, they also open new avenues for attackers. The unpleasant truth is that the security implications of AI are growing fast.
CISOs and security teams need to understand where these risks are emerging and get ahead of them fast.
Shadow AI is the new shadow IT.
AI-powered apps are entering your SaaS stack, often without approval from your security team. Tools that seem harmless, such as writing assistants, meeting notetakers or document summarizers, can plug directly into your SaaS environment and access sensitive data.
Some of these tools request broad access to emails, file storage or chat platforms. Others quietly collect user inputs. If they are operating outside of monitored processes, they increase your organization's exposure, and you won't even know about it. Make sure you know if the apps in your stack utilize AI and understand the potential risks of that exposure.
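As a starting point, many identity providers let you export the list of third-party apps connected to your environment along with the scopes they were granted. Here is a minimal sketch of triaging such an export to surface AI-powered apps with broad access; the app names, the `uses_ai` flag and the scope strings are hypothetical examples, not a real provider's schema.

```python
# Hypothetical broad scopes covering email, file storage and chat.
RISKY_SCOPES = {
    "mail.read",
    "mail.readwrite",
    "files.readwrite.all",
    "chat.readwrite",
}

def flag_risky_apps(apps):
    """Return (name, scopes) for AI apps holding at least one broad scope."""
    flagged = []
    for app in apps:
        risky = RISKY_SCOPES & {s.lower() for s in app["scopes"]}
        if app.get("uses_ai") and risky:
            flagged.append((app["name"], sorted(risky)))
    return flagged

# Hypothetical export of connected apps.
apps = [
    {"name": "MeetingNotetakerX", "uses_ai": True,
     "scopes": ["Chat.ReadWrite", "Mail.Read"]},
    {"name": "CalendarSync", "uses_ai": False,
     "scopes": ["Calendars.Read"]},
]
print(flag_risky_apps(apps))
```

A real inventory would pull this data via your identity provider's API or an SSPM tool rather than a static list, but the triage logic is the same: cross-reference AI usage with the breadth of access granted.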
AI integrations can go from access to exploitation.
AI tools often require deep access, including admin-level permissions, API keys or OAuth tokens.
Once granted, this access is hard to track and even harder to revoke. If a connected AI tool is compromised, the attacker also inherits its permissions. A single compromised integration can become a foothold into your SaaS ecosystem and allow attackers to move laterally from there. This is why it's so important to be aware of the permissions granted to AI apps and monitor to ensure those permissions are removed when no longer needed.
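One practical way to monitor this is a periodic stale-grant check: any OAuth grant that has gone unused past a cutoff gets queued for review and revocation. The sketch below assumes you can pull a last-used timestamp per grant from your identity provider's audit logs; the grant IDs, field names and 90-day cutoff are illustrative assumptions, not a specific product's API.

```python
from datetime import datetime, timedelta

# Assumed review policy: flag grants unused for 90 days.
STALE_AFTER = timedelta(days=90)

def stale_grants(grants, now):
    """Return IDs of grants whose last recorded use is older than the cutoff."""
    cutoff = now - STALE_AFTER
    return [g["id"] for g in grants if g["last_used"] < cutoff]

# Hypothetical audit-log data.
now = datetime(2025, 6, 1)
grants = [
    {"id": "grant-ai-summarizer", "last_used": datetime(2025, 1, 15)},
    {"id": "grant-crm-sync", "last_used": datetime(2025, 5, 20)},
]
print(stale_grants(grants, now))
```

Flagged grants still need a human decision, since some integrations run infrequently by design, but the check turns "hard to track" access into a recurring review queue.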
Weak privacy laws create long-term exposure.
AI privacy regulations are still evolving in many regions. As a result, vendors have broad leeway in how they collect, process and store your company's data.
Without strong legal protections or vendor transparency, sensitive internal information shared with AI tools can end up being stored, reused or even incorporated into training datasets whose outputs reach your competitors. This means your product road map, brand terminology or financial models could become part of someone else's model training process. It's important to assess your AI vendor's data policy to make sure it aligns with your company policy.
AI is helping attackers move faster.
On top of the risks discussed above, attackers are also using AI to scale and enhance their attacks. From tailored phishing emails to automating credential stuffing across multiple platforms, AI has lowered the barrier for launching large-scale identity-based attacks and increased their success rate.
These attacks are more efficient, are harder to detect and often mimic legitimate activity with alarming accuracy. What used to be one-off attacks can now be executed at scale with minimal effort. Just as AI is accelerating your work, it is accelerating breaches. There is no time to wait for an airtight security policy around AI. The time to implement strategies and tools is now.
Can you have safe AI in your organization?
AI adoption is not slowing down, and simply avoiding AI is neither realistic nor the goal. What you can do is focus on visibility, control and consistent enforcement.
You can only secure what you can see. Identify all AI-powered tools in use across your organization, including embedded features and third-party integrations. A strong SaaS security posture management (SSPM) solution can help uncover what might otherwise go undetected.
AI tools often request more access than they actually need to serve their intended purpose. Review access scopes closely and apply least privilege policies. Pay attention to any tool requesting access to documents, calendars, messaging platforms or admin-level functions. When in doubt, reject.
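A least-privilege review can be made default-deny: compare the scopes an app requests against an approved allowlist, and reject anything extra unless there is a documented justification. The sketch below illustrates that policy; the allowlist contents and scope names are hypothetical placeholders for your own approved set.

```python
# Hypothetical allowlist of pre-approved, low-risk scopes.
APPROVED_SCOPES = {"calendars.read", "user.read"}

def review_request(requested_scopes):
    """Return (decision, unapproved_scopes); reject if anything is unapproved."""
    extra = sorted({s.lower() for s in requested_scopes} - APPROVED_SCOPES)
    return ("reject" if extra else "approve", extra)

print(review_request(["User.Read", "Calendars.Read"]))       # approved
print(review_request(["User.Read", "Files.ReadWrite.All"]))  # rejected
```

The design choice here is that the burden of proof sits with the requesting app, which operationalizes "when in doubt, reject."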
Most employees want to do the right thing but might not understand the risks. Provide practical, easy-to-follow guidelines and training. Do not assume that employees are reading memos or organization-wide emails.
Any tool that processes your company's data is a vendor and should be vetted accordingly. This means conducting risk assessments, reviewing how data is handled and requiring security controls and adherence to compliance standards.
Achieve a safe AI reality.
With AI, the risks are getting more complex, but SaaS security can still be controlled.
My advice is not to fear AI, but to approach it with a clear strategy. By understanding the risks, establishing clear policies and implementing the right tools, you can enable productivity and innovation without compromising your security.
The threat landscape is changing. Is your SaaS security agile enough to change with it?