01-07-2025
You Or Your Providers Are Using AI—Now What?
Jason Vest, CTO, Binary Defense.
The rise of generative and agentic AI has fundamentally changed how enterprises approach risk management, software procurement, operations and security. But many companies still treat AI tools like any other software-as-a-service (SaaS) product, rushing to deploy them without fully understanding what they do—or how they expose the business.
Whether you're licensing a chatbot, deploying an AI-powered analytics platform or integrating large language model (LLM) capabilities into your workflows, the moment your organization becomes a consumer of AI, you inherit security, privacy and operational risks that are often opaque and poorly documented. These risks are being actively exploited, particularly by state-sponsored actors targeting sensitive enterprise data through exposed or misused AI interfaces.
Not All AI Is The Same: Know What You're Buying
Procurement teams often treat all AI as a monolith. But there's a world of difference between generative AI (GenAI), which produces original content based on inputs, and agentic AI, which takes autonomous actions based on goals. For example, GenAI might assist a marketing team by drafting a newsletter based on a prompt, while agentic AI could autonomously decide which stakeholder to contact or determine the appropriate remediation action in a security operations center (SOC).
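For readers who want to see that difference in miniature, the sketch below contrasts the two patterns in plain Python. The model function returns canned text instead of calling a real API, and the isolate_host action is hypothetical; the point is only that the generative pattern ends with a human decision, while the agentic pattern executes one.

```python
# Simplified contrast between generative and agentic patterns.
# model() returns canned text in place of a real text-generation API call.

def model(prompt: str) -> str:
    """Placeholder for any text-generation model."""
    if "newsletter" in prompt:
        return "Subject: What's new in our Q3 release..."
    return "isolate_host WS-042"


# Generative pattern: the model produces content; a human reviews and acts on it.
draft = model("Draft a newsletter announcing our Q3 release")
print("Draft for human review:", draft)


# Agentic pattern: the model's output is parsed and executed automatically.
def isolate_host(host: str) -> None:
    print(f"Firewall rule applied; {host} is now isolated")


ACTIONS = {"isolate_host": isolate_host}

decision = model("Malware detected on WS-042. Pick an action: isolate_host <host>")
action_name, target = decision.split()
if action_name in ACTIONS:
    ACTIONS[action_name](target)  # the system acts with no human in the loop
```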
Each type brings its own risks. Generative models can leak sensitive data if inputs or outputs are not properly controlled, while agentic systems can be manipulated or misconfigured into taking damaging actions, sometimes without human oversight.
Before integrating any AI tool, companies need to ask fundamental questions: What data will be accessed, and where could it be exposed? Is this system generating content, or is it taking action on its own? That distinction should guide every aspect of your risk assessment.
Security Starts With Understanding
Security professionals are trained to ask, 'What is this system doing? What data does it touch? Who can interact with it?' Yet, when it comes to AI, we often accept a black box.
Every AI-enabled application your company uses should be inventoried (a minimal sketch of such an inventory record follows this list). For each one, you need to know:
• What kind of AI is being used (e.g., generative or agentic)?
• What data was used to develop the underlying model, and what controls are in place to ensure accuracy?
• Where is the model hosted (e.g., on-premises, in a vendor-controlled environment or in the public cloud)?
• What data is being ingested?
• What guardrails are in place to prevent abuse, leakage or hallucination?
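For teams that track assets in code or configuration, here is a minimal sketch of what one inventory record might look like. It is illustrative only, assuming a simple Python structure; the field names (ai_type, hosting, guardrails and so on) are hypothetical, not a standard schema.

```python
# A minimal, illustrative sketch of an AI application inventory record.
# Field names and values are hypothetical; adapt them to your own
# asset-management or GRC tooling.
from dataclasses import dataclass, field
from enum import Enum


class AIType(Enum):
    GENERATIVE = "generative"   # produces content from prompts
    AGENTIC = "agentic"         # takes autonomous actions toward goals


class Hosting(Enum):
    ON_PREMISES = "on-premises"
    VENDOR_CONTROLLED = "vendor-controlled"
    PUBLIC_CLOUD = "public-cloud"


@dataclass
class AIApplicationRecord:
    name: str
    vendor: str
    ai_type: AIType
    hosting: Hosting
    training_data_notes: str                                  # what data built the model, accuracy controls
    data_ingested: list[str] = field(default_factory=list)    # classes of data sent to it
    guardrails: list[str] = field(default_factory=list)       # abuse/leakage/hallucination controls


# Example entry for a hypothetical vendor chatbot.
inventory = [
    AIApplicationRecord(
        name="Support Chatbot",
        vendor="ExampleVendor",
        ai_type=AIType.GENERATIVE,
        hosting=Hosting.VENDOR_CONTROLLED,
        training_data_notes="Vendor-trained; accuracy controls undocumented",
        data_ingested=["customer names", "ticket text"],
        guardrails=["prompt filtering", "output logging"],
    )
]

# Flag records whose answers to the questions above are incomplete.
for record in inventory:
    if not record.guardrails or "undocumented" in record.training_data_notes:
        print(f"Review needed: {record.name} ({record.vendor})")
```

In practice, the same fields can live in a GRC platform or configuration management database; the point is that every question above has a named, reviewable answer.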
NIST's AI Risk Management Framework and SANS' recent guidance offer excellent starting points for implementing the right security controls. But at a baseline, companies must treat AI like any other sensitive system, with controls for access, monitoring, auditing and incident response.
Why AI Is A Data Loss Prevention (DLP) Risk
One of the most underappreciated security angles of AI is its role in data leakage. Tools like ChatGPT, GitHub Copilot and countless analytics platforms are hungry for data. Employees often don't realize that sensitive information entered into these tools can be retained, reprocessed or even exposed to others.
Data loss prevention (DLP) is making a comeback, and for good reason. Companies need modern DLP tools that can flag when proprietary code, personally identifiable information (PII) or customer records are being piped into third-party AI models. This isn't just a compliance issue—it's a core security function, particularly when dealing with foreign-developed AI platforms.
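To show the kind of check a DLP control can apply before a prompt ever leaves the network, here is a minimal sketch in Python. The patterns and the hypothetical internal code name are placeholders; real DLP platforms combine pattern matching with classification, fingerprinting and user context rather than a handful of regexes.

```python
# Illustrative pre-submission check for prompts bound for third-party AI tools.
# The patterns below are deliberately simple placeholders.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal code name": re.compile(r"\bPROJECT[-_ ]ORION\b", re.IGNORECASE),  # hypothetical
}


def scan_prompt(prompt: str) -> list[str]:
    """Return a list of sensitive-data findings in an outbound prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


def allow_submission(prompt: str) -> bool:
    """Block (and log) prompts that trip any pattern; allow the rest."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked prompt; detected: {', '.join(findings)}")
        return False
    return True


# Example: an employee pastes a customer record into a chatbot prompt.
allow_submission("Summarize this ticket for jane.doe@example.com, SSN 123-45-6789")
```

Even a crude gate like this, enforced at a web proxy or browser extension, illustrates the principle: sensitive data should be caught before it reaches a model you don't control.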
China's DeepSeek AI chatbot has raised multiple concerns. South Korean regulators fined DeepSeek's parent company for transferring personal data from South Korean users to China without consent. Microsoft also recently barred its employees from using the platform due to data security risks.
These incidents highlight the broader strategic risks of embedding third-party AI tools into enterprise environments—especially those built outside of established regulatory frameworks.
A Checklist For Responsible AI Adoption
CIOs, CTOs and CISOs need a clear framework for evaluating AI vendors and managing AI internally. Here's a five-part checklist to guide these engagements:
1. Data Ownership And Contracts
• Is there a data processing agreement in place?
• Who owns the outputs and derivatives of your data?
• What rights does the vendor retain to train their models?

2. Operational Integration And Accountability
• How will this AI tool be integrated into existing workflows?
• Who owns responsibility for the AI's decisions or outputs?
• Are there human-in-the-loop controls?

3. Bias, Explainability And Stakeholder Review
• Could the model generate biased, harmful or misleading results?
• Are decisions explainable?
• Have stakeholders from HR and legal teams been consulted?

4. Data Handling And Retention
• Is personal or regulated data entering the model?
• Is the model trained on proprietary or publicly scraped data?
• Are there retention and deletion policies?

5. Security Testing And Monitoring
• Has the model or its supply chain been tested for adversarial attacks?
• Are prompts and outputs being logged and monitored? (A minimal logging sketch follows this checklist.)
• Can malicious users exploit the model to extract data or alter behavior?
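To make the logging and monitoring item concrete, here is a minimal sketch of a wrapper that records every prompt and response as structured audit events. The call_model function is a stand-in for whichever vendor API or SDK your organization actually uses; it is a placeholder, not a real client.

```python
# Minimal sketch of prompt/output logging around an LLM call.
# call_model() is a placeholder for a real vendor SDK or API;
# everything else is standard-library Python.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")


def call_model(prompt: str) -> str:
    """Placeholder for a real vendor API call."""
    return f"(model response to: {prompt[:40]}...)"


def logged_completion(user: str, prompt: str) -> str:
    """Log the prompt, call the model, log the response, return it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
    }
    response = call_model(prompt)
    record["response"] = response
    # In practice this would go to a SIEM or immutable store, not stdout.
    audit_log.info(json.dumps(record))
    return response


logged_completion("jdoe", "Draft a customer newsletter about our new release")
```

Routing these records to a SIEM gives the SOC the same visibility into AI interactions that it already expects for email, endpoints and cloud services.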
Final Thought: Awareness And Accountability
AI security doesn't start in the SOC. Instead, it should start with awareness across the business. Employees need to understand that an LLM isn't a search engine, and a prompt isn't a safe space. Meanwhile, security teams must expand visibility with tools that monitor AI use, flag suspicious behavior and inventory every AI-enabled app.
You may not have built or hosted the model, but you'll still be accountable when things go wrong, whether it's a data leak or a harmful decision. Don't assume vendors have done the hard work of securing their models. Ask questions. Run tests. Demand oversight.
AI will only grow more powerful and more autonomous. If you don't understand what it's doing today, you certainly won't tomorrow.