Latest news with #AIRiskManagementFramework


Forbes
5 days ago
- Business
- Forbes
You Or Your Providers Are Using AI—Now What?
Jason Vest, CTO, Binary Defense.

The rise of generative and agentic AI has fundamentally changed how enterprises approach risk management, software procurement, operations and security. But many companies still treat AI tools like any other software-as-a-service (SaaS) product, rushing to deploy them without fully understanding what they do, or how they expose the business. Whether it's licensing a chatbot, deploying an AI-powered analytics platform or integrating large language model (LLM) capabilities into your workflows, when your organization becomes the recipient of AI, you inherit a set of security, privacy and operational risks that are often opaque and poorly documented. These risks are being actively exploited, particularly by state-sponsored actors targeting sensitive enterprise data through exposed or misused AI interfaces.

Not All AI Is The Same: Know What You're Buying

Procurement teams often treat all AI as a monolith. But there's a world of difference between generative AI (GenAI), which produces original content based on inputs, and agentic AI, which takes autonomous actions based on goals. For example, GenAI might assist a marketing team by drafting a newsletter based on a prompt, while agentic AI could autonomously decide which stakeholder to contact or determine the appropriate remediation action in a security operations center (SOC).

Each type of AI brings its own risks. Generative models can leak sensitive data if inputs or outputs are not properly controlled. Agentic systems can be manipulated or misconfigured to take damaging actions, sometimes without oversight. Before integrating any AI tool, companies need to ask fundamental questions: What data will be accessed, and where could it be exposed? Is this system generating content, or is it taking action on its own? That distinction should guide every aspect of your risk assessment.

Security Starts With Understanding

Security professionals are trained to ask: What is this system doing? What data does it touch? Who can interact with it? Yet when it comes to AI, we often accept a black box. Every AI-enabled application your company uses should be inventoried. You need to know:

• What kind of AI is being used (e.g., generative or agentic)?
• What data was used to develop the underlying model, and what controls are in place to ensure accuracy?
• Where is the model hosted (e.g., on-premises, vendor-controlled or in the cloud)?
• What data is being ingested?
• What guardrails are in place to prevent abuse, leakage or hallucination?

NIST's AI Risk Management Framework and SANS' recent guidance offer excellent starting points for implementing the right security controls. But at a baseline, companies must treat AI like any other sensitive system, with controls for access, monitoring, auditing and incident response.

Why AI Is A Data Loss Prevention (DLP) Risk

One of the most underappreciated security angles of AI is its role in data leakage. Tools like ChatGPT, GitHub Copilot and countless analytics platforms are hungry for data. Employees often don't realize that entering sensitive information into them can result in it being retained, reprocessed or even exposed to others. Data loss prevention (DLP) is making a comeback, and for good reason. Companies need modern DLP tools that can flag when proprietary code, personally identifiable information (PII) or customer records are being piped into third-party AI models.
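As a concrete illustration of that kind of control, here is a minimal sketch of a pre-submission prompt scan. Everything in it is a hypothetical stand-in: the patterns, the `guarded_submit` helper and the `send` callable are illustrative assumptions, not any vendor's API, and production DLP relies on far richer detection than a few regexes.

```python
import re
from typing import Callable

# Illustrative detectors only; real DLP products add exact-data matching,
# document fingerprinting and ML classifiers on top of simple patterns.
PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    "aws_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns detected in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

def guarded_submit(prompt: str, send: Callable[[str], str]) -> str:
    """Scan a prompt before it leaves the perimeter; block on any hit.

    `send` stands in for whatever client call the approved AI vendor
    exposes; it is a placeholder, not a real API.
    """
    findings = scan_prompt(prompt)
    if findings:
        raise PermissionError(f"Prompt blocked by DLP policy: {findings}")
    return send(prompt)

# Example: this prompt is blocked before any network call is made.
try:
    guarded_submit("Customer SSN is 123-45-6789", send=lambda p: "(model reply)")
except PermissionError as exc:
    print(exc)
```

The design point is simply that the check runs client-side, before the data reaches the third-party model, so a block or redaction is still possible.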
This isn't just a compliance issue; it's a core security function, particularly when dealing with foreign-developed AI platforms. China's DeepSeek AI chatbot has raised multiple concerns. South Korean regulators fined DeepSeek's parent company for transferring personal data from South Korean users to China without consent. Microsoft also recently barred its employees from using the platform due to data security risks. These incidents highlight the broader strategic risks of embedding third-party AI tools into enterprise environments, especially those built outside of established regulatory frameworks.

A Checklist For Responsible AI Adoption

CIOs, CTOs and CISOs need a clear framework for evaluating AI vendors and managing AI internally. Here's a five-part checklist to guide these engagements:

• Is there a data processing agreement in place?
• Who owns the outputs and derivatives of your data?
• What rights does the vendor retain to train their models?

• How will this AI tool be integrated into existing workflows?
• Who owns responsibility for the AI's decisions or outputs?
• Are there human-in-the-loop controls?

• Could the model generate biased, harmful or misleading results?
• Are decisions explainable?
• Have stakeholders from HR and legal teams been consulted?

• Is personal or regulated data entering the model?
• Is the model trained on proprietary or publicly scraped data?
• Are there retention and deletion policies?

• Has the model or its supply chain been tested for adversarial attacks?
• Are prompts and outputs being logged and monitored?
• Can malicious users exploit the model to extract data or alter behavior?

Final Thought: Awareness And Accountability

AI security doesn't start in the SOC. Instead, it should start with awareness across the business. Employees need to understand that an LLM isn't a search engine, and a prompt isn't a safe space. Meanwhile, security teams must expand visibility with tools that monitor AI use, flag suspicious behavior and inventory every AI-enabled app (see the sketch after this article). You may not have built or hosted the model, but you'll still be accountable when things go wrong, whether it's a data leak or a harmful decision. Don't assume vendors have done the hard work of securing their models. Ask questions. Run tests. Demand oversight. AI will only grow more powerful and more autonomous. If you don't understand what it's doing today, you certainly won't tomorrow.
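The checklist's monitoring question ("Are prompts and outputs being logged and monitored?") is concrete enough to sketch. Below is a minimal, hypothetical example of prompt-and-output audit logging; `audited_completion`, its `send` callable and the log fields are illustrative assumptions rather than any particular vendor's API, and in practice the records would be shipped to a SIEM rather than printed.

```python
import json
import logging
import time
import uuid
from typing import Callable

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

def audited_completion(prompt: str, send: Callable[[str], str],
                       user: str, app: str) -> str:
    """Forward a prompt to an LLM via `send`, logging both sides of the call.

    `send` is a stand-in for whichever client the approved vendor provides;
    the logged fields are roughly the minimum a SOC would want to index.
    """
    record = {
        "id": str(uuid.uuid4()),  # correlates prompt, output and later alerts
        "ts": time.time(),
        "user": user,
        "app": app,
        "prompt": prompt,
    }
    record["output"] = send(prompt)
    audit_log.info(json.dumps(record))  # in production: forward to a SIEM
    return record["output"]

# Example with a stand-in model:
audited_completion("Draft a newsletter intro", send=lambda p: "(model reply)",
                   user="jdoe", app="marketing-bot")
```

A wrapper like this gives security teams the visibility the article calls for without depending on the vendor to expose its own logs.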

Associated Press
12-06-2025
- Business
- Associated Press
Workday Achieves Top AI Certifications, Reinforcing Commitment to Responsible AI
Company Achieves ISO 42001 and NIST AI RMF Alignment for Ethical AI

PLEASANTON, Calif., June 12, 2025 /PRNewswire/ -- Workday, Inc. (NASDAQ: WDAY), the AI platform for managing people, money, and agents, today announced that it has earned two highly respected third-party accreditations for its AI governance program. These certifications affirm Workday's leadership in building AI responsibly and fostering trust in its products and services.

Workday has achieved ISO 42001 accreditation, a prestigious international recognition signifying the company's commitment to developing AI responsibly and transparently. The company also received independent attestation of alignment with the National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF), a rigorous set of best practices developed by the U.S. Department of Commerce that demonstrates the company's ability to manage AI risks effectively when developing AI. Workday proactively and voluntarily underwent these stringent evaluations to provide customers with unparalleled confidence in the company's AI development practices. These accreditations, independently verified by leading assessors Schellman and Coalfire, underscore Workday's dedication to developing AI responsibly, including protections for fundamental human rights, safety, security, and privacy.

'Workday is committed to developing AI that amplifies human potential and inspires trust,' said Dr. Kelly Trindel, chief responsible AI officer, Workday. 'Our robust responsible AI governance program is key to delivering the innovative, trustworthy products our customers expect, and this dual recognition affirms our leadership in this critical area.'

In light of rapidly evolving AI standards and regulations, this strategic step directly addresses concerns about how the company identifies and mitigates potential AI risks to fundamental human rights and safety.

'Workday demonstrated a strong AI governance program along with the internal expertise to manage the risks induced by using AI within their SaaS products,' said Mandy Pote, managing principal, Coalfire. 'During the assessment, Workday not only articulated the design of its AI program but also provided clear documentation and evidence to substantiate its AI risk practices.'

'We are proud to have been Workday's trusted partner in achieving ISO 42001 certification. As a leader in enterprise cloud applications for finance and HR, Workday continues to set the standard for responsible AI in the technology sector,' said Danny Manimbo, principal and ISO practice leader, Schellman. 'This achievement reflects their commitment to embedding trust, transparency, and governance into the very core of their AI-driven innovations, values we are proud to support.'

About Workday

Workday is the AI platform for managing people, money, and agents. The Workday platform is built with AI at the core to help customers elevate people, supercharge work, and move their business forever forward. Workday is used by more than 11,000 organizations around the world and across industries, from medium-sized businesses to more than 60% of the Fortune 500. For more information about Workday, visit

Forward-Looking Statements

This press release contains forward-looking statements including, among other things, statements regarding Workday's plans, beliefs, and expectations. These forward-looking statements are based only on currently available information and our current beliefs, expectations, and assumptions.
Because forward-looking statements relate to the future, they are subject to inherent risks, uncertainties, assumptions, and changes in circumstances that are difficult to predict and many of which are outside of our control. If the risks materialize, assumptions prove incorrect, or we experience unexpected changes in circumstances, actual results could differ materially from the results implied by these forward-looking statements, and therefore you should not rely on any forward-looking statements. Risks include, but are not limited to, risks described in our filings with the Securities and Exchange Commission ('SEC'), including our most recent report on Form 10-Q or Form 10-K and other reports that we have filed and will file with the SEC from time to time, which could cause actual results to vary from expectations. Workday assumes no obligation to, and does not currently intend to, update any such forward-looking statements after the date of this release, except as required by law.

Any unreleased services, features, or functions referenced in this document, our website, or other press releases or public statements that are not currently available are subject to change at Workday's discretion and may not be delivered as planned or at all. Customers who purchase Workday services should make their purchase decisions based upon services, features, and functions that are currently available.

© 2025 Workday, Inc. All rights reserved. Workday and the Workday logo are registered trademarks of Workday, Inc. All other brand and product names are trademarks or registered trademarks of their respective holders.

SOURCE Workday Inc.
Yahoo
19-05-2025
- Health
- Yahoo
The Critical Need for Governance, Risk, and Compliance in Healthcare AI
TAMPA, FL / May 19, 2025 / As artificial intelligence (AI) transforms healthcare, organizations face unprecedented opportunities as well as risks. From clinical decision support to patient engagement, AI-enabled technologies promise efficiency and innovation. However, without robust governance, risk management, and compliance (GRC) frameworks, these advancements can lead to ethical dilemmas, regulatory violations, and patient harm. Newton3, a Tampa-based strategic advisory firm, specializes in helping healthcare leaders navigate this complex landscape, ensuring AI deployments are both impactful and accountable.

The Risks of Unregulated AI in Healthcare

AI applications in healthcare, such as natural language processing for clinical transcription or machine learning for disease diagnosis, carry inherent risks:

Bias and Inequity: AI models trained on biased datasets can perpetuate disparities in care.
Regulatory Non-Compliance: HIPAA, GDPR, and emerging AI-specific regulations require rigorous adherence.
Lack of Transparency: "Black box" algorithms undermine trust in AI-driven decisions.

Without GRC programs, healthcare organizations risk financial penalties, reputational damage, and, most critically, patient harm.

The NIST AI Risk Management Framework: A Roadmap for Healthcare

The National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0 and NIST AI 600-1 provide a structured approach to mitigating these risks for both narrow and general AI. Key steps include:

Governance: Establish clear accountability for AI systems, including oversight committees and ethical guidelines.
Risk Assessment: Identify and prioritize risks specific to AI use cases (e.g., diagnostic errors in image analysis).
Compliance Integration: Align AI deployments with existing healthcare regulations and future-proof for evolving standards.

Newton3's GRC NIST Certification Toolkit helps organizations implement this framework, ensuring AI systems are transparent, explainable (XAI), and auditable.

Newton3's Role in Shaping Responsible AI

Newton3 offers tailored solutions for healthcare leaders, including:

AI GRC Training: Equip teams with skills to manage AI risks.
Fractional AI Officer Services: Embed GRC expertise into organizational leadership.
Platform-Agnostic Advisory: Support unbiased AI strategy, including integrations like Salesforce Agentforce.

Call to Action

For healthcare CEOs and CTOs, the time to act is now. Proactive GRC programs are not just a regulatory requirement; they are a competitive advantage. Contact Newton3 to build a governance strategy that aligns innovation with accountability.

About Newton3

Newton3 is a Tampa-based strategic advisory firm specializing in AI governance, risk management, and compliance (GRC) within the healthcare sector. The company empowers organizations to maximize the value of their AI investments across platforms like AWS, Google Cloud, Azure, ServiceNow's NOW Platform AI, and Salesforce's Agentforce AI. By embedding GRC frameworks into AI deployments, Newton3 ensures that innovations are not only effective but also ethically sound and compliant with regulatory standards. Their services encompass predictive intelligence, virtual agents, and process optimization, providing methodologies that align AI strategies with organizational goals.
Newton3's commitment to risk-aware innovation helps clients navigate the complexities of AI integration, maintaining transparency, security, and regulatory integrity throughout the process. Learn more at

Disclaimer

This press release was prepared for syndication by Evrima Chicago, LLC. The views and opinions expressed herein are those of the original authors or sources and do not necessarily reflect the official position of Evrima Chicago. The Evrima editorial team has compiled and formatted this release based on publicly available or provided content. For inquiries, interview requests, or editorial verification, please contact the Evrima Chicago team at PR@

SOURCE: Newton3 AI

View the original press release on ACCESS Newswire


Associated Press
28-04-2025
- Business
- Associated Press
AuditBoard Launches AI Governance Solution to Help Customers Optimize AI Innovation
New solution embeds AI governance into audit and risk platform, strengthens oversight, streamlines compliance, and scales responsible AI adoption

SAN FRANCISCO, April 28, 2025 /CNW/ -- AuditBoard, the leading global platform for connected risk, transforming audit, risk, and compliance, today announced a robust new AI governance solution at the RSAC™ Conference in San Francisco, California. This solution enables customers to fast-track their AI risk management programs and drive responsible AI innovation and adoption at scale. AuditBoard's new AI governance solution will help customers meet AI best practices outlined in frameworks like the National Institute of Standards and Technology's AI Risk Management Framework (NIST AI RMF), protecting their organizations from the cyber, reputational, and financial risks associated with noncompliance.

'This solution will help compliance teams address the widespread and urgent need for AI governance we are seeing across all industries,' said Happy Wang, Chief Technology Officer at AuditBoard. 'Our customers can now quickly identify, assess, and mitigate potential risks associated with AI systems, ensuring a more efficient and proactive approach to managing AI.'

The growing appetite for AI governance is undeniable as organizations across industries increasingly integrate AI into their processes. A recent survey conducted by AuditBoard and Ascend2 found that 72 percent of audit, risk, and compliance practitioners believe AI will significantly impact their risk management processes. However, while excitement and optimism around AI are palpable, organizations need to strike a balance between embracing AI's opportunities and ensuring they are deploying AI responsibly. Uptake of the technology has been swift, with more than 80 percent of AuditBoard AI customers accepting generative content into their systems of record.

To help customers ensure responsible governance of not just AuditBoard AI, but any AI tool or model in their environment, AuditBoard's AI governance solution expedites AI risk management programs and ensures responsible AI usage by:

• Streamlining AI use case intake, review, and approval processes
• Establishing a single source of truth for approved AI use cases and models to responsibly federate decision-making
• Dynamically linking AI risks to vendors, assets, and controls to continuously monitor AI risks across ecosystems (see the sketch after this article)

'My team is responsible for assessing each AI use case at AuditBoard and ultimately giving the green light,' said Anthony Plachy, General Counsel at AuditBoard. 'With the surge in AI tools we are seeing in the market, our use cases across the business have started to increase rapidly. The team was able to build this AI governance solution to help us manage the uptick of requests we have coming in, while ensuring each use case meets our internal AI governance standards. This solution has fundamentally transformed our team's work, and we're confident it will empower our customers to effectively navigate their AI journey.'

To see these new capabilities in action, visit the AuditBoard booth at the RSAC conference or visit AuditBoard.com.

About AuditBoard

AuditBoard's mission is to be the category-defining global platform for connected risk, elevating our customers through innovation. More than 50% of the Fortune 500 trust AuditBoard to transform their audit, risk, and compliance management. AuditBoard is top-rated by customers on G2, Capterra, and Gartner Peer Insights, and was recently ranked for the sixth year in a row as one of the fastest-growing technology companies in North America by Deloitte.

Contact: Laura Groshans [email protected]

SOURCE AuditBoard, Inc
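To make the "dynamic linking" idea in the bullet list above tangible: risks, vendors, assets, and controls can be modeled as linked records so that unmitigated risks surface automatically. The sketch below is our own simplified illustration of that pattern, under assumed class and field names; it is not AuditBoard's actual data model or API.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    name: str
    status: str  # e.g., "implemented" or "planned"

@dataclass
class AIRisk:
    description: str
    severity: str  # e.g., "high"
    controls: list[Control] = field(default_factory=list)

@dataclass
class AIUseCase:
    name: str
    vendor: str
    assets: list[str] = field(default_factory=list)
    risks: list[AIRisk] = field(default_factory=list)

    def unmitigated(self) -> list[AIRisk]:
        """Risks with no implemented control; what continuous monitoring flags."""
        return [r for r in self.risks
                if not any(c.status == "implemented" for c in r.controls)]

# Hypothetical use case linking a vendor, an asset, a risk, and a control:
case = AIUseCase(
    name="Contract summarization",
    vendor="ExampleLLM Inc.",
    assets=["contract-repository"],
    risks=[AIRisk("PII leakage in prompts", "high",
                  [Control("DLP prompt scanning", "planned")])],
)
print([r.description for r in case.unmitigated()])  # -> ['PII leakage in prompts']
```

Because every risk is tied to its vendor, assets, and controls, a change anywhere in the chain (a control slipping to "planned", a new asset added) immediately changes what the monitoring query returns.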

