Latest news with #BillWong


Malaysian Reserve
09-07-2025
- Business
- Malaysian Reserve
AI Systems Are Advancing Faster Than Risk Controls, Warns Info-Tech Research Group in New Risk Management Resource
As AI adoption accelerates across industries, most organizations remain unprepared to manage the complex and emerging risks that come with it. Info-Tech Research Group, a global research and advisory firm, has recently published a resource that introduces a proactive, principle-based framework to help enterprises formalize their AI risk programs, improve governance, and align strategies with business objectives.

TORONTO, July 9, 2025 /CNW/ – Organizations are adopting AI at a rapid pace, but many lack the necessary controls to manage the risks that come with these transformative systems. From hallucinations and bias to deepfakes and adversarial threats, AI can introduce novel vulnerabilities that traditional governance frameworks were not designed to address. To help organizations tackle these challenges, Info-Tech Research Group has recently published its research insights in the blueprint Build Your AI Risk Management Roadmap, offering a structured methodology for developing a comprehensive, business-aligned AI risk program.

As outlined in Info-Tech's AI risk management framework, failing to manage AI risk proactively can lead to regulatory violations, reputational damage, and lost value. Despite these consequences, many organizations still rely on ad hoc processes, react to issues only after they occur, or silo risk ownership within technical teams without business involvement.

"AI risk is a business risk. Every AI risk has business implications," says Bill Wong, research fellow at Info-Tech Research Group. "Accountability cannot rest with AI leaders alone. Business executives must be active participants in identifying, evaluating, and responding to AI risks, and that starts with embedding risk management into governance, strategy, and decision-making processes."
The firm's resource outlines how to evolve fragmented or informal approaches into a structured AI risk management program through four key dimensions: risk governance, risk identification, risk measurement, and risk response. One of the blueprint's themes focuses on aligning the AI risk framework with broader enterprise risk management to ensure the program integrates with organizational strategy and regulatory requirements. To support implementation, Info-Tech has introduced a comprehensive roadmap built around framing AI risks, establishing AI risk governance, identifying and assessing risks, measuring potential impact, defining responses, and creating a roadmap for execution.

A key component of the blueprint is the formation of an AI Risk Council (AIRC), which would include cross-functional representation from IT, AI, and business leaders. This council is responsible for assigning ownership, recommending risk tolerance, reviewing risk assessments, and ensuring shared accountability across the organization.

Info-Tech's framework also emphasizes the need to establish foundational AI principles, such as explainability and transparency, fairness, data privacy, safety and security, validity and reliability, and accountability. These principles, derived from global frameworks such as those developed by the Organisation for Economic Co-operation and Development (OECD), serve as the ethical and operational backbone of responsible AI.

Key Processes to Operationalize AI Risk Management

Info-Tech's resource is designed to help organizations reduce the number of unidentified risks, build realistic contingency plans, enable cross-functional accountability, and improve regulatory compliance, such as with the EU AI Act's high-risk system requirements. It also supports better decision-making and ongoing monitoring to ensure AI systems remain aligned with organizational goals.
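The four dimensions above can be made concrete as a simple risk-register entry that an AI Risk Council might review. This is a minimal, hypothetical sketch of the idea, not Info-Tech's actual deliverable; all names and the severity convention are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    """One row in an AI risk register, organized by the four dimensions:
    governance (who owns it), identification (what it is),
    measurement (how big it is), and response (what is done about it)."""
    name: str          # identification: e.g. an LLM hallucination risk
    owner: str         # governance: the accountable business or IT owner
    likelihood: float  # measurement: probability estimate, 0.0 to 1.0
    impact: float      # measurement: business impact estimate, 0.0 to 1.0
    response: str      # response: "mitigate", "accept", "transfer", or "avoid"

    @property
    def severity(self) -> float:
        # A common risk-register convention: severity = likelihood x impact.
        return self.likelihood * self.impact

# Hypothetical example entry for council review.
risk = AIRiskEntry(
    name="Model bias in loan scoring",
    owner="VP Credit Risk",
    likelihood=0.4,
    impact=0.9,
    response="mitigate",
)
print(round(risk.severity, 2))  # 0.36
```

The point of the sketch is that every entry forces a named business owner and an explicit response, which is exactly the cross-functional accountability the AIRC is meant to enforce.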
The firm's Build Your AI Risk Management Roadmap blueprint outlines the following steps for IT leaders to operationalize AI risk management across the entire organization:

1. Establish Foundational AI Principles – Define the ethical and operational standards that guide the development and deployment of AI.
2. Assess AI Risk Management Maturity – Understand the current state of AI risk governance to identify capability gaps.
3. Create and Assign AI Risk Council Responsibilities – Establish clear accountability for AI risk across leadership and governance teams.
4. Implement an AI Risk Management Framework – Begin by introducing an AI risk governance program aligned to the organization's foundational AI principles. Then determine methods for identifying and classifying AI risks, establish how AI risks will be measured and monitored, and finally adopt a strategy for the actions to take to mitigate a given AI risk.
5. Pursue AI Risk-Mitigation Initiatives – Prioritize actions that reduce the likelihood or impact of AI risks based on feasibility and value.
6. Build an AI Risk Management Roadmap – Translate priorities into a structured, time-bound action plan aligned with business goals.

The blueprint promotes a preventative mindset, encouraging organizations to detect, assess, and mitigate AI risks before they materialize, transforming risk management from a reactive obligation into a strategic enabler.

To request exclusive and timely commentary from Info-Tech's experts, including Bill Wong, or to request full access to the Build Your AI Risk Management Roadmap blueprint, please contact pr@

About Info-Tech Research Group

Info-Tech Research Group is one of the world's leading research and advisory firms, proudly serving over 30,000 IT and HR professionals. The company produces unbiased, highly relevant research and provides advisory services to help leaders make strategic, timely, and well-informed decisions.
For nearly 30 years, Info-Tech has partnered closely with teams to provide them with everything they need, from actionable tools to analyst guidance, ensuring they deliver measurable results for their organizations. To learn more about Info-Tech's divisions, visit McLean & Company for HR research and advisory services and SoftwareReviews for software-buying insights. Media professionals can register for unrestricted access to research across IT, HR, and software, as well as hundreds of industry analysts through the firm's Media Insiders program. To gain access, contact pr@ For information about Info-Tech Research Group or to access the latest research, visit and connect via LinkedIn and X.
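The six roadmap steps the blueprint outlines are strictly ordered: governance comes before identification, identification before measurement, and so on. That ordering can be sketched as a small progress tracker; the step names are quoted from the article, but the tracker itself is a hypothetical illustration, not Info-Tech tooling.

```python
from dataclasses import dataclass, field

# The six roadmap steps from the blueprint, in order.
ROADMAP_STEPS = [
    "Establish foundational AI principles",
    "Assess AI risk management maturity",
    "Create and assign AI Risk Council responsibilities",
    "Implement an AI risk management framework",
    "Pursue AI risk-mitigation initiatives",
    "Build an AI risk management roadmap",
]

@dataclass
class RoadmapTracker:
    """Tracks which roadmap steps an organization has completed."""
    completed: set = field(default_factory=set)

    def complete(self, step: str) -> None:
        if step not in ROADMAP_STEPS:
            raise ValueError(f"Unknown step: {step}")
        self.completed.add(step)

    def next_step(self):
        """Return the earliest step not yet completed, or None if done."""
        for step in ROADMAP_STEPS:
            if step not in self.completed:
                return step
        return None

tracker = RoadmapTracker()
tracker.complete("Establish foundational AI principles")
print(tracker.next_step())  # the maturity assessment is the next step
```

Encoding the order makes the blueprint's preventative stance concrete: an organization cannot meaningfully assign council responsibilities or measure risks before the foundational principles and maturity baseline exist.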
Yahoo
09-07-2025
- Business
- Yahoo
AI Systems Are Advancing Faster Than Risk Controls, Warns Info-Tech Research Group in New Risk Management Resource


Cision Canada
09-07-2025
- Business
- Cision Canada
AI Systems Are Advancing Faster Than Risk Controls, Warns Info-Tech Research Group in New Risk Management Resource


Forbes
23-06-2025
- Business
- Forbes
Navigating The Generative AI Technology Stack
Bill Wong - AI Research Fellow, Info-Tech Research Group.

Generative AI is transforming the technology landscape, introducing new large language models (LLMs), development tools and a range of new or enhanced applications. As adoption grows, organizations must focus on building a robust technology stack to support these applications—ensuring they meet performance and scalability demands.

When evaluating vendor solutions for generative AI, it's important to understand the core components of the supporting technology stack. This stack includes several layers, with applications sitting at the top. Key layers and examples include:

• Application Layer: Business applications such as CRM, ERP and marketing/sales tools, along with industry-specific solutions for sectors like healthcare, legal and financial services.
• Data And AI Tools: Platforms and tools used for managing data and developing machine learning models, including LLMs.
• LLMs: The foundational AI models that generate outputs, insights and recommendations.
• Data Layer: Systems for storing and managing both structured and unstructured data, such as databases and data lakes.
• Infrastructure Layer: The hardware and core software needed to support AI workloads—this includes compute, storage and networking resources.

There are two primary approaches for delivering generative AI applications:

1. Component-Based (Loosely Coupled) Approach: In this model, the vendor provides the application itself but allows the client to choose the underlying components—such as the LLM, data platform and infrastructure. This approach offers maximum flexibility, enabling organizations to tailor the solution to their specific needs. However, it also requires more time, effort and resources to integrate the components and optimize performance. This is currently the most common approach in the market.
2. Integrated (Tightly Coupled) Approach: This model provides a complete, pre-integrated solution that includes the application, LLM, data platform and infrastructure. Often delivered as a software-as-a-service (SaaS) offering, this approach prioritizes ease of deployment and speed to value. While it offers less customization than the component-based approach, it reduces complexity for the client. While not an exhaustive list, examples of this model include Microsoft 365 Copilot, Salesforce Einstein and Amazon Q.

Criteria For Evaluating Generative AI Applications

When evaluating generative AI applications, it's essential to assess them against a core set of criteria to ensure they align with business goals and technical requirements. Key categories include:

• Business capabilities: Selection should be driven by specific business needs and prioritized use cases. For example, when evaluating AI-powered CRM solutions, relevant capabilities might include customer segmentation, sentiment analysis, personalized content generation, predictive lead scoring and user experience factors (such as flexibility or ease of integration).
• Performance and scalability: Applications should meet required performance and scalability standards. Evaluation criteria may include response time or latency, ability to support concurrent users (concurrency), scalability across workloads and resource utilization.
• Cost: Considerations should factor in both initial and ongoing expenses, including licensing fees, support and maintenance costs, and infrastructure or usage-based charges (if applicable).

Alignment To AI Guiding Principles

Generative AI applications should align with your organization's broader AI and business strategies. These guiding principles help ensure that AI systems are deployed responsibly and effectively, while also providing a framework that IT can use to implement risk-mitigation measures. These should be tailored to fit the specific values and goals of your organization.
Core principles typically include:

• Safety And Security: AI systems must be resilient, secure and safe throughout their entire life cycle—from development through deployment and operation.
• Data Privacy: Personal and sensitive company data must be protected to ensure anonymity, confidentiality and compliance with data protection regulations.
• Explainability And Transparency: AI systems should be as transparent as possible in their operations and offer explanations that end users can understand and trust.
• Fairness And Bias Detection: AI systems should be designed to identify and mitigate bias in data and algorithms, promoting fairness and improving decision accuracy.
• Validity And Reliability: AI-generated outputs must be consistently accurate, reliable and valid.
• Accountability: Clear responsibility must be established for AI system outcomes. Organizations should define who is accountable for the design, performance and oversight of each system.

While some organizations may refer to these as "responsible AI principles," the specific terminology is less important than ensuring these principles are customized and aligned with organizational strategies and values.

In Summary

The evaluation framework should be tailored to reflect your organization's unique context—including its AI guiding principles, the specific use cases the solution is expected to address, performance and scalability SLAs, and the available budget for acquiring and operating the application. The degree of flexibility to optimize application performance will vary depending on the architecture of the solution—some applications offer limited configurability, while others provide significant control over components such as infrastructure and model selection. When assessing vendor solutions, it's important to prioritize evaluation criteria based on the organization's strategic goals.
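One way to operationalize that prioritization is a weighted score across the evaluation categories discussed (business capabilities, performance and scalability, cost, and alignment with AI guiding principles). The sketch below is a hypothetical illustration; the specific ratings and weights are placeholders to be replaced with an organization's own priorities, not recommendations from the article.

```python
def score_solution(ratings: dict, weights: dict) -> float:
    """Weighted average of criterion ratings (each on a 0-10 scale).
    Weights should reflect the organization's strategic priorities."""
    total_weight = sum(weights.values())
    return sum(ratings[c] * w for c, w in weights.items()) / total_weight

# Hypothetical ratings for one candidate generative AI application.
ratings = {
    "business_capabilities": 8.0,    # fit to prioritized use cases
    "performance_scalability": 6.0,  # latency, concurrency, workload scaling
    "cost": 7.0,                     # licensing, support, usage charges
    "principle_alignment": 9.0,      # safety, privacy, fairness, accountability
}
# Illustrative weighting that emphasizes business capabilities and
# principle alignment, per the article's closing observation.
weights = {
    "business_capabilities": 0.35,
    "performance_scalability": 0.15,
    "cost": 0.15,
    "principle_alignment": 0.35,
}
print(round(score_solution(ratings, weights), 2))  # 7.9
```

Because the weights are explicit, two stakeholder groups can score the same vendor shortlist under different priorities and compare results, which keeps the trade-offs visible rather than buried in intuition.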
In my experience working with C-level executives, two factors consistently stand out as most critical in the selection process: business capabilities and alignment with AI guiding principles, ensuring the solution reflects the organization's ethical, governance and risk frameworks for AI.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
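The stack layers and the two delivery approaches described earlier can also be sketched as a small data model. The class names and example solutions below are hypothetical illustrations, not vendor APIs; the only assumption is the article's rule of thumb that an integrated solution pre-bundles every layer, while a component-based one leaves layer choices to the client.

```python
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    """The five layers of the generative AI technology stack, top to bottom."""
    APPLICATION = "application"
    DATA_AND_AI_TOOLS = "data_and_ai_tools"
    LLM = "llm"
    DATA = "data"
    INFRASTRUCTURE = "infrastructure"

@dataclass
class Solution:
    name: str
    # Layers the vendor supplies pre-integrated; the client chooses the rest.
    vendor_supplied: set

    @property
    def delivery_model(self) -> str:
        """Tightly coupled if the vendor supplies every layer, else loosely coupled."""
        return "integrated" if self.vendor_supplied == set(Layer) else "component-based"

# Integrated (tightly coupled): the vendor supplies the whole stack, e.g. a SaaS copilot.
saas = Solution("ExampleCopilot", set(Layer))
# Component-based (loosely coupled): the vendor supplies the application;
# the client picks the LLM, data platform and infrastructure.
custom = Solution("ExampleCRM-AI", {Layer.APPLICATION})

print(saas.delivery_model)    # integrated
print(custom.delivery_model)  # component-based
```

Classifying candidate solutions this way makes the flexibility-versus-complexity trade-off explicit before any deeper evaluation begins.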