
Latest news with #responsibleAI

Cassava Technologies partners with the South African Artificial Intelligence Association to boost local access to Artificial Intelligence (AI) compute services

Zawya | Business | 3 days ago

Cassava Technologies, a global technology leader of African heritage, is pleased to announce that it has signed a Memorandum of Understanding (MoU) with the South African AI Association (SAAIA), an industry body focused on growing responsible AI adoption, to deliver artificial intelligence (AI) solutions and GPU-as-a-Service (GPUaaS) across the African continent. Under the agreement, SAAIA's more than 3,000 AI practitioners, comprising entrepreneurs, researchers, and members of the wider business community in South Africa, will have access to Cassava's data centre GPUs to develop and deploy local AI solutions and initiatives. The two organisations will also collaborate on initiatives aimed at supporting the regional and broader African AI ecosystem.

'We are proud to partner with SAAIA to support the growth of Africa's AI ecosystem. By extending our advanced AI infrastructure and capabilities to SAAIA's growing community of AI professionals, we're enabling greater access to the compute power required to build, test, and scale innovative local solutions. We believe this partnership will deliver meaningful value to both organisations and, more importantly, to the business and research communities driving AI development on the continent,' said Ziaad Suleman, CEO of Cassava Technologies South Africa and Botswana.

As South Africa's leading AI ecosystem builder, the South African Artificial Intelligence Association is focused on promoting the advancement of responsible AI in the country by uniting thousands of AI practitioners across the commercial, government, academic, startup, and NGO sectors. SAAIA also hosts the largest AI event in Africa, AI Expo Africa, and serves as a driving force behind trade and investment in the continent's rapidly expanding smart technology segment.

'SAAIA is pleased to be partnering with Cassava Technologies in strengthening AI in South Africa. Supporting local AI entrepreneurs is a key pillar of SAAIA, and access to GPU-as-a-Service is a key enabler to growing the emerging AI startup ecosystem,' said SAAIA Founder and Chairman, Dr Nick Bradshaw.

Cassava's collaboration with SAAIA reinforces its commitment to providing world-class digital solutions and advancing responsible AI adoption, innovation, and growth in Africa. It follows Cassava's recent announcement of plans to build Africa's first AI factory, providing local businesses, governments, and researchers with access to cutting-edge AI computing capacity. This aligns with Cassava's vision of being the leading digital solutions provider in its chosen markets, empowering Africans to thrive in the digital economy.

Distributed by APO Group on behalf of Cassava Technologies.

About Cassava Technologies: Cassava Technologies is a global technology leader of African heritage providing a vertically integrated ecosystem of digital services and infrastructure enabling digital transformation. Headquartered in the UK, Cassava has a presence across Africa, the Middle East, Latin America and the United States of America. Through its business units, namely Cassava AI, Liquid Intelligent Technologies, Liquid C2, Africa Data Centres, and Sasai Fintech, the company provides its customers products and services in 94 countries. These solutions drive the company's ambition of establishing itself as a leading global technology company of African heritage.

How Leaders Can Choose The Right AI Auditing Services

Forbes | Business | 25-06-2025

Now that the 'big four' accounting firms, Deloitte, PwC, Ernst & Young, and KPMG, are beginning to offer AI audit services, what do leaders need to know about choosing the right AI audit services and about responsible AI (RAI)?

The first step is understanding the key vulnerabilities introduced by AI systems, and how to mitigate those risks. It is important to understand the unintended consequences of black-box AI systems and of a lack of transparency in their deployment. In consumer-facing industries, deploying black-box AI systems without due attention to how they are trained, and what data is used to train them, can harm consumers through price discrimination or quality discrimination. Disparate impact laws allow individuals to sue for such unintentional discrimination.

Next, leaders need to understand frameworks for managing such risks. The National Institute of Standards and Technology offers an AI Risk Management Framework, which outlines a comprehensive approach to AI risk. Such management frameworks help leaders better manage risks to individuals, organizations, and society, much as standards in other industries mandate transparency. When rightly used, AI audits can be effective in examining whether an AI system is lawful, ethical, and technically robust. However, there are vast gaps in how companies understand these principles and integrate them into their organizational goals and values. A 2022 study by the Boston Consulting Group and the Sloan Management Review found that RAI programs typically neglect three dimensions, fairness and equity, social and environmental impact mitigation, and human plus AI, because they are difficult to address. Responsible AI principles cannot exist in a vacuum; they need to be tied to a company's broader goals for being a responsible business. For example, is top management intentionally connecting RAI with its governance, methods, and processes?

Have Clear Goals For AI Audits

Standard frameworks used in technology procurement typically focus on performance, cost, and quality. Evaluating AI tools, however, also requires values such as equity, fairness, and transparency. Leaders need to build trustworthiness, alignment with the organizational mission, human-AI teaming, explainability, and interpretability into how they deploy AI. A study by researchers Yueqi Li and Sanjay Goel found significant knowledge gaps around AI audits. These gaps stem from immature AI governance implementation and insufficient operationalization of AI governance processes. A cohesive approach to AI audits requires a foundation of ethical principles integrated into AI governance.

To take one example, a financial institution could explicitly mandate fairness as a criterion in AI-enabled decision-making models. For that we would first need a clear and consistent criterion of fairness, one that can be supported by the principles of law and by a settled body of trade and commerce practice. Second, we need clear standards for establishing when norms of fairness are violated, which could be used as a stress test to determine whether AI-based models are indeed fair. Auditing the predictions of automated business decisions against fairness criteria allows companies to establish whether their policies disadvantage some groups more than others. If a bank wants to predict whom to lend to, adding fairness as a criterion does not mean the bank must stop screening borrowers altogether. It would require the bank to avoid metrics that impose a more stringent burden on some groups of borrowers, that is, holding different groups of people to different standards.

Algorithmic stress tests run before deploying black-box AI models allow us to visualize different scenarios. They not only help establish the goals of the fairness audit; they also allow decision makers to specify different performance criteria, both from a technical perspective and against business objectives. Such stress tests allow vendors to quantify legal and operational constraints on the business, the history of practices in the industry, and policies to protect confidential data, to name a few. Companies such as Microsoft and Google have used AI 'red teams' to stress test their systems.

Cross-functional Leadership Can Leverage AI

The above-mentioned BCG/SMR survey identified a key role for leaders: most organizations in the leading stage of RAI maturity have both an individual and a committee guiding their RAI strategy. Increasingly, practitioners are also calling for Institutional Review Boards (IRBs) for the use of AI. Low-frequency but high-impact business decisions, such as the choice of credit rating models, need a systematic process to build consensus. An RAI champion, working with a cross-departmental team, could be entrusted with such a responsibility. The institutional review board needs to map algorithmic harms into the organization's risk framework. Smaller organizations can rely on best-practice checklists developed by auditing bodies and industry standards organizations.

Recognizing when a human decision maker is needed and when automated decisions can be employed will be increasingly important as we learn to navigate the algorithmic era. It is equally important to understand how business processes demarcate the boundary between judgement exercised by a human actor and what is automated. The IRB can consider questions such as who should set these boundaries: is it the responsibility of division heads or of mid-level managers? The AI ethics team and the legal team need to consider the policy and legal implications of such boundaries.

Foundation for AI Audits

Three key aspects need to be understood before leaders embark on AI audits:

  • Define goals: Understand that an AI audit is not about the technology itself, but about how AI is intertwined with organizational values.
  • Establish AI governance: Before undertaking AI audits, a comprehensive AI governance framework needs to be in place.
  • Establish cross-functional teams: Algorithmic risks need to be understood in the context of the organization's own risk profile. Cross-functional teams are key to building this understanding.

AI is increasingly intertwined with almost every aspect of business. Leaders should be cognizant of the algorithmic harms that flow from a lack of transparency and oversight in AI, alongside the considerable benefits of digital transformation. Establishing the right governance frameworks and auditing AI will ensure transparency in AI model development, deployment, and use.
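A fairness audit of the kind described above can start very simply. The sketch below, with illustrative group labels and decisions (not real data), checks whether a lending model's approval rates hold different groups to different standards, using the common "four-fifths" disparate-impact rule of thumb as the stress-test threshold:

```python
# Hypothetical fairness audit sketch: compare a model's approval rates
# across groups and flag a potential disparate impact when the lowest
# group rate falls below 80% of the highest (the "four-fifths" rule).

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Lowest group approval rate divided by the highest."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Illustrative audit sample: group "A" approved 3 of 4, group "B" 2 of 4.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", True)]

ratio = disparate_impact_ratio(decisions)
flagged = ratio < 0.8  # stress-test threshold; a flag triggers human review
```

A flag here does not prove unlawful discrimination; it tells the review board which models and metrics deserve a closer look, which is exactly the role the article assigns to algorithmic stress tests.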

LAUNCH OF INTERTEK AI², THE WORLD'S FIRST END-TO-END AI ASSURANCE PROGRAMME

Yahoo | Business | 24-06-2025

  • Artificial intelligence (AI) is changing the way the world works
  • Corporates are investing significantly in AI to step up customer service and boost productivity
  • There are significant ethical, compliance, and quality risks with AI, a new and unproven technology
  • Companies need to adopt responsible AI practices to grow their business in the right way
  • Intertek launches Intertek AI², the world's first independent end-to-end AI assurance programme
  • Intertek AI² enables organisations to power ahead with smarter, safer, trusted AI solutions

LONDON, June 24, 2025 /CNW/ -- Intertek, a leading Total Quality Assurance provider to industries worldwide, announces the launch of Intertek AI², the world's first independent end-to-end AI assurance programme, enabling organisations to power ahead with smarter, safer, trusted AI solutions. AI is rapidly accelerating in all parts of society, quickly changing the way the world works and triggering significant risks for governments, corporates, consumers and employees. With more than 130 years of quality and safety expertise across a wide range of industries, the launch of Intertek AI² expands Intertek's industry-leading offering of ATIC solutions, providing a comprehensive, risk-based AI assurance programme built around industry-leading solutions and addressing governance, transparency, security, and safety. Services include:

  • Governed AI services establish risk and quality management frameworks, AI governance structures, regulatory compliance strategies, and oversight mechanisms ensuring accountability and adherence to evolving requirements, including EU AI Act obligations and ISO 42001.
  • Transparent AI services develop technical documentation meeting regulatory standards, implement appropriate explainability levels for different applications, and create communication strategies making AI behaviour understandable to diverse stakeholders.
  • Secure AI services deliver cybersecurity tailored to AI systems, red teaming exercises identifying vulnerabilities and failure modes, threat monitoring and incident response planning, and security architecture guidance addressing unique AI vulnerabilities.
  • Safe AI services provide comprehensive testing and validation using AI-specific methodologies, data quality assessment and improvement, independent performance verification, and bias detection and mitigation across diverse populations and use cases.

Leveraging Intertek's multi-industry value chain TQA expertise and network of more than 1,000 laboratories and offices in over 100 countries, Intertek AI² positions the Group as the ATIC industry leader in trusted AI across safety, security, sustainability, and compliance.

André Lacroix, CEO of Intertek Group, commented: "AI is reshaping our world at an unprecedented pace as organisations race to integrate AI into their systems and products to take their customer service to new heights and unleash new levels of productivity. Intertek AI² is the world's first independent end-to-end AI assurance programme to enable organisations to power ahead with smarter, safer and trusted AI solutions."

About Intertek: Intertek is a leading Total Quality Assurance provider to industries worldwide. Our network of more than 1,000 laboratories and offices in more than 100 countries delivers innovative and bespoke Assurance, Testing, Inspection and Certification solutions for our customers' operations and supply chains. Intertek is a purpose-led company working to Bring Quality, Safety and Sustainability to Life. We provide 24/7 mission-critical quality assurance solutions to our clients to ensure that they can operate with well-functioning supply chains in each of their operations. Our Customer Promise is: Intertek Total Quality Assurance expertise, delivered consistently, with precision, pace and passion, enabling our customers to power ahead safely.
SOURCE Intertek

Trump praised by faith leaders for AI leadership as they warn of technology's 'potential peril'

Fox News | Business | 01-06-2025

Evangelical leaders praised President Donald Trump for his leadership on artificial intelligence (AI) in an open letter published last week, while cautioning him to ensure the technology is developed responsibly. Dubbing Trump the "AI President," the religious leaders wrote that they believe Trump is there by "Divine Providence" to guide the world on the future of AI.

The signatories said they are "pro-science" and fully support the advancement of technology that benefits their own ministries around the world. "We are also pro-economic prosperity and economic leadership for America and our friends. We do not want to see the AI revolution slowing, but we want to see the AI revolution accelerating responsibly," the letter says.

The faith leaders warned about the technology advancing at an out-of-control pace that could cause "potential peril" for mankind. They cited concerns raised by industry leaders Elon Musk, Bill Gates and Sam Altman, warning that AI would take jobs away in most industries and could eventually cause human suffering. The U.S. should not hesitate in its efforts to "win the AI race," the pastors told Trump, but they cautioned that victory mustn't come at any cost.

"As people of faith, we believe we should rapidly develop powerful AI tools that help cure diseases and solve practical problems, but not autonomous smarter-than-human machines that nobody knows how to control," the letter states. "The spiritual implications of creating intelligence that may one day surpass human capabilities raises profound theological and ethical questions that must be thoughtfully considered with wisdom. One does not have to be religious to recognize religion as a type of compounding wisdom over the centuries, and virtually all religious traditions warn against a world where work is no longer necessary or where human beings can live their lives without any guardrails," the leaders wrote.

They urged Trump to develop an advisory council or delegate authority to an existing agency or council "which would convene leaders who will pay attention especially not only to what AI CAN do but also what it SHOULD do." A group of 18 pastors and faith leaders signed on to the letter, which was spearheaded by prominent Christian leaders Rev. Johnnie Moore, president of the Congress of Christian Leaders, and Rev. Samuel Rodriguez, president of the National Hispanic Christian Leadership Council.

The letter comes weeks after Pope Leo XIV compared the advancements in AI to the Industrial Revolution and called on the Catholic Church to confront the challenges AI poses to human dignity, labor and society. In April, Trump signed an executive order to implement AI education in the classroom to create "educational and workforce development opportunities for America's youth." The AI order, Trump's latest pro-AI measure, established a White House task force for AI and education that will work with federal agencies and the private sector to help draft AI programs for schools.

The Future Is Explainability – Why AI Must Earn Our Trust

Forbes | Business | 30-05-2025

As enterprises shift from AI experimentation to scaled implementation, one principle will separate hype from impact: explainability. This evolution requires implementing 'responsible AI' frameworks that effectively manage deployment while minimizing associated risks. The responsible AI approach, termed in the industry 'explainability', creates a balanced methodology that is ethical, pragmatic, and deliberate when integrating AI technologies into core business functions. Responsible AI moves past generative AI's buzz (LLMs, voice/image generators) by harmonizing AI applications with corporate objectives, values, and risk tolerance. This approach typically features purpose-built systems with clearly defined outcomes. Forward-thinking companies making sustained investments prioritize automating routine tasks to decrease human dependency while enabling AI to manage repetitive processes. However, they maintain a balance in which humans remain informed of system changes and actively oversee them. In my view, this is the key to maturing AI.

Explainability helps demystify AI decision-making, a concern that has become essential as businesses pursue AI's promised cost savings and increased automation. Business leaders overseeing analytics need visibility into why an AI system makes certain recommendations. This transparency is key as organizations scale their AI deployments and seek to build internal trust. According to McKinsey & Company, explainability increases user engagement and confidence, vital ingredients for successful, enterprise-wide adoption. As businesses embrace automation to drive efficiency and cost savings, interpretability becomes essential for governance, compliance, and decision support.

Explainability agents are a new class of AI models designed to interpret and communicate the reasoning behind complex AI decisions, particularly in black-box systems such as deep neural networks. These agentic AI assistants are autonomous, goal-driven, and capable of adapting to changing conditions in real time. Take, for example, a manufacturer managing MRO (maintenance, repair, and operations) inventory. An explainability agent can continuously reassess stocking levels by analyzing supply, demand, asset usage, and work orders. It can then suggest dynamic adjustments and explain the rationale behind each one. This improves efficiency and empowers supply chain leaders to make informed, confident decisions.

As enterprises grow more sophisticated in their AI adoption, they recognize the limits of generic, pre-trained models and are embracing purpose-built AI instead. The goal is to improve timelines, cut costs, and increase productivity, responsibly and at scale.

Responsible AI also involves rigorous risk management. A recent National Institute of Standards and Technology (NIST) report highlights how AI systems trained on evolving data can behave unpredictably, creating legal, reputational, or operational vulnerabilities. Responsible AI means designing systems that are explainable, testable, and aligned with human oversight, not just accurate. For example, responsible AI systems can segment sensitive data to prevent it from being processed by third-party large language models (LLMs). In another case, a supply chain AI platform might explain every recommendation with data-backed context, allowing users to see what the AI suggests and why it matters. This transparency builds user trust, facilitates informed decision-making, and accelerates execution by ensuring stakeholders align with AI-driven strategies. Ultimately, it empowers organizations to unlock AI's full potential without losing control.

AI doesn't need to be mysterious. With explainability agents and purpose-built systems, businesses can harness the power of AI in a transparent, ethical, and results-driven way. Enterprise users shouldn't just use AI; they should be able to understand and trust it. In the next phase of AI adoption, companies that prioritize responsible, agentic AI will reap long-term value while remaining resilient, agile, and accountable.
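The MRO inventory example can be made concrete with a minimal sketch. Everything here is illustrative: the item name, the simple target-stock formula (weekly demand times lead time, plus stock reserved by open work orders), and the field names are assumptions, not a real system. The point is the shape of the output: every recommendation carries the data-backed reasons behind it.

```python
# Illustrative "explainability agent" sketch for MRO inventory:
# each stocking recommendation is returned together with the
# data-backed rationale that produced it (hypothetical formula).

def recommend_stock_level(item, on_hand, weekly_demand,
                          lead_time_weeks, open_work_orders):
    # Simple assumed target: cover demand over the lead time,
    # plus stock already reserved by open work orders.
    target = weekly_demand * lead_time_weeks + open_work_orders
    reasons = [
        f"average weekly demand is {weekly_demand}",
        f"supplier lead time is {lead_time_weeks} week(s)",
        f"{open_work_orders} open work order(s) reserve stock",
    ]
    if on_hand < target:
        action = f"reorder {target - on_hand} unit(s) of {item}"
        reasons.append(f"on-hand ({on_hand}) is below target ({target})")
    else:
        action = f"hold: {item} stock is sufficient"
        reasons.append(f"on-hand ({on_hand}) meets target ({target})")
    return {"action": action, "why": reasons}

# Hypothetical usage: 8 units on hand, demand 5/week, 2-week lead time,
# 3 open work orders -> target 13, so the agent recommends reordering 5.
rec = recommend_stock_level("bearing-6204", on_hand=8, weekly_demand=5,
                            lead_time_weeks=2, open_work_orders=3)
```

A supply chain leader reviewing `rec["why"]` sees not just the suggested action but each input that drove it, which is the trust-building transparency the article argues for.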
