Integral Ad Science earns first AAM ethical AI certification

Techday NZ · 6 days ago
Integral Ad Science has received the first Ethical Artificial Intelligence Certification from the Alliance for Audited Media.
The certification is regarded as a milestone as artificial intelligence becomes more prevalent in the digital advertising sector. The Alliance for Audited Media's framework assesses a company's AI governance, data quality, risk mitigation, bias controls, and oversight processes.
Certification process
The certification is based on the Alliance for Audited Media's Ethical AI Framework. This framework covers areas such as disclosure, human oversight, privacy, bias mitigation, and risk management. The evaluation involved a comprehensive audit of Integral Ad Science's AI governance, including policies, AI risk management procedures, and oversight controls at multiple organisational levels. The auditors also examined the company's product-level methodologies and checked whether effective quality control mechanisms were in place for both the supporting data and the AI models' overall performance.
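The article does not describe IAS's internal tooling, but as a rough illustration, a quality gate of the kind such an audit checks for (completeness of supporting data plus a minimum model performance threshold) might be sketched as follows. All names and thresholds below are hypothetical:

```python
# Illustrative only: a minimal data-and-model quality gate of the kind an AI
# audit might verify. Field names and thresholds are hypothetical, not IAS's
# actual controls.
from dataclasses import dataclass

@dataclass
class QualityReport:
    data_completeness: float  # fraction of records with all required fields
    model_accuracy: float     # accuracy on a held-out evaluation set
    passed: bool

def quality_gate(records: list[dict], required_fields: list[str],
                 eval_accuracy: float,
                 min_completeness: float = 0.99,
                 min_accuracy: float = 0.95) -> QualityReport:
    """Check supporting data and model performance against audit thresholds."""
    complete = sum(all(r.get(f) is not None for f in required_fields)
                   for r in records)
    completeness = complete / len(records) if records else 0.0
    return QualityReport(completeness, eval_accuracy,
                         completeness >= min_completeness
                         and eval_accuracy >= min_accuracy)

# One record is missing a field, so the gate fails on data completeness.
report = quality_gate(
    records=[{"url": "a", "label": 1}, {"url": "b", "label": 0},
             {"url": "c", "label": 1}, {"url": None, "label": 0}],
    required_fields=["url", "label"],
    eval_accuracy=0.97,
)
print(report)  # QualityReport(data_completeness=0.75, model_accuracy=0.97, passed=False)
```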
AI is a central component of Integral Ad Science's approach to digital advertising. The company's AI and machine learning platforms process up to 280 billion interactions each day, integrating AI into products for tasks such as real-time prediction, decision-making, fraud protection, brand safety, and attention measurement. These AI capabilities support solutions such as Total Media Quality, Quality Attention, and Fraud Solutions.
Industry recognition
Integral Ad Science also holds TrustArc's Responsible AI certification and is certified under ISO 42001, the international standard for AI management systems. According to the company, it is one of the few firms globally to hold both certifications.
Kevin Alvero, Chief Compliance Officer at Integral Ad Science, said, "As the first company to receive AAM's certification for ethical AI use, we are paving the way for the responsible use of AI within the advertising industry as a whole. AAM has a long history of providing transparency and assurance to the media and advertising industries, and we are pleased to be recognised as a leader in this area."
The recognition emphasises transparency and the responsible implementation of AI in an industry that increasingly relies on automated, data-driven solutions for media measurement and optimisation.
AI in practice
AI is embedded in Integral Ad Science's long-term strategy, providing enhanced analytical capabilities for its customers and partners. The company's proprietary digital advertising platform is designed to leverage large-scale data analytics, supporting actionable media insights for global brands, publishers, and digital platforms.
Richard Murphy, Chief Executive Officer, President, and Managing Director at the Alliance for Audited Media, commented, "We congratulate IAS for becoming the first organisation to achieve AAM's Ethical AI Certification. By certifying to AAM's framework, IAS is demonstrating how AI can be implemented to drive innovation and efficiency while maintaining trust with advertisers and partners. Their commitment to responsible AI practices backed by independent validation sets a new standard for accountability in the industry."
Broader context
The certification comes at a time of growing concern about AI's role in critical sectors such as digital advertising. Businesses across the industry are advancing the adoption of algorithmic solutions and machine learning methods to improve operational efficiency and advertising outcomes. In tandem, regulators and industry bodies are calling for strengthened oversight, transparency, and accountability in the use of such systems.
The Ethical AI Certification from the Alliance for Audited Media is designed to recognise and encourage industry practices that align with responsible AI governance, transparency, and bias mitigation, with the intention of setting a benchmark for ethical standards in media and advertising.

Related Articles

The AI doctor will see you … soon

Newsroom · 2 hours ago

Comment: Artificial intelligence is already widely used in healthcare. There are now more than 1000 Food and Drug Administration-authorised AI systems in use in the US, and regulators around the world have allowed a variety of AI systems to support doctors and healthcare organisations. AI is being used to support radiologists examining X-rays and MRI scans by highlighting abnormal features, and to help predict how likely someone is to develop a disease based on their genetics and lifestyle. It is also integrated into consumer technology that many people use to manage their health: if you own an Apple Watch, it can use AI to warn you if you develop an abnormal heart rhythm.

More recently, doctors (including many GPs in Aotearoa New Zealand) have adopted AI to help them write their medical notes. An AI system listens in on the GP-patient conversation and then uses a large language model such as ChatGPT to turn the transcript of the audio into a summary of the consultation. This saves the doctor time and can help them pay closer attention to what their patient is saying rather than concentrating on writing notes.

But there are still lots of things we don't know about the future of AI in health. I was recently invited to speak at the Artificial Intelligence in Medicine and Imaging conference at Stanford University, and clinicians in the audience asked questions that are quite difficult to answer. For example, if an AI system used by a doctor makes a mistake (ChatGPT is well known for 'hallucinating' incorrect information), who is liable if the error leads to a poor outcome for the patient? It can also be difficult to accurately assess the performance of AI systems: often studies only assess AI systems in the lab, as it were, rather than in real-world use on the wards.

I'm the editor-in-chief of a new British Medical Journal publication, BMJ Digital Health & AI, which aims to publish high-quality studies to help doctors and healthcare organisations determine which types of AI and digital health technologies are going to be useful in healthcare. We've recently published a paper about a new AI system for identifying which artery is blocked in a heart attack, and another on how GPs in the UK are using AI for transcribing their notes.

One of the most interesting topics in AI research is whether generative AI is better than a doctor at general-purpose diagnosis. There is some evidence emerging that AI may be starting to outperform doctors at diagnosing patients when given descriptions of complex cases. The surprising finding of this research is that an AI alone might be more accurate than a doctor using an AI to help them. This may be because some doctors don't know how to use AI systems effectively, indicating that medical schools and training colleges should incorporate AI training into medical education programmes.

Another interesting development is the use of AI avatars (simulated humans) for patient pre-consultations and triage, something that seems likely to be implemented within the next few years. The experience will be very similar to talking with a human doctor, and the AI avatar could then explain to the real doctor what it found and what it would recommend as treatment. Though this may save time, a balance will need to be struck between efficiency and patients' preferences: would you prefer to see an AI doctor now, or wait longer to see a human doctor?

The advancement of AI in healthcare is very exciting, but there are risks. Often new technology is implemented without considering so-called human factors. These can have a big impact on whether mistakes are made using the new system, or even whether the system gets used at all. Clinicians and patients quickly stop using systems that are hard to use or that don't fit into their normal work routines. The best way to prevent this is 'human-centred design', where real people – doctors and patients – are included in the design process.

There is also a risk that unregulated AI systems are used to diagnose patients or make treatment decisions. Most AI systems are highly regulated – patients can be reassured that any AI involved in their care is being used safely. But there is a risk that governments may not keep up with the accelerating development of AI systems. Rapid, large-scale adoption of inaccurate healthcare-related AI systems could cause a lot of problems, so it is very important that governments invest in high-quality AI research and robust regulatory processes to ensure patient safety.

Chris Paton will be giving a public lecture about AI in healthcare at the Liggins Institute on August 14 at 6pm.
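As a concrete illustration of the ambient-scribe workflow described above (the audio is transcribed, then an LLM drafts the note), here is a minimal sketch. The prompt wording and the stubbed LLM client are hypothetical placeholders, not any vendor's actual API:

```python
# Minimal sketch of the ambient-scribe workflow: consultation audio is
# transcribed, then an LLM turns the transcript into a structured note.
# The LLM is stubbed so the example runs; a real system would call an ASR
# service and an LLM API (all names here are hypothetical).
from typing import Callable

def summarise_consultation(transcript: str,
                           llm_complete: Callable[[str], str]) -> str:
    """Ask an LLM to turn a raw GP-patient transcript into a draft note."""
    prompt = (
        "Summarise this GP consultation as a clinical note with sections: "
        "Presenting complaint, History, Examination, Plan.\n\n"
        "Transcript:\n" + transcript
    )
    return llm_complete(prompt)

# Stand-in for a real LLM client, so the sketch is self-contained.
def fake_llm(prompt: str) -> str:
    return "Presenting complaint: persistent cough, two weeks. Plan: chest X-ray."

transcript = "Doctor: What brings you in? Patient: I've had a cough for two weeks..."
print(summarise_consultation(transcript, fake_llm))
```

In deployed systems the clinician reviews and edits the draft before it enters the patient record.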

EY & ACCA urge trustworthy AI with robust assessment frameworks

Techday NZ · 14 hours ago

EY and the Association of Chartered Certified Accountants (ACCA) have released a joint policy paper offering practical guidance aimed at strengthening confidence in artificial intelligence (AI) systems through effective assessments. The report, titled "AI Assessments: Enhancing Confidence in AI", examines the expanding field of AI assessments and their role in helping organisations ensure their AI technologies are well governed, compliant, and reliable. The paper is positioned as a resource for business leaders and policymakers amid rapid AI adoption across global industries.

Boosting trust in AI
According to the paper, comprehensive AI assessments address a pressing challenge for organisations: boosting trust in AI deployments. The report outlines how governance, conformity, and performance assessments can help businesses ensure their AI systems perform as intended, meet legal and ethical standards, and align with organisational objectives.

The guidance comes as recent research highlights an ongoing trust gap in AI. The EY Response AI Pulse survey found that 58% of consumers are concerned that companies are not holding themselves accountable for potential negative uses of the technology. This concern has underscored the need for greater transparency and assurance around AI applications.

Marie-Laure Delarue, EY's Global Vice-Chair, Assurance, expressed the significance of the current moment for AI: "AI has been advancing faster than many of us could have imagined, and it now faces an inflection point, presenting incredible opportunities as well as complexities and risks. It is hard to overstate the importance of ensuring safe and effective adoption of AI. Rigorous assessments are an important tool to help build confidence in the technology, and confidence is the key to unlocking AI's full potential as a driver of growth and prosperity."

She continued, "As businesses navigate the complexities of AI deployment, they are asking fundamental questions about the meaning and impact of their AI initiatives. This reflects a growing demand for trust services that align with EY's existing capabilities in assessments, readiness evaluations, and compliance."

Types of assessments
The report categorises AI assessments into three main areas: governance assessments, which evaluate the internal governance structures around AI; conformity assessments, which determine compliance with laws, regulations, and standards; and performance assessments, which measure AI systems against specific quality and performance metrics.

The paper provides recommendations for businesses and policymakers alike. It calls for business leaders to consider both mandatory and voluntary AI assessments as part of their corporate governance and risk management frameworks. For policymakers, it advocates clear definitions of assessment purposes, methodologies, and criteria, as well as support for internationally compatible assessment standards and market capacity-building.

Public interest and skills gap
Helen Brand, Chief Executive of ACCA, commented on the wider societal significance of trustworthy AI systems: "As AI scales across the economy, the ability to trust the technology is vital for the public interest. This is an area where we need to bridge skills gaps and build trust in the AI ecosystem as part of driving sustainable business. We look forward to collaborating with policymakers and others in this fascinating and important area."

The ACCA and EY guidance addresses several challenges related to the current robustness and reliability of AI assessments. It notes that well-specified objectives, clear assessment criteria, and professional, objective assessment providers are essential to meaningful scrutiny of AI systems.

Policy landscape
The publication coincides with ongoing changes in the policy environment on AI evaluation. The report references recent developments such as the AI Action Plan released by the Trump administration, which highlighted the importance of rigorous evaluations for defining and measuring AI reliability and performance, particularly in regulated sectors.

As AI technologies continue to proliferate across industries, the report argues that meaningful and standardised assessments could support the broader goal of safe and responsible AI adoption in both the private and public sectors. In outlining a potential way forward, the authors suggest both businesses and governments have roles to play in developing robust assessment frameworks that secure public confidence and deliver on the promise of emerging technologies.
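To make the report's third category concrete: a performance assessment ultimately reduces to scoring a system against predefined metrics and a pass threshold. A minimal sketch follows; the metric choices and the 0.9 threshold are illustrative, not figures from the EY/ACCA paper:

```python
# Sketch of a "performance assessment" in the paper's third category: score a
# classifier against predefined quality metrics and a pass threshold. The
# metrics and the 0.9 threshold are illustrative, not taken from the paper.

def performance_assessment(y_true: list[int], y_pred: list[int],
                           min_accuracy: float = 0.9) -> dict:
    """Compare predictions with ground truth and test an accuracy threshold."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return {
        "accuracy": accuracy,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "meets_threshold": accuracy >= min_accuracy,
    }

# Accuracy 0.8 misses the 0.9 threshold, so this system fails the assessment.
print(performance_assessment([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))
```

Real assessments would also fix the evaluation data, metric definitions, and acceptance criteria in advance, which is exactly the kind of well-specified objective the paper says assessments need.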

AI-driven DNS threats & malicious adtech surge worldwide

Techday NZ · 14 hours ago

Infoblox has published its 2025 DNS Threat Landscape Report, revealing increases in artificial intelligence-driven threats and widespread malicious adtech activity impacting organisations worldwide.

DNS exploits rising
The report draws on real-time analysis of more than 70 billion daily DNS queries across thousands of customer environments, providing data on how adversaries exploit DNS infrastructure to deceive users, evade detection, and undermine brand trust. Infoblox Threat Intel has identified over 660 unique threat actors and more than 204,000 suspicious domain clusters to date, with 10 new actors highlighted in the past year alone.

The findings detail how malicious actors are registering unprecedented numbers of domains, using automation to enable large-scale campaigns and circumvent traditional cyber defences. In the past 12 months, 100.8 million newly observed domains were identified, with 25.1% classed as malicious or suspicious by researchers. According to Infoblox, the vast majority of these threat-related domains (95%) were unique to a single customer environment, making it harder for the wider industry to detect and stop these threats.

Malicious adtech and evasive tactics
The analysis highlights the growing influence of malicious adtech, with 82% of customer environments reportedly querying domains associated with blacklisted advertising services. Malicious adtech schemes frequently rely on traffic distribution systems (TDS) to serve harmful content and mask the true nature of destination sites. Nearly 500,000 TDS domains were recorded within Infoblox networks over the year.

Attackers are also harnessing DNS misconfigurations and deploying advanced techniques such as AI-enabled deepfakes and high-speed domain rotation. These tactics allow adversaries to hijack existing domains or impersonate prominent brands for phishing, malware delivery, drive-by downloads, or scams such as fraudulent cryptocurrency investment schemes. TDS enables threats to be redirected or disguised rapidly, hindering detection and response efforts.

"This year's findings highlight the many ways in which threat actors are taking advantage of DNS to operate their campaigns, both in terms of registering large volumes of domain names and also leveraging DNS misconfigurations to hijack existing domains and impersonate major brands. The report exposes the widespread use of traffic distribution systems (TDS) to help disguise these crimes, among other trends security teams must look out for to stay ahead of attackers," said Dr. Renée Burton, head of Infoblox Threat Intel.

Infoblox notes that traditional forensic-based, post-incident detection, also termed a "patient zero" approach, has proven less effective as attackers increase their use of new infrastructure and frequently rotate domains. As threats emerge and evolve at pace, reactive techniques may leave organisations exposed before threats are fully understood or shared across the security industry.

AI, tunnelling and the threat intelligence gap
DNS is also being leveraged for tunnelling, data exfiltration, and command-and-control activity. The report documents daily detections of activity involving tools such as Cobalt Strike, Sliver, and custom-built malware, which typically require machine learning algorithms to identify due to their obfuscation methods. Infoblox Threat Intel's research suggests that domain clusters, groups of interrelated domains operated by the same actor, are a significant trend. During the past year, security teams uncovered new actors and observed the continued growth of domain sets used for malicious activities.

Proactive security recommended
The report advocates a shift towards preemptive protection and predictive threat intelligence, emphasising the limitations of relying solely on detection after the fact. The data indicates that with Infoblox's protective DNS solution, 82% of threat-related queries were blocked before they could have a harmful impact, suggesting that proactive monitoring and early intervention can help counter adversarial tactics.

Infoblox researchers argue that combining protective solutions with continuous monitoring of emerging threats is essential to giving security teams the resources and intelligence to disrupt malicious campaigns before significant damage occurs. The report brings together research insights from the past twelve months to map out attack patterns and equip organisations with up-to-date knowledge of DNS-based threats, with a particular focus on the evolving role of harmful adtech in the modern threat landscape.
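Infoblox's detection models are not public, but a toy heuristic in the spirit of the report's findings, flagging queries whose labels look machine-generated (high entropy, as in tunnelling or DGA traffic) or whose registered domain is newly observed, might look like the sketch below. The threshold and the newly-observed set are hypothetical:

```python
# Toy heuristic in the spirit of the report's findings: flag a DNS query if
# its subdomain looks machine-generated (high entropy, as in tunnelling or
# DGA traffic) or if its registered domain is newly observed. The threshold
# and the newly-observed set are hypothetical; Infoblox's models are not public.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character over the string's character frequencies."""
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def is_suspicious(qname: str, newly_observed: set[str],
                  entropy_threshold: float = 3.5) -> bool:
    labels = qname.lower().rstrip(".").split(".")
    registered = ".".join(labels[-2:])   # crude stand-in for proper eTLD+1 parsing
    subdomain = "".join(labels[:-2])
    high_entropy = bool(subdomain) and shannon_entropy(subdomain) > entropy_threshold
    return high_entropy or registered in newly_observed

nod = {"freshly-registered.example"}
print(is_suspicious("www.example.com", nod))                       # False
print(is_suspicious("a9x7q2kf0zjw3m1v8.tunnel.example.com", nod))  # True
```

Production protective DNS layers many such signals with machine learning and shared intelligence; a single-feature rule like this would be far too noisy on its own.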
