
AI Appreciation Day calls for ethical & human-centred innovation
Laura Ellis, Vice President of Data & AI at Rapid7, highlights the transformative reach of AI in the business landscape. "AI has completely changed how businesses operate. It streamlines processes and helps teams make smarter decisions, leading to better outcomes for customers," she said. Ellis stressed the importance of recognising the human effort behind these advancements, remarking, "It is important that every day, not just on AI Appreciation Day, we honour the people who tirelessly dedicate their time, knowledge, and drive to building and leveraging these technologies." She called for a responsible approach, urging that technology remain "human-centric, transparent, and ethical, so it can continue to drive meaningful impact."
The legal profession has not been immune to AI's influence, with substantial shifts in workflows and professional roles. David Fischl, a partner specialising in corporate and commercial law at Hicksons | Hunt & Hunt, noted that the sector is moving beyond generic AI-driven "chat-and-answer" tools towards highly tailored systems integrated into legal practice areas. At his firm, Fischl said, the adoption of specialised AI tools has transformed processes such as document review and chronology creation, streamlining previously time-intensive tasks.
This, he explained, allows junior lawyers to "spend more time on high-level legal reasoning earlier in their careers, building stronger lawyers faster" and has helped foster a new breed of hybrid professionals: "the 'AI strategy lawyer', a hybrid role of legal professionals who understand how to integrate AI into workflows to deliver real value without compromising on the legal expertise that clients trust."
Clients, according to Fischl, are reaping tangible benefits from AI deployment in legal services, such as quicker turnaround, enhanced offerings, and greater pricing predictability. He encouraged continued curiosity and engagement with emerging AI technologies, cautioning that the "real power of AI lies in its ability to augment and not replace the unique skills and judgment lawyers bring to their clients."
AI's dual edge is particularly sharp in the field of cybersecurity, where it is both a transformative tool for defenders and a potent asset for attackers. Fabio Fratucello, Field CTO Worldwide at CrowdStrike, described how the proliferation of AI is enabling adversaries to "automate social engineering, misinformation campaigns, and credential harvesting at unprecedented speed and scale." He cited CrowdStrike's own research, which found sophisticated attackers using large language models to conduct highly convincing phishing and business email compromise campaigns.
Yet, Fratucello remains optimistic about AI's potential to boost defensive capabilities. With teams overwhelmed by increasing alert volumes and a shortage of skilled analysts, "security teams must leverage AI to protect their organisations and move from reactive response to proactive threat disruption." He pointed to solutions like CrowdStrike's Charlotte AI Agentic Detection Triage, capable of autonomously validating and prioritising threats with a reported accuracy above 98%, freeing up analysts to focus on more complex threat detection and mitigation. Built with checks and balances, Charlotte AI "allows organisations to define how and when automated decisions are made, giving analysts full control to set thresholds, determine when human review is required, and maintain oversight." Fratucello suggested that AI Appreciation Day should inspire more organisations to embrace such technologies "to take back control, reduce burnout, and decisively shift the AI advantage back in their favour."
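In practice, the threshold-and-oversight pattern Fratucello describes can be pictured as a simple triage gate. The Python sketch below is purely illustrative: the scores, thresholds, and names are hypothetical and do not represent CrowdStrike's Charlotte AI or its API.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    description: str
    confidence: float  # 0.0-1.0 score from a detection model (hypothetical)

# Hypothetical policy knobs an organisation might set: auto-escalate
# high-confidence detections, auto-dismiss obvious noise, and route
# everything in between to a human analyst.
AUTO_ESCALATE_ABOVE = 0.98
AUTO_DISMISS_BELOW = 0.10

def triage(alert: Alert) -> str:
    """Return the action to take for an alert under the configured thresholds."""
    if alert.confidence >= AUTO_ESCALATE_ABOVE:
        return "auto-escalate"   # machine acts; analysts retain oversight
    if alert.confidence <= AUTO_DISMISS_BELOW:
        return "auto-dismiss"    # logged for audit rather than raised
    return "human-review"        # an analyst makes the call

for alert in [Alert("a1", "credential harvesting", 0.99),
              Alert("a2", "anomalous login hour", 0.55),
              Alert("a3", "benign scanner traffic", 0.02)]:
    print(alert.id, "->", triage(alert))
```

The point of the sketch is the division of labour: the organisation's thresholds, not the model alone, decide when a human must be in the loop, mirroring the "full control" Fratucello attributes to well-designed agentic tooling.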
Micah Heaton, Executive Director of Microsoft Product and Innovation Strategy at BlueVoyant, offered a philosophical perspective on the day's significance. "AI Appreciation Day isn't about machines. It's about us. It's about the choices we make at machine speed that still echo at human scale," he stated. Heaton underlined the ethical dimension of AI, insisting that responsibility "isn't a checkbox. It's the only thing standing between progress and catastrophe." He emphasised that the shaping of AI's trajectory is a collective responsibility, urging, "If we want AI to work in the right direction, we have to bring every voice to the table. We have to build with intention, wield it with moral clarity, and protect people with the same ferocity we protect their data."
As AI embeds deeper into the fabric of modern life, the commentaries from industry leaders converge on a common theme: responsible innovation. While AI accelerates efficiencies and creates new opportunities, its broader success relies on a commitment to ethics, transparency, human oversight, and inclusivity. AI Appreciation Day thus becomes not only an occasion to acknowledge technological progress, but a call to action for conscientious stewardship of the technologies shaping society's future.

Related Articles


Techday NZ - 5 hours ago
EY & ACCA urge trustworthy AI with robust assessment frameworks
EY and the Association of Chartered Certified Accountants (ACCA) have released a joint policy paper offering practical guidance aimed at strengthening confidence in artificial intelligence (AI) systems through effective assessments. The report, titled "AI Assessments: Enhancing Confidence in AI", examines the expanding field of AI assessments and their role in helping organisations ensure their AI technologies are well governed, compliant, and reliable. The paper is positioned as a resource for business leaders and policymakers amid rapid AI adoption across global industries.

Boosting trust in AI

According to the paper, comprehensive AI assessments address a pressing challenge for organisations: boosting trust in AI deployments. The report outlines how governance, conformity, and performance assessments can help businesses ensure their AI systems perform as intended, meet legal and ethical standards, and align with organisational objectives.

The guidance comes as recent research highlights an ongoing trust gap in AI. The EY Responsible AI Pulse survey found that 58% of consumers are concerned that companies are not holding themselves accountable for potential negative uses of the technology. This concern has underscored the need for greater transparency and assurance around AI applications.

Marie-Laure Delarue, EY's Global Vice-Chair, Assurance, expressed the significance of the current moment for AI: "AI has been advancing faster than many of us could have imagined, and it now faces an inflection point, presenting incredible opportunities as well as complexities and risks. It is hard to overstate the importance of ensuring safe and effective adoption of AI. Rigorous assessments are an important tool to help build confidence in the technology, and confidence is the key to unlocking AI's full potential as a driver of growth and prosperity."

She continued, "As businesses navigate the complexities of AI deployment, they are asking fundamental questions about the meaning and impact of their AI initiatives. This reflects a growing demand for trust services that align with EY's existing capabilities in assessments, readiness evaluations, and compliance."

Types of assessments

The report categorises AI assessments into three main areas: governance assessments, which evaluate the internal governance structures around AI; conformity assessments, which determine compliance with laws, regulations, and standards; and performance assessments, which measure AI systems against specific quality and performance metrics.

The paper provides recommendations for businesses and policymakers alike. It calls for business leaders to consider both mandatory and voluntary AI assessments as part of their corporate governance and risk management frameworks. For policymakers, it advocates clear definitions of assessment purposes, methodologies, and criteria, as well as support for internationally compatible assessment standards and market capacity-building.

Public interest and skills gap

Helen Brand, Chief Executive of ACCA, commented on the wider societal significance of trustworthy AI systems: "As AI scales across the economy, the ability to trust the technology is vital for the public interest. This is an area where we need to bridge skills gaps and build trust in the AI ecosystem as part of driving sustainable business. We look forward to collaborating with policymakers and others in this fascinating and important area."

The ACCA and EY guidance addresses several challenges related to the current robustness and reliability of AI assessments. It notes that well-specified objectives, clear assessment criteria, and professional, objective assessment providers are essential to meaningful scrutiny of AI systems.

Policy landscape

The publication coincides with ongoing changes in the policy environment on AI evaluation. The report references recent developments such as the AI Action Plan released by the Trump administration, which highlighted the importance of rigorous evaluations for defining and measuring AI reliability and performance, particularly in regulated sectors.

As AI technologies continue to proliferate across industries, the report argues that meaningful and standardised assessments could support the broader goal of safe and responsible AI adoption in both the private and public sectors. In outlining a potential way forward, the authors suggest both businesses and governments have roles to play in developing robust assessment frameworks that secure public confidence and deliver on the promise of emerging technologies.
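To give a concrete flavour of the third category, a performance assessment ultimately reduces to measuring a system against agreed quality metrics and acceptance thresholds. The short Python sketch below is illustrative only; the metric, threshold, and data are invented and are not drawn from the EY/ACCA paper.

```python
# Illustrative performance assessment: score a model's outputs against
# labelled data, then check each metric meets an agreed threshold.
def accuracy(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical acceptance criteria agreed between assessor and organisation.
THRESHOLDS = {"accuracy": 0.90}

def assess(predictions, labels):
    measured = {"accuracy": accuracy(predictions, labels)}
    return {
        metric: {"value": value, "passed": value >= THRESHOLDS[metric]}
        for metric, value in measured.items()
    }

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
labels      = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
print(assess(predictions, labels))  # accuracy 0.9 -> meets the 0.90 bar
```

Governance and conformity assessments, by contrast, examine processes and legal compliance rather than measurable outputs, so they are less amenable to this kind of automation.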


Techday NZ - 5 hours ago
AI-driven DNS threats & malicious adtech surge worldwide
Infoblox has published its 2025 DNS Threat Landscape Report, revealing increases in artificial intelligence-driven threats and widespread malicious adtech activity impacting organisations worldwide.

DNS exploits rising

The report draws on real-time analysis of more than 70 billion daily DNS queries across thousands of customer environments, providing data on how adversaries exploit DNS infrastructure to deceive users, evade detection, and undermine brand trust. Infoblox Threat Intel has identified over 660 unique threat actors and more than 204,000 suspicious domain clusters to date, with 10 new actors highlighted in the past year alone.

The findings detail how malicious actors are registering unprecedented numbers of domains, using automation to enable large-scale campaigns and circumvent traditional cyber defences. In the past 12 months, 100.8 million newly observed domains were identified, with 25.1% classed as malicious or suspicious by researchers. According to Infoblox, the vast majority of these threat-related domains (95%) were unique to a single customer environment, making it harder for the wider industry to detect and stop these threats.

Malicious adtech and evasive tactics

The analysis highlights the growing influence of malicious adtech, with 82% of customer environments reportedly querying domains associated with blacklisted advertising services. Malicious adtech schemes frequently rely on traffic distribution systems (TDS) to serve harmful content and mask the true nature of destination sites. Nearly 500,000 TDS domains were recorded within Infoblox networks over the year.

Attackers are also harnessing DNS misconfigurations and deploying advanced techniques such as AI-enabled deepfakes and high-speed domain rotation. These tactics allow adversaries to hijack existing domains or impersonate prominent brands for phishing, malware delivery, drive-by downloads, or scams such as fraudulent cryptocurrency investment schemes. TDS enables threats to be redirected or disguised rapidly, hindering detection and response efforts.

"This year's findings highlight the many ways in which threat actors are taking advantage of DNS to operate their campaigns, both in terms of registering large volumes of domain names and also leveraging DNS misconfigurations to hijack existing domains and impersonate major brands. The report exposes the widespread use of traffic distribution systems (TDS) to help disguise these crimes, among other trends security teams must look out for to stay ahead of attackers," said Dr. Renée Burton, head of Infoblox Threat Intel.

Infoblox notes that traditional forensic-based, post-incident detection - also termed a "patient zero" approach - has proven less effective as attackers increase their use of new infrastructure and frequently rotate domains. As threats emerge and evolve at pace, reactive techniques may leave organisations exposed before threats are fully understood or shared across the security industry.

AI, tunnelling and the threat intelligence gap

DNS is also being leveraged for tunnelling, data exfiltration, and command-and-control activity. The report documents daily detections of activity involving tools such as Cobalt Strike, Sliver, and custom-built malware, which typically require machine learning algorithms to identify due to their obfuscation methods. Infoblox Threat Intel's research suggests that domain clusters - groups of interrelated domains operated by the same actor - are a significant trend. During the past year, security teams uncovered new actors and observed the continued growth of domain sets used for malicious activities.

Proactive security recommended

The report advocates a shift towards preemptive protection and predictive threat intelligence, emphasising the limitations of relying solely on detection after the fact. The data indicates that with Infoblox's protective DNS solution, 82% of threat-related queries were blocked before they could have a harmful impact, suggesting that proactive monitoring and early intervention can help counter adversarial tactics. Infoblox researchers argue that combining protective solutions with continuous monitoring of emerging threats is essential to giving security teams the resources and intelligence they need to disrupt malicious campaigns before significant damage occurs.

The report brings together research insights from the past twelve months to map out attack patterns and equip organisations with up-to-date knowledge of DNS-based threats, with a particular focus on the evolving role of harmful adtech in the modern threat landscape.
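To make the protective DNS idea concrete, the following minimal Python sketch shows the general pattern the report describes: queries are checked against threat intelligence, such as a blocklist and a newly-observed-domains feed, before resolution, rather than investigated after an incident. All feeds, names, and timings here are hypothetical; this is not Infoblox's implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical threat-intelligence feeds; real protective DNS services
# maintain these at far greater scale and refresh them continuously.
BLOCKLISTED_DOMAINS = {"bad-adtech.example", "tds-redirector.example"}
NEWLY_OBSERVED = {"fresh-phish.example": datetime(2025, 7, 30)}

NOD_QUARANTINE = timedelta(days=7)  # distrust very recently observed domains

@dataclass
class Verdict:
    domain: str
    action: str  # "block", "quarantine", or "resolve"
    reason: str

def triage_query(domain: str, now: datetime) -> Verdict:
    """Decide what to do with a DNS query before resolving it."""
    if domain in BLOCKLISTED_DOMAINS:
        return Verdict(domain, "block", "matched threat-intel blocklist")
    first_seen = NEWLY_OBSERVED.get(domain)
    if first_seen is not None and now - first_seen < NOD_QUARANTINE:
        return Verdict(domain, "quarantine", "newly observed domain")
    return Verdict(domain, "resolve", "no indicators")

now = datetime(2025, 8, 1)
for d in ["bad-adtech.example", "fresh-phish.example", "example.org"]:
    v = triage_query(d, now)
    print(f"{v.domain}: {v.action} ({v.reason})")
```

Checking queries before resolution in this way is the sense in which protective DNS acts ahead of harm, rather than after a "patient zero" has already been compromised.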

RNZ News - 8 hours ago
Dire need for AI support in primary, intermediate schools, survey shows
Primary school children say using AI sometimes feels like cheating, and teachers warn their "Luddite" colleagues are "freaking out" about the technology. The insights come from an NZ Council for Education Research survey that warns primary and intermediate schools need urgent support for using artificial intelligence in the classroom. The council said its survey of 266 teachers and 147 pupils showed "a dire need" for guidance on best practice. It found teachers were experimenting with generative AI tools such as ChatGPT for tasks like lesson planning and personalising learning materials to match children's interests and skills, and many of their students were using AI too, though generally at home rather than in the classroom. But the survey also found most primary schools did not have AI policies. "Teachers often don't have the appropriate training, they are often using the free models that are more prone to error and bias, and there is a dire need for guidance on best practice for using AI in the primary classroom," report author David Coblentz said. Coblentz said schools needed national guidance, and students needed lessons in critical literacy so they understood the tools they were using and their in-built biases. He said in the meantime schools could immediately improve the quality of AI use, and teacher and student privacy, by avoiding free AI tools and using more reliable models. The report said most of the teachers who responded to the survey had noted mistakes in AI-generated information. Most believed less than a third of their pupils, or none at all, were using AI for learning, but 66 percent were worried their students might become too reliant on the technology. Most of the mostly Year 7-8 students surveyed in four schools had heard of AI, and less than half said they had never used it. Those who did use AI mostly did so outside of school. "Between one-eighth and one-half of users at each school said they asked AI to answer questions 'for school or fun' (12%-50%). Checking or fixing writing attracted moderate proportions everywhere (29%-45%). Smaller proportions used AI for idea generation on projects or homework (6%-32%) and for gaming assistance (12%-41%). Talking to AI 'like a friend' showed wide variation, from one in eight (12%) at Case A to nearly half (47%) at the all-girls' Case D," the survey report said. Across the four schools, between 55 and 72 percent agreed "Using AI sometimes feels like cheating", and between 38 and 74 percent agreed "Using AI too much can make it hard for kids to learn on their own". Roughly a quarter said they were better at using AI tools than the grown-ups they knew.
A NZ Council for Education Research survey of teachers and students found that there was "a dire need" for guidance on best practice for AI in schools. Photo: UnSplash/ Taylor Flowe Primary school children say using AI sometimes feels like cheating and teachers warn their "Luddite" colleagues are "freaking out" about the technology. The insights come from an NZ Council for Education Research survey that warns primary and intermediate schools need urgent support for using Artificial Intelligence in the classroom. The council said its survey of 266 teachers and 147 pupils showed "a dire need" for guidance on best practice. It found teachers were experimenting with generative AI tools such as ChatGPT for tasks like lesson planning and personalising learning materials to match children's interests and skills, and many of their students were using it too though generally at home rather than in the classroom. But the survey of teachers and also found most primary schools did not have AI policies. "Teachers often don't have the appropriate training, they are often using the free models that are more prone to error and bias, and there is a dire need for guidance on best practice for using AI in the primary classroom," report author David Coblentz said. Coblentz said schools needed national guidance and students needed lessons in critical literacy so they understood the tools they were using and their in-built biases. He said in the meantime schools could immediately improve the quality of AI use and teacher and student privacy by avoiding free AI tools and using more reliable AI. The report said most of the teachers who responded to the survey said they had noted mistakes in AI-generated information. Most believed less than a third of their pupils, or none at all, were using AI for learning but 66 percent were worried their students might become too reliant on the technology. Most of the mostly Year 7-8 students surveyed in four schools had heard of AI, and less than half said they had never used it. Those who did use AI mostly did so outside of school. "Between one-eighth and one-half of users at each school said they asked AI to answer questions "for school or fun" (12%-50%). Checking or fixing writing attracted moderate proportions everywhere (29%-45%). Smaller proportions used AI for idea generation on projects or homework (6%-32%) and for gaming assistance (12%-41%). Talking to AI "like a friend" showed wide variation, from one in eight (12%) at Case A to nearly half (47%) at the all-girls' Case D," the survey report said. Across the four schools, between 55 and 72 percent agreed "Using AI sometimes feels like cheating" and between 38 and 74 percent agreed "Using AI too much can make it hard for kids to learn on their own". Roughly a quarter said they were better at using AI tools than the grown-ups they knew. Sign up for Ngā Pitopito Kōrero , a daily newsletter curated by our editors and delivered straight to your inbox every weekday.