Clear laws needed as AI usage in health increases
Related Articles

RNZ News - an hour ago
International award-winning MOTAT exhibition Te Puawananga
MOTAT's Te Puawananga exhibition was recently named International Exhibition of the Year at the Museum and Heritage Awards in London. In its first year, it beat the Egyptian Museum in Cairo and Seattle's Museum of Pop Culture. Te Puawananga hopes to reconnect young people with science through a cultural lens and to tackle declining interest in science and technology by recognising and integrating mātauranga Māori and Western science. With tactile, hands-on activities, it hopes to make science relevant and inclusive while engaging children with STEM topics. The exhibit was designed in partnership with Māori educators and artists, and the awards judges described it as a "vibrant, culturally connected space that seamlessly blends science and Māori culture." Auckland-based multi-disciplinary artist, curator, sculptor and strong cultural and iwi advocate Pita Turei was heavily involved in the development of the exhibition. He speaks to Culture 101.

RNZ News - 3 hours ago
NZ's new AI strategy is long on 'economic opportunity' but short on managing ethical and social risk
By Andrew Lensen*

The government's newly unveiled National AI Strategy is all about what its title says: "Investing with Confidence". It tells businesses that Aotearoa New Zealand is open for AI use, and that our "light touch" approach won't get in their way. The question now is whether the claims made for AI by Minister of Science, Innovation and Technology Shane Reti - that it will help boost productivity and enable the economy to grow by billions of dollars - can be justified.

Generative AI - the kind powering ChatGPT, Copilot, and Google's video generator Veo 3 - is certainly earning money. In its latest funding round in April, OpenAI was valued at US$300 billion. Nvidia, which makes the hardware that powers AI technology, just became the first publicly traded company to surpass a $4 trillion market valuation. It'd be great if New Zealand could get a slice of that pie.

New Zealand doesn't have the capacity to build new generative AI systems, however. That takes tens of thousands of Nvidia's chips, costing many millions of dollars that only big tech companies or large nation states can afford. What New Zealand can do is build new systems and services around these models, either by fine-tuning them or using them as part of a bigger software system or service.

The government isn't offering any new money to help companies do this. Its AI strategy is about reducing barriers, providing regulatory guidance, building capacity, and ensuring adoption happens responsibly. But there aren't many barriers to begin with. The regulatory guidance contained in the strategy essentially says "we won't regulate". Existing laws are said to be "technology-neutral" and therefore sufficient. As for building capacity, the country's tertiary sector is more under-funded than ever, with universities cutting courses and staff. Humanities research into AI ethics is also ineligible for government funding as it doesn't contribute to economic growth.

The issue of responsible adoption is perhaps of most concern. The 42-page "Responsible AI Guidance for Businesses" document, released alongside the strategy, contains useful material on issues such as detecting bias, measuring model accuracy, and human oversight. But it is just that - guidance - and entirely voluntary.

This puts New Zealand among the most relaxed nations when it comes to AI regulation, along with Japan and Singapore. At the other end is the European Union, which enacted its comprehensive AI Act in 2024 and has stood fast against lobbying to delay its legislative rollout.

The relaxed approach is interesting in light of New Zealand being ranked third-to-last out of 47 countries in a recent survey of trust in AI. In another survey from last year, 66 percent of New Zealanders reported being nervous about the impacts of AI.

Some of the nervousness can be explained by AI being a new technology with well documented examples of inappropriate use, intentional or not. Deepfakes as a form of cyberbullying have become a major concern. Even the ACT Party, not generally in favour of more regulation, wants to criminalise the creation and sharing of non-consensual, sexually explicit deepfakes. Generative image, video, and music creation is reducing the demand for creative workers, even though it is their very work that was used to train the AI models.

But there are other, more subtle issues, too. AI systems learn from data. If that data is biased, then those systems will learn to be biased, too.
New Zealanders are right to be anxious about the prospect of private sector companies denying them jobs, entry to supermarkets, or a bank loan because of something in their pasts. Because modern deep learning models are so complex and impenetrable, it can be impossible to determine how an AI system made a decision. And what of the potential for AI to be used online to mislead voters and discredit the democratic process? As the New York Times has reported, this may already have occurred in at least 50 cases.

The strategy is essentially silent on all of these issues. It also doesn't mention Te Tiriti o Waitangi/Treaty of Waitangi. Even Google's AI summary tells me this is the nation's founding document, laying the groundwork for Māori and the Crown to coexist. AI, like any data-driven system, has the potential to disproportionately disadvantage Māori if it involves systems from overseas designed (and trained) for other populations. Allowing these systems to be imported and deployed in Aotearoa New Zealand in sensitive applications - healthcare or justice, for example - without any regulation or oversight risks worsening inequalities even further.

What's the alternative? The EU offers some useful answers. It has taken the approach of categorising AI uses based on risk: uses that pose an unacceptable risk are banned, high-risk uses face strict obligations, and lower-risk uses carry only light transparency requirements. This feels like a mature approach New Zealand might emulate. It wouldn't stymie productivity much - unless companies were doing something risky, in which case the 66 percent of New Zealanders who are nervous about AI might well agree it's worth slowing down and getting it right.

Andrew Lensen is a Senior Lecturer in Artificial Intelligence at Te Herenga Waka - Victoria University of Wellington. This story was originally published on The Conversation.


Newsroom - 3 hours ago
NZ can't afford to be careless with its AI strategy
Opinion: The Government's new strategy for AI was announced last week to a justifiably flat reception. As far as national-level policy goes, the document is severely lacking.

One of the main culprits is prominently displayed at the end of Science, Innovation and Technology Minister Shane Reti's foreword: 'This document was written with the assistance of AI.' For those with some experience of AI, this language is generally recognised to be a precursor to fairly unexceptional outputs. The minister's commitment to walking the talk on AI, as he says, could have been seen as admirable if the resulting output was not so clumsy, and did not carry so many of the hallmarks of AI-generated content.

To be blunt, the document is poorly written, badly structured, and under-researched. It cites eight documents in total, half of which are produced by industry – an amount of research suitable for a first-year university student. It makes no effort to integrate arguments or sources critical of AI, nor does it provide any balanced assessment. This same carelessness is exhibited in the web version of the document, which has scarcely been edited and includes a number of errors like 'gnerative AI' as opposed to generative AI.

It also contains very little actual strategy or targets. It reads more like a dossier from Meta, OpenAI or Anthropic, and is filled with just as much industry language. In short, it is entirely unsuitable to be the defining strategic document to guide New Zealand's engagement with what it accurately defines as 'one of the most significant technological opportunities of our time'. Especially not in a global climate where there is an ever-growing appreciation for the potential harms of AI, as seen in the growing number of class actions in the United States, or resources like the AI Incident Database. AI harm and job displacement are very real and important problems. Yet in the Strategy for AI, they are described as dystopian scenarios being used by the media to compound uncertainty.

The problem is not necessarily that AI was used to assist the production of the document; it is the extent to which it was used, and how. AI has a number of useful applications such as spellchecking, assisting with structure, and providing counter-points which can help further flesh out your writing. However, it is inappropriate to use generative AI to produce national-level policy. What is particularly alarming is that anyone with a ChatGPT licence and about a minute of spare time could very easily produce a document similar in content, tone and structure to the government's strategy.

Thankfully, bad policy can be improved, and hopefully this one eventually will be. But by far the most damning aspect of the strategy is the underlying notion that generative AI should have a key role in developing policy in New Zealand. There is an unappealing hubris in thinking that New Zealand's public servants, many of whom are phenomenally skilled, deeply caring, and out of work, could be replaced or meaningfully augmented by such a ham-handed and poorly thought-out application of generative AI.

Unfortunately, it is likely that the strategy's fast and efficient rollout will be seen by the Government as a success, regardless of the quality of the output. This will no doubt embolden it to continue to use generative AI as an aid in the production of policy in future. This is a real cause for concern, as it could be used to justify even more cuts to the public service and further undermine the function of our democracy.
Use of generative AI in the development of policy also raises fundamental questions as to what our public service is and should be. It would seem imprudent to employ our public servants on the basis of their care, knowledge, expertise and diligence, and then require them to delegate their work to generative AI. A public service defined solely by the pace at which it can deliver, as opposed to the quality of that delivery, is at best antithetical to the goals of good government.