Statement On AI In Universities From Aotearoa Communication & Media Scholars Network

Scoop · a day ago
We speak as a network of Aotearoa academics working in the inter-disciplines of Communication and Media Studies across our universities. Among us we have shared expertise in the political, social and economic impacts of commercially distributed and circulated generative artificial intelligence ('AI') in our university workplaces. While there is a tendency in our universities to be resigned to AI as an unstoppable and unquestionable technological force, our aim is to level the playing field and promote open, critical and democratic debate. With this in mind, we make the following points:
For universities…
· AI is not an inevitable technological development which must be incorporated into higher education; rather, it is the result of particular techno-capitalist ventures, a context which needs to be recognised and considered;
· AI, as a corporate product of private companies such as OpenAI, Google, etc., encroaches on the public role of the university and its role as critic and conscience, and marginalises voices which might critique business interests;
For researchers…
· AI impedes rather than supports productive intellectual work because it erodes important critical thinking skills; instead, it devolves human scholarly work and critical engagement with ideas, elements vital to our cultural and social life, to software that produces 'ready-made', formulaic and backward-looking 'results' that do not advance knowledge;
· AI promotes an unethical, reckless approach to research which produces 'hallucinations' and over-valorises disruption for its own sake rather than supporting quality research;
· AI normalises industrial-scale theft of intellectual property, as our written work is fed into AI datasets largely without citation or compensation;
· AI limits the productivity of academic staff by requiring them to invent new forms of assessment that subvert AI, to police students' use of AI, or to assess lengthy 'chat logs', rather than engage with students in activities and assessments that require deep critical thinking and the sharing, questioning and articulation of ideas with peers;
For students…
· AI tools create anxiety for students; some are falsely accused of using generative AI when they have not used it, while others are stressed that it could happen to them;
· AI tools such as ChatGPT are contributing to mental health crises and delusions in various ways; promoting the use of generative AI in academic contexts is thus unethical, particularly when considering students and the role of universities in pastoral care;
· AI thus undermines the fundamental relationships between teacher and student, academics and administration, and the university and the community by fostering an environment of distrust;
For Aotearoa New Zealand…
· AI clashes with Te Tiriti obligations around data sovereignty and raises the threat of data colonialism in relation to te reo itself;
· AI is devastating for the environment in terms of energy and water use and the extraction of natural resources needed for the processors that AI requires.
Signed by:
Rosemary Overell, Senior Lecturer, Media, Film & Communications Programme, The University of Otago
Olivier Jutel, Lecturer, Media, Film & Communications Programme, The University of Otago
Emma Tennent, Senior Lecturer, Media & Communication, Te Herenga Waka Victoria University of Wellington
Rachel Billington, Lecturer, Media, Film & Communications Programme, The University of Otago
Brett Nicholls, Senior Lecturer, Media, Film & Communications Programme, The University of Otago
Yuki Watanabe, Lecturer, Media, Film & Communications Programme, The University of Otago
Sy Taffel, Senior Lecturer, Media Studies Programme, Massey University
Leon Salter, Senior Lecturer, Communications Programme, University of Auckland
Angela Feekery, Senior Lecturer, Communications Programme, Massey University
Ian Huffer, Senior Lecturer, Media Studies Programme, Massey University
Pansy Duncan, Senior Lecturer, Media Studies Programme, Massey University
Kevin Veale, Senior Lecturer, Media Studies Programme, Massey University
Peter A. Thompson, Associate Professor, Media & Communication Programme, Te Herenga Waka Victoria University of Wellington
Nicholas Holm, Associate Professor, Media Studies Programme, Massey University
Sean Phelan, Associate Professor, Massey University
Yuan Gong, Senior Lecturer, Media Studies Programme, Massey University
Chris McMillan, Teaching Fellow, Sociology Programme, University of Auckland
Cherie Lacey, Researcher, Centre for Addiction Research, University of Auckland
Thierry Jutel, Associate Professor, Film, Te Herenga Waka Victoria University of Wellington
Max Soar, Teaching Fellow, Political Communication, Te Herenga Waka Victoria University of Wellington
Lewis Rarm, Lecturer, Media and Communication, Te Herenga Waka Victoria University of Wellington
Tim Groves, Senior Lecturer, Film, Te Herenga Waka Victoria University of Wellington
Valerie Cooper, Lecturer, Media and Communication, Te Herenga Waka Victoria University of Wellington
Wayne Hope, Professor, Faculty of Design & Creative Technologies, Auckland University of Technology
Greg Treadwell, Senior Lecturer in Journalism, School of Communication Studies, Auckland University of Technology
Christina Vogels, Senior Lecturer, Critical Media Studies, School of Communication Studies, Auckland University of Technology

Related Articles

Sensitive data exposure rises with employee use of GenAI tools

Techday NZ · a day ago

Harmonic Security has released its quarterly analysis, finding that a significant proportion of the data shared with Generative AI (GenAI) tools and AI-enabled SaaS applications by employees contains sensitive information. The analysis was conducted on a dataset comprising 1 million prompts and 20,000 files submitted to 300 GenAI tools and AI-enabled SaaS applications between April and June. According to the findings, 22% of files (4,400 in total) and 4.37% of prompts (43,700 in total) included sensitive data. The categories of sensitive data encompassed source code, access credentials, proprietary algorithms, merger and acquisition (M&A) documents, customer or employee records, and internal financial information.

Use of new GenAI tools

The data highlights that in the second quarter alone, organisations on average saw employees begin using 23 previously unreported GenAI tools. This expanding variety of tools increases the administrative load on security teams, who are required to vet each tool to ensure it meets security standards. A notable proportion of AI tool use occurs through personal accounts, which may be unsanctioned or lack sufficient safeguards. Almost half (47.42%) of sensitive uploads to Perplexity were made via standard, non-enterprise accounts. The numbers were lower for other platforms, with 26.3% of sensitive data entering ChatGPT through personal accounts, and just 15% for Google Gemini.

Data exposure by platform

Analysis of sensitive prompts identified ChatGPT as the most common origin point in Q2, accounting for 72.6%, followed by Microsoft Copilot with 13.7%, Google Gemini at 5.0%, Claude at 2.5%, Poe at 2.1%, and Perplexity at 1.8%. Code leakage represented the most prevalent form of sensitive data exposure, particularly within ChatGPT, Claude, DeepSeek, and Baidu Chat.

File uploads and risks

The report found that, on average, organisations uploaded 1.32GB of files in the second quarter, with PDFs making up approximately half of all uploads. Of these files, 21.86% contained sensitive data. The concentration of sensitive information was higher in files than in prompts. For example, files accounted for 79.7% of all stored credit card exposure incidents, 75.3% of customer profile leaks, and 68.8% of employee personally identifiable information (PII) incidents. Files also accounted for 52.6% of exposure volume related to financial projections.

Less visible sources of risk

GenAI risk does not only arise from well-known chatbots. Increasingly, regular SaaS tools that integrate large language models (LLMs) - often without clear labelling as GenAI - are becoming sources of risk as they access and process sensitive information. Canva was reportedly used for documents containing legal strategy, M&A planning, and client data. Replit was involved with proprietary code and access keys, while Grammarly and Quillbot edited contracts, client emails, and internal legal content.

International exposure

Use of Chinese GenAI applications was cited as a concern. The study found that 7.95% of employees in the average enterprise engaged with a Chinese GenAI tool, leading to 535 distinct sensitive exposure incidents. Within these, 32.8% related to source code, access credentials, or proprietary algorithms, 18.2% involved M&A documents and investment models, 17.8% exposed customer or employee PII, and 14.4% contained internal financial data.
Preventative measures

"The good news for Harmonic Security customers is that this sensitive customer data, personally identifiable information (PII), and proprietary file contents never actually left any customer tenant, it was prevented from doing so. But had organizations not had browser based protection in place, sensitive information could have ended up training a model, or worse, in the hands of a foreign state. AI is now embedded in the very tools employees rely on every day and in many cases, employees have little knowledge they are exposing business data," said Alastair Paterson, Chief Executive Officer and Co-founder of Harmonic Security, referencing the protections offered to customers and the wider risks posed by the pervasive nature of embedded AI within workplace tools.

Harmonic Security advises enterprises to seek visibility into all tool usage - including tools available on free tiers and those with embedded AI - to monitor the types of data being entered into GenAI systems, and to enforce context-aware controls at the data level.

The recent analysis utilised the Harmonic Security Browser Extension, which records usage across SaaS and GenAI platforms and sanitises the information for aggregate study. Only anonymised and aggregated data from customer environments was used in the analysis.
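As a rough illustration of what a "context-aware control at the data level" might look like, the sketch below shows a minimal pre-submission filter that scans outbound prompt text for obvious sensitive patterns before it reaches a GenAI tool. This is not Harmonic Security's product or API; the pattern set, category names and blocking behaviour are illustrative assumptions only.

```python
import re

# Minimal, hypothetical pre-submission filter (illustrative only, not
# Harmonic Security's implementation): scan text bound for a GenAI tool
# and block it if it appears to contain sensitive data. Real DLP tooling
# covers far more data types and typically redacts rather than blocks.

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the sensitive-data categories detected in `text`."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def gate_submission(text: str) -> str:
    """Raise if anything sensitive is found; otherwise pass the text through."""
    hits = scan_prompt(text)
    if hits:
        raise ValueError(f"blocked: sensitive data detected ({', '.join(hits)})")
    return text

if __name__ == "__main__":
    risky = "Deploy with key AKIAABCDEFGHIJKLMNOP, card on file 4111 1111 1111 1111"
    print(scan_prompt(risky))  # ['credit_card', 'aws_access_key']
```

The design point, per the report's advice, is simply that inspection has to happen before data leaves the tenant; in practice such a check would sit in a browser extension or egress proxy rather than in the calling application.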

Exclusive: Google's John Hultquist warns cyber attackers are getting younger & faster

Techday NZ · 2 days ago

Children and teenagers are behind some of the most aggressive and profitable cyberattacks in the world, and many are getting away with it because they know they're unlikely to face serious consequences. It comes as John Hultquist, Chief Analyst at Google's Threat Intelligence Group, spoke with TechDay exclusively to reveal who exactly is behind these attacks.

"We're talking tens of millions - if not hundreds of millions - of dollars that these kids are making," Hultquist said. "There's clearly a financial motive, but it's also about reputation. They feed off the praise they get from peers in this subculture."

The average cybercriminal today is not a shadowy figure backed by a government agency, but often a teenager with a high tolerance for risk and little fear of repercussions. And according to Hultquist, that combination is proving incredibly difficult for law enforcement to counter.

"There's no deterrent," he said. "They know they're unlikely to face serious consequences, and they exploit that. One reason I wouldn't do cybercrime - aside from the ethical one - is I don't want to go to jail. These kids know they probably won't."

His concern is echoed by Mandiant Consulting's latest global data. In 2024, 55% of cyberattacks were financially motivated, the majority involving ransomware or extortion. Mandiant also observed that teen-driven groups like UNC3944 (aka Scattered Spider) are behind many of the most damaging breaches, often relying on stolen credentials and social engineering to bypass defences.

"Younger actors are willing to cross lines even the Russian criminals won't - threatening families, for example," Hultquist said. "They don't worry about norms outside their subculture. Inside their world, they're being praised."

Even when authorities know who is behind an attack, bringing them to justice is rarely fast. "Building a case takes years. In the meantime, they can do serious damage," he said.

The urgency is underscored by the pace at which attackers now move. According to Mandiant, the median global dwell time - the time it takes to detect an intruder - has dropped to just 11 days, and in ransomware cases it is often as little as six days. More than 56% of ransomware attacks are discovered within a week, showing just how rapidly these operations unfold.

Though many of these actors operate independently, some work in the blurred space between criminal enterprises and state-sanctioned campaigns. Hultquist explained that governments - particularly in Russia and Iran - often outsource cyber operations to criminal groups, giving them protection in exchange for service. "It's a Faustian bargain," he said. "The government lets them continue their criminal activity as long as they're also doing work on its behalf."

Google's acquisition of Mandiant in 2022 has enabled Hultquist and his team to monitor global threats more effectively by combining Google's in-house security team with Mandiant's threat intelligence capabilities. This merger formed the Google Threat Intelligence Group, which Hultquist described as a "juggernaut". "We've got great visibility on threats all over the world," he said. "We get to see the threats targeting Google users."

That level of access and scale has allowed Google's team to take cyber defence to unprecedented levels. In one recent case, they used an AI model to uncover and neutralise a zero-day vulnerability before attackers could use it. "It literally found the zero-day," Hultquist said. "The adversary was preparing their attack, and we shut it down. It doesn't get any better than that."

AI is becoming both an asset and a threat. While Google uses it to pre-emptively defend systems, attackers are beginning to leverage it to enhance their own capabilities. Fake images, videos, and text have long been used in phishing and disinformation campaigns, but Hultquist said the next phase is far more concerning. "We've seen malware that calls out to AI to write its own commands on the fly," he said. "That makes it harder to detect because the commands are always changing."

He warned that AI could soon automate entire intrusions, allowing cybercriminals to break into networks, escalate privileges, and deploy ransomware faster than defenders can respond. "If someone can move through your network at machine speed, they might ransom you before you even know what's happening," he said. "Your response window gets smaller and smaller."

As attackers evolve, many defenders still rely on outdated mental models, particularly when it comes to cloud security. "People are still thinking like they're defending old-school, on-prem systems," Hultquist said. "One of the biggest problems in cloud is identity - especially third-party access. That's where your crown jewels might be, and you don't always have full control."

And while some worry about cyber threats to governments, Hultquist said the private sector is often the true target. "If a country retaliates against the Five Eyes, they're not going after military or intelligence," he said. "They'll go after privately held critical infrastructure. That's always been the asymmetrical advantage."

Despite the constant evolution of threats, Hultquist said progress has been made on both sides. He recalled the early days of Chinese state-backed attacks, where errors in spelling and grammar made their emails laughable - and traceable. "We used to print them out and tack them to our cubicle walls," he said. "Now, they're incredibly sophisticated. But the reason they've improved is because we've gotten better. Our defences have evolved."

And according to Hultquist, that cat-and-mouse game won't be ending anytime soon. "We're not fighting the laws of physics like safety engineers," Hultquist said. "Our adversaries adapt. If we fix everything, they'll just change to overcome it."
