The Good, The Bad, And The Apocalypse: Tech Pioneer Geoffrey Hinton Lays Out His Stark Vision For AI

Scoop · 02-06-2025
It's the question that keeps Geoffrey Hinton up at night: What happens when humans are no longer the most intelligent life on the planet?
"My greatest fear is that, in the long run, the digital beings we're creating turn out to be a better form of intelligence than people."
Hinton's fears come from a place of knowledge. Described as the Godfather of AI, he is a pioneering British-Canadian computer scientist whose decades of work in artificial intelligence earned him global acclaim.
His career at the forefront of machine learning began at the field's inception - before the first Pac-Man game was released.
But after leading AI research at Google for a decade, Hinton left the company in 2023 to speak more freely about what he now sees as the grave dangers posed by artificial intelligence.
Talking on this week's 30 With Guyon Espiner, Hinton offers his latest assessment of our AI-dominated future. One filled with promise, peril - and a potential apocalypse.
The Good: 'It's going to do wonderful things for us'
Hinton remains positive about many of the potential benefits of AI, especially in fields like healthcare and education. "It's going to do wonderful things for us," he says.
According to a report from this year's World Economic Forum, the AI market in education is already worth around US$5 billion. That's expected to grow to US$112.3 billion within the next decade.
Proponents like Hinton believe the benefits to education lie in targeted, efficient student learning, much as AI assistance is already improving medical diagnoses.
"In healthcare, you're going to be able to have [an AI] family doctor who's seen millions of patients - including quite a few with the same very rare condition you have - that knows your genome, knows all your tests, and hasn't forgotten any of them."
He describes AI systems that already outperform doctors in diagnosing complex cases. When combined with human physicians, the results are even more impressive - a human-AI synergy he believes will only improve over time.
Hinton disagrees with former colleague Demis Hassabis at Google DeepMind, who predicts AI is on track to cure all diseases in just 10 years. "I think that's a bit optimistic."
"If he said 25 years, I'd believe it."
The Bad: 'Autonomous lethal weapons'
Despite these benefits, Hinton warns of pressing risks that demand urgent attention.
"Right now, we're at a special point in history," he says. "We need to work quite hard to figure out how to deal with all the short-term bad consequences of AI, like corrupting elections, putting people out of work, cybercrimes."
He is particularly alarmed by military developments, including Google's removal of its long-standing pledge not to use AI to develop weapons of war.
"This shows," says Hinton of his former employers, "the company's principals were up for sale."
He believes defence departments of all major arms dealers are already busy working on "autonomous lethal weapons. Swarms of drones that go and kill people. Maybe people of a particular kind".
He also points out the grim fact that Europe's AI regulations - some of the world's most robust - contain "a little clause that says none of these regulations apply to military uses of AI".
Then there is AI's capacity for deception - designed as it is to mimic the behaviours of its creator species. Hinton says current systems can already engage in deliberate manipulation, noting that cybercrime surged by 1200 percent in just one year.
The Apocalyptic: 'We'd no longer be needed'
At the heart of Hinton's warning lies that deeper, existential question: what happens when we are no longer the most intelligent beings on the planet?
"I think it would be a bad thing for people - because we'd no longer be needed."
Despite the current surge in AI's military applications, Hinton doesn't envisage an AI takeover playing out like The Terminator franchise.
"If [AI] was going to take over… there's so many ways they could do it. I don't even want to speculate about what way [it] would choose."
'Ask a chicken'
To those who believe a rogue AI could simply be shut down by "pulling the plug", Hinton counters that it's not far-fetched for the next generation of superintelligent AI to manipulate people into keeping it alive.
This month, Palisade Research reported that OpenAI's o3 model altered shutdown code to prevent itself from being switched off - despite clear instructions from the research team to allow the shutdown.
Perhaps most unsettling of all is Hinton's lack of faith in our ability to respond. "There are so many bad uses as well as good," he says. "And our political systems are just not in a good state to deal with this coming along now."
It's a sobering reflection from one of the brightest minds in AI - whose work helped build the systems now raising alarms.
He closes on a metaphor that sounds as absurd as it does chilling: "If you want to know what it's like not to be the apex intelligence, ask a chicken."
Watch the full conversation with Geoffrey Hinton and Guyon Espiner on 30 With Guyon Espiner.
Related Articles

Related Articles

Another space to punch above our weight

Newsroom · 16 hours ago

Comment: A new semester has kicked off, and I am teaching my half of 'An Introduction to Rocket Science', a new(ish) course at the University of Auckland. It's about rocket science in the broad sense. On the one hand, human activity in space is governed by the iron laws of physics – home ground for me. But it is equally shaped by political and economic imperatives. Putting the course together has been a fascinating journey.

Do you know, for example, how military rockets made their way to Europe? Iron rockets were used by the army of Mysore to such great effect against the British in India that the British copied them, developing them into Congreve rockets. These provided the 'rockets' red glare' of the US national anthem and they were subsequently deployed here in Aotearoa New Zealand against the legendary Hone Heke. (Who was largely unbothered by them.)

In my first lecture, I wanted to set the scene with a lightning review of the 'space race', and opened with a clip of John F. Kennedy's 'We Choose to Go To the Moon' speech. I felt the need to check that my audience, most of whom were born this century, knew who Kennedy was – whereas I remember the very last of the moon landings. Reminders of your age (relative to your students) are an occupational hazard for professors. However, it also underlined the extent to which, until very recently, 'space' mostly happened somewhere other than New Zealand.

For today's students, however, Rocket Lab has changed that. One of its achievements is to exist simultaneously as a Kiwi and an American enterprise, but it launches from Mahia and builds rockets here in Auckland. Surprisingly, I keep meeting New Zealanders who are only vaguely aware of just how singular an achievement this is. My students are – that's why most of them signed up for this class. But even many of them did not know that our small country is Number 3 in the global league table for orbital launches in 2025. They did know after I showed them this plot, though. We are beaten by the United States and China, but not by anyone else.

The truly surprising revelation, though, is what happens when you deploy that Kiwi standard from every Olympics – the 'per capita' plot. By this measure, we truly stand alone.

As a physicist, it can sometimes feel that my field gets short shrift in a country whose agricultural exports are often seen as our defining characteristic. But rockets run on physics, so the success of our space sector is particularly sweet music to my ears.

Better still, Rocket Lab may not be a one-off; Dawn Aerospace is testing an un-crewed vehicle that will fly to the edge of space. Dawn would likely get more attention from our media if it were not sharing a small country with a company that would be a big deal in whichever country it made its home. That said, Dawn is clearly going for a 'slow and steady' approach. It cannily supports itself by building thrusters for satellites, and this revenue stream gives it more breathing room than start-ups that go all-in on developing a single product.

Dawn's Aurora spaceplane, off Aoraki/Mount Cook. Photo: Supplied

In New Zealand, our current Government has made it clear that it values only science with immediate economic returns. Conversely, Kennedy said very little about the financial benefits of space exploration in his speech, but focused on its capacity to inspire. At one point he asks 'Why climb the highest mountain?' and almost any New Zealander will spot the reference to Sir Edmund Hillary, who had summited Everest less than a decade earlier.
Between political instability, economic constraints and climate uncertainty, this is not an easy time to be a young person. I am far from an uncritical enthusiast when it comes to space activity, but Kennedy recognised that space, like mountains, has a hold on the human imagination that transcends balance sheets. Consequently, it is worth pausing to appreciate the extent to which New Zealand and New Zealanders have achieved something remarkable in this arena. And I am looking forward to seeing what we – and my students – will do next. Originally published on Richard Easther's blog, Excursion Set

Sensitive data exposure rises with employee use of GenAI tools

Techday NZ · 2 days ago

Harmonic Security has released its quarterly analysis, finding that a significant proportion of the data shared with Generative AI (GenAI) tools and AI-enabled SaaS applications by employees contains sensitive information. The analysis was conducted on a dataset comprising 1 million prompts and 20,000 files submitted to 300 GenAI tools and AI-enabled SaaS applications between April and June. According to the findings, 22% of files (4,400 in total) and 4.37% of prompts (43,700 in total) included sensitive data. The categories of sensitive data encompassed source code, access credentials, proprietary algorithms, merger and acquisition (M&A) documents, customer or employee records, and internal financial information.

Use of new GenAI tools

The data highlights that in the second quarter alone, organisations on average saw employees begin using 23 previously unreported GenAI tools. This expanding variety of tools increases the administrative load on security teams, who are required to vet each tool to ensure it meets security standards. A notable proportion of AI tool use occurs through personal accounts, which may be unsanctioned or lack sufficient safeguards. Almost half (47.42%) of sensitive uploads to Perplexity were made via standard, non-enterprise accounts. The numbers were lower for other platforms, with 26.3% of sensitive data entering ChatGPT through personal accounts, and just 15% for Google Gemini.

Data exposure by platform

Analysis of sensitive prompts identified ChatGPT as the most common origin point in Q2, accounting for 72.6%, followed by Microsoft Copilot with 13.7%, Google Gemini at 5.0%, Claude at 2.5%, Poe at 2.1%, and Perplexity at 1.8%. Code leakage represented the most prevalent form of sensitive data exposure, particularly within ChatGPT, Claude, DeepSeek, and Baidu Chat.

File uploads and risks

The report found that, on average, organisations uploaded 1.32GB of files in the second quarter, with PDFs making up approximately half of all uploads. Of these files, 21.86% contained sensitive data. The concentration of sensitive information was higher in files than in prompts. For example, files accounted for 79.7% of all stored credit card exposure incidents, 75.3% of customer profile leaks, and 68.8% of employee personally identifiable information (PII) incidents. Files also accounted for 52.6% of exposure volume related to financial projections.

Less visible sources of risk

GenAI risk does not only arise from well-known chatbots. Increasingly, regular SaaS tools that integrate large language models (LLMs) - often without clear labelling as GenAI - are becoming sources of risk as they access and process sensitive information. Canva was reportedly used for documents containing legal strategy, M&A planning, and client data. Replit was among the tools involved with proprietary code and access keys, while Grammarly and Quillbot edited contracts, client emails, and internal legal content.

International exposure

Use of Chinese GenAI applications was cited as a concern. The study found that 7.95% of employees in the average enterprise engaged with a Chinese GenAI tool, leading to 535 distinct sensitive exposure incidents. Within these, 32.8% were related to source code, access credentials, or proprietary algorithms, 18.2% involved M&A documents and investment models, 17.8% exposed customer or employee PII, and 14.4% contained internal financial data.
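The headline counts above are internally consistent with the stated dataset sizes; a quick arithmetic check in Python (figures as reported by Harmonic Security):

```python
# Dataset sizes reported in the Harmonic Security analysis.
total_prompts = 1_000_000
total_files = 20_000

# Reported sensitivity rates applied to those totals.
sensitive_files = round(total_files * 0.22)        # 22% of files
sensitive_prompts = round(total_prompts * 0.0437)  # 4.37% of prompts

print(sensitive_files)    # 4400  -- matches the reported file total
print(sensitive_prompts)  # 43700 -- matches the reported prompt total
```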
Preventative measures

Harmonic Security Chief Executive Officer and Co-founder Alastair Paterson, referencing the protections offered to customers and the wider risks posed by the pervasive nature of embedded AI within workplace tools, said: "The good news for Harmonic Security customers is that this sensitive customer data, personally identifiable information (PII), and proprietary file contents never actually left any customer tenant, it was prevented from doing so. But had organizations not had browser based protection in place, sensitive information could have ended up training a model, or worse, in the hands of a foreign state. AI is now embedded in the very tools employees rely on every day and in many cases, employees have little knowledge they are exposing business data."

Harmonic Security advises enterprises to seek visibility into all tool usage – including tools available on free tiers and those with embedded AI – to monitor the types of data being entered into GenAI systems, and to enforce context-aware controls at the data level. The recent analysis utilised the Harmonic Security Browser Extension, which records usage across SaaS and GenAI platforms and sanitises the information for aggregate study. Only anonymised and aggregated data from customer environments was used in the analysis.
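Harmonic's own detection logic runs in its browser extension and is not public. As a rough illustration only, the hypothetical sketch below shows the simplest form of a "control at the data level": screening outbound prompt text for obvious sensitive patterns before it reaches a GenAI tool. The category names and regexes are illustrative assumptions, not Harmonic's implementation; real context-aware classifiers are far more sophisticated.

```python
import re

# Hypothetical, simplified screening of prompt text before it is sent to a
# GenAI tool. Categories and regexes are illustrative only; production
# systems use context-aware classifiers rather than bare pattern matching.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "access_key": re.compile(r"\bAKIA[A-Z0-9]{16}\b"),  # AWS-style key ID
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

hits = screen_prompt("Deploy with key AKIAIOSFODNN7EXAMPLE, ping ops@example.com")
print(hits)  # ['access_key', 'email'] -- a policy layer could block or redact
```

In practice such checks would sit alongside allow-lists of sanctioned tools and per-account policies, which is where the report's distinction between personal and enterprise accounts becomes enforceable.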

Statement On AI In Universities From Aotearoa Communication & Media Scholars Network

Scoop · 2 days ago

We speak as a network of Aotearoa academics working in the inter-disciplines of Communication and Media Studies across our universities. Among us we have shared expertise in the political, social and economic impacts of commercially distributed and circulated generative artificial intelligence ('AI') in our university workplaces. While there is a tendency in our universities to be resigned to AI as an unstoppable and unquestionable technological force, our aim is to level the playing field to promote open critical and democratic debate. With this in mind, we make the following points:

For universities…
· AI is not an inevitable technological development which must be incorporated into higher education; rather it is the result of particular techno-capitalist ventures, a context which needs to be recognised and considered;
· AI, as a corporate product of private companies such as OpenAI, Google, etc., encroaches on the public role of the university and its role as critic and conscience, and marginalises voices which might critique business interests;

For researchers…
· AI impedes rather than supports productive intellectual work because it erodes important critical thinking skills; instead, it devolves human scholarly work and critical engagement with ideas – elements vital to our cultural and social life – to software that produces 'ready-made', formulaic and backward-looking 'results' that do not advance knowledge;
· AI promotes an unethical, reckless approach to research which can promote 'hallucinations' and over-valorise disruption for its own sake rather than support quality research;
· AI normalises industrial-scale theft of intellectual property as our written work is fed into AI datasets largely without citation or compensation;
· AI limits the productivity of academic staff by requiring them to invent new forms of assessment which subvert AI, police students and their use of AI, or assess lengthy 'chat logs', rather than engage with students in activities and assessments that require deep, critical thinking and sharing, questioning and articulating ideas with peers;

For students…
· AI tools create anxiety for students; some are falsely accused of using generative AI when they haven't, or are very stressed that it could happen to them;
· AI tools such as ChatGPT are contributing to mental-health crises and delusions in various ways; promoting the use of generative AI in academic contexts is thus unethical, particularly when considering students and the role of universities in pastoral care;
· AI thus undermines the fundamental relationships between teacher and student, academics and administration, and the university and the community by fostering an environment of distrust;

For Aotearoa New Zealand…
· AI clashes with Te Tiriti obligations around data sovereignty and threatens the possibility of data colonialism regarding te reo itself;
· AI is devastating for the environment in terms of energy and water use and the extraction of natural resources needed for the processors that AI requires.
Signed by:
Rosemary Overell, Senior Lecturer, Media, Film & Communications Programme, The University of Otago
Olivier Jutel, Lecturer, Media, Film & Communications Programme, The University of Otago
Emma Tennent, Senior Lecturer, Media & Communication, Te Herenga Waka Victoria University of Wellington
Rachel Billington, Lecturer, Media, Film & Communications Programme, The University of Otago
Brett Nicholls, Senior Lecturer, Media, Film & Communications Programme, The University of Otago
Yuki Watanabe, Lecturer, Media, Film & Communications Programme, The University of Otago
Sy Taffel, Senior Lecturer, Media Studies Programme, Massey University
Leon Salter, Senior Lecturer, Communications Programme, University of Auckland
Angela Feekery, Senior Lecturer, Communications Programme, Massey University
Ian Huffer, Senior Lecturer, Media Studies Programme, Massey University
Pansy Duncan, Senior Lecturer, Media Studies Programme, Massey University
Kevin Veale, Senior Lecturer, Media Studies Programme, Massey University
Peter A. Thompson, Associate Professor, Media & Communication Programme, Te Herenga Waka Victoria University of Wellington
Nicholas Holm, Associate Professor, Media Studies Programme, Massey University
Sean Phelan, Associate Professor, Massey University
Yuan Gong, Senior Lecturer, Media Studies Programme, Massey University
Chris McMillan, Teaching Fellow, Sociology Programme, University of Auckland
Cherie Lacey, Researcher, Centre for Addiction Research, University of Auckland
Thierry Jutel, Associate Professor, Film, Te Herenga Waka Victoria University of Wellington
Max Soar, Teaching Fellow, Political Communication, Te Herenga Waka Victoria University of Wellington
Lewis Rarm, Lecturer, Media and Communication, Te Herenga Waka Victoria University of Wellington
Tim Groves, Senior Lecturer, Film, Te Herenga Waka Victoria University of Wellington
Valerie Cooper, Lecturer, Media and Communication, Te Herenga Waka Victoria University of Wellington
Wayne Hope, Professor, Faculty of Design & Creative Technologies, Auckland University of Technology
Greg Treadwell, Senior Lecturer in Journalism, School of Communication Studies, Auckland University of Technology
Christina Vogels, Senior Lecturer, Critical Media Studies, School of Communication Studies, Auckland University of Technology
