Artificial intelligence – the panacea to all ills, or an existential threat to our world?


Daily Maverick, 19 June 2025

'Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.' – Frank Herbert, Dune, 1965
In the early 19th century, a group of disgruntled factory workers in industrial England began protesting against the introduction of mechanised looms and knitting frames into the factories.
Fearful of losing their jobs, they smashed machines and engaged in acts of sabotage. They were dealt with harshly through imprisonment and even execution. They became known as the Luddites.
At the time, it was not the technology itself they were most concerned about, but the loss of their livelihoods. Ironically, the word Luddite has today become something of an accusation, levelled at those who, because they are seen as not understanding a new technology, are deemed to be anti-technology. Even anti-progress.
The 2020s have seen rapid progress in the development of a 'new' technology – artificial intelligence (AI). But the history of AI can be traced back to the middle of the 20th century, so the technology is perhaps not very new at all.
At the forefront of the current wave has been the release of Large Language Models (LLMs) – with ChatGPT being the most prominent – which can produce, from a single prompt, an essay on the topic of your choice. LLMs are simply one type of AI and are not the same as artificial general intelligence (AGI).
Unlike current LLMs, which perform a narrow task, AGI would be able to reason, be creative and apply knowledge across many domains – be more human-like, in essence. AGI is more of a goal, an end point in the development of AI.
LLMs have already been hugely disruptive in education, with university lecturers and school teachers scrambling to deal with ChatGPT-produced essays.
Views about the dangers of AI/AGI tend to coalesce around two poles: the doomers and the boomers. Crudely, and I am oversimplifying here, the 'doomers' worry that AI would pose an existential threat were it designed in a way that is misaligned with human values. The 'boomers', on the other hand, believe AI will solve all our problems and usher in an age of abundance, in which we will all be able to work less without seeing a drop in our quality of life.
The 'doomer' narrative originates with Oxford University philosopher Nick Bostrom, who introduced a thought experiment called the 'paperclip maximiser'. Bostrom imagines a worst-case scenario where we create an all-powerful AGI agent that is misaligned with our values.
In the scenario, we instruct the AGI agent to maximise the production of paperclips. Bostrom worries that the command could be taken literally, with the agent consuming every last resource on Earth (including humans) in its quest to make ever more paperclips.
Another take on this thought experiment is to imagine that we ask an all-powerful AGI agent to solve the climate breakdown problem. The quickest and most rational way of doing this would, of course, be to simply rid planet Earth of eight billion human beings.
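To see how little malice the thought experiment requires, here is a toy sketch in Python – purely illustrative, and nothing like a real AI system – of an optimiser handed a single objective with no notion of what else matters:

```python
# Toy illustration of objective misspecification (not a real AI system):
# the objective mentions only paperclips, so nothing else is protected.

def maximise_paperclips(resources: dict) -> int:
    """Convert every available resource into paperclips."""
    paperclips = 0
    for name in list(resources):
        paperclips += resources[name]  # every unit becomes a paperclip...
        resources[name] = 0            # ...and that resource is gone
    return paperclips

# A crude stand-in for the world's resources (hypothetical numbers).
world = {"iron": 1_000, "forests": 500, "oceans": 300, "humans": 8_000_000_000}
print(maximise_paperclips(world))  # objective maximised
print(world)                       # everything else is now zero
```

The point is not that any real system works this way; it is that an objective which omits what we value places no weight at all on preserving it.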
What do we have to fear from LLMs?
LLMs have scraped the internet for every bit of data they could find, stealing it and feeding off the intellectual property of writers and artists. But what exactly do we have to fear from LLMs? I would suggest very little (unless, of course, you are a university lecturer in the humanities).
LLMs such as ChatGPT are (currently) little more than complex statistical programs that predict which word follows the one before, based on the above-mentioned internet scraping. They are not thinking.
In fact, some people have argued that everything they do is a hallucination. It is just that the hallucination is more often than not correct and appropriate.
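For readers who want the statistical trick laid bare, here is a minimal sketch: a toy 'bigram' model that simply picks whichever word most often followed the current one in its training text. Real LLMs use neural networks with billions of parameters rather than raw counts, but the underlying task – predicting the next word from the words before – is the same:

```python
from collections import Counter, defaultdict

# A toy training "corpus" standing in for the scraped internet.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str):
    """Return the statistically most likely next word, or None."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # 'cat' - chosen by frequency, not by thought
```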
Francois Chollet, a prominent AI researcher, has described LLMs in their current form as a 'dead end' in the quest for AGI. Chollet is so confident of this that he has put up a $1-million prize for any AI system that can achieve even basic human skills on something he calls the Abstraction and Reasoning Corpus (ARC) test.
Essentially, the ARC is a test of what is called fluid intelligence (reasoning, solving novel problems and adapting to new situations). Young children do well on ARC tasks. Most adults complete all of them. Pure LLMs achieve around 0%. Yes – 0%. The $1-million prize does not even require that AI systems match the skills of humans. Just that they achieve 85%. The prize is yet to be claimed.
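To give a flavour of what the test demands, here is a hypothetical ARC-style puzzle (far simpler than the real tasks): the solver is shown a few input/output grid pairs, must induce the transformation rule, and then apply it to a fresh input.

```python
# Hypothetical ARC-style task (much simpler than real ARC puzzles).
# Training pairs: the rule to be induced is "invert every cell".
train_pairs = [
    ([[0, 1], [1, 0]], [[1, 0], [0, 1]]),
    ([[1, 1], [0, 0]], [[0, 0], [1, 1]]),
]
test_input = [[0, 0], [0, 1]]

# A human spots the rule almost instantly; an ARC solver must induce
# rules like this from a handful of examples - the fluid-intelligence
# step that, per Chollet, pure LLMs fail at.
def inferred_rule(grid):
    return [[1 - cell for cell in row] for row in grid]

print(inferred_rule(test_input))  # [[1, 1], [1, 0]]
```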
People are the problem
If LLMs are (currently) a dead end in the quest for AGI, what should we be worried about? As is always the case, what we need to be afraid of is people. The people in control of this technology. The billionaires, the tech bros, and the dystopian conspiracy theorists.
High on my list is Mark Zuckerberg. The man who began by building a website to rate the attractiveness of college women, and whose company, Facebook, profited enormously from the echo chamber it created. In Myanmar, that echo chamber helped fuel the ethnic cleansing of the Rohingya people in 2017.
At the beginning of 2025, Zuckerberg showed the depth of his commitment to diversity and integrity in his slavering capitulation to Donald Trump. Jokes about whether Zuckerberg is actually a robot aside, his recent pronouncements suggest that what he wants is a world of atomised and alienated people who, out of quiet desperation, turn to his dystopian hell, where robots – under his control – will be trained to become 'our friends'.
And my personal favourite – Elon Musk. Musk, the ketamine-fuelled racist apologist for the Great Replacement Theory. A man who has committed securities fraud, and accused an innocent man of being a paedophile because the man had the nerve and gall to (correctly) state that Musk's submarine could not negotiate an underwater cave in Thailand.
More recently, estimates are that Musk's destruction of USAid will lead to the deaths of about 1,650,000 people within a year because of cuts to HIV prevention and treatment, as well as 500,000 deaths a year due to cuts to vaccine programmes.
I, for one, do not want this man anywhere near my children, my family, my community, my country.
OpenAI
Sam Altman, the CEO of the world's largest plagiarism machine, OpenAI, recently stated that he would like a significant fraction of the world's electricity to power his LLM/AI models.
Karen Hao, in her recently published book Empire of AI, makes a strong case for OpenAI being a classic colonial power that closely resembles (for example) the British East India Company, founded in 1600 (and dissolved in 1874).
Altman recently moved squarely into Orwellian surveillance when OpenAI bought io, a product development company owned by Jony Ive (designer of the iPhone). While the first product is a closely guarded secret, it is said to be a wearable device with cameras and microphones for sensing its surroundings. Every word you speak, every sound you hear and every image you see will be turned into data. Data for OpenAI.
Why might Altman want this? Money, of course. But for Altman and Silicon Valley, money is secondary to data, to surveillance and the way they are able to parlay data into power and control (and then money). He will take our data, further train his ChatGPT models with it, and in turn use this to better surveil us all.
And for the pleasure of working for, and giving our data to, OpenAI? Far from being paid for the data you produce, you will have to buy the gadget, be monitored 24/7, and have your life commodified and sold.
As Shoshana Zuboff said in her magisterial book, The Age of Surveillance Capitalism: 'Forget the cliché that if it's free, "you are the product". You are not the product; you are the abandoned carcass. The "product" derives from the surplus that is ripped from your life.'
The problem was never the cotton loom. The Luddites knew this in the 19th century. It was always about livelihood loss and people (the industrialists).
Bostrom has it badly wrong when he imagines an all-powerful AGI entity that turns against its human inventors. But about the paperclips, he might be correct.
Zuckerberg, Musk and Altman are our living and breathing paperclip maximisers. With their political masters, they will not flinch at turning us all into paperclips and sacrificing us on the altar of their infinite greed and desire for ever-increasing surveillance and control. DM


