Crossed Wires: Artificial Intelligence reality check – not as smart as it thinks it is

Daily Maverick, 16-06-2025

Apple researchers' recent paper, The Illusion of Thinking, challenges the hype around AI, revealing its limitations in solving complex problems.
If one is to believe Sam Altman and other AI boosters and accelerationists, the era of abundance is almost upon us. AI is about to relieve us all of drudgery, ill health, poverty and many other miseries before leading us to some promised land where we will shed our burdens and turn our attention to loftier concerns. Any day now.
And so the publication of a paper by Apple researchers this month arrived as a refreshing dose of realism. It was titled The Illusion of Thinking and it broke the AI Internet. It concluded that ChatGPT-style GenAI models (like Claude, Gemini, DeepSeek and others) can only solve a constrained set of problems and tend to collapse spectacularly when complexity is introduced. The implications of the paper are clear – the underlying technologies that have so supercharged the AI narrative and fuelled so much hyperbole have a long way to go before anyone attains the holy grail of Artificial General Intelligence (AGI) and the imagined utopia of techno-optimists.
For anyone with time and grit, here is the paper. One of the examples cited concerns the well-known 'Tower of Hanoi' problem, which involves moving a stack of variously sized disks between three vertical rods, one disk at a time, without ever placing a larger disk on a smaller one. Any reasonably smart nine-year-old can find a solution, and a very short computer program can describe the general solution, but, left to its own devices, GenAI cannot come up with one. As more and more disks are added, the AI becomes a blithering idiot. It has no idea what it is doing. It is not able to 'generalise' from a few disks to many.
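To see just how short that computer program is, here is a textbook recursive solution sketched in Python (my illustration, not code from the Apple paper; the function name and move format are arbitrary). The same few lines solve the puzzle for any number of disks, which is exactly the generalisation the models fail to make:

    # Move n disks from source to target, using spare as the workspace.
    def hanoi(n, source, target, spare):
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)    # park the n-1 smaller disks
        print(f"move disk {n}: {source} -> {target}")
        hanoi(n - 1, spare, target, source)    # restack them on top

    hanoi(3, "A", "C", "B")   # 3 disks solved in the optimal 2**3 - 1 = 7 moves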
This leads to the inescapable conclusion that, if a child or a very short algorithm can best the most advanced 'reasoning' models behind ChatGPT and Claude, then we are far from AGI. No matter what Sam Altman says.
It is not as if a whole slew of clever researchers are blind to this fact. There are some researchers busy trying to embed ethics and alignment into AI so that humans can survive its evolution without too much pain or possible extinction. There are some researchers who are taking what we have now and applying it to current real-world problems in science, education, healthcare or the sludge of institutional processes. And there are some who are saying: This version of AI, this 'deep learning' machine that has captured everyone's attention – it is simply not good enough. They are looking to invent something that breaks free of the constraints which Apple's paper so brutally highlights.
There are some clever band-aids available to patch over the obvious weaknesses of current AI models, such as the widely used technique of Reinforcement Learning (RL), which boosts a model's performance after it has been trained. But these partial fixes do not address the basic weakness of the core architecture – they treat the symptoms, not the cause.
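To give a flavour of the technique, here is a deliberately tiny reinforcement-learning sketch in Python (a toy of my own construction, nothing like the systems the labs actually run; the two canned responses and reward values are invented for illustration). A reward signal nudges the output probabilities of an already-trained 'policy' without changing the machinery that produces them:

    import math, random

    # A two-response 'policy' (softmax over logits) stands in for a
    # pretrained model; reward reshapes its preferences after the fact.
    logits = {"helpful": 0.0, "unhelpful": 0.0}
    reward = {"helpful": 1.0, "unhelpful": -1.0}
    lr = 0.1

    def softmax(ls):
        z = {k: math.exp(v) for k, v in ls.items()}
        total = sum(z.values())
        return {k: v / total for k, v in z.items()}

    for _ in range(500):
        probs = softmax(logits)
        action = random.choices(list(probs), weights=list(probs.values()))[0]
        for k in logits:   # REINFORCE step: reward * d(log p)/d(logit)
            grad = (1.0 if k == action else 0.0) - probs[k]
            logits[k] += lr * reward[action] * grad

    print(softmax(logits))   # 'helpful' now dominates -- symptoms treated,
                             # core architecture untouched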
It doesn't take an expert to know that humans learn in many different ways, all the way back to our warm launchpad in the womb. We have genetic programs gifted by our ancestors, we learn from our senses, we learn by example, we learn by trial and error, we learn by being taught by others, we learn by accident, we learn by intent, and then we also learn to reason, to generalise, to deduce, to infer. It is probably fair to say that we humans are learning machines – running all day, every day, from the moment of conception. Our learning may well be faulty, our memories inaccurate, our lessons sometimes haphazard, our failures manifold – but learn we do, always and forever.
It is in this area that the current crop of AI techniques is exposed as having only a thin veneer of competence. Take ChatGPT, at least in its text version. It has learnt how to predict the next word from a historical store of human-created documents, reduced to gigantic matrices of statistically related words. There is not much more to it than that, even though its usefulness has astounded everyone.
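The flavour of that mechanism fits in a few lines. Below is a toy bigram model in Python (a drastic simplification of my own, for illustration only; real models use neural networks over tokens rather than count tables, and train on vastly more text). It predicts the next word purely from statistics of which word followed which:

    from collections import Counter, defaultdict

    # Tally, for every word in the corpus, which words follow it and how often.
    corpus = "the cat sat on the mat the cat ate the fish".split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict(word):
        # The statistically most likely next word -- no understanding required.
        return counts[word].most_common(1)[0][0]

    print(predict("the"))   # -> 'cat', the most frequent follower of 'the'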
But really, compare this with what our species does as we go about our daily business – learning, learning, learning, both to our benefit and sometimes to our detriment – all the time, unable to stop for even a microsecond. AI models are simply embarrassing next to that. Babies are smarter, primates are smarter. Dogs are smarter. The challenge of 'continuous autonomous learning' has yet to be met in current AI models.
Before I go overboard about the absurdity of the AGI-is-nearly-here claim, I should throw some light on what has been achieved, especially via the GenAI technologies. These are sometimes confusingly called Large Language Models (they now go way beyond mere language). What they can do is truly unprecedented. I use them all day, every day. They are novel and brilliant assistants. They are much, much smarter or faster than I am at doing a whole slew of important things. But I am much, much smarter than they are when it comes to a huge number of other things.
AGI, as now commonly defined, means the point at which AI equals (or betters) humans at all cognitive (as opposed to physical) tasks. I spend a large part of my day reading about the advances at the edge of this fabulous field, which is probably the most important technological development in human history. There is astonishing stuff coming down the line. A cure for cancer, perhaps. Infinite cheap energy. Healthy and long lives.
But will it be better than humans at all cognitive tasks? Not today. Not this year. Not next year. Not until AI is spawned as we are and learns as we do.
Like the witches' riddle in Shakespeare's Macbeth, perhaps only when AI is of woman born. DM


Related Articles

AI Is Everywhere. Now There's a Summit to Make Sense of It All

Daily Maverick – an hour ago

AI is everywhere, and whether you're trying to get ahead of it or just make sense of it, the noise is real. What if there was one place to cut through the hype? One summit where you could explore how AI is transforming your business, your career, and your life?

Phase 2 Speakers Announced: 3 Stages, 40+ Speakers, 2 Days

Artificial intelligence is no longer a futuristic concept; it's already reshaping the way we work, hire, trade, treat illness, learn, and live. Whether you're ready or not, AI is in your inbox, your sales pipeline, your investment strategy, your doctor's office, your child's classroom, and embedded in the data trails we leave behind every day. Consider this: nearly 80% of hospitals now use AI to enhance patient care and streamline operations (AI Healthcare Statistics, 2024). And according to McKinsey, generative AI could add up to US$4.4 trillion (R80 trillion) annually to the global economy. Yet as this powerful technology accelerates progress, it also raises urgent questions about inequality, data privacy, misinformation, and the rapid disruption of entire industries.

Enter AI Empowered, a bold new summit inspired by EO Cape Town, designed to move beyond the buzzwords and into what's actually happening now, and what's coming by 2030. Held at the Cape Town International Convention Centre (CTICC) on 7–8 August 2025, this two-day event features global and local thought leaders from companies like Google, Salesforce, MIT Centre for Collective Intelligence, Woolworths, Discovery, Amazon Web Services, Pepkor and more.

This is not just a tech summit. It's a two-day deep dive into the urgent, messy, exciting reality of AI today, with practical tools, eye-opening debates, and access to global and local experts who are shaping what's next, all under one roof. From smart cities to deep moral questions, this is the type of event where we look up from the laptop and ask: what kind of future are we building, and who gets to shape it?

Day 1: AI and Your Business

Day 1 is focused on ways to integrate AI into your business strategy. We'll unpack how artificial intelligence is revolutionising everything from customer journeys, sales and marketing, talent acquisition and your team organogram, to creative workflows and financial forecasting. Phase 2 speaker additions include:

• The Smart Future of Retail: Jose Rodrigues, Chief Data Analytics Officer at Woolworths; Michael Yolland, Head of Artificial Intelligence at Pepkor IT; and Louise Liddell, Senior Solutions Architect at AWS.
• How intuitive, privacy-conscious AI agents are transforming industries and unlocking new levels of human potential: Tyler Reed, founder of pioneering AI company xgmi.
• Business coach, strategist and business builder Mike Scott, APAC Director at Warp Development, on how non-technical founders should think about AI: not as a toolset, but as a source of leverage.
• Workshop: Making sense of AI to generate tangible returns for your business, with Gaurav Devsarmah, MBA and AI/ML practitioner, Head of AI at Warp Development.

Day 1 will be opened by Alan Winde, Premier of the Western Cape.

Day 2: AI and the World Around You

From cities and climate to ethics, education and health, and the shadows where fraud lurks. On 8 August, we zoom out to explore the seismic shifts AI is triggering across society. It's no longer just about business; it's about how AI is reshaping the very systems we rely on to live, work, learn, and thrive. What happens when machines start diagnosing illness, or managing traffic flow in cities?
This day dives deep into the human and societal implications of AI, with bold talks and challenging conversations on subjects like:

• Rethinking education in the age of AI: Shirley Eadie, Founding Director at Whole Human Studios.
• African language solutions: Thapelo Nthite, Co-Founder & CEO at Botlhale AI Solutions, and Jade Abbott, Co-Founder & Chief Technology Officer at Lelapa AI.
• Local applications of AI: TB detection through lung sound analysis, with Braden van Breda, Chief Executive Officer at AI Diagnostics.
• Redefining legal: the AI-driven future of law, with Yvonne Wakefield, Chief Executive Officer at Caveat, and Kyle Torrington, Co-Founder & Director at Taylor Torrington and Associates Law Firm.
• AI in journalism: Karen Allen, journalist and founder at Karen Allen International, and Chris Roper, Senior Strategist at Code for Africa.

With speakers being announced weekly, visit AI Empowered for updates.

Expect 1,500 attendees each day, live demos, three dynamic stages, interactive activations, and hands-on masterclasses. AI Empowered, inspired by EO Cape Town, is set to become a cornerstone of Africa's innovation calendar. Cape Town's global appeal, creative spirit, and booming tech culture make it the perfect host city, and the perfect excuse to stay for the weekend.

Tickets start at R3,600 for both days, and are on sale now. Proudly inspired by EO Cape Town, in partnership with CapeTalk and Daily Maverick, and produced by One-eyed Jack. DM

Meta spending big on AI talent but will it pay off?

eNCA – 4 hours ago

SAN FRANCISCO - Mark Zuckerberg and Meta are spending billions of dollars for top talent to make up ground in the generative artificial intelligence race, sparking doubt about the wisdom of the spree.

OpenAI boss Sam Altman recently lamented that Meta has offered $100-million bonuses to engineers who jump to Zuckerberg's ship, where hefty salaries await. A few OpenAI employees have reportedly taken Meta up on the offer, joining Scale AI founder and former chief executive Alexandr Wang at the Menlo Park-based tech titan.

Meta paid more than $14-billion for a 49 percent stake in Scale AI in mid-June, bringing Wang on board as part of the deal. Scale AI labels data to better train AI models for businesses, governments and labs. "Meta has finalised our strategic partnership and investment in Scale AI," a Meta spokesperson told AFP. "As part of this, we will deepen the work we do together producing data for AI models and Alexandr Wang will join Meta to work on our superintelligence efforts."

US media outlets have reported that Meta's recruitment effort has also targeted OpenAI co-founder Ilya Sutskever, Google rival Perplexity AI, and hot AI video startup Runway. Meta chief Zuckerberg is reported to have sounded the charge himself due to worries that Meta is lagging behind rivals in the generative AI race. The latest version of Meta's AI model Llama finished behind its heavyweight rivals in code-writing rankings on LM Arena, a platform that lets users evaluate the technology. Meta is integrating recruits into a new team dedicated to developing "superintelligence", or AI that outperforms people when it comes to thinking and understanding.

- 'Mercenary' -

Tech blogger Zvi Mowshowitz felt Zuckerberg had to do something about the situation, expecting Meta to succeed in attracting hot talent but questioning how well it will pay off. "There are some extreme downsides to going pure mercenary... and being a company with products no one wants to work on," Mowshowitz told AFP. "I don't expect it to work, but I suppose Llama will suck less."

While Meta's share price is nearing a new high, with the overall value of the company approaching $2 trillion, some investors have started to worry. Institutional investors are concerned about how well Meta is managing its cash flow and reserves, according to Baird strategist Ted Mortonson. "Right now, there are no checks and balances," with Zuckerberg free to do as he wishes running Meta, Mortonson noted. The potential for Meta to cash in by using AI to rev its lucrative online advertising machine has strong appeal, but "people have a real big concern about spending," said Mortonson.

Meta executives have laid out a vision of using AI to streamline the ad process from easy creation to smarter targeting, bypassing creative agencies and providing a turnkey solution to brands. AI talent hires are a long-term investment unlikely to impact Meta's profitability in the immediate future, according to CFRA analyst Angelo Zino. "But still, you need those people on board now and to invest aggressively to be ready for that phase" of generative AI, Zino said.

According to The New York Times, Zuckerberg is considering shifting away from Meta's Llama, perhaps even using competing AI models instead. Penn State University professor Mehmet Canayaz sees potential for Meta to succeed with AI agents tailored to specific tasks on its platform, not requiring the best large language model.
"Even firms without the most advanced LLMs, like Meta, can succeed as long as their models perform well within their specific market segment," Canayaz said.

ChatGPT's CEO on AI trust: a surprising confession you need to hear

IOL News – 3 days ago

Would it be fair to say we live in the matrix? In a world where we turn to our smartphones for everything from tracking steps to managing chronic illnesses, it's no surprise that artificial intelligence (AI) has quickly become a daily companion. Need mental health support at 2am? There's an AI chatbot for that. Trying to draft a tricky work email? AI has your back. But what happens when we lean so far into this tech that we forget to question it?

That's exactly the concern raised by OpenAI CEO Sam Altman, the man behind ChatGPT himself. During a candid moment on the OpenAI Podcast earlier this month, Altman admitted: "People have a very high degree of trust in ChatGPT, which is interesting because AI hallucinates. It should be the tech that you don't trust that much." Yes, the guy who helped create ChatGPT is telling us to be cautious of it.

But what does 'AI hallucination' even mean? In AI lingo, a 'hallucination' isn't about seeing pink elephants. Yahoo reports that, in simple terms, an AI hallucination is when the machine gives us information that sounds confident but is completely false. Imagine asking ChatGPT to define a fake term like 'glazzof' and it creates a convincing definition out of thin air just to make you happy. Now imagine this happening with real topics like medical advice, legal opinions, or historical facts.

This is not a rare glitch either. According to a study published by Stanford University's Center for Research on Foundation Models, AI models like ChatGPT hallucinate 15% to 20% of the time, and the user may not even know. The danger lies not in the errors themselves, but in how convincingly the tool presents them. Altman's remarks are not merely cautionary; they resonate as a plea for awareness. "We need societal guardrails," Altman stated, emphasising that we are on the brink of something transformative. "If we're not careful, trust will outpace reliability."

Why do we trust AI so much? Part of the reason is convenience. It's fast, polite, always available, and seemingly informed. Plus, tech companies have embedded AI into every corner of our lives, from the smart speaker in our kitchen to our smartphone keyboard. But more than that, there's a psychological comfort in outsourcing our decisions. Research indicates that people trust AI because it reduces decision fatigue. When life feels overwhelming, especially post-pandemic, we lean into what feels like certainty, even if that certainty is artificial. That mental shortcut is called "cognitive fluency": the smoother information sounds, the more our brain tags it as true, a bias confirmed by a 2022 MIT-Stanford collaboration that tracked user interactions with chatbots in real time.

Reliance on questionable data isn't just an intellectual risk. It can snowball into:

• Decision fatigue.
• Medication errors, such as following an AI-generated supplement regimen that conflicts with a prescription.
• Amplified anxiety: when the easy answer eventually unravels, we feel betrayed and trust our judgment less, notes cognitive scientist Prof Emily Bender of the University of Washington.

Recent Pew Research data shows that 35% of US adults have already used generative AI like ChatGPT for serious tasks, including job applications, health questions, and even parenting advice.

The risk of blind trust

Here's where things get sticky. AI isn't human. It doesn't 'know' the truth. It merely predicts the next best word based on vast amounts of data. This makes it prone to repeating biases, inaccuracies, and even fabricating facts entirely.

Mental health and tech dependency

More than just a tech issue, our blind trust in AI speaks volumes about our relationship with ourselves and our mental health. Relying on a machine to validate our decisions can chip away at our confidence and critical thinking skills. We're already in an age of rising anxiety, and outsourcing judgment to AI can sometimes worsen decision paralysis. The World Health Organization (WHO) has also flagged the emotional toll of tech overuse, linking digital dependency to rising stress levels and isolation, especially among young adults. Add AI into the mix, and it becomes easy to let the machine speak louder than your inner voice.

Altman didn't just throw the problem on the table; he offered a warning that feels like a plea: "We need societal guardrails. We're at the start of something powerful, and if we're not careful, trust will outpace reliability."

Here are three simple ways to build a healthier relationship with AI:

• Double-check the facts: don't assume AI is always right. Use trusted sources to cross-reference.
• Keep human input in the loop, especially for big life decisions. Consult professionals (doctors, career coaches, financial advisors) when it matters most.
• Reflect before you accept: ask yourself, "Does this align with what I already know? What questions should I ask next?"
