
Why Machines Aren't Intelligent
OpenAI has announced that its latest experimental reasoning LLM, referred to internally as the 'IMO gold LLM', has achieved gold‑medal level performance at the 2025 International Mathematical Olympiad (IMO).
Unlike specialized systems like DeepMind's AlphaGeometry, this is a reasoning LLM, built with reinforcement learning and scaled inference, not a math-only engine.
As OpenAI researcher Noam Brown put it, the model showed 'a new level of sustained creative thinking' required for multi-hour problem-solving.
CEO Sam Altman said this achievement marks 'a dream… a key step toward general intelligence', and that such a model won't be generally available for months.
Undoubtedly, machines are becoming exceptionally proficient at narrowly defined, high-performance cognitive tasks. These include mathematical reasoning, formal proof construction, symbolic manipulation, code generation, and formal logic.
Their capabilities also extend to computer vision, complex data analysis, language processing, and strategic problem-solving, thanks to advances in deep learning architectures (such as transformers and convolutional neural networks), the availability of vast training datasets, substantial increases in computational power, and sophisticated algorithmic optimization, all of which enable these systems to identify intricate patterns and correlations in data at unprecedented scale and speed.
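To make that pattern-matching machinery concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation inside the transformer architectures just mentioned. It is an illustration under simplifying assumptions, not any production implementation: real systems add learned projections, multiple attention heads, masking, and dozens of stacked layers.

```python
# Minimal sketch of scaled dot-product attention (illustrative only).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return attention-weighted values for queries Q against keys K and values V."""
    d_k = Q.shape[-1]
    # Similarity of every query to every key, scaled to keep the softmax well-behaved.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns scores into weights: each position "attends" to all positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                      # a toy sequence of 4 tokens
x = rng.normal(size=(seq_len, d_model))      # stand-in token embeddings
out = scaled_dot_product_attention(x, x, x)  # self-attention over the sequence
print(out.shape)  # (4, 8): each token is re-expressed as a weighted mix of the others
```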
These systems can sustain multi-step reasoning, generate fluent, human-like responses, and perform at expert level under constraints comparable to those humans face.
With all this, and a bit of enthusiasm, we might be tempted to conclude that machines are becoming incredibly intelligent, incredibly quickly.
Yet this would be a mistake.
Because being good at mathematics, formal proof construction, symbolic manipulation, code generation, formal logic, computer vision, complex data analysis, language processing, and strategic problem-solving is neither a necessary nor a sufficient condition for 'intelligence', let alone for incredible intelligence.
The fundamental distinction lies in several key characteristics that machines demonstrably lack.
Machines cannot seamlessly transfer knowledge or adapt their capabilities to entirely novel, unforeseen problems or contexts without significant re-engineering or retraining. They are inherently specialized: proficient within their pre-defined scope, with impressive performance confined to the specific domains and kinds of data on which they were extensively trained. This contrasts sharply with the human capacity for flexible learning and adaptation across a vast and unpredictable array of situations.
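A toy sketch of that brittleness, assuming scikit-learn is available and using an invented two-class dataset: a classifier fit on one distribution performs well in-domain but fails badly when the data shift, and nothing in the model adapts on its own.

```python
# Minimal distribution-shift sketch; the data and the shift are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training domain: two well-separated Gaussian classes.
X_train = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y_train = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X_train, y_train)
print("in-domain accuracy:", clf.score(X_train, y_train))  # close to 1.0

# "Novel" domain: the same two classes, but shifted and rescaled.
X_new = np.vstack([rng.normal(6, 2, (200, 2)), rng.normal(-2, 2, (200, 2))])
y_new = np.array([0] * 200 + [1] * 200)
print("shifted-domain accuracy:", clf.score(X_new, y_new))  # near chance or worse
```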
Machines do not possess the capacity to genuinely experience or comprehend emotions, nor can they truly interpret the nuanced mental states, intentions, or feelings of others (often referred to as "theory of mind"). Their "empathetic" or "socially aware" responses are sophisticated statistical patterns learned from vast datasets of human interaction, not a reflection of genuine subjective experience, emotional resonance, or an understanding of human affect.
Machines lack self-awareness and the capacity for introspection. They do not reflect on their own internal processes, motivations, or the nature of their "knowledge." Their operations are algorithmic and data-driven; they do not possess a subjective "self" that can ponder its own existence, learn from its own mistakes through conscious reflection, or develop a personal narrative.
Machines do not exhibit genuine intentionality, innate curiosity, or the capacity for autonomous goal-setting driven by internal desires, values, or motivations. They operate purely based on programmed objectives and the data inputs they receive. Their "goals" are externally imposed by their human creators, rather than emerging from an internal drive or will.
Machines lack the direct, lived, and felt experience that comes from having a physical body interacting with and perceiving the environment. This embodied experience is crucial for developing common sense, intuitive physics, and a deep, non-abstracted understanding of the world. While machines can interact with and navigate the physical world through sensors and actuators, their "understanding" of reality is mediated by symbolic representations and data.
Machines do not demonstrate genuine conceptual leaps, the ability to invent entirely new paradigms, or to break fundamental rules in a truly meaningful and original way that transcends their training data. Generative models can only produce novel combinations of existing data, not genuinely new paradigms.
Machines often struggle with true cause-and-effect reasoning. They excel at identifying correlations and patterns, but correlation is not causation: they can predict "what" is likely to happen based on past data, while their understanding of "why" is limited to statistical associations rather than deep mechanistic insight.
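A small simulation with invented data makes the distinction concrete: a hidden confounder drives two variables, so a pattern-learner observes a strong correlation between them, yet intervening on one leaves the other untouched.

```python
# Correlation without causation: a hidden confounder z drives both x and y.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)                 # hidden common cause
x = 2 * z + 0.1 * rng.normal(size=n)   # x is driven by z
y = 3 * z + 0.1 * rng.normal(size=n)   # y is also driven by z, never by x

print("observational corr(x, y):", np.corrcoef(x, y)[0, 1])  # close to 1

# Intervention: set x by fiat, independently of z (a crude do(x) analogue).
x_do = rng.normal(size=n)
y_after = 3 * z + 0.1 * rng.normal(size=n)  # y still depends only on z
print("corr after intervening on x:", np.corrcoef(x_do, y_after)[0, 1])  # near 0
```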
Machines struggle to learn complex concepts from just a few examples. One-shot and few-shot learning have made progress in recognizing new patterns or categories from limited data, but unlike humans, machines cannot acquire genuinely complex, abstract concepts this way; they still typically require vast datasets for effective and nuanced training.
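For contrast, here is a minimal sketch of the kind of few-shot learning that does work, using invented two-dimensional 'embeddings' and a nearest-centroid rule: easy, well-separated categories can be picked up from three examples each, but nothing in this mechanism extends to genuinely abstract concepts.

```python
# Few-shot classification via nearest class centroids, on invented toy embeddings.
import numpy as np

rng = np.random.default_rng(0)

def few_shot_predict(support_x, support_y, query_x):
    """Assign each query to the class whose few support examples lie closest on average."""
    classes = np.unique(support_y)
    centroids = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(query_x[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Three examples ("shots") per class, drawn from well-separated clusters.
support_x = np.vstack([rng.normal(0, 0.5, (3, 2)), rng.normal(4, 0.5, (3, 2))])
support_y = np.array([0, 0, 0, 1, 1, 1])
query_x = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
query_y = np.array([0] * 50 + [1] * 50)

pred = few_shot_predict(support_x, support_y, query_x)
print("few-shot accuracy:", (pred == query_y).mean())  # high, but only because the task is easy
```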
And, perhaps the most profound distinction of all: machines do not possess subjective experience, feelings, or awareness. They are not conscious entities.
Only when a machine exhibits all (or at least most) of these characteristics, even at a relatively low level, could we reasonably claim that machines are becoming 'intelligent', without exaggeration, misuse of the term, or mere fantasy.
Therefore, while machines are incredibly powerful at specific cognitive functions, their capabilities are fundamentally different from the multifaceted, adaptable, self-aware, and experientially grounded nature of intelligence, particularly as manifested in humans.
Their proficiency is a product of advanced computational design and data processing, not an indication of a nascent form of intelligence.
In fact, the term "artificial general intelligence" emerged in AI discourse partly to recover the meaning of "intelligence" after it had been diluted through overuse in describing machines that are not intelligent, and to clarify what these so-called "intelligent" machines still lack in order to truly be intelligent.
We all tend to oversimplify, and the field of AI is contributing to the evolution of the meaning of 'intelligence,' making the term increasingly polysemous. That's part of the charm of language. And as AI stirs both real promise and real societal anxiety, it's also worth remembering that the intelligence of machines does not, in any meaningful sense, exist.
The rapid advances in AI signal that it is past time to think about the impact we do and do not want AI to have on society. Doing so should not only allow but actively encourage us to consider both AI's capacities and its limitations, taking care not to confuse 'intelligence' in its rich, general sense with the narrow, task-specific behaviors machines are capable of simulating or exhibiting.
While some are racing toward Artificial General Intelligence (AGI), the question we should now be asking is not when they think they might succeed, but whether what they believe they could make happen makes sense, civilisationally, as something we should even aim to achieve, and where we should draw the line on algorithmic transhumanism.
If you're writing, brainstorming or trying to make sense of something complex, Claude tends to feel more human in how it is professional, polished and highly adaptable, with the ability to shift tones based on a user's preference or task. It is arguably more chatty than Claude, which appeals to some users more than others. ChatGPT supports a broad set of integrations including custom GPTs, an extensive plugin ecosystem, and Microsoft Copilot. Its new ChatGPT Agent feature (currently in early access) allows it to take real action on your behalf: browsing the web, interacting with forms, generating presentations and more all within a secure sandboxed environment. Claude 4 doesn't yet offer a native agent feature through its user interface, but it does support tool use and extended workflows through its API. It also integrates with GitHub Copilot, AWS Bedrock and other developer platforms, making it a strong option for those building their own agentic systems. If you're looking for an all-in-one AI assistant, ChatGPT is the more versatile choice. It can speak, see, listen, browse, generate images, analyze documents and automate tasks. Its voice mode feels remarkably human and is useful for a hands-free experience. The chatbot's ability to create and understand images adds another layer of functionality that Claude doesn't natively support. ChatGPT also integrates seamlessly with Microsoft products and lets users build custom GPTs tailored to their specific needs. And for power users, the new ChatGPT Agent adds another level of autonomy and actionability that's unmatched in most other consumer-facing AI tools. Claude 4 shines in natural language understanding and thoughtful conversation. It's ideal for users who prioritize clarity, warmth and coherence in writing, editing and reasoning. Whether you're crafting long-form content or analyzing dense documents, Claude delivers consistently thoughtful and easy-to-follow responses. Its 200K-token context window makes it the better choice for handling long inputs like research papers or legal contracts. Claude Opus, in particular, is built for high-stakes reasoning tasks and delivers top-tier performance for technical, legal and academic work. Even better, Claude Sonnet 4 is free, making it an excellent entry point for users who want fast, capable AI without paying for a subscription. Both ChatGPT and Claude are chatbots that I use every day. But they serve different goals. ChatGPT is my go-to for versatility, real-time interaction and multimodal tasks while Claude is what I use most often for summarizing, reasoning, deep context and analytics. Knowing what each excels at can help you choose the right assistant, or mix and match based on what you're trying to accomplish. Which chatbot do you prefer and why? Let me know in the comments. Follow Tom's Guide on Google News to get our up-to-date news, how-tos, and reviews in your feeds. Make sure to click the Follow button.