Why Machines Aren't Intelligent


Forbes · 7 days ago
OpenAI has announced that its latest experimental reasoning LLM, referred to internally as the 'IMO gold LLM', has achieved gold‑medal level performance at the 2025 International Mathematical Olympiad (IMO).
Unlike specialized systems like DeepMind's AlphaGeometry, this is a reasoning LLM, built with reinforcement learning and scaled inference, not a math-only engine.
As OpenAI researcher Noam Brown put it, the model showed 'a new level of sustained creative thinking' required for multi-hour problem-solving.
CEO Sam Altman said this achievement marks 'a dream… a key step toward general intelligence', and that such a model won't be generally available for months.
Undoubtedly, machines are becoming exceptionally proficient at narrowly defined, high-performance cognitive tasks. This includes mathematical reasoning, formal proof construction, symbolic manipulation, code generation, and formal logic.
Their capabilities also extend to computer vision, complex data analysis, language processing, and strategic problem-solving. This progress rests on advances in deep learning architectures (such as transformers and convolutional neural networks), the availability of vast training datasets, substantial increases in computational power, and sophisticated algorithmic optimization, which together enable these systems to identify intricate patterns and correlations in data at unprecedented scale and speed.
These systems can sustain multi-step reasoning, generate fluent, human-like responses, and perform under expert-level constraints much as humans do.
With all this, and a bit of enthusiasm, we might be tempted to think that this means machines are becoming incredibly intelligent, incredibly quickly.
Yet this would be a mistake.
Because being good at mathematics, formal proof construction, symbolic manipulation, code generation, formal logic, computer vision, complex data analysis, language processing, and strategic problem-solving is neither a necessary nor a sufficient condition for 'intelligence', let alone for incredible intelligence.
The fundamental distinction lies in several key characteristics that machines demonstrably lack.
Machines cannot seamlessly transfer knowledge or adapt their capabilities to entirely novel, unforeseen problems or contexts without significant re-engineering or retraining. They are inherently specialized. They are proficient at tasks within their pre-defined scope and their impressive performance is confined to the specific domains and types of data on which they have been extensively trained. This contrasts sharply with the human capacity for flexible learning and adaptation across a vast and unpredictable array of situations.
Machines do not possess the capacity to genuinely experience or comprehend emotions, nor can they truly interpret the nuanced mental states, intentions, or feelings of others (often referred to as "theory of mind"). Their "empathetic" or "socially aware" responses are sophisticated statistical patterns learned from vast datasets of human interaction, not a reflection of genuine subjective experience, emotional resonance, or an understanding of human affect.
Machines lack self-awareness and the ability for introspection. They do not reflect on their own internal processes, motivations, or the nature of their "knowledge." Their operations are algorithmic and data-driven; they do not possess a subjective "self" that can ponder its own existence, learn from its own mistakes through conscious reflection, or develop a personal narrative.
Machines do not exhibit genuine intentionality, innate curiosity, or the capacity for autonomous goal-setting driven by internal desires, values, or motivations. They operate purely based on programmed objectives and the data inputs they receive. Their "goals" are externally imposed by their human creators, rather than emerging from an internal drive or will.
Machines lack the direct, lived, and felt experience that comes from having a physical body interacting with and perceiving the environment. This embodied experience is crucial for developing common sense, intuitive physics, and a deep, non-abstracted understanding of the world. While machines can interact with and navigate the physical world through sensors and actuators, their "understanding" of reality is mediated by symbolic representations and data.
Machines do not demonstrate genuine conceptual leaps, the ability to invent entirely new paradigms, or the capacity to break fundamental rules in a truly meaningful and original way that transcends their training data. Generative models produce novel combinations of existing data, not genuinely new paradigms.
Machines often struggle with true cause-and-effect reasoning. Even though they excel at identifying correlations and patterns, correlation is not causation. They can predict "what" is likely to happen based on past data, but their understanding of "why" is limited to statistical associations rather than deep mechanistic insight.
Machines cannot learn complex concepts from just a few examples. While one-shot and few-shot learning have made progress in enabling machines to recognize new patterns or categories from limited data, machines, unlike humans, still cannot acquire genuinely complex, abstract concepts this way; they typically require vast datasets for effective and nuanced training.
And perhaps the most profound distinction, machines do not possess subjective experience, feelings, or awareness. They are not conscious entities.
Only when a machine is capable of all (or at least most) of these characteristics, even at a relatively low level, could we reasonably claim that machines are becoming 'intelligent', without exaggeration, misuse of the term, or mere fantasy.
Therefore, while machines are incredibly powerful at specific cognitive functions, their capabilities are fundamentally different from the multifaceted, adaptable, self-aware, and experientially grounded nature of intelligence, particularly as manifested in humans.
Their proficiency is a product of advanced computational design and data processing, not an indication of a nascent form of intelligence in machines.
In fact, the term "artificial general intelligence" emerged in AI discourse in part to recover the meaning of "intelligence" after it had been diluted through overuse, and to clarify what these so-called "intelligent" machines still lack in order to genuinely deserve the term.
We all tend to oversimplify, and the field of AI is contributing to the evolution of the meaning of 'intelligence', making the term increasingly polysemous. That is part of the charm of language. And as AI stirs both real promise and real societal anxiety, it is worth remembering that the intelligence of machines does not exist in any meaningful sense.
The rapid advances in AI signal that it is high time to think about the impact we do and do not want AI to have on society. Doing so should not only allow but actively encourage us to consider both AI's capacities and its limitations, taking care not to confuse 'intelligence' in its rich, general sense with the narrow, task-specific behaviors machines are capable of simulating or exhibiting.
While some are racing toward Artificial General Intelligence (AGI), the question we should now be asking is not when they might succeed, but whether what they hope to build makes sense, civilisationally, as something we should even aim to achieve, and where we draw the line on algorithmic transhumanism.



Related Articles

China proposes global cooperation body on AI
Yahoo · 12 minutes ago

SHANGHAI, July 27, 2025 /PRNewswire/ -- The Chinese government has proposed the establishment of a world AI cooperation organization as part of its efforts to bolster open, inclusive and equitable artificial intelligence development and governance globally. Premier Li Qiang announced the initiative when addressing the opening ceremony of the 2025 World AI Conference and High-Level Meeting on Global AI Governance in Shanghai.

Li emphasized the need for collaborative approaches to global AI governance to ensure that intelligent technologies are developed for the good of all. More than 1,000 high-level representatives from upwards of 40 countries and international organizations attended the opening ceremony of the three-day event, themed "Global Solidarity in the AI Era".

Li said that the risks and challenges brought by AI development, such as machine hallucinations, deepfakes and AI proliferation, have aroused widespread attention. There is an urgent need to further consolidate a societal consensus on how to balance development and security in the AI sector, he said. Greater emphasis should be placed on collective governance to ensure that AI ultimately benefits humanity. Given that global AI governance appears fragmented, it is essential to enhance coordination and alignment among countries to establish a framework and rules for global AI governance with a broad consensus, he added.

Noting that China places great importance on global AI governance and actively participates in promoting multilateral and bilateral cooperation in this regard, the premier said the nation is willing to offer more Chinese solutions to the international community and contribute more Chinese wisdom to global AI governance. In 2023, China proposed the Global AI Governance Initiative to promote an open, inclusive and fair approach to the development and governance of AI technologies. Last year, China proposed the AI Capacity-Building Action Plan for Good and for All to bridge the AI and digital divides, and to ensure the Global South benefits equitably from AI development.

China actively promotes open source development, and is willing to collaborate with countries around the world to promote progress in software and hardware technologies, intensify open source initiatives, and collectively propel AI development to higher levels, Li said. Also on July 26, the conference adopted the Global AI Governance Action Plan, which calls on all parties to work in solidarity to advance AI development and governance.

Learn How to Use ChatGPT to Automate Your Business
Entrepreneur · 13 minutes ago

Streamline operations, boost productivity, and future-proof your skills with 25+ hours of hands-on training for just $19.97.

Disclosure: Our goal is to feature products and services that we think you'll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.

Smart professionals aren't just learning to use AI — they're using it to take back their time. If your day is filled with repetitive tasks, disjointed workflows, or too many tools that don't talk to each other, this ChatGPT and Automation E-Degree could be the pivot point you need.

Designed for entrepreneurs, freelancers, and business pros, this $19.97 course bundle covers more than 25 hours of hands-on training across 12 lectures. You'll go beyond the basics of ChatGPT and explore how more than 20 top AI tools can help you solve real-world challenges in business, marketing, and development. From building customized GPT workflows to automating routine processes and visualizing data in a more impactful way, the content is tailored to actual use cases, not just theory. You'll learn how to communicate better with AI tools, use automation to streamline your workload, and even discover how generative AI can help with creativity for your brand or product.

Even better, this deal includes lifetime access. That means you can return to the material whenever you're ready to tackle a new project or apply what you've learned in a new context. Whether you're looking to sharpen your tech skills, improve productivity, or future-proof your career, this course helps you get there with tools and strategies you can implement right away.

Grab the full ChatGPT and Automation E-Degree on sale for $19.97 and start building a smarter, faster business. StackSocial prices subject to change.

AI Is Taking Over Your Search Engine. Here's a Look Under the Hood
CNET · 13 minutes ago

For decades, the way we find information on the internet changed only in small ways. Doing a traditional Google search today doesn't feel all that different from when, in the 1990s, you would Ask Jeeves. Sure, a lot has changed under the hood, the results are likely far more relevant and the interface has some new features, but you're still typing in keywords and getting a list of websites that might hold the answer. That way of searching, it seems, is starting to go the way of AltaVista, may it rest in peace.

In May, Google announced the rollout of its new AI Mode for search, which uses a generative AI model (based on the company's Gemini large language model) to give you conversational answers that feel a lot more like having a chat and less like combing through a set of links. Other companies, like Perplexity and OpenAI, have also deployed search tools based on gen AI. These tools, which merge the functionality of a chatbot and a traditional search engine, are quickly gaining steam. You can't even escape AI by doing just a regular Google search: AI Overviews have been popping up atop those results pages since last year, and about one in five searches are now showing this kind of summary, according to a Pew Research Center report. I'm surprised it's not even more than that.

These newfangled search tools feel a lot like your typical chatbot, like ChatGPT, but they do things a little differently. Those differences share a lot of DNA with their search engine ancestors. Here's a look under the hood at how these new tools work, and how you can use them effectively.

[Video: Everything Announced at Google I/O 2025]

Search engines vs. AI search: What's the difference?

The underlying technology of a search engine is kinda like an old library card catalog. The engine uses bots to crawl the vast expanses of the internet to find, analyze and index the endless number of web pages. Then, when you do a search to ask who played Dr. Angela Hicks on ER, because you're trying to remember what else you've seen her in, it will return pages for things like the cast of ER or the biography of the actor, CCH Pounder. From there, you can click through those pages, whether they're on Wikipedia or IMDB or somewhere else, and learn that you know CCH Pounder from her Emmy-winning guest appearance on an episode of The X-Files.

"When customers have a certain question, they can type that question into Google and then Google runs their ranking algorithms to find what content is the best for a particular query," Eugene Levin, president of the marketing and SEO tool company Semrush, told me.
Generally, with a traditional search, you have to click through to other websites to get the answer you're looking for. When I was trying to figure out where I recognized CCH Pounder from, I clicked on at least half a dozen different sites to track it down. That included using Google's video search -- which combs an index of videos across different hosting platforms -- to find clips of her appearance on The X-Files.

[Image: Google announced AI Mode at its I/O developer conference in May. Google/Screenshot by Joe Maldonado/CNET]

These multiple searches don't necessarily have to happen. If I just want to know the cast of ER, I can type in "cast of ER" and click on the Wikipedia page at the top. You'll usually find Wikipedia or another relevant, trustworthy site at or near the top of a search result page. That's because a main way today's search algorithms work is by tracking which sites and pages get the most links from elsewhere on the web. That model, which "changed the game for search" when Google launched it in the 1990s, was more reliable than indexing systems that relied on things like just how many times a keyword appeared on a page, said Sauvik Das, associate professor at Carnegie Mellon University's Human-Computer Interaction Institute.

"There's lots of cookie recipes on the web, but how do you know which ones to show first?" Das said. "Well, if a bunch of other websites are linking to this website for the keywords of 'cookie recipe,' that's pretty difficult to game."

AI-powered search engines work a little differently, but operate on the same basic infrastructure. In my quest to see where I recognized CCH Pounder from, I asked Google's AI Mode, literally, "Where do I recognize the actress who plays Dr. Angie Hicks on ER from?" In a conversation that felt far more like chatting with a bot than doing searches, I narrowed it down.
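Das's link-counting point a few paragraphs up can be sketched in a few lines of Python: among pages that match a keyword, rank the ones that more other sites link to first. The pages and link graph below are invented for illustration, and real engines combine many more signals:

```python
# Toy illustration of link-based ranking: among pages matching a keyword,
# prefer the ones that more other pages link to. The pages and links here
# are invented for the example; real engines use far richer signals.

pages = {
    "grandmas-cookies.example": "classic cookie recipe with brown butter",
    "seo-spam.example": "cookie recipe cookie recipe cookie recipe",
    "food-blog.example": "my favorite cookie recipe, adapted from grandma",
}

# Inbound links: which sites link to each page.
links = {
    "grandmas-cookies.example": ["food-blog.example", "news.example", "forum.example"],
    "seo-spam.example": [],
    "food-blog.example": ["forum.example"],
}

def search(keyword):
    """Return pages containing the keyword, most-linked-to first."""
    hits = [url for url, text in pages.items() if keyword in text]
    return sorted(hits, key=lambda url: len(links.get(url, [])), reverse=True)

print(search("cookie recipe"))
# The spam page repeats the keyword the most, yet ranks last: few sites link to it.
```

The spam page stuffs the keyword but attracts no inbound links, so it sinks to the bottom, which is exactly why link-based ranking was harder to game than keyword counting.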
The first result gave me a list of shows and movies I hadn't seen, so I asked for a broader list, which featured her guest appearances on other shows. Then I could ask for more details about her X-Files appearance, and that narrowed it down. While the way I interacted with Google was different, the search mechanisms were basically the same. AI Mode just used its Gemini model to develop and process dozens of different web searches to gather the information needed, Robby Stein, vice president of product for Google Search, told me. "A user could've just queried each of those queries themselves." Basically, AI Mode did the same thing I did, just a lot faster.

So many searches, so little time

The approach here is called "query fan-out." The AI model takes your request and breaks it down into a series of questions, then conducts searches to answer those components of the request. It then takes the information it gathers from all those searches and websites and puts it together in an answer for you. In a heartbeat. Those searches are using the same index that a traditional search would. "They work on the same foundation," Levin said. "What changes is how they pull information from this foundation."

This fan-out process allows the AI search to pull in relevant information from sites that might not have appeared on the first page of traditional search results, or to pull a paragraph of good information from a page that has a lot more irrelevant information. Instead of you going down a rabbit hole to find one tiny piece of the answer you want, the AI goes down a wide range of rabbit holes in a few seconds. "They will anticipate, if you're looking for this, what is the next thing you might be interested in?" Levin said.

The number of searches the AI model will do depends on the tool you're using and on how complicated your question is.
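The query fan-out loop can be sketched roughly as follows. In this minimal Python illustration, `fan_out()`, `search()` and the three-entry index are hypothetical stand-ins for the model-driven decomposition step and a live search index:

```python
# Rough sketch of "query fan-out": break a request into sub-queries, run each
# against the index, then assemble one answer from the combined snippets.
# fan_out(), search() and INDEX are stand-ins for an LLM and a real engine.

INDEX = {
    "cast of ER": ["CCH Pounder played Dr. Angela Hicks on ER."],
    "CCH Pounder guest appearances": ["CCH Pounder guest-starred on The X-Files."],
    "CCH Pounder X-Files episode": ["Her X-Files appearance earned Emmy recognition."],
}

def fan_out(question):
    """Stand-in for the model step that decomposes a question into sub-queries."""
    return ["cast of ER", "CCH Pounder guest appearances", "CCH Pounder X-Files episode"]

def search(query):
    """Stand-in for one traditional keyword search against the shared index."""
    return INDEX.get(query, [])

def answer(question):
    snippets = []
    for query in fan_out(question):      # dozens of searches, in a heartbeat
        snippets.extend(search(query))
    # Stand-in for the generation step: a real system would have the LLM
    # synthesize these snippets into one conversational answer.
    return " ".join(snippets)

print(answer("Where do I recognize the actress who plays Dr. Angie Hicks on ER from?"))
```

Each sub-query hits the same index a manual search would; the model's contribution is choosing the sub-queries and stitching the retrieved snippets together.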
AI Mode that uses Google's Deep Search will spend more time and conduct more searches, Stein said. "Increasingly, if you ask a really hard question, it will use our most powerful models to reply," Stein said.

The large language models that power these search engines also have their existing training data to pull from or use to guide their searches. While a lot of the information is coming from the up-to-date content it finds by searching the web, some may come from that training data, which could include reams of information ranging from websites like this one to whole libraries of books. That training data is so extensive that lawsuits over whether AI companies actually had the right to use that information are quickly multiplying. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

AI search isn't just a chatbot

Not relying on training data is one thing that sets an AI-powered search engine apart from a traditional chatbot, even though the underlying language model might be largely the same. While ChatGPT Search will scour the internet for relevant sites and answers, regular ChatGPT might rely on its own training data to answer your question. "The right answer might be in there," Das said. "It might also hallucinate a likely answer that isn't anywhere in the pre-training data."

The AI search uses a concept called retrieval-augmented generation to incorporate what it finds on the internet into its answer. It collects information from a source you point it to (in this case, the search engine index) and tells the model to look there, instead of making something up, if the answer isn't in its training data. "You're telling the AI the answer is here, I just want you to find where," Das said. "You get the top 10 Google results, and you're telling the AI the answer is probably in here."
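Das's description of retrieval-augmented generation can be sketched like this; `retrieve()` and `generate()` are placeholder stand-ins for a real search backend and a real language model:

```python
# Minimal sketch of retrieval-augmented generation (RAG): fetch relevant
# documents first, then instruct the model to answer ONLY from them.
# retrieve() and generate() are placeholders, not a real search API or LLM.

def retrieve(query, corpus, k=3):
    """Naive retrieval: score documents by how many words they share with the query."""
    words = set(query.lower().split())
    ranked = sorted(corpus, key=lambda doc: len(words & set(doc.lower().split())), reverse=True)
    return ranked[:k]

def generate(prompt):
    """Placeholder for the LLM call; here it just returns the prompt unchanged."""
    return prompt

def rag_answer(query, corpus):
    context = "\n".join(retrieve(query, corpus))
    # The instruction steers the model toward the retrieved text instead of
    # whatever it memorized in training -- "the answer is probably in here."
    prompt = (
        "Answer using ONLY the context below. If the answer is not there, "
        f"say you don't know.\n\nContext:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)

corpus = [
    "CCH Pounder guest-starred on The X-Files.",
    "Glue does not belong on pizza.",
    "ER premiered on NBC in 1994.",
]
print(rag_answer("Which show did CCH Pounder guest-star on?", corpus))
```

The key design choice is the prompt: grounding the model in retrieved text reduces, but as the article notes does not eliminate, the risk of the model inventing an answer.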
[Image: Perplexity offers AI-powered search through its app and through a newly announced browser. Stefani Reynolds/Bloomberg via Getty Images]

Can you really trust AI search results?

These AI-powered search tools might be more reliable than just using a chatbot itself, because they're pulling from current, relevant information and giving you links, but you still have to think critically about it. Here are some tips from the experts:

Bring your human skepticism

Consider how bad people are at telling when you're sarcastic on the internet. Then think about how bad a large language model might be at it. That's how Google's AI Overviews came up with the idea to put glue on pizza -- by pulling information from a humorous Reddit post and repeating it as if it were real culinary advice. "The AI doesn't know what is authentic and what is humorous," Das said. "It's going to treat all that information the same." Remember to use your own judgement and look for the sources of the information. They might not be as accurate as the LLM thinks, and you don't want to make important life decisions based on somebody's joke on an internet forum that a robot thought was real.

AI can still make stuff up

Even though they're supposed to be pulling from search results, these tools can still make things up in the absence of good information. That's how AI Overviews started creating fake definitions for nonsensical sayings. The retrieval-augmented generation might reduce the risk of outright hallucinations but doesn't eliminate it, according to Das. Remember that an LLM doesn't have a sense of what the right answer to a question is. "It's just predicting what is the next English word that would come after this previous stream of other English words or other language words," Das said. "It doesn't really have a concept of truthiness in that sense."

Check your sources

Traditional search engines are very hands-off.
They will give you a list of websites that appear relevant to your search and let you decide whether you want to trust them. Because an AI search is consolidating and rewriting that information itself, it may not be obvious when it's using an untrustworthy source. "Those systems are not going to be entirely error-free, but I think the challenge is that over time you will lose an ability to catch them," Levin said. "They will be very convincing and you will not know how to really go and verify, or you will think you don't need to go and verify."

You can, of course, check every source yourself. But that's exactly the kind of work you were probably hoping to avoid by using this new system that's designed to save you time and effort. "The problem is if you're going to do this analysis for every query you perform in ChatGPT, what is the purpose of ChatGPT?" Levin said.
