Only Critical Thinking Ensures AI Makes Us Smarter, Not Dumber

Forbes · 21 hours ago
AI needs more human thinking, not less, but are we giving it the opposite?
We're entering a new era where artificial intelligence can generate content faster than we can apply critical thinking. In mere seconds, AI can summarize long reports, write emails in our tone and even generate strategic recommendations. But while these productivity gains are promising, there's an urgent question lurking beneath the surface: Are we thinking less because AI is doing more?
The very cognitive skills we need most in an AI-powered world are the ones that the tools may be weakening. When critical thinking takes a back seat, the consequences are almost comical—unless it's your company making headlines.
Real-world breakdowns keep showing what can happen when critical thinking is absent.
As AI models get more advanced and powerful, they exhibit even higher rates of hallucinations, making human supervision even more critical. And yet, a March 2025 McKinsey study found that only 27% of organizations reported reviewing 100% of generative AI outputs. With so much of the focus on the technology itself, many organizations clearly don't yet understand the growing importance of human oversight.
Clarifying what critical thinking is
While most people agree that critical thinking is essential for evaluating AI, there's less agreement on what it actually means. The term is often used as a catch-all for a wide range of analytical skills—from reasoning and logic to questioning and problem-solving—which can make it feel fuzzy or ambiguous.
At its core, critical thinking is both a mindset and a method. It's about questioning what we believe, examining how we think and applying tools such as evidence and logic to reach better conclusions.
I define critical thinking as the ability to evaluate information in a thoughtful and disciplined manner to make sound judgments instead of accepting things at face value.
As part of researching this article, I spoke with Fahed Bizzari, Managing Partner at Bellamy Alden AI Consulting, who helps organizations implement AI responsibly. He described the ideal mindset as "a permanent state of cautiousness" in which "you have to perpetually be on your guard to take responsibility for its intelligence as well as your own." This mindset of constant vigilance is essential, but it needs practical tools to make it work in daily practice.
The GPS Effect: What happens when we stop thinking
This need for vigilance is more urgent than ever. A troubling pattern has emerged where researchers are finding that frequent AI use is linked to declining critical thinking skills. In a recent MIT study, 54 participants were assigned to write essays using one of three approaches: their own knowledge ('brain only'), Google Search, or ChatGPT. The group that used the AI tool showed the lowest brain engagement, weakest memory recall and least satisfaction with their writing. This cognitive offloading produced essays that were homogeneous and 'soulless,' lacking originality, depth and critical engagement. Ironically, the very skills needed to assess AI output—like reasoning, judgment, and skepticism—are being eroded or suppressed by overreliance on the technology.
It's like your sense of direction slowly fading because you rely on GPS for every trip—even around your own neighborhood. When the GPS fails due to a system error or lost signal, you're left disoriented. The skill you once had has atrophied because you outsourced navigation to the GPS.
Bizzari noted, "AI multiplies your applied intelligence exponentially, but in doing so, it chisels away at your foundational intelligence. Everyone is celebrating the productivity gains today, but it will eventually become a huge problem." His point underscores a deeper risk of overdependence on AI. We don't just make more mistakes—we lose our ability to catch them.
Why fast thinking isn't always smart thinking
We like to think we evaluate information rationally, but our brains aren't wired that way. As psychologist Daniel Kahneman explains, we tend to rely on System 1 thinking, which is fast, automatic and intuitive. It's efficient, but it comes with tradeoffs. We jump to conclusions and trust whatever sounds credible. We don't pause to dig deeper, which makes us especially susceptible to AI mistakes.
AI tools generate responses that are confident, polished and easy to accept. They give us what feels like a good answer—almost instantly and with minimal effort. Because it sounds authoritative, System 1 gives it a rubber stamp before we've even questioned it. That's where the danger lies.
To catch AI's blind spots, exaggerations or outright hallucinations, we must override that System 1 mental reflex. That means activating System 2 thinking, which is the slower, more deliberate mode of reasoning. It's the part of us that checks sources, tests assumptions and evaluates logic. If System 1 is what trips us up with AI, System 2 is what safeguards us.
The Critical Five: A framework for turning passengers into pilots
You can't safely scale AI without scaling critical thinking. Bizzari cautioned that if we drop our guard, AI will become the pilot—not the co-pilot—and we become unwitting passengers. As organizations become increasingly AI-driven, they can't afford to have more passengers than pilots. Everyone tasked with using AI—from analysts to executives—needs to actively guide decisions in their domains.
Fortunately, critical thinking can be learned, practiced and strengthened over time. But because our brains are wired for efficiency and favor fast, intuitive System 1 thinking, it's up to each of us to proactively engage System 2 to spot flawed logic, hidden biases and overconfident AI responses.
Here's how to put this into practice. I've created The Critical Five framework, which breaks critical thinking into five key components, each with both a mindset and a method perspective:
To make critical thinking less ambiguous, The Critical Five framework breaks it down into five key components.
Just ASK: A quick AI check for busy minds
While these five skills provide a solid foundation for AI-related critical thinking, they don't operate in a vacuum. Just as pilots must adapt their approach based on weather conditions, aircraft type and destination, we must be able to adapt our critical thinking skills to fit different circumstances. Your focus and level of effort will be shaped by the following key factors:
Critical thinking doesn't happen in a vacuum. It is shaped by an individual's domain expertise, org culture and time constraints.
Recognizing that many scenarios with AI output may not demand an in-depth review, I've developed a quick way of injecting critical thinking into daily AI usage. This is particularly important because, as Bizzari highlighted, "Current AI language models have been designed primarily with a focus on plausibility, not correctness. So, it can make the biggest lie on earth sound factual and convincing." To counter this exact problem, I created a simple framework anyone can apply in seconds. Just ASK:
For quick evaluations, focus on questioning the assumptions, sources and your objectivity.
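To make the ASK habit concrete, the three checks above can be sketched as a simple pre-acceptance gate. This is an illustrative sketch only—the function and field names are mine, not from the article—and the prompts paraphrase the checklist of questioning assumptions, sources and your own objectivity:

```python
# Hypothetical sketch of the "Just ASK" check as a gate applied before
# accepting AI output. Names and prompt wording are illustrative.

ASK_PROMPTS = {
    "assumptions": "What is this output taking for granted that I haven't verified?",
    "sources": "Can I trace the key claims back to a source I trust?",
    "objectivity": "Am I accepting this because it's correct, or because it sounds confident?",
}

def review_ai_output(answers: dict[str, bool]) -> str:
    """Return a disposition given yes/no answers to the ASK prompts.

    `answers` maps each prompt key to True when that check passed
    (e.g. answers["sources"] is True when claims were traceable).
    Any missing or failed check flags the output for deeper review.
    """
    failed = [key for key in ASK_PROMPTS if not answers.get(key, False)]
    if not failed:
        return "accept"
    return "review: " + ", ".join(failed)

# Example: the sources could not be traced, so the output is flagged rather
# than accepted at face value.
print(review_ai_output({"assumptions": True, "sources": False, "objectivity": True}))
```

The point of the sketch is not automation—the judgment stays human—but that the checks are few and fast enough to run on every piece of AI output.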
To show this approach in action, I'll use an example where I've prompted an AI tool to provide a marketing strategy for my small business.
This quick evaluation could reveal potential blind spots that might otherwise turn promising AI recommendations into costly business mistakes, like a misguided marketing campaign.
The future of AI depends on human thinking
If more employees simply remember to "always ASK before using AI output," your organization can begin building a culture that actively safeguards against AI overreliance. Whether they use the full Critical Five framework or the quick ASK method, people transform from passive passengers into engaged pilots who actively steer how AI is used and trusted.
AI can enhance our thinking, but it should never replace it. Left unchecked, AI encourages shortcuts that lead to the costly mistakes we saw earlier. Used wisely, it becomes a powerful, strategic partner. This isn't about offloading cognition. It's about upgrading it—by pairing powerful tools with thoughtful, engaged minds.
In the end, AI's value won't come from removing us from the process—it will come from how disciplined we are in applying critical thinking to what it helps us generate.