
AI Can't Replace Education—Unless We Let It
Will AI replace human expertise? Many tech CEOs think so. They describe a future in which AI replaces engineers, doctors, and teachers. Meta CEO Mark Zuckerberg recently predicted that AI will replace the mid-level engineers who write the company's code. NVIDIA's Jensen Huang has gone further, declaring coding itself obsolete.
While Bill Gates admits the breakneck pace of AI development is 'profound and even a little bit scary,' he celebrates how it could make elite knowledge universally accessible. He, too, foresees a world where AI replaces coders, doctors, and teachers, offering free high-quality medical advice and tutoring.
Despite the hype, AI cannot 'think' for itself or act without humans—for now. Indeed, whether AI enhances learning or undermines understanding hinges on a crucial decision: Will we allow AI to just predict patterns? Or will we require it to explain, justify, and stay grounded in the laws of our world?
AI needs human judgment, not just to supervise its output but also to embed scientific guardrails that give it direction, grounding, and interpretability.
Physicist Alan Sokal recently compared AI chatbots to a moderately good student taking an oral exam. 'When they know the answer, they'll tell it to you, and when they don't know the answer they're really good at bullsh*tting,' he said at an event at the University of Pennsylvania. Unless a user already knows a lot about a given subject, Sokal noted, they might not catch a 'bullsh*tting' chatbot. That, to me, perfectly captures AI's so-called 'knowledge.' It mimics understanding by predicting word sequences, but it lacks conceptual grounding.
That's why 'creative' AI systems struggle to distinguish real from fake, and debates have emerged about whether large language models truly grasp cultural nuance. When teachers worry that AI tutors may hinder students' critical thinking, or doctors fear algorithmic misdiagnosis, they identify the same flaw: machine learning is brilliant at pattern recognition, but lacks the deep knowledge born of systematic, cumulative human experience and the scientific method.
That is where a growing movement in AI offers a path forward. It focuses on embedding human knowledge directly into how machines learn. PINNs (Physics-Informed Neural Networks) and MINNs (Mechanistically Informed Neural Networks) are examples. The names might sound technical, but the idea is simple: AI gets better when it follows the rules, whether those rules come from physics, biology, or social dynamics. That means we still need humans not just to use knowledge, but to create it. AI works best when it learns from us.
I see this in my own work with MINNs. Instead of letting an algorithm guess what works based on past data, we program it to follow established scientific principles. Take a local family lavender farm in Indiana. For this kind of business, blooming time is everything. Harvesting too early or too late reduces essential oil potency, hurting quality and profits. A generic, purely data-driven AI may waste time combing through irrelevant patterns. A MINN, by contrast, starts with plant biology: it uses equations linking heat, light, frost, and water to blooming to make timely, financially meaningful predictions. But it only works when it knows how the physical, chemical, and biological world works. That knowledge comes from science, which humans develop.
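The mechanism behind these models can be sketched in a few lines. The example below is a minimal, hypothetical illustration (not the actual MINN used on the farm): it adds a penalty to an ordinary data-fit loss whenever a model's predictions violate a known physical law, here Newton's law of cooling. A model trained against such a loss is nudged toward physically plausible behavior even where data are sparse.

```python
import numpy as np

def physics_informed_loss(t, y_pred, y_obs, k=0.5, t_env=20.0, lam=1.0):
    """Ordinary data-fit loss plus a penalty for violating a physical law.

    The law here is Newton's law of cooling, dT/dt = -k * (T - T_env):
    predictions that cool faster or slower than physics allows are
    penalized even when they happen to match the observations.
    """
    data_loss = np.mean((y_pred - y_obs) ** 2)
    # Finite-difference estimate of dT/dt along the prediction curve.
    dTdt = np.gradient(y_pred, t)
    residual = dTdt + k * (y_pred - t_env)  # zero when the law holds exactly
    physics_loss = np.mean(residual ** 2)
    return data_loss + lam * physics_loss

# The analytic cooling curve satisfies the law, so its physics penalty is
# near zero; a wobbly curve laid on top of it is punished by both terms.
t = np.linspace(0.0, 10.0, 200)
exact = 20.0 + 80.0 * np.exp(-0.5 * t)          # T(0) = 100, cools toward 20
loss_exact = physics_informed_loss(t, exact, exact)
loss_wobbly = physics_informed_loss(t, exact + 5.0 * np.sin(t), exact)
```

The function name, the cooling constants, and the weighting parameter `lam` are all assumptions for illustration; real PINNs and MINNs typically compute the physics residual with automatic differentiation inside a neural-network training loop, but the core idea is the same: the loss function itself encodes how the world works.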
Imagine applying this approach to cancer detection: breast tumors emit heat from increased blood flow and metabolism, and predictive AI could analyze thousands of thermal images to identify tumors based solely on data patterns. However, a MINN, like the one recently developed by researchers at the Rochester Institute of Technology, uses body-surface temperature data and embeds bioheat transfer laws directly into the model. That means, instead of guessing, it understands how heat moves through the body, allowing it to identify what's wrong, what's causing it, why, and precisely where it is by utilizing the physics of heat flow through tissue. In one case, a MINN predicted a tumor's location and size within a few millimeters, grounded entirely in how cancer disrupts the body's heat signature.
The takeaway is simple: humans are still essential. As AI becomes more sophisticated, our role is not disappearing. It is shifting. Humans need to 'call bullsh*t' when an algorithm produces something bizarre, biased, or wrong. That isn't just a weakness of AI. It is humans' greatest strength. It means our knowledge also needs to grow so we can steer the technology, keep it in check, ensure it does what we think it does, and help people in the process.
The real threat isn't that AI is getting smarter. It is that we might stop using our intelligence. If we treat AI as an oracle, we risk forgetting how to question, reason, and recognize when something doesn't make sense. Fortunately, the future doesn't have to play out like this.
We can build systems that are transparent, interpretable, and grounded in the accumulated human knowledge of science, ethics, and culture. Policymakers can fund research into interpretable AI. Universities can train students who blend domain knowledge with technical skills. Developers can adopt frameworks like MINNs and PINNs that require models to stay true to reality. And all of us—users, voters, citizens—can demand that AI serve science and objective truth, not just correlations.
After more than a decade of teaching university-level statistics and scientific modeling, I now focus on helping students understand how algorithms work 'under the hood' by learning the systems themselves, rather than using them by rote. The goal is to raise literacy across the interconnected languages of math, science, and coding.
This approach is necessary today. We don't need more users clicking 'generate' on black-box models. We need people who can understand the AI's logic, its code and math, and catch its 'bullsh*t.'
AI will not make education irrelevant or replace humans. But we might replace ourselves if we forget how to think independently, and why science and deep understanding matter.
The choice is not whether to reject or embrace AI. It's whether we'll stay educated and smart enough to guide it.