The human brain doesn't learn, think or recall like an AI. Embrace the difference
During past technological revolutions, scientists, as well as popular culture, tended to explore the idea that the human brain could be understood as analogous to one new machine after another: a clock, a switchboard, a computer. The latest erroneous metaphor is that our brains are like AI systems.
I am a cognitive neuroscience researcher, and I think these metaphors are dangerously wrong. The biggest threat isn't that they confuse us about how AI works, but that they mislead us about our own brains.
I've seen this shift over the past two years in conferences, courses and conversations in the field of neuroscience and beyond. Words like 'training,' 'fine-tuning' and 'optimization' are frequently used to describe human behavior. But we don't train, fine-tune or optimize in the way that AI does. And such inaccurate metaphors can cause real harm.
The 17th-century idea of the mind as a 'blank slate' imagined children as empty surfaces shaped entirely by outside influences. This led to rigid education systems that tried to eliminate differences in neurodivergent children, such as those with autism, ADHD or dyslexia, rather than offering personalized support. Similarly, the early-20th-century 'black box' model from behaviorist psychology claimed that only visible behavior mattered. As a result, mental healthcare often focused on managing symptoms rather than understanding their emotional or biological causes.
And now there are new misbegotten approaches emerging as we start to see ourselves in the image of AI. Digital educational tools developed in recent years, for example, adjust lessons and questions based on a child's answers, theoretically keeping the student at an optimal learning level. This is heavily inspired by how an AI model is trained.
This adaptive approach can produce impressive results, but it overlooks less measurable factors such as motivation or passion. Imagine two children learning piano with the help of a smart app that adjusts for their changing proficiency. One quickly learns to play flawlessly but hates every practice session. The other makes constant mistakes but enjoys every minute. Judging only by the terms we apply to AI models, we would say the child playing flawlessly has outperformed the other student.
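To make the point concrete, here is a minimal, hypothetical sketch (in Python, with invented numbers and function names, not any real app's algorithm) of the kind of adaptive loop such tools run: difficulty shifts with each answer, and the only outcome the system can report is accuracy.

```python
# Hypothetical sketch of an adaptive-learning loop: difficulty tracks
# measured accuracy, and the session "score" is accuracy alone.
# Nothing in this loop can observe whether the learner enjoys playing.

def run_session(answers_correct: list[bool]) -> dict:
    """Adjust difficulty after each answer; report an accuracy-only score."""
    difficulty = 1.0
    correct = 0
    for is_correct in answers_correct:
        if is_correct:
            correct += 1
            difficulty += 0.1                        # push toward harder material
        else:
            difficulty = max(1.0, difficulty - 0.1)  # ease off after a mistake
    return {"final_difficulty": round(difficulty, 1),
            "score": correct / len(answers_correct)}

# Two learners: one flawless but miserable, one error-prone but delighted.
flawless_but_miserable = run_session([True] * 20)
joyful_but_error_prone = run_session([True, False] * 10)
print(flawless_but_miserable["score"], joyful_but_error_prone["score"])  # 1.0 0.5
```

On these terms the first child 'wins' every time; enjoyment never enters the computation.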
But educating children is different from training an AI algorithm. That simplistic assessment would not account for the first student's misery or the second child's enjoyment. Those factors matter; there is a good chance the child having fun will be the one still playing a decade from now, and they might even end up a better and more original musician because they enjoy the activity, mistakes and all. I do think AI in learning is both inevitable and potentially transformative for the better, but if we assess children only in terms of what can be 'trained' and 'fine-tuned,' we will repeat the old mistake of emphasizing output over experience.
I see this playing out with undergraduate students, who, for the first time, believe they can achieve the best measured outcomes by fully outsourcing the learning process. Many have been using AI tools over the past two years (some courses allow it and some do not) and now rely on them to maximize efficiency, often at the expense of reflection and genuine understanding. They use AI as a tool that helps them produce good essays, yet the process in many cases no longer has much connection to original thinking or to discovering what sparks the students' curiosity.
If we continue thinking within this brain-as-AI framework, we also risk losing the vital thought processes that have led to major breakthroughs in science and art. These achievements did not come from identifying familiar patterns, but from breaking them through messiness and unexpected mistakes. Alexander Fleming discovered penicillin by noticing that mold growing in a petri dish he had accidentally left out was killing the surrounding bacteria. It was a fortunate mistake by a messy researcher, one that went on to save hundreds of millions of lives.
This messiness isn't just important for eccentric scientists; it is important to every human brain. One of the most interesting discoveries in neuroscience of the past two decades is the 'default mode network,' a group of brain regions that becomes active when we are daydreaming and not focused on a specific task. This network also plays a role in reflecting on the past, imagining the future and thinking about ourselves and others. Dismissing this mind-wandering as a glitch rather than embracing it as a core human feature will inevitably lead us to build flawed systems in education, mental health and law.
Unfortunately, it is particularly easy to confuse AI with human thinking. Microsoft describes generative AI models like ChatGPT on its official website as tools that 'mirror human expression, redefining our relationship to technology.' And OpenAI CEO Sam Altman recently highlighted his favorite new feature in ChatGPT called 'memory.' This function allows the system to retain and recall personal details across conversations. For example, if you ask ChatGPT where to eat, it might remind you of a Thai restaurant you mentioned wanting to try months earlier. 'It's not that you plug your brain in one day,' Altman explained, 'but … it'll get to know you, and it'll become this extension of yourself.'
The suggestion that AI's 'memory' will be an extension of our own is again a flawed metaphor — leading us to misunderstand the new technology and our own minds. Unlike human memory, which evolved to forget, update and reshape memories based on myriad factors, AI memory can be designed to store information with much less distortion or forgetting. A life in which people outsource memory to a system that remembers almost everything isn't an extension of the self; it breaks from the very mechanisms that make us human. It would mark a shift in how we behave, understand the world and make decisions. This might begin with small things, like choosing a restaurant, but it can quickly move to much bigger decisions, such as taking a different career path or choosing a different partner than we would have, because AI models can surface connections and context that our brains may have cleared away for one reason or another.
This outsourcing may be tempting because this technology seems human to us, but AI learns, understands and sees the world in fundamentally different ways, and doesn't truly experience pain, love or curiosity like we do. The consequences of this ongoing confusion could be disastrous — not because AI is inherently harmful, but because instead of shaping it into a tool that complements our human minds, we will allow it to reshape us in its own image.
Iddo Gefen is a PhD candidate in cognitive neuroscience at Columbia University and the author of the novel 'Mrs. Lilienblum's Cloud Factory.' His Substack newsletter, Neuron Stories, connects neuroscience insights to human behavior.
