
Is ChatGPT Making Us Stupid?
In boardrooms and classrooms, coffee shops and cubicles, the same question keeps coming up: Is ChatGPT making us smarter, or is it making us intellectually lazy—maybe even stupid?
There's no question that generative artificial intelligence is a game-changer. ChatGPT drafts our emails, answers our questions, and completes our sentences. For students, it's become the new CliffsNotes. For professionals, a brainstorming device. For coders, a potential job killer. In record time, it has become a productivity enhancer for almost everything. But what is it doing to our brains?
As someone who has spent his career helping clients anticipate and prepare for the future, I believe this question deserves our attention. With any new technology, concerns inevitably arise about its impact. When calculators were introduced, people worried that students would lose the ability to perform basic arithmetic and mental math. When GPS arrived, some fretted that we would lose our innate sense of direction. And when the internet bloomed, people grew alarmed that easy access to information would erode our capacity for concentration and contemplation.
'Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, is what often gets shortchanged by internet grazing,' noted technology writer Nicholas Carr in a prescient 2008 Atlantic article, 'Is Google Making Us Stupid?'
Today, Carr's question needs to be asked anew, this time of a different technology. Just-released research studies are helping us understand what happens when we allow ChatGPT to think for us.
What Happens to the Brain on ChatGPT?
Researchers at MIT invited fifty-four participants to write essays across four sessions, divided into three groups: one using ChatGPT, one using Google, and one using only their brainpower. In the final session, the groups switched roles. What these researchers found should make all of us pause.
Participants who used ChatGPT consistently produced essays that scored lower in originality and depth than those who used search or wrote unaided. More strikingly, brain imaging revealed a decline in cognitive engagement in ChatGPT users. Brain regions associated with attention, memory, and higher-order reasoning were noticeably less active.
The MIT researchers introduced the concept of "cognitive debt"—the subtle but accumulating cost to our mental faculties when we outsource too much of our thinking to AI. 'Just as relying on a GPS dulls our sense of direction, relying on AI to write and reason can dull our ability to do those very things ourselves,' notes the MIT report. 'That's a debt that compounds over time.'
The second study, published in the peer-reviewed Swiss journal Societies, is titled 'AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.' It broadens the lens from a lab experiment to everyday life.
Researchers surveyed 666 individuals across a range of ages and educational backgrounds to explore how often people rely on AI tools—and how that reliance affects their ability to think critically. The findings revealed a strong negative correlation between frequent AI use and critical thinking performance. Those who often turned to AI for tasks like writing, researching, or decision-making exhibited lower 'metacognitive' awareness and analytical reasoning. This wasn't limited to any one demographic, but younger users and those with lower educational attainment were particularly affected.
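To make the reported statistic concrete, here is a minimal sketch in Python of how such a negative correlation is typically quantified as a Pearson coefficient. The data below is invented purely for illustration; it is not the study's data.

```python
# Minimal sketch: quantifying a negative correlation between
# self-reported AI reliance and a critical-thinking score.
# The data is INVENTED for illustration, not taken from the study.
from statistics import mean, stdev

ai_reliance = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]                 # e.g. hours/week using AI tools
critical_thinking = [88, 85, 81, 80, 74, 70, 66, 61, 57, 50]  # e.g. test score out of 100

def pearson_r(xs, ys):
    """Pearson correlation: +1 perfect positive, -1 perfect negative."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

print(f"r = {pearson_r(ai_reliance, critical_thinking):.2f}")  # about -0.99 for this toy data
```

A coefficient near -1 means that, across respondents, heavier AI reliance reliably pairs with lower critical-thinking scores; the study reports the same direction of effect, not this toy magnitude.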
What's more, the study confirmed that over-reliance on AI encourages 'cognitive offloading'—our tendency to let external tools do the work our brains used to do. While cognitive offloading isn't new (we've done it for centuries with calculators and calendars), AI takes it to a whole new level. "When your assistant can 'think' for you, you may stop thinking altogether," the report notes.
Are We Letting the Tool Use Us?
These studies aren't anti-AI. Neither am I. I use ChatGPT daily. As a futurist, I see ChatGPT and similar tools as transformational breakthroughs—the printing press of the 21st century. They unlock productivity, unleash creativity, and lower barriers to knowledge.
But just as the printing press didn't eliminate the need to learn to read, ChatGPT doesn't absolve us of the responsibility to think. And that is the danger today: that people will stop doing their own thinking.
These studies are preliminary, and further research is needed. However, there is sufficient evidence to suggest that heavy use of AI is not only a game changer but also an alarming threat to humanity's ability to solve problems, communicate with one another, and perhaps to thrive. Part of the answer lies in integrating metacognitive strategies, thinking about thinking, into education, workplace training, and even product design. In other words, don't just use AI; engage with it. The line we must walk is between augmentation and abdication. Are we using AI to elevate our thinking? Or are we turning over the keys to robots?
The challenge, then, is to embrace this new technology while keeping our cognitive edge sharp.
The danger isn't that ChatGPT will replace us; it's that it can make us stupid if we let it replace our thinking instead of enriching it. The difference lies in how we use it, and more importantly, how aware we are while using it. The risk is that we'll stop developing the parts of ourselves that matter most, because it's faster and easier to let the machine do it. Let's not allow that to happen.

Related Articles


Forbes
Copyrighted Books Are Fair Use For AI Training. Here's What To Know.
The sudden presence of generative AI systems in our daily lives has prompted many to question the legality of how AI systems are created and used. One question relevant to my practice: Does the ingestion of copyrighted works such as books, articles, photographs, and art to train an AI system render the system's creators liable for copyright infringement, or is that ingestion defensible as a 'fair use'? Two recent court rulings answer this novel question, and the answer is: yes, the use of copyrighted works for AI training is a fair use – at least under the specific facts of those cases and the evidence presented by the parties. But because the judges in both cases were somewhat expansive in their dicta about how their decisions might have been different, they provide a helpful roadmap as to how other lawsuits might be decided, and how a future AI system might be designed so as not to infringe copyright. The rulings in the Anthropic and Meta cases deserve attention. Let's take a closer look.

More than 30 lawsuits have been filed in the past year or two, in all parts of the nation, by authors, news publishers, artists, photographers, musicians, record companies and other creators against various AI systems, asserting that using the creators' copyrighted works for AI training purposes violates their copyrights. The systems' owners invariably assert fair use as a defense.

The Anthropic Case

The first decision, issued in June, involved a lawsuit by three book authors, who alleged that Anthropic PBC infringed the authors' copyrights by copying several of their books (among millions of others) to train its text generative AI system called Claude. Anthropic's defense was fair use. Judge Alsup, sitting in the Northern District of California, held that the use of the books for training purposes was a fair use, and that Anthropic's conversion of print books it had purchased into digital copies was also a fair use. However, Anthropic's use of pirated digital copies to create a central library of 'all the books in the world' for uses beyond training Claude was not a fair use. Whether Anthropic's copying of its central library copies for purposes other than AI training was fair use (apparently there was some evidence that this was going on, but on a poorly developed record) was left for another day.

It appears that Anthropic decided early in designing Claude that books were the most valuable training materials for a system meant to 'think' and write like a human. Books provide patterns of speech, prose and proper grammar, among other things. Anthropic chose to download millions of free digital copies of books from pirate sites. It also purchased millions of print copies of books from booksellers, converted them to digital copies and threw the print copies away, resulting in a massive central library of 'all the books in the world' that Anthropic planned to keep 'forever.' None of this activity was done with the authors' permission. Significantly, Claude was designed so that it would not reproduce any of the plaintiffs' books as output, and the plaintiffs neither asserted nor offered evidence that it did so.
The assertions of copyright infringement were therefore limited to Claude's ingestion of the books for training, to build the central library, and for the unidentified non-training purposes. Users of Claude ask it questions and it returns text-based answers. Many use it for free; corporate and certain other users pay, generating over one billion dollars annually in revenue for Anthropic.

The Anthropic Ruling

To summarize the legal analysis, Judge Alsup evaluated each 'use' of the books separately, as courts must under the Supreme Court's 2023 Warhol v. Goldsmith fair use decision. Turning first to the use of the books as training data, Alsup found that the use of the books to train Claude was a 'quintessentially' transformative use which did not supplant the market for the plaintiffs' books, and as such qualified as fair use. He further found that the conversion of the purchased print books to digital files, where the print copies were thrown away, was also a transformative use akin to the Supreme Court's 1984 Betamax decision, in which the court held that home recording of free TV programming for time-shifting purposes was a fair use. Here, Judge Alsup reasoned, Anthropic lawfully purchased the books and was merely format-shifting for space and search capability purposes, and, since the original print copy was discarded, only one copy remained (unlike the now-defunct ReDigi platform of 2018). By contrast, the downloading of more than seven million pirated copies from pirate sites, which was illegal at the outset, for central library uses other than training could not be held a fair use as a matter of law, because the central library use was unjustified and the pirated copies could supplant the market for the originals.

Anthropic Is Liable For Unfair Uses – The Cost of Doing Business?

The case will continue on the issue of damages for the pirated copies of the plaintiffs' books used for central library purposes and not for training purposes. The court noted that Anthropic's later purchase of copies of plaintiffs' books to replace the pirated copies will not absolve it of liability, but might affect the amount of statutory damages it has to pay. Statutory damages range from a minimum of $750 per copy up to a maximum of $150,000 per copy. One is tempted to wonder about the millions of copyright owners beyond the three plaintiffs: might Anthropic have to pay statutory damages for seven million copies if the pending class action is certified? Given how lucrative Claude is, could that be just a cost of doing AI business?
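Back-of-the-envelope arithmetic shows why that question has teeth. Here is a minimal sketch in Python using only the per-copy statutory range and the seven-million-copy figure discussed above; actual awards would depend on what a court finds for each work infringed.

```python
# Rough statutory-damages exposure for the pirated library copies,
# using the $750-$150,000 per-copy range and the roughly seven
# million pirated copies at issue. Illustrative only.
copies = 7_000_000
per_copy_min, per_copy_max = 750, 150_000

print(f"Minimum exposure: ${copies * per_copy_min:,}")  # $5,250,000,000
print(f"Maximum exposure: ${copies * per_copy_max:,}")  # $1,050,000,000,000
```

Even at the statutory minimum, the exposure runs to several billion dollars, which is what gives the 'cost of doing business' question its bite.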
The Meta Case

The second decision, issued on June 25, two days after the Anthropic decision, involves thirteen book authors, most of them famous non-fiction writers, who sued Meta, the creator of a generative AI model called Llama, for using the plaintiffs' books as training data. Llama (like Claude) is free to download, but generates billions of dollars for Meta. Like Anthropic, Meta initially looked into licensing rights from book publishers, but eventually abandoned those efforts and instead downloaded the books it desired from pirate sites called 'shadow libraries,' which were not authorized by the copyright owners to store their works. Meta's decision to use shadow libraries to source the books was approved by CEO Mark Zuckerberg.

Also like Claude, Llama was designed not to produce output that reproduced its source material in whole or substantial part; the record indicated that Llama could not be prompted to reproduce more than 50 words from the plaintiffs' books. Judge Chhabria, also in the Northern District of California, held that Meta's use of plaintiffs' works to train Llama was a fair use, but he did so very reluctantly, chiding the plaintiffs' lawyers for making the 'wrong' arguments and failing to develop an adequate record. Chhabria's decision is riddled with his perceptions of the dangers of AI systems potentially flooding the market with substitutes for human authorship and destroying incentives to create.

The Meta Ruling

Based on the parties' arguments and the record before him, Judge Chhabria, like Judge Alsup, found that Meta's use of the books as training data for Llama was 'highly transformative,' noting that the purpose of the use, creating an AI system, was very different from the plaintiffs' purpose, education and entertainment. Rejecting plaintiffs' argument that Llama could be used to imitate the style of their writing, Judge Chhabria noted that 'style is not copyrightable.' The fact that Meta sourced the books from shadow libraries rather than authorized copies didn't make a difference; Judge Chhabria (in my opinion rightly) reasoned that making fair use depend on whether the source copy was authorized begs the question of whether the secondary copying was lawful. Although plaintiffs tried to make the 'central library for purposes other than training' argument that succeeded in the Anthropic case, Judge Chhabria concluded that the evidence simply didn't support that copies were used for purposes other than training, and noted that even if some copies were not used for training, 'fair use doesn't require that the secondary user make the lowest number of copies possible.' Since Llama couldn't generate exact or substantially similar versions of plaintiffs' books, he found there was no substitution harm, and he noted that plaintiffs' lost licensing revenue for AI training is not a cognizable harm.

Judge Chhabria's Market Dilution Prediction

In dicta, clearly expressing frustration with the outcome in Meta's favor, Judge Chhabria discussed in detail how he thought market harm could, and should, be shown in other cases through the concept of 'market dilution': a system like Llama, while not producing direct substitutes for a plaintiff's work, could compete with and thus dilute the plaintiff's market. Some types of works may be more susceptible to this harm than award-winning fiction, he said, such as news articles or 'typical human-created romance or spy novels.' But since the plaintiffs before him did not make those arguments or present a record supporting them, he could not rule on them. That opportunity is left for another day.

AI System Roadmap For Non-Infringement

Based on these two court decisions, here are my take-aways for building a roadmap for a non-infringing generative AI system using books:


WIRED
Despite Protests, Elon Musk Secures Air Permit for xAI
Jul 2, 2025 7:41 PM

xAI's gas turbines get official approval from Memphis, Tennessee, even as civil rights groups prepare to sue over alleged Clean Air Act violations.

A local health department in Memphis has granted Elon Musk's xAI data center an air permit to continue operating the gas turbines that power the company's Grok chatbot. The permit comes amid widespread community opposition and a looming lawsuit alleging the company violated the Clean Air Act. The Shelby County Health Department released its air permit for the xAI project Wednesday, after receiving hundreds of public comments. The news was first reported by the Daily Memphian.

In June, the Memphis Chamber of Commerce announced that xAI had chosen a site in Memphis to build its new supercomputer. The company's website boasts that it was able to build the supercomputer, Colossus, in just 122 days. That speed was due in part to the mobile gas turbines the company quickly began installing at the campus, the site of a former manufacturing facility. Colossus allowed xAI to quickly catch up to rivals OpenAI, Google, and Anthropic in building cutting-edge artificial intelligence. It was built using 100,000 Nvidia H100 GPUs, making it likely the world's largest supercomputer.

xAI's Memphis campus is located in a predominantly Black community known as Boxtown, which has been historically burdened with industrial projects that cause pollution. Gas turbines like the ones xAI is using in Memphis can be a significant source of harmful emissions, like nitrogen oxides, which create smog. Memphis already has some of the highest child asthma rates in Tennessee. Since xAI began running its turbines, residents have repeatedly met and rallied against the project. 'My neighbors and I are forced to breathe the pollution this company pumps into our air every day. We smell it. We inhale it. This isn't just an environmental issue — it's a public health emergency,' wrote State Rep. Justin Pearson, who grew up near Boxtown, in an MSNBC op-ed last week.

Under the Clean Air Act, 'major' sources of emissions—like a cluster of gas turbines—need a permit, known as a Prevention of Significant Deterioration (PSD) permit. However, Shelby County Health Department officials told local reporters in August that this wasn't necessary for xAI since its turbines weren't designed to be permanent. Amid mounting local opposition, xAI finally applied for a permit with the Shelby County Health Department in January, months after it first began running the turbines.

Last month, the NAACP and the Southern Environmental Law Center (SELC) announced that they intended to sue xAI for violating the Clean Air Act. 'xAI's decision to install and operate dozens of polluting gas turbines without any permits or public oversight is a clear violation of the Clean Air Act,' said senior SELC attorney Patrick Anderson in a press release. 'Over the last year, these turbines have pumped out pollution that threatens the health of Memphis families. This notice paves the way for a lawsuit that can hold xAI accountable for its unlawful refusal to get permits for its gas turbines.'

The new permit from the health department allows the company to operate 15 turbines on the site until 2027. In June, Memphis mayor Paul Young wrote an op-ed in the Tennessee Commercial Appeal noting that xAI was operating 21 turbines at the time. SELC says that aerial footage it took in April, however, showed as many as 35 turbines operating at the site.
xAI did not immediately respond to WIRED's request for comment, including questions about how many turbines it is currently operating at the facility. Shelby County did not immediately respond to a request for comment.

In May, Sharon Wilson, a certified optical gas imaging thermographer, traveled to Memphis to film emissions from the site with a special optical gas imaging camera that records usually invisible emissions. Wilson tracks leaks from facilities in the Permian Basin, one of the world's most prolific oil- and gas-producing regions, in Texas. She alleged to WIRED that what she saw in Memphis was one of the densest clouds of emissions she'd ever seen. 'I expected to see the typical power plant type of pollution that I see,' she says. 'What I saw was way worse than what I expected.'

This is a developing story. Please check back for updates.


Fast Company
Critical minerals are in the U.S., not in far-off mines
The clean energy transition is accelerating—but it's running into a critical roadblock: the mineral supply chain. Lithium, cobalt, and other critical minerals power everything from electric vehicles to grid-scale batteries. But the world cannot mine these minerals fast enough to keep up. By 2040, global demand for lithium alone is expected to surge more than 700%. Yet traditional mining remains slow, polluting, and geopolitically risky. Industry leaders and policymakers agree: We can't build a sustainable tomorrow on an unsustainable supply chain. But there's a smarter solution—and it's already flowing beneath us.

The minerals hidden in our water

Lithium and other critical minerals are already present in surprising abundance—not only in traditional hard rock or salar basins, but in often-overlooked water sources like geothermal brines, industrial effluents, and oilfield produced water. For years, these streams were dismissed as too complex or costly to process. But that's beginning to change. Advancements in direct lithium extraction (DLE) are now making it possible to recover lithium from these unconventional sources—cleanly, efficiently, and at scale. By isolating lithium directly from water without relying on evaporation ponds or invasive mining operations, DLE opens access to vast untapped U.S. domestic reserves.

While the potential is immense, DLE remains an emerging technology—one that only a few companies globally are starting to prove at commercially viable levels of recovery, purity, and sustainability. DLE technologies that balance performance, sustainability, and cost are likely to define the next generation of critical mineral production. Among the most promising approaches are integrated, modular systems that combine extraction, concentration, and conversion processes into a single platform.

Clayton Valley in Nevada, the Smackover Formation in Arkansas, Louisiana, and Texas, and the Marcellus Shale in Pennsylvania, West Virginia, and New York are emerging as the proving grounds for this next generation of lithium production. Here, innovative DLE platforms are being piloted in real-world conditions and tested for adaptability to diverse brines and scalability for industrial deployment. As regulatory support grows and urgency around domestic critical minerals intensifies, the outcomes from these early deployments are likely to shape the future blueprint for lithium extraction.

Faster, cleaner, smarter

Compared to conventional methods, DLE slashes production timelines from months—or even years—to just hours or days. It eliminates the need for expansive land use and significantly reduces the environmental footprint. As global demand for lithium surges, DLE represents more than a technical advancement—it's a radical shift toward cleaner, faster, and smarter mineral production. DLE also delivers powerful cost advantages over traditional hard rock mining and refining. Its modular, compact design requires far less capital investment, bypassing the need for open-pit mines and evaporation ponds. Operating costs are lower thanks to higher lithium recovery rates—often 60–90% versus 30–50%—and energy-efficient processes that use fewer chemicals. Faster deployment means quicker returns, and a lighter environmental footprint streamlines permitting and regulatory approval. Combined, these benefits make DLE a breakthrough solution for the next generation of lithium production.
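To see what those recovery-rate figures mean in practice, here is a minimal sketch in Python comparing yields under the two ranges cited above. The 1,000-tonne annual lithium feed is a hypothetical number chosen only to make the comparison concrete.

```python
# Illustrative yield comparison using the recovery-rate ranges cited
# in the article: 60-90% for DLE versus 30-50% for conventional methods.
# The 1,000-tonne annual lithium feed is HYPOTHETICAL.
feed_tonnes = 1_000  # lithium entering the process per year (hypothetical)

processes = {
    "Direct lithium extraction": (0.60, 0.90),
    "Conventional methods":      (0.30, 0.50),
}

for name, (low, high) in processes.items():
    print(f"{name}: {feed_tonnes * low:,.0f}-{feed_tonnes * high:,.0f} tonnes recovered")
# On the same feed, DLE's cited range recovers roughly 1.8-2x as much lithium.
```

That multiple on recovered product from the same feed is a large part of the operating-cost advantage described above.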
Resilient supply chains start at home

As geopolitical tensions strain access to global mineral reserves, the urgency for domestic solutions has never been greater. DLE offers the U.S. and other hard-rock-constrained nations a pathway to build resilient, local supply chains. This approach opens the door to new jobs and economic activity in otherwise overlooked geographies.

U.S. policy is beginning to catch up. The federal government has designated critical mineral security a national imperative. Programs like the Department of Energy's MINER initiative are catalyzing research into greener, more efficient extraction methods. Still, government support alone isn't enough. It's time for the private sector to lead.

The call to lead

The minerals needed to power the future aren't buried in far-off mines in South America and Australia. They're right here, already flowing beneath us. Utilities, oilfield operators, and technology innovators already manage the infrastructure, data, and water resources that can power this next frontier. By reimagining wastewater not as a burden but as a resource, we can unlock new revenue streams. In doing so, we also help build resilient domestic supply chains and strengthen national energy security, reducing dependence on foreign lithium sources and securing the materials critical for the clean energy transition. It's time to stop digging deeper and start thinking smarter. Water isn't just a resource. It's a solution. For those bold enough to lead, it's the future.