
Is ChatGPT making us stupid?
Back in 2008, The Atlantic sparked controversy with a provocative cover story: "Is Google Making Us Stupid?"
In that 4,000-word essay, later expanded into a book, author Nicholas Carr suggested the answer was yes, arguing that technologies such as search engines were eroding Americans' ability to think deeply and retain knowledge.
At the core of Carr's concern was the idea that people no longer needed to remember or learn facts when they could instantly look them up online. While there might be some truth to this, search engines still require users to apply critical thinking to interpret and contextualize the results.
Fast-forward to today, and an even more profound technological shift is taking place. With the rise of generative AI tools such as ChatGPT, Internet users aren't just outsourcing memory -- they may be outsourcing thinking itself.
Generative AI tools don't just retrieve information; they can create, analyze and summarize it. This represents a fundamental shift: Arguably, generative AI is the first technology that could replace human thinking and creativity.
That raises a critical question: Is ChatGPT making us stupid?
As a professor of information systems who's been working with AI for more than two decades, I've watched this transformation firsthand. And as people increasingly delegate cognitive tasks to AI, I think it's worth considering exactly what we're gaining and what we risk losing.
AI and the Dunning-Kruger effect
Generative AI is changing how people access and process information. For many, it's replacing the need to sift through sources, compare viewpoints and wrestle with ambiguity.
Instead, AI delivers clear, polished answers within seconds. While those results may or may not be accurate, they are undeniably efficient. This has already led to big changes in how we work and think.
But this convenience may come at a cost. When people rely on AI to complete tasks and think for them, they may be weakening their ability to think critically, solve complex problems and engage deeply with information.
Although research on this point is limited, passively consuming AI-generated content may discourage intellectual curiosity, reduce attention spans and create a dependency that limits long-term cognitive development.
To better understand this risk, consider the Dunning-Kruger effect. This is the phenomenon in which people who are the least knowledgeable and competent tend to be the most confident in their abilities because they don't know what they don't know.
In contrast, more competent people tend to be less confident. This is often because they can recognize the complexities they have yet to master.
This framework can be applied to generative AI use. Some users may rely heavily on tools such as ChatGPT to replace their cognitive effort, while others use it to enhance their capabilities.
In the former case, they may mistakenly believe they understand a topic because they can repeat AI-generated content. In this way, AI can artificially inflate one's perceived intelligence while actually reducing cognitive effort.
This creates a divide in how people use AI. Some remain stuck on the "peak of Mount Stupid," using AI as a substitute for creativity and thinking. Others use it to enhance their existing cognitive capabilities.
In other words, what matters isn't whether a person uses generative AI, but how. If used uncritically, ChatGPT can lead to intellectual complacency. Users may accept its output without questioning assumptions, seeking alternative viewpoints or conducting deeper analysis.
But when used as an aid, it can become a powerful tool for stimulating curiosity, generating ideas, clarifying complex topics and provoking intellectual dialogue.
The difference between ChatGPT making us stupid or enhancing our capabilities rests in how we use it. Generative AI should be used to augment human intelligence, not replace it. That means using ChatGPT to support inquiry, not to shortcut it. It means treating AI responses as the beginning of thought, not the end.
AI, thinking and the future of work
The mass adoption of generative AI, led by the explosive rise of ChatGPT -- it reached 100 million users within two months of its release -- has, in my view, left Internet users at a crossroads.
One path leads to intellectual decline: a world where we let AI do the thinking for us. The other offers an opportunity: to expand our brainpower by working in tandem with AI, leveraging its power to enhance our own.
It's often said that AI won't take your job, but someone using AI will. It seems clear to me, though, that people who use AI to replace their own cognitive abilities will remain stuck at the peak of Mount Stupid. These AI users will be the easiest to replace.
It's those who take the augmented approach to AI use who will reach the path of enlightenment, working together with AI to produce results that neither is capable of producing alone. This is where the future of work will eventually go.
This essay started with the question of whether ChatGPT will make us stupid, but I'd like to end with a different question: How will we use ChatGPT to make us smarter? The answers to both questions depend not on the tool but on its users.
Aaron French is an assistant professor of information systems at Kennesaw State University. This article is republished from The Conversation under a Creative Commons license. Read the original article. The views and opinions in the commentary are solely those of the author.
