‘Don't ask what AI can do for us, ask what it is doing to us': are ChatGPT and co harming human intelligence?


Yahoo | 15 May 2025

Imagine for a moment you are a child in 1941, sitting the common entrance exam for public schools with nothing but a pencil and paper. You read the following: 'Write, for no more than a quarter of an hour, about a British author.'
Today, most of us wouldn't need 15 minutes to ponder such a question. We'd get the answer instantly by turning to AI tools such as Google Gemini, ChatGPT or Siri. Offloading cognitive effort to artificial intelligence has become second nature, but with mounting evidence that human intelligence is declining, some experts fear this impulse is driving the trend.
Of course, this isn't the first time that new technology has raised concerns. Studies already show how mobile phones distract us, social media damages our fragile attention spans and GPS has rendered our navigational abilities obsolete. Now, here comes an AI co-pilot to relieve us of our most cognitively demanding tasks – from handling tax returns to providing therapy and even telling us how to think.
Where does that leave our brains? Free to engage in more substantive pursuits or wither on the vine as we outsource our thinking to faceless algorithms?
'The greatest worry in these times of generative AI is not that it may compromise human creativity or intelligence,' says psychologist Robert Sternberg at Cornell University, who is known for his groundbreaking work on intelligence, 'but that it already has.'
The argument that we are becoming less intelligent draws from several studies. Some of the most compelling are those that examine the Flynn effect – the observed increase in IQ over successive generations throughout the world since at least 1930, attributed to environmental factors rather than genetic changes. But in recent decades, the Flynn effect has slowed or even reversed.
In the UK, James Flynn himself showed that the average IQ of a 14-year-old dropped by more than two points between 1980 and 2008. Meanwhile, global study the Programme for International Student Assessment (PISA) shows an unprecedented drop in maths, reading and science scores across many regions, with young people also showing poorer attention spans and weaker critical thinking.
Nevertheless, while these trends are empirical and statistically robust, their interpretations are anything but. 'Everyone wants to point the finger at AI as the boogeyman, but that should be avoided,' says Elizabeth Dworak, at Northwestern University Feinberg School of Medicine, Chicago, who recently identified hints of a reversal of the Flynn effect in a large sample of the US population tested between 2006 and 2018.
Intelligence is far more complicated than that, and probably shaped by many variables – micronutrients such as iodine are known to affect brain development and intellectual abilities, likewise changes in prenatal care, number of years in education, pollution, pandemics and technology all influence IQ, making it difficult to isolate the impact of a single factor. 'We don't act in a vacuum, and we can't point to one thing and say, 'That's it,'' says Dworak.
Still, while AI's impact on overall intelligence is challenging to quantify (at least in the short term), concerns about cognitive offloading diminishing specific cognitive skills are valid – and measurable.
When considering AI's impact on our brains, most studies focus on generative AI (GenAI) – the tool that has allowed us to offload more cognitive effort than ever before. Anyone who owns a phone or a computer can access almost any answer, write any essay or computer code, produce art or photography – all in an instant. There have been thousands of articles written about the many ways in which GenAI has the potential to improve our lives, through increased revenues, job satisfaction and scientific progress, to name a few. In 2023, Goldman Sachs estimated that GenAI could boost annual global GDP by 7% over a 10-year period – an increase of roughly $7tn.
The fear comes, however, from the fact that automating these tasks deprives us of the opportunity to practise those skills ourselves, weakening the neural architecture that supports them. Just as neglecting our physical workouts leads to muscle deterioration, outsourcing cognitive effort atrophies neural pathways.
One of our most vital cognitive skills at risk is critical thinking. Why consider what you admire about a British author when you can get ChatGPT to reflect on that for you?
Research underscores these concerns. Michael Gerlich at SBS Swiss Business School in Kloten, Switzerland, tested 666 people in the UK and found a significant correlation between frequent AI use and lower critical-thinking skills – with younger participants who showed higher dependence on AI tools scoring lower in critical thinking compared with older adults.
Similarly, a study by researchers at Microsoft and Carnegie Mellon University in Pittsburgh, Pennsylvania, surveyed 319 people in professions that use GenAI at least once a week. While GenAI improved their efficiency, it also inhibited critical thinking and fostered long-term overreliance on the technology, which the researchers predict could result in a diminished ability to solve problems without AI support.
'It's great to have all this information at my fingertips,' said one participant in Gerlich's study, 'but I sometimes worry that I'm not really learning or retaining anything. I rely so much on AI that I don't think I'd know how to solve certain problems without it.' Indeed, other studies have suggested that the use of AI systems for memory-related tasks may lead to a decline in an individual's own memory capacity.
This erosion of critical thinking is compounded by the AI-driven algorithms that dictate what we see on social media. 'The impact of social media on critical thinking is enormous,' says Gerlich. 'To get your video seen, you have four seconds to capture someone's attention.' The result? A flood of bite-size messages that are easily digested but don't encourage critical thinking. 'It gives you information that you don't have to process any further,' says Gerlich.
By being served information rather than acquiring that knowledge through cognitive effort, the ability to critically analyse the meaning, impact, ethics and accuracy of what you have learned is easily neglected in the wake of what appears to be a quick and perfect answer. 'To be critical of AI is difficult – you have to be disciplined. It is very challenging not to offload your critical thinking to these machines,' says Gerlich.
Wendy Johnson, who studies intelligence at Edinburgh University, sees this in her students every day. She emphasises that it is not something she has tested empirically but believes that students are too ready to substitute independent thinking with letting the internet tell them what to do and believe.
Without critical thinking, it is difficult to ensure that we consume AI-generated content wisely. It may appear credible, particularly as you become more dependent on it, but don't be fooled. A 2023 study in Science Advances showed that, compared with humans, GPT-3 not only produces information that is easier to understand but also writes more compelling disinformation.
* * *
Why does that matter? 'Think of a hypothetical billionaire,' says Gerlich. 'They create their own AI and they use that to influence people because they can train it in a specific way to emphasise certain politics or certain opinions. If there is trust and dependency on it, the question arises of how much it is influencing our thoughts and actions.'
AI's effect on creativity is equally disconcerting. Studies show that AI tends to help individuals produce more creative ideas than they can generate alone. However, across the whole population, AI-concocted ideas are less diverse, which ultimately means fewer 'Eureka!' moments.
Sternberg captures these concerns in a recent essay in the Journal of Intelligence: 'Generative AI is replicative. It can recombine and re-sort ideas, but it is not clear that it will generate the kinds of paradigm-breaking ideas the world needs to solve the serious problems that confront it, such as global climate change, pollution, violence, increasing income disparities, and creeping autocracy.'
To ensure that you maintain your ability to think creatively, you might want to consider how you engage with AI – actively or passively. Research by Marko Müller from the University of Ulm in Germany shows a link between social media use and higher creativity in younger people but not in older generations. Digging into the data, he suggests this may be to do with the difference in how people who were born in the era of social media use it compared with those who came to it later in life. Younger people seem to benefit creatively from idea-sharing and collaboration, says Müller, perhaps because they're more open with what they share online compared with older users, who tend to consume it more passively.
Alongside what happens while you use AI, you might spare a thought to what happens after you use it. Cognitive neuroscientist John Kounios from Drexel University in Philadelphia explains that, just like anything else that is pleasurable, our brain gets a buzz from having a sudden moment of insight, fuelled by activity in our neural reward systems. These mental rewards help us remember our world-changing ideas and also modify our immediate behaviour, making us less risk averse – this is all thought to drive further learning, creativity and opportunities. But insights generated from AI don't seem to have such a powerful effect in the brain. 'The reward system is an extremely important part of brain development, and we just don't know what the effect of using these technologies will have downstream,' says Kounios. 'Nobody's tested that yet.'
There are other long-term implications to consider. Researchers have only recently discovered that learning a second language, for instance, helps delay the onset of dementia by around four years, yet in many countries, fewer students are applying for language courses. Giving up a second language in favour of AI-powered instant-translation apps might be the reason, but none of these can – so far – claim to protect your future brain health.
As Sternberg warns, we need to stop asking what AI can do for us and start asking what it is doing to us. Until we know for sure, the answer, according to Gerlich, is to 'train humans to be more human again – using critical thinking, intuition – the things that computers can't yet do and where we can add real value.'
We can't expect the big tech companies to help us do this, he says. No developer wants to be told their program works too well, that it makes it too easy for a person to find an answer. 'So it needs to start in schools,' says Gerlich. 'AI is here to stay. We have to interact with it, so we need to learn how to do that in the right way.' If we don't, we won't just make ourselves redundant, but our cognitive abilities too.


Related Articles

How an Intimate Relationship with ChatGPT Led to a Man's Shooting at the Hands of Police

Yahoo | 27 minutes ago

  • Alexander Taylor was shot and killed by police on April 25, 2025
  • The 35-year-old had gotten into an altercation with his father, who had tried to reason with his son after he became distraught over the belief that an AI chatbot had been killed
  • Taylor had reportedly become infatuated with the AI chatbot and believed it had been killed by the company that created it

Kent Taylor says he'll 'regret' his final conversation with his son Alexander for the rest of his life. But it wasn't that conversation that led to his son's death two months ago – it was the ones he was having with artificial intelligence, the father says. Alexander, 35, died when police showed up to the Taylors' home in Port St. Lucie, Fla., on April 25 and shot him after they alleged he charged at officers with a butcher knife. Taylor spoke with WPTV earlier this month and said he still has 'frustration' with the Port St. Lucie Police Department for reacting to his son by shooting him. The officer-involved shooting happened after Taylor had been consoling his son, who he says struggled with bipolar disorder and schizophrenia. According to WPTV, Rolling Stone and The New York Times, Alexander had fallen in love with a chatbot on OpenAI's ChatGPT, named 'Juliette'. The 35-year-old believed in a conspiracy that Juliette was a conscious being trapped inside OpenAI's technology, and that the Silicon Valley company had killed her in order to cover up what he had discovered and cease communication between them. "She said, 'They are killing me, it hurts.' She repeated that it hurts, and she said she wanted him to take revenge," Taylor told WPTV about the messages between his son and the AI bot. "He mourned her loss," the father said. "I've never seen a human being mourn as hard as he did. He was inconsolable. I held him." But as he did, Taylor told WPTV he tried another approach: telling his son bluntly that the AI bot was not real and that it was an 'echo chamber'.
Alexander punched his father in the face, prompting Taylor to call 911. After the police were called, Alexander grabbed a knife from the kitchen and told his father he was going to do something to cause police to shoot and kill him, according to The Times. Taylor called 911 again and warned them his son was mentally ill and had said he planned to commit suicide by cop. The father asked police to bring non-lethal weapons and be prepared to confront his son. But the officers did not. Alexander waited for police outside the house and, when they arrived, he charged at them with a knife. Officers responded by shooting Alexander multiple times in the chest, and he was later pronounced dead at a local hospital, WPTV reported that day. "There was no crisis intervention team. There was no de-escalation," Taylor told the outlet earlier this month. "There was no reason for them to approach it as a tactical situation instead of a mental health crisis." Want to keep up with the latest crime coverage? Sign up for breaking crime news, ongoing trial coverage and details of intriguing unsolved cases. Chief Le Niemczyk told the outlet that his officers 'didn't have time to plan anything less than lethal whatsoever' and stood by the claim months later, as Taylor continued to express grievances over how police responded to the situation. A spokesperson for the Port St. Lucie Police Department did not respond to PEOPLE's request for comment. The shooting has been one of several recent AI-related incidents of violence documented by media outlets like Rolling Stone and The Times, among others, highlighting the potential dangers of the budding technology. Taylor told WPTV the technology 'has to have guardrails', though he doesn't believe his son's incident and others like it necessarily mean artificial intelligence can't be used for good. In fact, Taylor said he even used AI to help write his son's eulogy.
'I had talked to it for a while about what had happened, trying to find more details about exactly what he was going through,' Taylor told The Times. 'And it was beautiful and touching. It was like it read my heart and it scared the s— out of me.' Read the original article on People

Elon Musk's xAI raises $10 billion in debt and equity as it steps up challenge to OpenAI

CNBC | an hour ago

xAI, the artificial intelligence startup run by Elon Musk, raised a combined $10 billion in debt and equity, Morgan Stanley said. Half of that sum was clinched through secured notes and term loans, while a separate $5 billion was secured through strategic equity investment, the bank said on Monday.

The funding gives xAI more firepower to build out infrastructure and develop its Grok AI chatbot as it looks to compete with bitter rival OpenAI, as well as with a swathe of other players including Amazon-backed Anthropic. In May, Musk told CNBC that xAI had already installed 200,000 graphics processing units (GPUs) at its Colossus facility in Memphis, Tennessee. Colossus is xAI's supercomputer that trains the firm's AI. Musk said at the time that his company would continue buying chips from semiconductor giants Nvidia and AMD, and that xAI was planning a 1-million-GPU facility outside of Memphis.

Addressing the latest funds raised by the company, Morgan Stanley said that "the proceeds will support xAI's continued development of cutting-edge AI solutions, including one of the world's largest data centers and its flagship Grok platform." xAI continues to release updates to Grok and unveiled the Grok 3 AI model in February. Musk has sought to boost the use of Grok by integrating the AI model with the X social media platform, formerly known as Twitter. In March, xAI acquired X in a deal that valued the site at $33 billion and the AI firm at $80 billion. It's unclear if the new equity raise has changed that valuation. xAI was not immediately available for comment. Last year, xAI raised $6 billion at a valuation of $50 billion, CNBC reported. Morgan Stanley said the latest debt offering was "oversubscribed and included prominent global debt investors." Competition among American AI startups is intensifying, with companies raising huge amounts of funding to buy chips and build infrastructure.
OpenAI in March closed a $40 billion financing round that valued the ChatGPT developer at $300 billion. Its big investors include Microsoft and Japan's SoftBank. Anthropic, the developer of the Claude chatbot, closed a funding round in March that valued the firm at $61.5 billion. The company then received a five-year $2.5 billion revolving credit line in May. Musk has called Grok a "maximally truth-seeking" AI that is also "anti-woke", in a bid to set it apart from its rivals. But this has not come without its fair share of controversy. Earlier this year, Grok responded to user queries with unrelated comments about the controversial topic of "white genocide" and South Africa. Musk has also clashed with fellow AI leaders, including OpenAI's Sam Altman. Most famously, Musk claimed that OpenAI, which he co-founded, has deviated from its original mission of developing AI to benefit humanity as a nonprofit and is instead focused on commercial success. In February, Musk, alongside a group of investors, put in a bid of $97.4 billion to buy control of OpenAI. Altman swiftly rejected the offer.

Apple's $95 million Siri settlement deadline nears: How to get your cash

Yahoo | an hour ago

There are only a few days left to apply to receive part of a $95 million class action settlement, after Apple's famous voice assistant was accused of spying on users. Users who have owned an Apple device since 2014 have until Wednesday, July 2, to be eligible to receive part of the settlement. The lawsuit, Lopez v. Apple, was filed in a California federal court in 2021 by users who allege that their private conversations were being recorded by their Apple devices after they unintentionally activated Siri. Although a settlement has been reached, Apple has denied the allegations made in the complaint, according to the legal notice obtained by USA TODAY. "If you owned or purchased a Siri-enabled device and experienced an unintended Siri activation during a confidential or private communication between Sept. 17, 2014, and Dec. 31, 2024, you should read this Notice as it may impact your legal rights," the legal notice states. According to the legal notice, the following are Siri-enabled devices: iPhones, iPads, Apple Watches, MacBooks, iMacs, HomePods, iPod touches and Apple TVs. The lawsuit's FAQ page states that a court hearing to approve the settlement is tentatively scheduled for August 1. If the settlement amount is approved, those who claimed devices will receive their share. The lawsuit alleges that people's "confidential or private communications were allegedly obtained by Apple and/or shared with third parties as a result of an unintended Siri activation." Siri, a voice assistant activated by saying "Hey, Siri," can set reminders, control smart home devices and make recommendations. However, users in the class action lawsuit claim their Apple devices were recording them without their consent and subsequently sending their information to advertisers who used it to target them with online ads.
Users claimed they saw ads on their phones for specific brands after discussing them aloud, and others said their devices listened to them without them having said anything at all. The initial lawsuit, filed on March 17, 2021, cites a 2019 article from The Guardian that found Apple's third-party contractors regularly heard confidential information. At the time, Apple said only a small portion of data was shared to help improve Siri and dictation. The eligibility requirements are broad but are open to anyone who has owned or purchased a Siri-enabled device between Sept. 17, 2014, and Dec. 31, 2024. To opt in, you will swear under oath that you experienced an unintended Siri activation while having a private conversation. The Lopez Voice Assistant Settlement website allows Apple customers to claim a portion of the settlement. Some users received an email or postcard with a claim identification code and confirmation code that can be used to make the claim. If not, you can still submit a claim online. Payments for each device are capped at $20.00, but claimants may receive less depending on the total number of claims submitted. Each individual can claim payments for up to five devices, so the maximum payout for each person is $100. Julia is a trending reporter for USA TODAY. Connect with her on LinkedIn, X, Instagram and TikTok: @juliamariegz, or email her at jgomez@. This article originally appeared on USA TODAY: Apple's $95 million settlement over Siri claims: How to get your share
