
Does ChatGPT dull your thinking skills? New MIT study shows why brains of youngsters are at risk
The study divided 54 subjects, 18- to 39-year-olds from the Boston area, into three groups, and asked them to write several essays using OpenAI's ChatGPT, Google's search engine, or nothing at all, respectively. Researchers used an EEG to record the writers' brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement. They 'consistently underperformed at neural, linguistic and behavioural levels.' Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.
How does reliance on ChatGPT impact the brain?
When individuals rely heavily on AI to generate content or answers, they tend to 'offload' cognitive efforts to the AI. This means they engage less in deep, reflective thinking, analysis and independent problem-solving. This may weaken the brain's ability to perform these tasks independently over time.
The MIT study also noted that essays written with ChatGPT lacked original content and often consisted of copied and pasted responses with minimal editing. Users also reported a fragmented sense of authorship and difficulty recalling what they had written, suggesting a lack of internal integration with the material.
How does over-reliance on ChatGPT impact memory?
Participants who used ChatGPT struggled to recall their own work, even when later asked to rewrite essays without the tool. This indicates that the information was not being deeply processed or integrated into their memory networks.
This happens because of the reduced inclination to critically evaluate the AI's output. This can also lead to an 'echo chamber effect' where thoughts are subtly shaped by AI's probabilistic guesses based on its training data rather than independent reasoning.
Why ChatGPT can, at best, be a complementary learning tool
Earlier studies have shown how AI, when used as a complement to human thinking rather than a replacement, enhances learning. In the MIT study, too, it was seen that the 'brain-only' group, when later given access to ChatGPT for a rewrite, demonstrated increased cognitive activity, implying that AI can be beneficial if foundational thinking is already in place.
The consensus seems to be that AI tools like ChatGPT are powerful assistants, but they should not become 'cognitive crutches.' The integration of AI in education necessitates the cultivation of new competencies, including the ability to discern the limitations and potential biases of AI-generated content and to use AI tools effectively as aids in one's critical thinking processes.
(Dr Ajinkya is a psychiatrist at Kokilaben Dhirubhai Ambani Hospital, Mumbai)

Related Articles

The Hindu
25 minutes ago
Meta poaches top AI talent from OpenAI and DeepMind as Zuckerberg escalates AI push
The minds that brought you the conversational magic of ChatGPT and the multimodal power of Google's Gemini now have a new home: Meta. In a stunning talent exodus, captured in a single tweet by Meta's new AI chief, Alexandr Wang, the architects of the current AI revolution have been poached from OpenAI, Google DeepMind, and Anthropic. Mr. Wang's tweet is not merely a hiring announcement; it is a declaration of intent. By announcing his role as Chief AI Officer at Meta alongside Nat Friedman and a veritable 'who's who' of top-tier AI researchers, Mr. Wang and Meta were signaling a seismic shift in the technology landscape.

This mass talent acquisition from rivals like OpenAI, Google DeepMind, and Anthropic is CEO Mark Zuckerberg's most audacious move yet to dominate the next technological frontier. It represents a calculated effort to bolster Meta's AI venture by poaching the very minds that built its competitors' greatest successes. However, this aggressive pivot towards superintelligence cannot be viewed in a vacuum. It is haunted by the ghosts of Meta's past, from the Cambridge Analytica scandal to the Instagram teen mental health crisis, forcing a critical examination of whether Mr. Zuckerberg has evolved from a disruptive force into a responsible steward for the age of AI.

Why Meta's AI talent acquisition signals a new era

The list of new hires is a strategic masterstroke. It is not just about adding headcount; it is about acquiring institutional knowledge while simultaneously weakening the competition. According to reports, Mr. Zuckerberg has personally handled these AI hires, carefully picking top talent from all of his rivals. From OpenAI, Meta has poached the creators behind GPT-4o's groundbreaking voice and multimodal capabilities, as well as foundational model builders: Shengjia Zhao, the co-creator of ChatGPT and GPT-4, is now part of Meta, a significant loss to Sam Altman's AI company. From Google DeepMind, Mr. Zuckerberg has poached Jack Rae, the pre-training tech lead for Gemini 2.5, along with other experts in text-to-image generation. From Anthropic, Meta has poached Joel Pobar, the AI firm's inference expert.

This talent raid gives Meta some immediate advantages. First, it lends the company instant credibility that it is serious about its AI bet, as the new team has direct, hands-on experience building and training the world's most advanced models. Second, it disrupts the roadmaps of its competitors, forcing them to regroup and replace key personnel. Third, it creates a powerful gravitational pull for future talent, signaling that Meta is now the premier destination for ambitious AI work, backed by near-limitless computational resources and a direct path to impacting billions of users.

Can Zuckerberg be trusted with the future of AI?

This aggressive push into AI stands in stark contrast to the defining scandals of Mr. Zuckerberg's career. The Cambridge Analytica affair revealed a fundamental flaw in Facebook's DNA: a platform architecture that prioritized growth and data collection over user privacy and security, which was then exploited for political manipulation. The company's response was slow, defensive, and ultimately insufficient to repair the deep chasm of public trust. Then, 'The Facebook Files' exposé by The Wall Street Journal detailed internal research showing that Meta knew Instagram was toxic for the mental health of teenage girls. The company's leadership chose to downplay the findings and continue with product strategies that exacerbated these harms.

Both incidents stem from the same root philosophy: 'move fast and break things,' a mantra that prioritizes scale and engagement above all else, with societal consequences treated as unfortunate but acceptable collateral damage. Applying this ethos to AI, a technology with far greater potential for both good and harm, is a terrifying prospect. If a social feed algorithm could destabilize democracies and harm teen self-esteem, what could a superintelligent agent, deployed to three billion users with the same growth-at-all-costs mindset, be capable of? Mr. Zuckerberg's past misadventures are not just historical footnotes; they are the core reason for public skepticism towards Meta's AI ambitions.

How Zuckerberg has evolved from social media to superintelligence

Mr. Zuckerberg's character, as observed through his actions over two decades, is one of relentless, almost singular, ambition. He has consistently demonstrated a willingness to be ruthless in competition (cloning Snapchat's features into Instagram Stories), a visionary instinct for long-term bets (acquiring Instagram and WhatsApp, pivoting to the metaverse), and an ability to withstand immense public and regulatory pressure. His critics argue that he is a leader who lacks a deep-seated ethical framework, often optimizing for power and market dominance while retroactively applying ethical patches only when forced by public outcry. His defenders might say he is a pragmatic engineer who is learning and adapting. The Cambridge Analytica scandal arguably forced him to mature from a hoodie-wearing coder into a global CEO who must at least speak the language of governance and responsibility.

How Meta's AI super-team challenges OpenAI and Google

The crucial question is whether this change is superficial or substantive. His current strategy with AI suggests a potential evolution. The open-sourcing of the Llama models can be interpreted in two ways. On one hand, it is a shrewd business move to commoditise the layer of the stack where OpenAI and Google have a strong lead, fostering an ecosystem dependent on Meta's architecture. On the other, it can be framed as a commitment to transparency and democratisation, a direct response to the 'black box' criticism leveled at his past operations. This new 'super-team' will be the ultimate test: will it be fire-walled by a new ethical charter, or will the immense pressure from Mr. Zuckerberg to 'win' the AI race override all other considerations?

How is Meta positioning itself for the AI age?

Against the closed, API-first models of OpenAI and the integrated-but-cautious approach of Google, Meta is carving out a unique strategic position. It is fighting the war on two fronts. By making Llama an open-source alternative, Meta is making itself the default foundation for thousands of startups, researchers, and developers, disrupting the business models of its rivals. Mr. Zuckerberg has not stopped there: he has also publicly committed to acquiring hundreds of thousands of high-end NVIDIA GPUs, signaling that his company will not be outspent on compute. With the addition of this new team, Meta completes the trifecta: massive data, unparalleled compute and, now, world-leading human talent.

The goal is no longer just to build a chatbot for Messenger or an image generator for Instagram. As Mr. Wang's tweet boldly states, the aim is 'Towards superintelligence.' This is a direct challenge to the stated missions of DeepMind and OpenAI. The formation of this AI super-team is the culmination of Mr. Zuckerberg's pivot from social media king to aspiring AI emperor. It is an act of immense strategic importance, one that immediately elevates Meta to the top tier of AI development. Yet the success of this venture will not be measured solely by the capability of the models it produces. It will be measured by whether Mr. Zuckerberg can build an organization that has learned from the profound societal failures of its past. This is a defining gambit for Meta's founder: a chance to redefine his legacy not as the creator of a divisive social network, but as the leader who responsibly ushered in the age of artificial intelligence.

The Hindu
25 minutes ago
FTC seeks more information about SoftBank's Ampere deal: Report
The U.S. Federal Trade Commission is seeking more details about SoftBank Group Corp's planned $6.5 billion purchase of semiconductor designer Ampere Computing, Bloomberg reported on Tuesday. The inquiry, known formally as a second request for information, suggests the acquisition may undergo an extended government review, the report said. SoftBank announced the purchase of the startup in March, part of its efforts to ramp up its investments in artificial intelligence infrastructure. The report did not state the reasoning for the FTC request. SoftBank, Ampere and the FTC did not immediately respond to a request for comment. SoftBank is an active investor in U.S. tech. It is leading the financing for the $500 billion Stargate data centre project and has agreed to invest $32 billion in ChatGPT-maker OpenAI.


Time of India
40 minutes ago
No AI without immigration: All 11 of Zuckerberg's new superintelligence hires are immigrants who completed their undergraduate degrees abroad
Whether one likes it or not, America was built by immigrants, the 'wretched masses' that the Statue of Liberty's inscription welcomes with open arms. From Albert Einstein to Elon Musk, immigrants made America a great country, and now it looks like they are making America great again (MAGA, as the slogan goes), this time in artificial intelligence. America's edge isn't just its innovation; it's the global brainpower that has been giving the country that edge for centuries. And nothing makes that clearer than the news out of Meta's new Superintelligence Lab: 11 top AI researchers hired, and every single one of them is an immigrant. That's right: not one US undergrad among them. These are elite engineers, model architects, and machine learning pioneers who began their academic journeys in India, China, South Africa, the UK, and Australia. They trained in America's best labs, built some of the most advanced AI systems at OpenAI, DeepMind, and Anthropic, and have now been snapped up by Zuckerberg to build the next wave of intelligence. If you want to understand why the US leads the world in AI, you don't need to look at GPUs or model benchmarks. Just look at the passports of the people writing the code.

Immigrants have long been the silent engine of Silicon Valley. From coding at Google to architecting cloud infrastructure at Microsoft, they form the backbone of America's biggest tech empires. A significant number of these engineers hail from India, whose talent pipeline has become essential to the innovation output of the U.S. tech sector. They don't just fill jobs; they define the direction of the industry. Now, as AI becomes the next great technological frontier, these same minds are shaping the models, algorithms, and safety protocols that will govern its future. Meta's new superintelligence team is just the latest proof that without immigration, there is no AI revolution.
Top AI talent hired by Meta to power its superintelligence lab

Trapit Bansal
Background: Previously at OpenAI
Education: Bachelor of Science (BS), Indian Institute of Technology (IIT) Kanpur, India; Doctor of Philosophy (PhD), University of Massachusetts Amherst, USA
Profile: Bansal is recognized for his groundbreaking work in reinforcement learning applied to chain-of-thought reasoning, a technique that helps AI models improve logical step-by-step problem-solving. As a co-creator of the o-series models, he has set new standards in large language model performance and reliability. Bansal's research is widely cited and has influenced both academic and industrial AI systems.

Shuchao Bi
Background: Previously at OpenAI
Education: Bachelor of Science (BS), Zhejiang University, China; Doctor of Philosophy (PhD), University of California, Berkeley, USA
Profile: Bi is a leader in multimodal AI, having co-developed the voice mode for GPT-4o and the o4-mini model. His expertise lies in integrating speech and text, enabling AI to understand and generate content across different formats. Bi's work has been crucial in advancing conversational AI and making digital assistants more natural and accessible.

Huiwen Chang
Background: Previously at Google Research
Education: Bachelor of Science (BS), Tsinghua University, China; Doctor of Philosophy (PhD), Princeton University, USA
Profile: Chang is known for inventing the MaskGIT and Muse architectures, which have become foundational in generative image models. She led the GPT-4o image generation team, driving innovations that allow AI to create high-quality, realistic images from text prompts. Her research bridges the gap between computer vision and creative AI applications.

Ji Lin
Background: Previously at OpenAI
Education: Bachelor of Engineering (BEng), Tsinghua University, China; Doctor of Philosophy (PhD), Massachusetts Institute of Technology (MIT), USA
Profile: Lin has played a key role in optimizing and scaling large language models, including GPT-4o and the o4 family. His work focuses on model efficiency and image generation, making state-of-the-art AI more accessible and cost-effective for real-world deployment. Lin's contributions are highly regarded in both academia and industry.

Joel Pobar
Background: Previously at Anthropic & Meta
Education: Bachelor of Information Technology (Honours), Queensland University of Technology (QUT), Australia
Profile: Pobar is a veteran infrastructure specialist with over a decade of experience building scalable AI systems. He has contributed to major projects like HHVM, Hack, and PyTorch, which are widely used in the tech industry. Pobar's expertise ensures that AI models run efficiently at scale, supporting rapid innovation and deployment.

Jack Rae
Background: Previously at Google DeepMind
Education: Bachelor of Science (BS), University of Bristol, UK; Master of Science (MS), Carnegie Mellon University (CMU), USA; Doctor of Philosophy (PhD), University College London (UCL), UK
Profile: Rae is a leading figure in large-scale language model research, having led pre-training for Gemini 2.5 and developed Google's Gopher and Chinchilla models. His work has advanced the understanding of model scaling laws and data efficiency, shaping the next generation of AI systems.

Hongyu Ren
Background: Previously at OpenAI
Education: Bachelor of Science (BS), Peking University, China; Doctor of Philosophy (PhD), Stanford University, USA
Profile: Ren has contributed to the development and post-training of GPT-4o and the o1/o3/o4-mini models, focusing on improving robustness and reliability. His research addresses key challenges in making AI models safer and more trustworthy for widespread use.

Johan Schalkwyk
Background: Previously at Google
Education: Bachelor of Science in Engineering, University of Pretoria, South Africa
Profile: Schalkwyk is a Google Fellow and a pioneer in speech recognition, having led the Maya team and contributed to the early development of Google's Sesame project. His innovations have set industry standards in voice technology, powering products used by millions worldwide.

Pei Sun
Background: Previously at Google DeepMind
Education: Bachelor of Science (BS), Tsinghua University, China; Master of Science (MS), Carnegie Mellon University (CMU), USA
Profile: Sun specializes in post-training and reasoning for advanced AI models, including Gemini, and has developed perception systems for Waymo's self-driving cars. His work bridges AI research and real-world applications, particularly in autonomous systems and robotics.

Jiahui Yu
Background: Previously at OpenAI & Google (Gemini)
Education: Bachelor of Science (BS), University of Science and Technology of China (USTC), China; Doctor of Philosophy (PhD), University of Illinois Urbana-Champaign (UIUC), USA
Profile: Yu has made significant contributions to perception and multimodal AI, working on o3/o4-mini and GPT-4/4o. His research enables AI to interpret and generate information across text, images, and other modalities, expanding the possibilities of human-AI interaction.

Shengjia Zhao
Background: Previously at OpenAI
Education: Bachelor of Science (BS), Tsinghua University, China; Doctor of Philosophy (PhD), Stanford University, USA
Profile: Zhao is a co-creator of ChatGPT, GPT-4, and o4-mini, and has led research on data synthesis and AI safety. His work has had a direct impact on the reliability and ethical deployment of generative AI, making him a respected voice in the field.