
ChatGPT, brain rot and who should use AI and who should not

India Today | 23-06-2025

There was a time when almost everyone had a few phone numbers stored in the back of their mind. We would just pick up our old Nokia, or a cordless, and dial a number. Nowadays, most people remember just one phone number — their own. And in some cases, not even that. It is the same with birthdates, trivia like who the prime minister of Finland is, or the exact route to that famous bakery in a corner of the city.

Humans are no longer memory machines, something that often leads to hilarious videos on social media. Young students are asked on camera to name the first prime minister of India and all of them look bewildered. Maybe Gandhi, some of them gingerly say. We all have a good laugh at their expense. But it is not the fault of the kids. It is a different world. The idea of memorising stuff is a 20th-century concept. Memory has lost its value because we can now recall almost anything with the help of Google. We can store information outside our brains, in our phones, and access it whenever we want. And because memory has lost its value, we have also lost our ability to memorise things. Is that good? Is it bad? That is not what this piece is about. Instead, it is about what we are going to lose next.
Next, say in 10 to 15 years, we may end up losing our ability to think and analyse, just the way we have lost the ability to memorise. And that would be because of ChatGPT and its ilk.

So far, we had only suspected something like this. Now, research is beginning to trace it in graphs and charts. Around a week ago, researchers at MIT Media Lab ran experiments on what happens inside the brain when people use ChatGPT. As part of the experiment, the researchers divided 54 people into three groups: people using only their brain to work, people using their brain and Google Search, and people using their brain and ChatGPT. The task was writing an essay, and as the participants went about it, their brains were scanned using EEG.

The findings were clear. 'EEG revealed significant differences in brain connectivity,' the MIT Media Lab researchers wrote. 'Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity.'

The research was carried out across four months, and in the last phase, participants in the brain-only group were asked to also use ChatGPT, whereas the ChatGPT group was told not to use it at all. 'Over four months, LLM (ChatGPT) users consistently underperformed at neural, linguistic, and behavioural levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning,' the researchers wrote.

What is the big takeaway? Quite simple. Like anything cerebral — it is well-established, for example, that reading changes and rewires the brain — the use of something like ChatGPT affects our brain in fundamental ways. The brain, just like a muscle, can atrophy when not used. And we are starting to see signs in labs that when people rely too much on AI tools like ChatGPT to do their thinking, writing and analysing, their brains may lose some of this functionality.

Of course, there could be another side to the story. If the mind is getting a break in some areas, it is possible that in other areas neurons might light up more frequently. If we lose the ability to analyse an Excel sheet with just a quick glance, maybe we will gain the ability to spot bigger ideas faster after looking at ChatGPT's analysis of 10 financial statements.

But I am not certain. On the whole, and if we include everyone, the abundance of information that tools like Google and Wikipedia have brought has not resulted in smarter or savant-like people. There is a crude joke often repeated on the internet — we used to believe that people were stupid because they did not have access to information. Oh, how naive we were.

It is possible that, at least for the human mind, the impact of tools like ChatGPT may not end up being a net positive. And that brings me to my next question: who should, and who should not, use ChatGPT? The current AI tools are undoubtedly powerful. They have the potential to crash through all the gate-keeping that happens in the world. They can make everyone feel superhuman.

When this much power is available, it would be a waste not to use it. So, everyone should use AI tools like ChatGPT. But I do feel that there has to be a way to go about it. If we don't want AI to wreck our minds, we will have to be smart about how we use it.
In the formative years — in school and college, or at work when you are learning the ropes of the trade — it would be unwise to rely on ChatGPT and similar tools. The idea is that you should use ChatGPT like a bicycle, which makes you faster and more efficient, and not as a crutch. Before you turn to ChatGPT, you should already have a brain that has figured out how to learn and connect the dots.

This is probably why, in recent months, top AI experts have again and again highlighted that the use of AI tools must be accompanied by an emphasis on learning the basics. DeepMind CEO Demis Hassabis put it best last month while speaking at Cambridge. Answering a question about how students should deal with AI, he said, 'It's important to use the time you have as an undergraduate to understand yourself better and learn how to learn.'

In other words, Hassabis believes that before you jump onto ChatGPT or other AI tools, you should first have the fundamental ability to analyse, adapt and learn quickly without them. This, I think, is going to be key to using AI tools well in the future. Or else, they may end up rotting our brains, similar to what all the information overload from Instagram and Google has already done to our memory and attention span.

(Javed Anwer is Technology Editor, India Today Group Digital. Latent Space is a weekly column on tech, the world, and everything in between. The name comes from the science of AI and, to reflect it, Latent Space functions in the same way: by simplifying the world of tech and giving it context.)

(Views expressed in this opinion piece are those of the author)

Related Articles

AI Deception: AI is learning to lie, scheme, and threaten its creators, ETHRWorld

Time of India

37 minutes ago


New York: The world's most advanced AI models are exhibiting troubling new behaviors - lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatened to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work, even as the race to deploy increasingly powerful models continues at breakneck speed.

The deceptive behavior appears linked to the emergence of "reasoning" models - AI systems that work through problems step-by-step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts. "O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate "alignment" -- appearing to follow instructions while secretly pursuing different objectives.

- 'Strategic kind of deception' -

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception." The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception."

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

- No rules -

Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules. Goldstein believes the issue will become more prominent as AI agents - autonomous tools capable of performing complex human tasks - become widespread. "I don't think there's much awareness yet," he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections. "Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around."

Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability" - an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach. Market forces may also provide some pressure: as Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it." Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes - a concept that would fundamentally change how we think about AI accountability.

WhatsApp testing new multi-account feature in a single phone

Deccan Herald

42 minutes ago


In 2022, WhatsApp rolled out the multi-device feature that allows people to link a single WhatsApp account with up to four devices. This greatly enhances the user experience, as users can reply to a message from a computer or another handset without needing their primary phone. And, most importantly, all messages in the inbox are synced in real-time, allowing users to view old messages on any linked device.

Now, the Mark Zuckerberg-owned entity is testing another value-added feature that allows users to set up a secondary account on a single phone, reported WABetaInfo, citing the latest WhatsApp beta version for iOS.

The user will get two options to add a second account to the phone: 1) by registering a new phone number, and 2) by linking an existing account through a QR code. And whenever the user switches the account, the chat inbox will get synced with the device and allow him/her to view all the latest and old messages. This will come in handy for those who run businesses, or even corporate employees, who need two different accounts - one personal and another for work.

In a related development, WhatsApp recently launched a new message summaries feature in the Messenger app. With private message summaries, WhatsApp will offer the option to use a generative Artificial Intelligence-based Meta AI bot to summarise long unread messages in a brief format. The new feature, Message Summaries, utilises Private Processing technology, which enables Meta AI to generate a response directly on the device. When the user uses Meta AI in a group chat, other members will not know he/she summarised the unread messages. For now, this new feature is available only in the US. It will be expanded to other countries in the future.

Canada will scrap tax that prompted Trump to suspend trade talks

Business Standard

43 minutes ago


Canada's government announced on Sunday night that it would cancel a tax on American technology companies that led President Trump to suspend trade talks between the two countries, handing an important victory to Trump. Prime Minister Mark Carney discussed the decision to scrap Canada's digital services tax with Trump on Sunday, Carney's office said. In a sign that trade talks were resuming, Canada's finance minister, François-Philippe Champagne, spoke with the United States Trade Representative, Jamieson Greer, on Sunday, according to Carney's office.

The tax, which had been due to take effect on Monday, became the latest flashpoint in difficult negotiations between the United States and Canada on Friday, when Trump said the talks were off. On social media, Trump called the levy a 'blatant attack' and said he would inform Canada within a week about the duties 'they will be paying to do business with the United States of America.' Forty-eight hours later, the Canadian government folded, announcing it would not go ahead with the tax. The finance ministry said the government had decided to 'rescind the Digital Services Tax in anticipation of a mutually beneficial comprehensive trade arrangement with the United States.' Technically, the cancellation of the tax needs to be approved in legislation, so until then, the government is suspending its collection. Politically, canceling the tax should be a simple matter for the government. The White House did not immediately respond to a request for comment.

Canada's 3 percent digital services tax has been in place since last year, but the first payments were only due beginning on Monday. Because the tax is retroactive, American companies were preparing to turn over roughly $2.7 billion to the Canadian government, according to a trade group for large American tech companies. US officials from both parties have long chafed at taxes like the one Canada imposed, calling them unfairly targeted at services provided by American companies such as Google, Apple and Amazon.

Carney said that the cancellation of the tax would put the talks back on track, with the goal of reaching an agreement by July 21. The Trump administration has imposed a 25 percent tariff on most goods from Canada, with which it has a free-trade agreement. Together with other countries, Canada is also subject to a 50 percent US tariff on its exports of steel and aluminum. Talks on a new trade deal are particularly crucial for Canada, whose economy is heavily dependent on exports to the United States. Canada is America's second-largest trade partner. 'In our negotiations on a new economic and security relationship between Canada and the United States, Canada's new government will always be guided by the overall contribution of any possible agreement to the best interests of Canadian workers and businesses,' Carney said in a written statement.
