The hidden cost of AI: Cognitive lethargy

Observer · 7 days ago
The world stands at a technological crossroads: an unprecedented proliferation of large language models (LLMs) such as ChatGPT is reshaping the very texture of human life. These tools have permeated how people live, work, learn, and play. Over 378 million people worldwide are estimated to be active users of AI, including LLMs like ChatGPT, Gemini, Claude, and Copilot.
The use of large language models surged globally in 2025, with hundreds of millions of people relying on these tools daily for academic, personal, and professional purposes. Given this rapid growth, it is crucial to understand the cognitive implications of widespread LLM use in educational and informational contexts. A substantial body of research now shows that, although these tools enhance the accessibility and personalization of education, prolonged and frequent reliance on AI for information reduces people's capacity for critical thinking. The integration of LLMs into learning ecosystems thus presents a complex duality.
Recent research from the Massachusetts Institute of Technology (MIT) has raised concerns for the education sector, educators, and learners alike. The study suggests that increased use of AI systems poses serious questions about human intellectual development and autonomy. LLMs supply users with single, ready-made responses, which inadvertently discourages lateral thinking and independent judgment. Instead of remaining seekers of knowledge, we are drifting towards passive consumption of AI-generated content. In the long run, this trend leads to superficial engagement, weakened critical thinking, poorer long-term memory formation, and a shallower understanding of material. It erodes decision-making skills and creates the false perception that learning is effortless, sapping student motivation and reducing interest in independent research.
Increased use of ChatGPT affects student learning, performance, and perception of learning, and it shapes higher-order thinking. The MIT research suggests that while AI tools can enhance productivity, they may also promote a form of metacognitive laziness. The fundamental principle of research inquiry is compromised when students rely heavily on digital tools for information gathering. Students risk falling prey to echo chambers, in which ChatGPT users become trapped in self-reinforcing information bubbles that filter out contradictory evidence. Such echo chambers undermine the foundation of academic discourse and debate. Furthermore, the sophisticated functioning of these algorithms leaves users unaware of information gaps in their research, degrading the standard of scholarly outcomes.
These findings carry implications for many stakeholders in the education sector and beyond. The role of the professor is evolving from a source of knowledge into a facilitator and guide. Curricula must adapt to digital literacy and changing learning patterns, with a focus on security and safety. Technological developments call for new ways of safeguarding academic integrity, and students must adopt a balanced approach to protect their higher-order thinking.
Artificial intelligence is here to stay; it will touch every sector, creating new career opportunities while displacing traditional employment pathways. Evidence suggests that, left unchecked, it risks turning learners into mere editors of AI-generated text rather than genuine creators and thinkers. Advanced technologies such as artificial intelligence hold immense potential and offer unprecedented opportunities for enhancing human learning and access to vast volumes of information. But they can also undermine cognitive development, long-term memory building, and intellectual independence, and they demand caution and critical consideration.

Related Articles

Omani research team uses AI to detect genetic heart disorders

Times of Oman · 6 hours ago

Muscat: A groundbreaking research initiative has made significant progress in identifying the genetic causes of cardiomyopathy in the Sultanate of Oman. The study, titled 'Implementation of Machine Learning Approach to Predict Pathogenicity of Genetic Variants Associated with Cardiomyopathy', utilises Whole Exome Sequencing (WES) and advanced Artificial Intelligence (AI) methodologies to investigate the underlying genetic contributors to this complex heart condition. The research was led by Dr. Ahmed Al Amri, Consultant of the Molecular Genetics Lab and Head of Training and Professional Staff Development at the National Genetics Centre of the Royal Hospital.

Cardiomyopathy, a heterogeneous group of myocardial disorders, remains a significant health burden worldwide. In the Sultanate of Oman, the genetic basis of the disease has been insufficiently characterised, hindering effective diagnosis and intervention. This study aimed to bridge that gap by conducting genetic analysis on a cohort of Omani families affected by cardiomyopathy.

Dr. Al Amri explained that central to the research is the development of a novel AI-based analysis model called 'CardioVar'. This model is designed to enhance the interpretation of WES data through the integration of more than 50,000 genetic variants and mutations linked to cardiomyopathy, supported by multiple AI algorithms to ensure analytical precision.

The research project achieved a diagnostic success rate of over 80%, uncovering both known and previously unreported genetic mutations, some of which involve genes not previously linked to cardiomyopathy. These findings offer valuable insight into the genetic landscape of the disease within the Omani population and present new opportunities for further research to confirm the clinical relevance of novel variants. The AI model significantly improved both the speed and accuracy of genetic analysis, demonstrating its practical value in overcoming the limitations of traditional sequencing approaches.

Dr. Al Amri emphasised that this approach not only advances the understanding of the genetic basis of cardiomyopathy in the Sultanate of Oman but also provides a scalable framework for implementing AI in genomic medicine, enhancing diagnostic accuracy and personalised care.

The multidisciplinary research team included Dr. Ahmed Al Amri, Dr. Aisha Al Balushi, Nibras Al Mahrami, Dr. Musallam Al Ariami, Dr. Nadia Al Hashimi, Dr. Mohammed Al Rawahi, Dr. Tuqa Al Lawati, Dr. Bushra Al Shamsi, Dr. Fahad Al Hattali, and Ms. Mashael Al Balushi. Together, they have contributed to a practical model that blends clinical knowledge, laboratory science, and artificial intelligence to achieve high-impact results.

This research not only advances the scientific understanding of inherited cardiomyopathy in the Sultanate of Oman but also exemplifies how AI can transform genetic diagnostics. It stands as a national model for the integration of precision medicine into routine clinical care, with plans to expand the study to include other hereditary heart conditions and incorporate its findings into everyday healthcare practice.

India's semiconductor mission paves way for digital sovereignty: Experts

Times of Oman · 11 hours ago

New Delhi: In a significant stride towards establishing digital sovereignty, the central government's ambitious semiconductor mission is being hailed as a transformative step in securing the country's control over its digital infrastructure. In an exclusive conversation with ANI, experts said the initiative lays the groundwork for India to become self-reliant across the entire digital technology stack, from chip manufacturing to Artificial Intelligence (AI) and cloud applications.

At the heart of this initiative lies the understanding that all mission-critical digital infrastructure, whether related to artificial intelligence or cloud computing, fundamentally depends on semiconductor chips. "When you are talking about complete digital sovereignty, it starts at the chip level. Whether it is a complete AI stack or a complete cloud stack on which any mission-critical applications of government or enterprises are running, it always starts from the ground level, which is the chip. From the chip level, we end up making equipment," Sunil Gupta, Chair of the ASSOCHAM National Council on Datacenter, told ANI. "Chips form the base, leading to the development of equipment, operating systems, datasets, models, and ultimately, applications."

On top of that, Gupta said, "You start making operating systems; then, in the case of AI, you have data sets, then you have models, and then you have applications. So, the Government of India clearly understands this whole stack, which needs to be completely owned by India. At the chip level, India has created a semiconductor mission, which means India is able to design, fabricate, assemble, and package its own chips in India itself."

Recognising this technological hierarchy, the Government of India has launched the India Semiconductor Mission (ISM) to ensure full ownership of this stack. The mission aims to enable domestic capabilities in chip design, fabrication, assembly, and packaging. "The semiconductor mission and the government's emphasis on it is a step in the very, very right direction. It is at the root of becoming a sovereign nation in terms of digital infrastructure," Gupta said.

He further said that initial efforts have begun with the manufacturing of 28-nanometre chips, but the government is setting its sights higher. "A start has been made, maybe with chips of 28 nanometres, but I am very sure that once this mission starts, we will start manufacturing the most high-end two- and three-nanometre chips as well," Gupta stated. "As the Minister of IT also announced, within five years India can also expect GPU manufacturing in India, which is real sovereignty at the root level: you are not dependent on a chip which is designed in the US and manufactured in Taiwan."

As per Gupta, "India will be able to design and manufacture chips in India itself. That is the point the semiconductor mission is addressing. We have also seen that India is trying to find other sources of rare earth metals. The Prime Minister has visited so many countries in Latin America and Africa. The reason is that China has put huge control on rare earths; presumably, they own 90 per cent of the rare earths in the world. So India will have to be self-dependent on rare earths, because these rare earths are actually being used in the manufacturing of electronics. That is another component the government is very serious about," he added.
On sovereign tech for India's digital transformation, Dipali Pillai, founding member of Bharath Cloud, stated, "Having a digital India with sovereignty is crucial because it is an economic and security story, and having everything in house is very important, because then we are running on the regulations that we write..." On India announcing indigenous chip manufacturing units, she stated, "Having everything on our own soil and working on our own terms is extremely important for us to grow and innovate..."

AI is not your friend

Observer · a day ago

Meta CEO Mark Zuckerberg and OpenAI's Sam Altman have been aggressively promoting the idea that everyone, children included, should form relationships with AI 'friends' or 'companions'. Meanwhile, multinational tech companies are pushing the concept of 'AI agents' designed to assist us in our personal and professional lives, handle routine tasks, and guide decision-making. But the reality is that AI systems are not, and never will be, friends, companions, or agents. They are, and will always remain, machines. We should be honest about that and push back against misleading marketing that suggests otherwise.

The most deceptive term of all is 'artificial intelligence'. These systems are not truly intelligent; what we call 'AI' today is simply a set of technical tools designed to mimic certain cognitive functions. They are not capable of true comprehension and are neither objective, fair, nor neutral. Nor are they becoming any smarter. AI systems rely on data to function, and increasingly that includes data generated by tools like ChatGPT. The result is a feedback loop that recycles output without producing deeper understanding.

More fundamentally, intelligence is not just about solving tasks; it is also about how those tasks are approached and performed. Despite their technical capabilities, AI models remain limited to specific domains, such as processing large datasets, performing logical deductions, and making calculations. When it comes to social intelligence, however, machines can only simulate emotions, interactions, and relationships. A medical robot, for example, could be programmed to cry when a patient cries, yet no one would argue that it feels genuine sadness. The same robot could just as easily be programmed to slap the patient, and it would carry out that command with equal precision, and with the same lack of authenticity and self-awareness. The machine doesn't 'care'; it simply follows instructions. And no matter how advanced such systems become, that is not going to change.

Simply put, machines lack moral agency. Their behaviour is governed by patterns and rules created by people, whereas human morality is rooted in autonomy: the capacity to recognise ethical norms and behave accordingly. AI systems, by contrast, are designed for functionality and optimisation. They may adapt through self-learning, but the rules they generate have no inherent ethical meaning.

Consider self-driving cars. To get from point A to point B as quickly as possible, a self-driving vehicle might develop rules to optimise travel time. If running over pedestrians would help achieve that goal, the car might do so, unless instructed not to, because it cannot understand the moral implications of harming people. This is partly because machines are incapable of grasping the principle of generalisability: the idea that an action is ethical only if it can be justified as a universal rule. Moral judgment depends on the ability to provide a plausible rationale that others can reasonably accept, what we often call 'good reasons'. Unlike machines, humans are able to engage in generalisable moral reasoning and can therefore judge whether their actions are right or wrong.

The term 'data-based systems' (DS) is thus more appropriate than 'artificial intelligence', as it reflects what AI can actually do: generate, collect, process, and evaluate data to make observations and predictions.
It also clarifies the strengths and limitations of today's emerging technologies. At their core, these are systems that use highly sophisticated mathematical processes to analyse vast amounts of data, nothing more. Humans may interact with them, but communication is entirely one-way. DS have no awareness of what they are 'doing' or of anything happening around them.

This is not to suggest that DS cannot benefit humanity or the planet. On the contrary, we can and should rely on them in domains where their capabilities exceed our own. But we must also actively manage and mitigate the ethical risks they present. Developing human-rights-based DS and establishing an International Data-Based Systems Agency at the United Nations would be important first steps in that direction.

Over the past two decades, Big Tech firms have isolated us and fractured our societies through social media, more accurately described as 'anti-social media' given its addictive and corrosive nature. Now those same companies are promoting a radical new vision: replacing human connection with AI 'friends' and 'companions'. At the same time, these companies continue to ignore the so-called 'black box problem': the untraceability, unpredictability, and lack of transparency in the algorithmic processes behind automated evaluations, predictions, and decisions. This opacity, combined with the high likelihood of biased and discriminatory algorithms, inevitably results in biased and discriminatory outcomes.

The risks posed by DS are not theoretical. These systems already shape our private and professional lives in increasingly harmful ways, manipulating us economically and politically, yet tech CEOs urge us to let DS tools guide our decisions. To protect our freedom and dignity, as well as the freedom and dignity of future generations, we must not allow machines to masquerade as what they are not: us.

© Project Syndicate, 2025
