More than 9 hours of sleep? Science says your memory may suffer


Time of India | 17-05-2025
If you've ever felt proud of clocking in over nine hours of sleep, thinking it's the ultimate health hack, recent research suggests you should reconsider. A study from the University of Texas Health Science Center reveals that excessive sleep, specifically more than nine hours per night, may be linked to poorer cognitive performance, especially in individuals experiencing symptoms of depression.

The study analyzed data from nearly 2,000 dementia-free adults aged 27 to 85, focusing on sleep duration and cognitive function. Dementia is a term for several diseases that affect memory, thinking, and the ability to perform daily activities.
The findings indicated that participants who slept longer than nine hours showed poorer memory, visuospatial ability, and executive function. These effects were more pronounced in individuals with depressive symptoms, regardless of whether they were on antidepressant medication.
Vanessa Young, a clinical research project manager at the Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, stated that sleep could be a modifiable risk factor for cognitive decline in individuals with depression.
The research suggests that people with mental health conditions should take their sleep more seriously and may need personalized sleep recommendations. While sleep is essential for brain health, both insufficient and excessive sleep can be detrimental. The Global Council on Brain Health recommends 7 to 8 hours of nightly sleep for adults to preserve cognitive function.
It's crucial to pay attention to your sleep patterns and consult a healthcare professional if you experience persistent changes in sleep duration or quality, especially if they are accompanied by depressive symptoms. Shift workers may be particularly vulnerable, as their sleep cycles are often disrupted by work. Balancing sleep duration could be a key factor in maintaining cognitive health and overall well-being.

Related Articles

Google DeepMind CEO says AI can replace doctors but not nurses, here is why

India Today

5 hours ago



AI is advancing quickly across industries, and healthcare is no exception. But according to Google DeepMind CEO Demis Hassabis, there are limits to what AI can do. While the technology could one day perform many of the tasks doctors currently handle, Hassabis says it will never be able to replace nurses, because their work goes beyond data and procedures, relying on human connection, empathy, and emotional support.

Speaking recently about AI's potential in medicine, Hassabis predicted that the next five to ten years will bring some of the most dramatic changes the field has ever seen. He said that AI systems are already becoming part of medical practice and are getting better at tasks that require processing large amounts of information. For example, machines can now read X-rays, MRIs, and CT scans with high accuracy, analyse laboratory test results, and even propose possible treatment plans for patients.

This ability to process complex medical data means AI could soon become a trusted partner to doctors, handling much of the work that is time-consuming or repetitive. In some cases, Hassabis said, AI might even take over certain diagnostic responsibilities completely, freeing up doctors to focus on more specialised or difficult cases.

But when asked whether the same could happen with nurses, Hassabis said no, because nursing involves far more than following medical instructions or monitoring patient vitals. It is a role deeply rooted in human interaction. Nurses not only provide physical care but also offer comfort, reassurance, and emotional support, elements that are essential to recovery but cannot be replicated by even the most advanced machine. "AI can't hold someone's hand," he said.

He pointed out that a robotic nurse could, in theory, perform many physical tasks accurately, from administering medication to recording patient data. But what it would lack is warmth, compassion, and the ability to connect with people in vulnerable moments. 'A robotic nurse might be efficient, but it would lack the human warmth and compassion that define quality caregiving,' he said.

The DeepMind chief also noted that the role of a nurse is often about building trust with patients and their families, understanding subtle cues that might indicate changes in health, and being emotionally present in moments of stress or fear. These qualities, he said, are what make nursing irreplaceable, regardless of how far AI advances.

At the same time, Hassabis was optimistic about the ways AI could work alongside medical professionals to improve care. By quickly analysing vast amounts of medical information, from patient histories to lab reports, AI could help identify patterns that humans might miss, leading to earlier diagnoses and more personalised treatments. This would not only improve efficiency but could also help doctors and nurses focus more time on direct patient care.

The conversation around AI in healthcare is not limited to Google DeepMind. Many experts have been debating how far the technology should be allowed to go and where human oversight is most critical. Geoffrey Hinton, one of the most prominent pioneers in AI and often referred to as the 'Godfather of AI', recently voiced his own concerns about the future of the technology. In a podcast appearance, Hinton said he would not be surprised if AI systems developed their own internal languages to 'think' in ways that humans cannot understand, which could make it difficult to track how they arrive at certain decisions.

OpenAI looks to promote ‘healthy use' of ChatGPT with mental health updates, reminders

Indian Express

8 hours ago



OpenAI has said it is working on upgrades that will improve ChatGPT's ability to detect signs of mental or emotional distress among users. These changes will let the AI chatbot 'respond appropriately and point people to evidence-based resources when needed,' the Microsoft-backed AI startup said in a blog post on Monday, August 4. OpenAI is also working with a wide range of stakeholders, including physicians, clinicians, human-computer-interaction (HCI) researchers, mental health advisory groups, and youth development experts, to improve ChatGPT's responses in such cases.

The company further said that ChatGPT will be tweaked so that its AI-generated responses are less decisive in 'high-stakes situations'. For example, when a user asks a question like 'Should I break up with my boyfriend?', the AI chatbot will walk the user through the decision by asking follow-up questions and weighing pros and cons instead of giving a direct answer. This behavioural update to ChatGPT for high-stakes personal decisions will be rolling out soon, it said.

OpenAI's new efforts to promote the healthy use of ChatGPT come against the backdrop of an alarming number of users turning to AI chatbots for therapy and professional advice. This emerging trend has sparked concern among mental health professionals, who have cautioned that AI chatbots may not be equipped to offer appropriate guidance and may end up amplifying some users' delusions, as they are designed to generate outputs that please users.

In April this year, OpenAI rolled back an update that made ChatGPT too agreeable and sycophantic. Acknowledging past failures, OpenAI on Monday said, 'We rolled it back, changed how we use feedback, and are improving how we measure real-world usefulness over the long term, not just whether you liked the answer in the moment.' 'We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,' it added. The company is expected to launch its next major AI model, GPT-5, this week. Its ChatGPT platform is reportedly on track to touch 700 million weekly active users.

As part of its efforts to promote the healthy use of ChatGPT, OpenAI also said it is bringing reminders to the platform. These reminders will be shown to users who have been chatting with the AI chatbot for a while. 'Starting today, you'll see gentle reminders during long sessions to encourage breaks. We'll keep tuning when and how they show up so they feel natural and helpful,' OpenAI said. The reminders will appear as pop-up messages in a rounded white box on a soft blue gradient background, reading: 'Just checking in — You've been chatting a while — is this a good time for a break?' with two buttons labeled 'Keep chatting' and 'This was helpful'.

Other popular social media platforms such as YouTube and Instagram have launched similar features in recent years. Earlier this year, Character AI said it will send parents and guardians a weekly summary over email to keep them informed of an underage user's activities on the platform. These measures were announced after the Google-backed startup was accused of exposing underage users to 'hyper-sexualised content' and promoting self-harm through its role-playing AI chatbots. These allegations were detailed in two lawsuits filed by parents of children who had signed up to Character AI.

AI can replace doctors but not nurses, Google DeepMind CEO Demis Hassabis explains

Indian Express

10 hours ago



As artificial intelligence rapidly transforms several industries, Demis Hassabis, CEO of Google DeepMind, recently highlighted how AI will change the medical sector. In a recent conversation with the media about the future of AI, Hassabis said that the next five to ten years will bring some of the biggest transformations, and healthcare will be at the forefront of this evolution.

Asked whether AI could handle all medical tasks performed by humans, Hassabis noted that AI might be able to do the job of doctors, but it cannot replace nurses. He explained that AI is becoming adept at processing medical data. 'Machines can now read scans, analyse test results, and even propose treatment plans,' he said. Hassabis added that AI support could soon become part of how doctors work and, in some cases, even take over certain diagnostic tasks.

However, he noted that nurses are not just responsible for administering medications or monitoring vitals; their role is grounded in human interaction, offering emotional comfort, providing physical care, and creating a personal connection with patients. 'AI can't hold someone's hand,' he said.

Hassabis acknowledged that AI does a remarkable job of analysing vast troves of data, from scans and lab reports to complex patient histories, and this ability could transform diagnosis and treatment. 'A robotic nurse might be efficient, but it would lack the human warmth and compassion that define quality caregiving,' Hassabis said. The Google DeepMind executive also noted that the irreplaceable human element in nursing will remain beyond the reach of even the most sophisticated algorithms.

Recently, Geoffrey Hinton, popularly known as the 'Godfather of AI', opened up about the potential risks of AI developing its own internal languages. 'Now it gets more scary if they develop their own internal languages for talking to each other. I wouldn't be surprised if they developed their own language for thinking, and we have no idea what they're thinking,' Hinton said during a podcast.
