
The bizarre reason some people are fine on just four hours of sleep a night
This mutation affects a phosphate exchange process crucial to the sleep-wake cycle, leading to less sleep and potentially more deep sleep.
Mice with the same mutation slept 30 minutes less than unaltered mice, confirming the gene's role in sleep duration.
This discovery could lead to new therapies for sleep disorders and improved sleep quality.
The mutation seems to increase deep sleep in those who have it.
Related Articles


Daily Mail
25 minutes ago
Walking just 14 more steps per minute could protect against rising chronic conditions
Walking just 14 more steps a minute could boost your health and protect against deadly diseases, a study suggests. Researchers at the University of Chicago tracked 102 frail adults — weaker individuals who are exhausted by little exercise — in their late 70s who were asked to do a 45-minute walking session three times a week. Over the four-month study, half were asked to continue walking at a 'relaxed and comfortable pace' during the exercise sessions. But the rest were told to walk 'as fast as they safely could'.

Participants in the 'fast-walking' group walked 14 more steps every minute on average by the study's end, reaching about 100 steps per minute — equivalent to the average among adults. The fast-walking group achieved a 10 percent improvement in their six-minute walking distance, indicating enhanced endurance and cardiovascular health. This improvement also suggested increased muscle mass and a lower risk of falls, the leading cause of injury-related death in adults over 65, as well as improved aerobic fitness, a key predictor of longevity and sustained independence in older age. For comparison, those who walked at a relaxed pace saw no improvement in either measure during the study.

Dr Daniel Rubin, an anesthesiologist who led the study, and his co-authors wrote: 'We demonstrated that an increase of 14 steps per minute during the intervention sessions increased the odds of an improvement in [endurance]. 'Older adults can increase their [steps per minute] and [steps per minute] can serve as a surrogate measure of activity intensity during walking interventions.'

Average adults walk about 100 to 130 steps per minute, according to estimates, while older and frail individuals average about 82 steps per minute. The average American also walks about 5,100 steps per day, well below the recommendation of 10,000 every 24 hours. In the paper, published in PLOS One, researchers recruited adults from 14 retirement homes near the university.
The study defined a frail adult as an individual with weight loss, slowness, weakness, exhaustion and little physical activity. Of the participants, only 35 percent were able to walk unaided, with the remainder requiring a cane, walker, scooter or wheelchair at times. They were divided into two equal groups for the walking sessions — the fast group or the relaxed group — each led by a trained research assistant.

Over the first three sessions, adults were asked to walk 45 minutes at a comfortable pace. In the next eight sessions, participants were asked to walk 40 minutes, and to start and end each session with five minutes of stair tapping — stepping and quickly tapping the toes of each foot on the edge of a step. During the walking, those in the fast-walking group were also asked to increase their intensity until they reached 70 percent of their maximum heart rate — the highest number of times the heart can beat in one minute during strenuous physical activity. It is calculated using the formula of 220 minus someone's age. For those in the study, the maximum heart rate would be around 147 beats per minute, and 70 percent of this would be about 103 beats per minute.

Over the remaining sessions, participants were asked to walk for 35 minutes but begin each session with a 10-minute warm-up. Those in the exercise group were asked to incrementally increase their speed during the walking sessions to 'as fast as they safely could'. Participants' walking was tracked using an activPAL tracker strapped to the thigh, which measured steps and speed. They were able to stop and rest during the exercises, but this paused the timer, which would not restart until they began to walk again.

Researchers found that among those in the relaxed group, steps per minute decreased during the study from 82 to 77. Those in the exercise group, by contrast, saw this rise from 86 to 100 steps per minute on average.
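The heart-rate arithmetic described above is simple enough to sketch in a few lines. Here is a minimal Python illustration of the 220-minus-age estimate and the 70 percent intensity target; the age of 73 is an assumption chosen only because it reproduces the article's figure of 147 beats per minute:

```python
def max_heart_rate(age):
    """Estimate maximum heart rate with the common 220-minus-age formula."""
    return 220 - age

def target_heart_rate(age, intensity=0.70):
    """Heart-rate target at a given fraction of the estimated maximum."""
    return round(max_heart_rate(age) * intensity)

# Assuming age 73 to match the article's numbers:
print(max_heart_rate(73))     # → 147 beats per minute
print(target_heart_rate(73))  # → 103 beats per minute (70% of maximum)
```

The 220-minus-age rule is a rough population-level estimate, not an individual measurement, which is why the article's "around 147" is hedged in the same way.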
Participants were asked to complete the six-minute walk test at the start and end of the study to measure their endurance. In the relaxed group, participants saw a slight improvement, with the distance they walked increasing from 836 to 869 feet. Those in the exercise group, however, saw the distance they walked increase from 843 to 1,033 feet per session, a 10 percent rise. By comparison, the average American adult can walk around 2,100 feet in six minutes.

The team concluded: 'The overall exercise dose (frequency, duration, and intensity) between the two groups only differed with respect to the intensity component as frequency and duration were kept constant between the two groups. 'Thus, prefrail and frail older adults engaged in walking interventions can derive further improvement in their functional outcomes by increasing [steps per minute] during a fixed volume of walking exercise.'

The study was funded by the National Institutes of Health and the National Institute on Aging.


The Guardian
37 minutes ago
Human-level AI is not inevitable. We have the power to change course
'Technology happens because it is possible,' OpenAI CEO Sam Altman told the New York Times in 2019, consciously paraphrasing Robert Oppenheimer, the father of the atomic bomb. Altman captures a Silicon Valley mantra: technology marches forward inexorably. Another widespread techie conviction is that the first human-level AI – also known as artificial general intelligence (AGI) – will lead to one of two futures: a post-scarcity techno-utopia or the annihilation of humanity.

For countless other species, the arrival of humans spelled doom. We weren't tougher, faster or stronger – just smarter and better coordinated. In many cases, extinction was an accidental byproduct of some other goal we had. A true AGI would amount to creating a new species, which might quickly outsmart or outnumber us. It could see humanity as a minor obstacle, like an anthill in the way of a planned hydroelectric dam, or as a resource to exploit, like the billions of animals confined in factory farms. Altman, along with the heads of the other top AI labs, believes that AI-driven extinction is a real possibility (joining hundreds of leading AI researchers and prominent figures).

Given all this, it's natural to ask: should we really try to build a technology that may kill us all if it goes wrong? Perhaps the most common reply says: AGI is inevitable. It's just too useful not to build. After all, AGI would be the ultimate technology – what a colleague of Alan Turing called 'the last invention that man need ever make'. Besides, the reasoning goes within AI labs, if we don't build it, someone else will – less responsibly, of course. A new ideology out of Silicon Valley, effective accelerationism (e/acc), claims that AGI's inevitability is a consequence of the second law of thermodynamics and that its engine is 'technocapital'. The e/acc manifesto asserts: 'This engine cannot be stopped. The ratchet of progress only ever turns in one direction. Going back is not an option.'
For Altman and e/accs, technology takes on a mystical quality – the march of invention is treated as a fact of nature. But it's not. Technology is the product of deliberate human choices, motivated by myriad powerful forces. We have the agency to shape those forces, and history shows that we've done it before. No technology is inevitable, not even something as tempting as AGI.

Some AI worriers like to point out the times humanity resisted and restrained valuable technologies. Fearing novel risks, biologists initially banned and then successfully regulated experiments on recombinant DNA in the 1970s. No human has been reproduced via cloning, even though it's been technically possible for over a decade, and the only scientist to genetically engineer humans was imprisoned for his efforts. Nuclear power can provide consistent, carbon-free energy, but vivid fears of catastrophe have motivated stifling regulations and outright bans.

And if Altman were more familiar with the history of the Manhattan Project, he might realize that the creation of nuclear weapons in 1945 was actually a highly contingent and unlikely outcome, motivated by a mistaken belief that the Germans were ahead in a 'race' for the bomb. Philip Zelikow, the historian who led the 9/11 Commission, said: 'I think had the United States not built an atomic bomb during the Second World War, it's actually not clear to me when or possibly even if an atomic bomb ever is built.'

It's now hard to imagine a world without nuclear weapons. But in a little-known episode, then president Ronald Reagan and Soviet leader Mikhail Gorbachev nearly agreed to ditch all their bombs (a misunderstanding over the 'Star Wars' satellite defense system dashed these hopes). Even though the dream of full disarmament remains just that, nuke counts are less than 20% of their 1986 peak, thanks largely to international agreements. These choices weren't made in a vacuum.
Reagan was a staunch opponent of disarmament before the millions-strong Nuclear Freeze movement got to him. In 1983, he commented to his secretary of state: 'If things get hotter and hotter and arms control remains an issue, maybe I should go see [Soviet leader Yuri] Andropov and propose eliminating all nuclear weapons.'

There are extremely strong economic incentives to keep burning fossil fuels, but climate advocacy has pried open the Overton window and significantly accelerated our decarbonization efforts. In April 2019, the young climate group Extinction Rebellion (XR) brought London to a halt, demanding the UK target net-zero carbon emissions by 2025. Their controversial civil disobedience prompted parliament to declare a climate emergency and the Labour party to adopt a 2030 target to decarbonize the UK's electricity production. The Sierra Club's Beyond Coal campaign was lesser known but wildly effective. In just its first five years, the campaign helped shutter more than one-third of US coal plants. Thanks primarily to its move from coal, US per capita carbon emissions are now lower than they were in 1913.

In many ways, the challenge of regulating efforts to build AGI is much smaller than that of decarbonizing. Eighty-two percent of global energy production comes from fossil fuels. Energy is what makes civilization work, but we're not dependent on a hypothetical AGI to make the world go round. Further, slowing and guiding the development of future systems doesn't mean we'd need to stop using existing systems or developing specialist AIs to tackle important problems in medicine, climate and elsewhere.

It's obvious why so many capitalists are AI enthusiasts: they foresee a technology that can achieve their long-time dream of cutting workers out of the loop (and the balance sheet). But governments are not profit maximizers.
Sure, they care about economic growth, but they also care about things like employment, social stability, market concentration and, occasionally, democracy. It's far less clear how AGI would affect these domains overall. Governments aren't prepared for a world where most people are technologically unemployed.

Capitalists often get what they want, particularly in recent decades, and the boundless pursuit of profit may undermine any regulatory effort to slow the speed of AI development. But capitalists don't always get what they want. At a bar in San Francisco in February, a longtime OpenAI safety researcher pronounced to a group that the e/accs shouldn't be worried about the 'extreme' AI safety people, because they'll never have power. The boosters should actually be afraid of AOC and Senator Josh Hawley because they 'can really fuck things up for you'.

Assuming humans stick around for many millennia, there's no way to know we won't eventually build AGI. But this isn't really what the inevitabilists are saying. Instead, the message tends to be: AGI is imminent. Resistance is futile. But whether we build AGI in five, 20 or 100 years really matters. And the timeline is far more in our control than the boosters will admit. Deep down, I suspect many of them realize this, which is why they spend so much effort trying to convince others that there's no point in trying. Besides, if you think AGI is inevitable, why bother convincing anybody?

We had the computing power required to train GPT-2 more than a decade before OpenAI actually did it, but people didn't know whether it was worth doing. Right now, though, the top AI labs are locked in such a fierce race that they aren't implementing all the precautions that even their own safety teams want. (One OpenAI employee announced recently that he quit 'due to losing confidence that it would behave responsibly around the time of AGI'.)
There's a 'safety tax' that labs can't afford to pay if they hope to stay competitive; testing slows product releases and consumes company resources. Governments, on the other hand, aren't subject to the same financial pressures.

An inevitabilist tech entrepreneur recently said regulating AI development is impossible 'unless you control every line of written code'. That might be true if anyone could spin up an AGI on their laptop. But it turns out that building advanced, general AI models requires enormous arrays of supercomputers, with chips produced by an absurdly monopolistic industry. Because of this, many AI safety advocates see 'compute governance' as a promising approach. Governments could compel cloud computing providers to halt next-generation training runs that don't comply with established guardrails. Far from locking out upstarts or requiring Orwellian levels of surveillance, thresholds could be chosen to affect only players who can afford to spend more than $100m on a single training run.

Governments do have to worry about international competition and the risk of unilateral disarmament, so to speak. But international treaties can be negotiated to widely share the benefits from cutting-edge AI systems while ensuring that labs aren't blindly scaling up systems they don't understand. And while the world may feel fractious, rival nations have cooperated to surprising degrees. The Montreal Protocol fixed the ozone layer by banning chlorofluorocarbons. Most of the world has agreed to ethically motivated bans on militarily useful weapons, such as biological and chemical weapons, blinding laser weapons, and 'weather warfare'.

In the 1960s and 70s, many analysts feared that every country that could build nukes would. But most of the world's roughly three dozen nuclear programs were abandoned. This wasn't the result of happenstance, but rather the creation of a global nonproliferation norm through deliberate statecraft, like the 1968 Non-Proliferation Treaty.
On the few occasions when Americans were asked if they wanted superhuman AI, large majorities said 'no'. Opposition to AI has grown as the technology has become more prevalent. When people argue that AGI is inevitable, what they're really saying is that the popular will shouldn't matter. The boosters see the masses as provincial neo-Luddites who don't know what's good for them. That's why inevitability holds such rhetorical allure for them; it lets them avoid making their real argument, which they know is a loser in the court of public opinion.

The draw of AGI is strong. But the risks involved are potentially civilization-ending. A civilization-scale effort is needed to compel the necessary powers to resist it. Technology happens because people make it happen. We can choose otherwise.

Garrison Lovely is a freelance journalist


The Independent
an hour ago
The great debate: Cane sugar vs. corn syrup
President Donald Trump — who reportedly drinks up to 12 cans of Diet Coke a day — said Wednesday that beverage giant Coca-Cola had agreed to use real cane sugar in its regular Coke. 'This will be a very good move by them — You'll see. It's just better!' Trump wrote in a Wednesday post on his Truth Social platform. The company said in a statement that it appreciated the president's enthusiasm for the brand and that more details on 'new innovative offerings within [the] Coca‑Cola product range [would] be shared soon.'

Coca-Cola is the best-selling carbonated soft drink in the U.S. Right now, Coke in the U.S. is made with high-fructose corn syrup, a sweetener made from corn starch, to give it its sweet, fizzy taste. Cane sugar is made from sugarcane — the tall, bamboo-like stalks known for their high sucrose content — and is used as the sweetener in Coke in most countries. But is one healthier than the other? Here's what to know...

Experts say cane sugar is not necessarily healthier

Experts say it likely won't matter which sweetener is in Coke. High-fructose corn syrup has slightly more fructose than table sugar (sucrose). Fructose doesn't prompt the body to produce insulin, which in turn triggers hormones that help us feel full. 'Our bodies aren't going to know if that's cane sugar or high-fructose corn syrup. We just know that it is sugar and we need to break that down,' Caroline Susie, a registered dietitian nutritionist and a spokesperson for the Academy of Nutrition and Dietetics, told Health. Consuming an excessive amount of any refined sugar can lead to a higher risk of weight gain and associated chronic conditions, such as type 2 diabetes and heart disease. 'Both high-fructose corn syrup and cane sugar are about 50 percent fructose, 50 percent glucose, and have identical metabolic effects,' Dr. Dariush Mozaffarian, a cardiologist and director of the Food is Medicine Institute at the Friedman School of Nutrition Science and Policy at Tufts University, told NBC News.

Soda is soda

America has a sugar habit — and a penchant for ultra-processed foods — that it needs to kick, according to Mozaffarian. A single soda can contain more than the daily recommended limit of added sugars for teens and children. Added sugar refers to sugars and syrups that are added to foods and beverages during processing and production. 'It's always better to cut down on soda, no matter what the form of sugar is,' Dr. Melanie Jay, a professor of medicine and population health at the NYU Grossman School of Medicine and director of the NYU Langone Comprehensive Program on Obesity Research, told NBC News.

There's pushback

Coke sold in the U.S. has been made with high-fructose corn syrup since the mid-1980s. Corn was a cheaper option than cane sugar: the U.S. has a lot of corn farmers, and the government has long supported the industry. Other countries, including Mexico and Australia, still use cane sugar, and the company has imported glass bottles of Mexican Coke to the U.S. since 2005.

Corn is the nation's number one crop, and Corn Refiners Association President and CEO John Bode said in a statement that replacing high-fructose corn syrup with cane sugar 'doesn't make sense.' 'President Trump stands for American manufacturing jobs, American farmers, and reducing the trade deficit,' he said. 'Replacing high fructose corn syrup with cane sugar would cost thousands of American food manufacturing jobs, depress farm income, and boost imports of foreign sugar, all with no nutritional benefit.' He told The Washington Post that it would be more economical to introduce a new product with cane sugar than to abandon the cheap and popular high-fructose corn syrup.