
New Wearable Algorithm Improves Fitness Tracking in Obesity
In two studies totaling 52 participants, with 1838 minutes recorded in a lab and 14,045 minutes in a 'free-living' setting, the algorithm performed mostly as well as or better than 11 'gold-standard' algorithms designed by other researchers using research-grade devices, achieving over 95% accuracy in people with obesity in real-world conditions.
The algorithm addresses an important gap in fitness technology: most wearable devices use algorithms validated mainly in people without obesity, said Nabil Alshurafa, PhD, of the Feinberg School of Medicine, Northwestern University, in Chicago.
Alshurafa was motivated to create the algorithm after attending an exercise class with his mother-in-law, who had obesity. His mother-in-law worked hard, but her effort barely showed on the leaderboard. He realized that most current fitness trackers use activity-monitoring algorithms developed for people without obesity.
'Commercial devices calibrate their accelerometer-to-calorie models using data mostly from people with normal BMI, using algorithms that rely on 'average' gait and metabolism,' he told Medscape Medical News. 'But people with obesity are known to exhibit differences in walking gait, speed, resting energy expenditure, and physical function. When you feed 'average' motion to kcal [kilocalories] mappings for people with different gait patterns, the math does not always line up.'
This mismatch may be particularly pronounced for people with obesity who wear fitness trackers on the hip rather than the wrist throughout the day, because of differences in gait patterns and other body movements.
The study was published online in Scientific Reports. The anonymized dataset, code, and documentation are publicly available for use by other researchers.
'More Inclusive and Reliable'
Researchers in Alshurafa's lab developed and tested the open-source, dominant-wrist algorithm specifically tuned for people with obesity. The algorithm estimated metabolic equivalent of task (MET) values per minute from commercial smartwatch sensor data and compared them to actigraphy-based energy estimates in people with obesity.
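A MET is simply activity energy cost expressed as a multiple of resting energy expenditure, so 1.0 corresponds roughly to sitting quietly. A minimal sketch of that conversion, with function and variable names that are illustrative rather than taken from the paper:

```python
def mets_per_minute(activity_kcal_per_min, resting_kcal_per_min):
    """Convert per-minute energy expenditure to MET values.

    METs express each minute's energy cost as a multiple of resting
    expenditure: ~1.0 is sitting quietly, ~3-6 is moderate activity.
    """
    return [kcal / resting_kcal_per_min for kcal in activity_kcal_per_min]

# Three minutes of activity data against a 1.2 kcal/min resting rate.
mets = mets_per_minute([1.2, 3.6, 6.0], 1.2)
```

Commercial devices effectively run this mapping in reverse as well, turning estimated activity intensity back into calorie totals, which is why a model tuned on the wrong population can misreport both.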
In an in-lab study, 27 participants performed activities of varying intensities while wearing a smartwatch and breathing into a metabolic cart, a mask that measures the volume of oxygen the wearer inhales and the volume of carbon dioxide the wearer exhales to calculate energy burn in kcal and resting metabolic rate.
The activities included, among others, typing on a computer, lying still on the floor doing nothing, walking slowly on a treadmill, doing pushups against a door, and following along with an aerobics video. Each activity was performed for 5 minutes, followed by 5 minutes of rest. The researchers compared the fitness tracker results against the metabolic cart results.
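A metabolic cart converts those gas volumes into energy expenditure; the standard tool for this is the abbreviated Weir equation, sketched below. The article does not specify which variant the researchers used, so treat this as an illustration of the reference measurement, not their exact pipeline:

```python
def weir_kcal_per_min(vo2_l_min: float, vco2_l_min: float) -> float:
    """Energy expenditure via the abbreviated Weir equation.

    vo2_l_min:  oxygen consumed, in liters per minute
    vco2_l_min: carbon dioxide produced, in liters per minute
    Returns kcal burned per minute.
    """
    return 3.941 * vo2_l_min + 1.106 * vco2_l_min

# A resting adult consuming ~0.25 L O2/min and producing ~0.20 L CO2/min
# burns roughly 1.2 kcal/min, on the order of 1700-1750 kcal/day at rest.
resting_rate = weir_kcal_per_min(0.25, 0.20)
```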
Another 25 participants wore a smartwatch and a body camera for 2 days in a free-living study. The body camera enabled the researchers to visually confirm when the algorithm over- or under-estimated kcals.
The in-lab analysis included 2189 minutes of data and the free-living analysis included 14,045 minutes of data.
Compared with the metabolic cart measurements, the new algorithm achieved lower root mean square error than the comparison algorithms across various sliding windows (analyses of continuous, overlapping segments of the data stream).
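Root mean square error over sliding windows can be sketched as follows; the window length and the minute-by-minute stride here are illustrative, since the study's exact windowing parameters are not given in this summary:

```python
import numpy as np

def sliding_window_rmse(estimated, reference, window_minutes):
    """RMSE between estimated and reference MET series, computed over
    every overlapping window of `window_minutes` consecutive minutes."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(reference, dtype=float)
    rmses = []
    for start in range(len(est) - window_minutes + 1):
        diff = est[start:start + window_minutes] - ref[start:start + window_minutes]
        rmses.append(np.sqrt(np.mean(diff ** 2)))
    return np.array(rmses)

# A perfect estimator yields zero RMSE in every window.
errors = sliding_window_rmse([1.0, 2.5, 3.0], [1.0, 2.5, 3.0], window_minutes=2)
```

Reporting the error this way shows whether an algorithm stays accurate across short bouts of activity rather than merely averaging out over a whole session.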
In the free-living study, the algorithm's estimates fell within ±1.96 SDs of the best actigraphy-based estimates for 95.03% of minutes.
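The ±1.96 SD criterion is the Bland-Altman limits-of-agreement convention: roughly 95% of per-minute differences between two methods are expected to fall within the mean difference plus or minus 1.96 standard deviations. A minimal sketch of that check, with illustrative variable names:

```python
import numpy as np

def percent_within_loa(method_a, method_b):
    """Percentage of paired minutes whose difference falls inside the
    Bland-Altman 95% limits of agreement (mean diff +/- 1.96 SD)."""
    diff = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    lower = diff.mean() - 1.96 * diff.std()
    upper = diff.mean() + 1.96 * diff.std()
    inside = (diff >= lower) & (diff <= upper)
    return 100.0 * inside.mean()
```

A figure like the study's 95.03% means the algorithm's estimate and the actigraphy reference disagreed badly in only about 1 minute in 20.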
'Our proposed method accurately estimated METs compared to 11 algorithms primarily validated in nonobese populations, suggesting that commercial wrist-worn devices can provide more inclusive and reliable [energy expenditure] measures using our algorithm,' the authors wrote.
Challenges Ahead
More work needs to be done before apps for iOS and Android driven by the new algorithm are available for use later this year, Alshurafa said. Because the model is tuned for users with obesity, 'we need a reliable way to obtain BMI or body composition, and possibly ways of turning on and off the algorithm over time or perhaps modifying the algorithm as people's fitness level changes.'
'Because we've optimized for the dominant hand, we'll need clear user guidance, and possibly user-interface prompts, to drive this cultural shift in watch placement,' he said.
To ensure accuracy across diverse users, activities, and wear styles, the team will conduct field testing and pool anonymized data.
'Power, size, and regulatory requirements may force trade-offs, so we'll work closely with device manufacturers on adaptive calibration routines and streamlined firmware,' Alshurafa said. 'But the real priority is training and tailoring our systems on truly diverse data and being transparent about who's represented in that data. Too many commercial devices skip this, leading users to assume they work universally when their models actually have limitations.'
For now, clinicians should be aware that the app has only been validated in individuals with obesity wearing the tracker on the dominant wrist; use outside that population or on the nondominant wrist may yield less accurate calorie estimates, he added. 'Beyond those parameters, though, the algorithm is ready for deployment and offers a powerful new tool for personalized activity monitoring.'
Mir Ali, MD, a bariatric surgeon and medical director of MemorialCare Surgical Weight Loss Center at Orange Coast Medical Center in Fountain Valley, California, agreed that an algorithm that more accurately reflects exercise and energy expenditure of patients with obesity would be helpful, and that 'any improvements' would likely be beneficial for patients and clinicians.
That said, 'a larger study comparing the new algorithm vs currently available devices would provide more validation,' Ali, who was not involved in the study, told Medscape Medical News.
In addition, 'research elucidating exercise goals and calorie expenditure for obese patients could be helpful to better counsel patients on what is the optimal goal for weight loss,' he said.
Ali noted that 'trackers for heart disease and pulmonary problems may be useful to help patients attain cardio-pulmonary improvement' — and indeed, Alshurafa's team will be looking at ways to tailor fitness trackers for diabetes and hypertension going forward.
This study is based on work supported by the National Institute of Diabetes and Digestive and Kidney Diseases, the National Science Foundation, the National Institute of Biomedical Imaging and Bioengineering, and the National Institutes of Health's National Center for Advancing Translational Sciences.
Alshurafa and Ali declared no competing interests.