
Google Exclusive: How the Pixel Watch 3 got a life-saving feature the Apple Watch can't match
While fall, crash and incident detection are all but par for the course on high-end, full-featured smartwatches, a new, more advanced safety feature surfaced last summer that's currently only available on the Google Pixel Watch 3. That's right: not even the Apple Watch Ultra 2 or the Samsung Galaxy Watch Ultra offers anything like Google's Loss of Pulse Detection tool.
Like fall detection, Loss of Pulse Detection is designed to help users out during an emergency — in this case, a medical one, when there may otherwise be no one around. Better yet, setting up Loss of Pulse Detection takes less than 2 minutes, which is not a lot of time considering it could be a literal lifesaver.
To find out more about Loss of Pulse Detection, including insights into the development, testing and FDA approval process, I had an exclusive interview with Edward Shi, the product manager on the Google Safety Team who spearheaded the project.
Our 30-minute conversation covered a lot, but it's Google's creative approach to testing the new safety feature — something that's crucial for avoiding false positives — that most fascinated me.
For one, Shi and his team had to figure out how to simulate a loss of pulse in a living subject (for testing purposes, of course), which is no easy feat. His team also worked with stunt actors to understand how a user might fall when experiencing a loss of pulse.
Beyond that, our conversation touched on whether older Pixel Watch devices could get Loss of Pulse Detection in the future, how long until the competition replicates the feature and what the Google Safety Team is up to next.
Edward Shi: I'm a product manager here on our Android and Pixel Safety Team. Our team works on safety products with a goal of giving users peace of mind in their day-to-day lives. These products include, in the past, features such as car crash detection and fall detection.
For Loss of Pulse, specifically, I'm one of the main product managers on the project, working across the teams, with our clinicians, our engineers, etc., to bring Loss of Pulse Detection to the Pixel Watch 3.
Shi: It's really for any Pixel Watch 3 user who meets our eligibility criteria. It uses sensors on the Pixel Watch to detect a potential loss of pulse and prompt a call to emergency services, placed from either the user's smartwatch or their connected phone, so that responders can intervene and potentially provide life-saving care.
A loss of pulse is a time-sensitive emergency, and it can be caused by a variety of factors, such as cardiac arrest, respiratory or circulatory failure, poisoning, etc. Many of these events are unwitnessed today; around 50% of cardiac arrests, in particular, are unwitnessed, meaning that no one's around to help.
Shi: The two main sensors are the PPG [photoplethysmography] sensor and the accelerometer. We use the PPG sensor to detect pulselessness and the accelerometer to look at motion in particular. So if a loss of pulse occurs, what we anticipate is that the user is unconscious, so there shouldn't be excessive motion.
So those two sensors combined help form the foundation of the algorithm.
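To make the two-sensor idea concrete, here's a minimal sketch, in Kotlin, of how a PPG-derived pulse signal and accelerometer motion might be fused before anything escalates. This is purely illustrative and not Google's implementation: the SensorWindow and LossOfPulseMonitor names, the thresholds and the window count are all hypothetical assumptions.

```kotlin
// Illustrative only: a simplified fusion of PPG and accelerometer signals,
// loosely mirroring the two-sensor approach described above. All names,
// thresholds and window counts are hypothetical.

data class SensorWindow(
    val ppgPulseConfidence: Double,  // 0.0..1.0 output of a hypothetical PPG beat detector
    val motionMagnitude: Double      // accelerometer movement summarized over the window
)

class LossOfPulseMonitor(
    private val pulseThreshold: Double = 0.2,   // below this, a pulse looks absent
    private val motionThreshold: Double = 0.1,  // above this, the user is clearly moving
    private val windowsRequired: Int = 5        // consecutive suspect windows before escalating
) {
    private var suspectWindows = 0

    // Returns true when the watch should start an emergency check-in.
    fun update(window: SensorWindow): Boolean {
        val pulseAbsent = window.ppgPulseConfidence < pulseThreshold
        val userStill = window.motionMagnitude < motionThreshold
        // A genuine loss of pulse should coincide with unconsciousness, i.e. no motion,
        // so requiring stillness guards against false positives (loose band, workouts, etc.).
        suspectWindows = if (pulseAbsent && userStill) suspectWindows + 1 else 0
        return suspectWindows >= windowsRequired
    }
}
```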
Shi: There are a lot of similarities in the sense that all are emergency detection features. Essentially, these are for potential life-threatening emergencies in which a user may not be able to call for help themselves. In those events, we would need to be able to detect that emergency and then help connect [the user] with emergency services.
Much of the design and the principles remain the same. The algorithm is trying to balance both detecting that emergency, so in this case, a loss of pulse, while minimizing accidental triggers.
That's a really key part of all three of the features. We don't want to overly worry and bother the user with accidental triggers. Also, in particular, we don't want to burden [emergency] partners with accidental triggers in the case where a user doesn't need help.
Shi: Once a loss of pulse [or] a car crash [or] a fall is detected, the experience is designed to try to quickly connect the user over to emergency services. If, for whatever reason, the user doesn't actually need help, the user experience is [also] designed so that they can easily cancel any call.
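That detect-prompt-call pattern can be sketched as a simple countdown the user can cancel at any point. Again, this is a hypothetical illustration rather than the shipping behavior: the 20-second countdown, the userResponded and placeEmergencyCall callbacks, and the coroutine-based structure are all assumptions.

```kotlin
// Hypothetical sketch of the detect -> prompt -> call flow described above.
// Timings, callback names and the coroutine-based structure are assumptions.

import kotlinx.coroutines.delay

enum class EscalationResult { CANCELLED_BY_USER, EMERGENCY_CALL_PLACED }

suspend fun runEscalationFlow(
    userResponded: suspend () -> Boolean,    // e.g. the user taps "I'm OK" on the watch prompt
    placeEmergencyCall: suspend () -> Unit,  // placed via the watch itself or the connected phone
    countdownSeconds: Int = 20
): EscalationResult {
    repeat(countdownSeconds) {
        if (userResponded()) return EscalationResult.CANCELLED_BY_USER
        delay(1_000L)  // re-check once per second during the countdown
    }
    placeEmergencyCall()
    return EscalationResult.EMERGENCY_CALL_PLACED
}
```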
Shi: I don't know if I could say precisely how long (definitely over a year and a half), but it can really vary. One particular [safety] feature isn't necessarily the same as the others.
They may look similar on the surface, like a fall or a car crash or a loss of pulse, but each of them has its own unique challenges in validating both the algorithm and developing the user experience.
And of course, on the regulatory side, we had to go through working with our regulatory partners and regulatory bodies in different regions [for Loss of Pulse Detection]. So there are different complexities for each of them, and the timeline can definitely vary.
Shi: It's a bit of both. So, it's definitely algorithmically tested. We also collect hundreds of thousands of [samples of] real-world user data and run our algorithm over that data to take a look at how often it could be triggered.
Internally, we run "dogfood" [employee testing] programs. And then we ran clinical studies. All of that is done to measure how often we're seeing accidental triggers in particular.
In addition to honing the algorithm and the user experience design, we run user research studies to walk [users through the] flow, both during onboarding and when an actual loss of pulse is detected.
[We're] making sure that users understand what's happening and are able to cancel out of that flow if they don't need help. So, it's both algorithmic testing and user research.
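One way to picture the algorithmic half of that testing is an offline replay: run recorded real-world sensor data, none of which contains an actual loss of pulse, through the detector and count accidental triggers. The sketch below reuses the hypothetical SensorWindow and LossOfPulseMonitor types from the earlier example; the Recording type and the per-user-day metric are likewise assumptions. Sweeping the detector's thresholds against a metric like this is one way to trade detection sensitivity against accidental triggers.

```kotlin
// Illustrative offline evaluation: replay recorded real-world data (with no
// real losses of pulse) through the detector and report how many recordings
// produce at least one accidental trigger, normalized per user-day.

data class Recording(val userId: String, val windows: List<SensorWindow>, val days: Double)

fun accidentalTriggersPerUserDay(recordings: List<Recording>): Double {
    var triggeredRecordings = 0
    var totalDays = 0.0
    for (rec in recordings) {
        val monitor = LossOfPulseMonitor()  // fresh detector state per recording
        if (rec.windows.any { monitor.update(it) }) triggeredRecordings++
        totalDays += rec.days
    }
    return triggeredRecordings / totalDays
}
```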
Shi: It is pretty difficult, and it took a lot of creativity from our research scientists, in particular. Basically, using a pneumatic tourniquet to cut off blood flow in an arm, [we were able] to simulate temporary pulselessness.
We were able to do that and then put our watches on the user at the same time to ensure that our algorithm was detecting that [loss of] pulse when it occurred.
We actually worked with stunt actors to induce pulselessness and simulate a fall within a reasonable timeframe to see if it was still able to detect a loss of pulse in those scenarios.
Shi: We're very fortunate at Google to have great team members who are familiar with the process and are regulatory experts. Receiving U.S. FDA clearance involves a rigorous process to ensure the quality and understandability of the products that come through.
So really, it's taking a look at the U.S. FDA established regulatory frameworks and regulations, knowing what we have to conduct in terms of necessary performance testing, what we have to show to prove that the feature is doing what it [says], and in particular, that it's understandable to users who choose to use the product.
Shi: The biggest thing that we inform users about, essentially during onboarding, is that it's only meant to detect an immediate loss of pulse. So it's not meant to diagnose or treat any medical conditions, and it's not meant to be a feature that gives you a pre-warning of any health condition.
That's a really important distinction that we do try to make as clear as possible within the product itself, so that you don't change any health regimens, etc., and you don't change anything that you've heard from medical professionals. As always, go to your healthcare professional to discuss your well-being and what's best for you.
Shi: It's something we can't go into detail about at the moment. We have to look at both the hardware that's available on the older Pixel Watches and see if it's possible.
Also, we have to ensure that there is hardware equivalency on each of the different devices. So we have to make sure on the older Pixel devices, if we were to do [Loss of Pulse Detection], that it still performs as expected within the guidelines that we set.
Shi: Our top priority when we released this feature was to make sure that it maintains its quality and does what it says it does within the guiding principles that we have. What we anticipate is that, as new Pixel Watches are released, it will be available on them.
Of course, it's going to be a hardware-by-hardware validation. We would like to make it available as widely as we possibly can, so that's what we're going to try to do.
Shi: I think this is definitely speculation and subjective, but I think in the tech world, people are always looking at other competitors and trying to close the gap or match different features. So I wouldn't be surprised if that's something that people did.
In some ways, I think for our team, this would be a good thing — with safety in particular — if other competitors started trying to copy features. I think as long as everyone maintains high quality, of course, then it's not necessarily a bad thing.
But yes, I think it's fair to assume that people are looking at it and will attempt to copy it.
Shi: We're always looking at helping users get connected with help if they aren't able to themselves. We know emergencies, hopefully, are a bit of a rare event in users' daily lives, but there could be other scenarios where users may feel unsafe.
So, one of our existing features is a Safety Check. When users are going out for a run or going out for a hike and they want that extra peace of mind, they can start a Safety Check, and we can check in with them, and then if they don't respond, we can automatically share their location and reason and context with their emergency contacts.
That's an existing feature, and it's an example of the things we're thinking about on the safety side. We're looking across the spectrum, from emergencies to daily use cases, at how we can help and how we can deliver a little bit more peace of mind in your daily life.