Google Is Adding a Battery Health Menu to the Pixel 8a, but Not the Pixel 8 Pro
Google says the tool will not be available on older Pixel devices, including the Pixel 8 and Pixel 8 Pro, flagship-level phones released just a few months before the budget 8a. Users of those earlier models will therefore not be able to check their battery's health through Google's own system.
The Battery Health menu appears in the Settings app on supported devices, provided they are running the latest Android 16 beta. As the name suggests, it shows the phone's current battery capacity as a percentage of its original capacity.
It also gives tips for extending battery life, such as turning on Adaptive Charging or limiting the maximum charge to 80%. This tool has been a standard part of iPhones since 2018, even on older models like the iPhone 6, but Google's version will only be available on its newest hardware.
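The figure the menu reports is a simple ratio. As a hedged sketch of that arithmetic (the function name and the capacity numbers below are hypothetical illustrations, not Google's implementation):

```python
def battery_health_percent(current_mah: float, design_mah: float) -> int:
    """Battery health: current full-charge capacity as a share of design capacity."""
    if design_mah <= 0:
        raise ValueError("design capacity must be positive")
    return round(100 * current_mah / design_mah)

# Hypothetical example: a cell that now holds 4,100 mAh of a 4,500 mAh design capacity
print(battery_health_percent(4100, 4500))  # 91
```

A reading in the low 90s after a year or two of use is typical; the 80% charge limit mentioned above exists precisely to slow this decline.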
Overall, this is a bad move by Google. I hope the company brings the feature to the flagships soon, or at least explains exactly why it works on the budget device but not on them.

Related Articles


Business Upturn
30 minutes ago
Data Science Course for FAANG Interview Prep 2025 – Data Scientist Jobs at Google, Amazon, Meta, Apple, Netflix (Updated)
Santa Clara, July 26, 2025 (GLOBE NEWSWIRE) — In 2025, AI-driven data analytics is transforming industries by enabling real-time decision-making, predictive modeling, and automation. Companies like Google and SAS are leading this revolution with platforms like BigQuery and SAS Viya, which integrate AI to process complex data and support advanced analyses in real time. This shift underscores the growing demand for professionals skilled in AI and data science, particularly in roles that require navigating and leveraging these advanced tools. For more information, visit:

Interview Kickstart is at the forefront of preparing professionals for this evolving landscape. As a leading upskilling platform, IK offers a comprehensive Data Science Course designed by FAANG+ experts. The program equips learners with the skills needed to excel in data science roles, focusing on areas such as data structures, algorithms, system design, and technical program management.

The course is structured to provide a deep understanding of data science fundamentals and their practical applications. Over the first three weeks, learners explore the principles of designing scalable and efficient systems, a crucial skill for data scientists working with large datasets and complex algorithms. For the next six weeks, the course delves into managing technical projects, emphasizing coordination between data science teams and other stakeholders to ensure successful project delivery. Participants then focus on a specific technical domain, allowing them to tailor their learning to areas such as machine learning, AI, or big data analytics. The course also includes dedicated career coaching, with guidance on resume building, LinkedIn profile optimization, personal branding, and behavioral interview preparation, ensuring learners are well-prepared for job applications and interviews.

A typical week at IK involves a blend of foundational content, live sessions, and practical exercises.
On Thursdays, learners receive high-quality videos and course materials covering fundamental concepts and case studies. Sundays feature four-hour online live sessions that apply these concepts to real-world scenarios, including mini mock interviews with live feedback from Tier-1 instructors. From Monday to Wednesday, participants work on practice problems and case studies, applying the concepts learned and engaging in live doubt-solving sessions with FAANG+ instructors. Daily, learners have 1:1 access to instructors for personalized coaching and solution walkthroughs.

The program also includes up to 15 mock interviews with hiring managers from top-tier companies like Google and Apple. These domain-specific interviews provide detailed, personalized feedback, helping learners identify and work on improvement areas. The transparent, non-anonymous format ensures a realistic interview experience.

In the context of the rapidly evolving AI-driven data analytics landscape, Interview Kickstart's Data Science Course offers a structured and comprehensive pathway for professionals aiming to enhance their skills and secure roles in top tech companies. By focusing on both technical proficiency and career development, IK ensures that learners are well-equipped to navigate and succeed in the dynamic field of data science. For more information, visit:

About Interview Kickstart

Founded in 2014, Interview Kickstart is a premier upskilling platform empowering aspiring tech professionals to secure roles at FAANG and top tech companies. With a proven track record and over 20,000 successful learners, the platform stands out with its team of 700+ FAANG instructors, hiring managers, and tech leads, who deliver a comprehensive curriculum, practical insights, and targeted interview prep strategies.
Offering live classes, 100,000+ hours of pre-recorded video lessons, and 1:1 sessions, Interview Kickstart ensures flexible, in-depth learning along with personalized guidance for resume building and LinkedIn profile optimization. The holistic support, spanning 6 to 10 months with mock interviews, ongoing mentorship, and industry-aligned projects, equips learners to excel in technical interviews and on the job.

###

For more information about Interview Kickstart, contact the company here:

Interview Kickstart
Burhanuddin Pithawala
+1 (209) 899-1463
[email protected]
4701 Patrick Henry Dr Bldg 25, Santa Clara, CA 95054, United States


Forbes
2 hours ago
Tea App Breach Reveals Why Web2 Can't Protect Sensitive Data
Web2 failure exposes Tea App users' sensitive data. A dating app built to empower women and marginalized genders has now put them at risk. Tea, the viral safety-focused app that lets users anonymously review men they have dated, has suffered a major data breach. Sensitive user data including photos, government IDs, and chat logs was exposed and later shared on the message board 4chan.

According to 404 Media, the breach was caused by a misconfigured Firebase database, a centralized backend platform maintained by Google. The leaked data included full names, selfies, driver's licenses, and sensitive messages from within the app. Many of these files were uploaded during identity verification processes and were never intended to be public. Tea confirmed the breach and said the data came from a two-year-old version of the app, though it's unclear whether users were ever notified of this risk during sign-up. For many users, however, that explanation offers little comfort. Trust was broken, and it was trust the platform had sold as its core value.

What is Tea?

Tea launched in 2023 and quickly gained attention for its bold concept. The app allows women, nonbinary people, and femmes to post anonymous reviews of men they have dated. These posts can include green flag or red flag labels along with identifying details like first names, age, city, and photo. It also offered tools like reverse image searches, background checks, and AI-powered features such as 'Catfish Finder.' For a monthly subscription fee, users could unlock deeper insights. The app pledged to donate a portion of profits to the National Domestic Violence Hotline, branding itself as a safer space for navigating modern dating. At one point in July 2025, Tea reached the top of the Apple App Store. But beneath the growth was a fragile architecture.

A Breach That Breaks the Tea Mission

The Tea breach is not just a case of leaked data; it is a collapse of purpose.
A platform built for safety exposed the very identities it was meant to protect. Legal IDs. Facial recognition data. Personal messages. Tea marketed itself as a safe space where people could share vulnerable experiences without fear of retaliation. That trust was supposed to be a feature, not a liability. But in exposing the identities of people who likely signed up for the app under the promise of anonymity, the breach reversed the app's core mission.

It also reignited debate around the ethics of crowdsourced review platforms. While Tea's users may have had the best intentions, the lack of formal moderation or fact-checking raises significant legal concerns. Already, reports suggest the company receives multiple legal threats each day related to defamation or misuse. Now, with the breach, the legal stakes have escalated. And they may soon extend into privacy litigation, depending on which jurisdictions impacted users reside in.

Tea and Web2's Fragility

At the heart of this failure is a familiar problem in consumer tech: reliance on Web2 infrastructure. Firebase, while powerful and scalable, is a centralized backend system. When a problem occurs, users have no control over what is exposed or how quickly it is contained. This was the foundation Tea chose, despite the known risks of centralized data storage. Web2 models store user data in app-controlled databases. This may work for e-commerce or gaming, but with private messages and government-issued IDs, the risks multiply. Once exposed, that kind of information is almost impossible to fully retrieve or erase, disappearing into the vastness of cyberspace.

The Tea incident echoes previous Web2 failures. In 2015, the Ashley Madison breach exposed the names and email addresses of users on a platform designed for private affairs. The consequences ranged from public shaming to blackmail.
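The misconfiguration class 404 Media describes is well known to security auditors: a Firebase Realtime Database with open rules will answer an unauthenticated GET on its `/.json` endpoint with data, while locked-down rules return a 401 "Permission denied" error. A minimal sketch of interpreting such a probe (the helper and project name are illustrative assumptions, not part of any real audit tool or Tea's actual setup):

```python
import json

def looks_publicly_readable(status_code: int, body: str) -> bool:
    """Interpret an unauthenticated GET on https://<project>.firebaseio.com/.json.
    Open rules return HTTP 200 with the stored data; locked rules return
    401 with {"error": "Permission denied"}. Illustrative helper only."""
    if status_code != 200:
        return False
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return False
    return payload is not None  # an open but empty database returns null
```

The fix is equally mechanical: rules that require authentication, so the same probe returns 401 instead of user records.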
While the scale was different, the pattern was the same: a platform promising discretion, but failing to secure its core value proposition.

Web2 Tools of Tea & Web3 Upgrades

The incident reopens a critical discussion around digital identity and decentralization. Web3 advocates have long argued that user-controlled identity systems—such as those built with zero-knowledge proofs, decentralized identifiers (DIDs), or blockchain-based attestations—can prevent precisely this kind of disaster. Had Tea used a self-sovereign identity system, users could have verified themselves without ever uploading their actual ID to a centralized database. They could have shared attestations from trusted issuers or community verification methods instead. These systems remove the need to store vulnerable personal files, drastically lowering risk in the event of a breach. Projects like BrightID and Proof of Humanity already explore these models by enabling anonymous but verifiable identities. Though still early-stage, these systems offer a glimpse of a safer future. Ultimately, this could help reduce single points of failure. Web3's architecture, where users control their credentials and data flows through distributed systems, provides a fundamentally different risk profile that may be better suited for sensitive social platforms.

Web2 Failures Create Web3 Urgency

The Tea breach also poses real-world risks beyond the app itself. Exposed IDs and selfies could be used to open fraudulent crypto exchange accounts, commit SIM-swap attacks, or bypass Know Your Customer (KYC) checks on blockchain platforms. As digital assets grow more accessible, the overlap between privacy, dating, and financial fraud will only increase. This could also create reputational damage for users outside of Tea. If their names or images are associated with unverifiable accusations, even falsely, those records could be copied or weaponized in future contexts. Search engines have long memories.
So do blockchain crawlers. For regulators and technologists, the Tea breach offers a blueprint of what not to do. It also poses a serious question: should platforms that deal in high-sensitivity content be allowed to launch without structural privacy safeguards? More pointedly, can any platform promise safety without first rethinking the assumptions of its data model?

What's Next for Tea & Other Web2 Tool Users

For now, Tea says it is reviewing its security practices and rebuilding user trust. But the breach highlights a larger industry problem. Platforms that promise anonymity and empowerment must treat data protection as a structural principle, not an optional feature. This incident may become a case study in why Web2 safety tools are insufficient for modern risks. Whether for dating, reputation, or whistleblowing, the next generation of platforms may need to be decentralized from the start. Tea promised safety. What it delivered was a case study in how trust breaks down in the Web2 era.
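The attestation idea the article points to can be sketched in a few lines. In this toy version a single trusted issuer both signs and verifies with one shared secret (real systems use asymmetric signatures, DIDs, and independent verifiers); the point it illustrates is that the platform stores only the claim and its signature, never the ID document itself:

```python
import hmac
import hashlib

ISSUER_KEY = b"issuer-secret"  # hypothetical shared key; real systems use asymmetric keys

def issue_attestation(claim: str) -> str:
    """The issuer inspects the user's ID privately, then signs only the claim."""
    return hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()

def verify_attestation(claim: str, tag: str) -> bool:
    """The platform checks the signature without ever storing the ID document."""
    expected = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

tag = issue_attestation("user:alice;claim:over18")
print(verify_attestation("user:alice;claim:over18", tag))    # True
print(verify_attestation("user:mallory;claim:over18", tag))  # False
```

A breach of a platform built this way leaks signed claims, not driver's licenses, which is the risk reduction the self-sovereign identity argument rests on.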


Forbes
2 hours ago
Google Warns This Email Means Your Gmail Is Under Attack
You do not want to get this email. With all the cybersecurity attacks compromising smartphones and PCs, it would be easy to conclude there's little you can do to stay safe. But the truth is very different: most attacks are easily prevented with a few basic safeguards and some know-how.

So it is with the FBI's two warnings this week. The first covered a resurgence of the Phantom Hacker attacks, which trick PC users into installing rogue apps. The second covered a raft of fake Chrome installs and updates that provide initial access for ransomware. If you simply avoid installing apps from links in this way, you will steer clear of those attacks. It's the same with a new Amazon impersonation attack that has surged 5,000% in just two weeks: don't click links in messages, even if they seem to come from Amazon.

And now Gmail attack warnings are turning up again on social media, which will likely frustrate Google, because its advice has been clear but is not yet landing with users. The latest Gmail warnings come courtesy of a refreshed EasyDMARC article covering the 'no-reply' attacks from earlier this year, in which attackers hijacked 'no-reply@' addresses to trick users into clicking links and giving up their Google account sign-in credentials.

Here again the advice is very simple. It shouldn't matter whether an email appears to come from Google: if it links to a sign-in page, it's an attack. Period. And that means any email that seems to come from Google but has a sign-in link must be deleted. 'Sometimes,' Google warns, 'hackers will copy Google's 'Suspicious sign-in prevented' emails and other official Google emails to try to steal someone's account information.' But the company tells all account holders that 'Google emails will never take you to a sign-in page. Authentic emails sent from Google to your Google Account will never ask you to sign in again to the account they were sent to.' It's as simple as that.
Similarly, Google will never 'ask you to provide your password or other sensitive information by email or through a link, call you and ask for any forms of identification, including verification codes, send you a text message directing you to a sign-in page, or send a message via text or email asking you to forward a verification code.' With that in mind, you should not fall victim to these Google impersonation attacks, and if you stick to the basic rules on installs, links and attachments, then you'll likely stay safe from most of the other ones as well.
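Google's rule (mail that claims to be from Google yet links to a sign-in page is an attack) is mechanical enough to sketch as a filter. This is a crude illustrative heuristic under assumed patterns, not Google's actual detection logic, and a real filter would also check authentication headers such as DMARC results:

```python
import re

SIGN_IN_WORDS = re.compile(r"sign[\s-]?in|log[\s-]?in|verify your account", re.I)
ANY_LINK = re.compile(r"https?://\S+")

def looks_like_google_phish(sender: str, body: str) -> bool:
    """Flag mail that claims a Google sender yet pushes a sign-in link.
    Per the guidance quoted above, genuine Google mail never links to a
    sign-in page, so the combination itself is the red flag."""
    claims_google = "google" in sender.lower()
    has_signin_link = bool(ANY_LINK.search(body)) and bool(SIGN_IN_WORDS.search(body))
    return claims_google and has_signin_link
```

The useful insight is that no content analysis is needed beyond the combination: the sender claim plus the sign-in link is disqualifying on its own.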