The next Made by Google event is on August 20
Between the company's invite and what came out of last year's event, expect a refresh of nearly the entire Pixel line. As for what the "and more" bit could entail, recent rumors suggest Google is working on a proper response to Apple's MagSafe tech, dubbed Pixelsnap. Android manufacturers have been slow to adopt the Qi2 wireless charging standard, but with the upcoming Pixel 10 the company appears to be working on a host of magnetic Qi2 accessories, including a new charging stand. As always, be sure to visit Engadget on the day of the event, as we'll have a liveblog of the entire proceedings.

Related Articles

Engadget
10 minutes ago
Engadget Podcast: Ancestra director Eliza McNitt defends AI as a creative tool
Eliza McNitt is no stranger to new media. Her 2017 project, Fistful of Stars, was a fascinating look at stellar birth in virtual reality, while her follow-up, Spheres, explored black holes and the death of stars. Now, with her short film Ancestra, McNitt has tapped into Google's AI tools to tell a deeply personal story. Working with Google DeepMind and director Darren Aronofsky's studio Primordial Soup, McNitt used a combination of live-action footage and AI-generated media to tell the story of her own traumatic birth. The result is an uncanny dramatic short where the genuine emotion of the live-action performance wrestles against the artificiality of AI imagery.

The film begins when a routine prenatal care appointment for the lead (Audrey Corsa, playing McNitt's mother) turns into an emergency delivery. From that point on, we hear her opine on how her child and all living things in the universe are connected, evoking the poetic nature of Terrence Malick's films. We jump between Corsa's performance, AI footage and macro- and micro-photography. In the end, Corsa holds a baby that was inserted by Google's AI, using prompts that make it look like McNitt as an infant.

There's no escaping the looming shadow of Google's AI ambitions. This isn't just an art film; it's an attempt at legitimizing the use of AI tools through McNitt's voice. That remains a problem when Google's models, including Veo and other technology from DeepMind, have been trained on pre-existing content and copyrighted works. A prestigious short coming from Darren Aronofsky's production studio isn't enough to erase that original sin.

"I was challenged to create an idea that could incorporate AI," McNitt said in an interview on the Engadget Podcast. "And so for me, I wanted to tell a really deeply personal story in a way that I had not been able to before... AI really offered this opportunity to access these worlds where a camera cannot go, from the cosmos to the inner world of being within the mother's womb."

When it comes to justifying the use of AI tools, which at the moment can credibly be described as plagiaristic technology, McNitt says that's a decision every artist will have to make for themselves. In the case of Ancestra, she wanted to use AI to accomplish difficult work, like creating a computer-generated infant that looked like her, based on photos taken by her father. She found that to be more ethical than bringing in a real newborn, and the results more convincing than a doll or something animated by a CG artist.

"I felt the use of AI was really important for this story, and I think it's up to every artist to decide how they wanna use these tools and define that," she said. "That was something else for me in this project where I had to define a really strong boundary where I did not want actors to be AI actors, [they] had to be humans with a soul. I do not feel that a performance can be recreated by a machine. I do deeply and strongly believe that humanity can only be captured through human beings. And so I do think it's really important to have humans at the center of the stories."

To that end, McNitt also worked with dozens of artists to create the sound, imagery and AI media in Ancestra.
There's a worry that AI video tools will let anyone plug in a few prompts and build projects out of low-effort footage, but McNitt says she closely collaborated with a team of DeepMind engineers who crafted prompts and sifted through the results to find the footage she was looking for. (We ran out of time before I could ask her about the environmental costs of using generative AI, but at this point we know it requires a significant amount of electricity and water, both for training models and for running them in the cloud.)

"I do think, as [generative AI] evolves, it's the responsibility of companies to not be taking copyrighted materials and to respect artists and to set those boundaries, so that artists don't get taken advantage of," McNitt said, when asked about her thoughts on future AI models that compensate artists and aren't built on stolen copyrighted works. "I think that that's a really important part of our role as humans going forward. Because ultimately, these are human stories for other human beings. And so it's, you know, important that we are at the center of that."


Android Authority
10 minutes ago
Google's AI Mode could soon get an important Gemini feature (APK teardown)
TL;DR

- Google is working on new ways to access your AI Mode chat history inside the Google app for Android.
- AI Mode chat history could soon be grouped under the Activity tab, alongside your regular search history and bookmarks.
- Google could also enable an option to share AI Mode conversations, which is currently not possible.

Google has been fervently promoting Search's AI Mode alongside Gemini. After recently adding AI Mode to Circle to Search, Google Lens and even Chrome for Android, it could soon enhance the search-centric mode with a key Gemini feature.

An APK teardown helps predict features that may arrive on a service in the future based on work-in-progress code. However, it is possible that such predicted features may not make it to a public release.

We recently learned that Google is revamping the way we access search history, specifically for AI Mode. In addition to the current button placement at the top right of AI Mode, both on its main screen and inside an ongoing chat, Google may soon merge that history with the standard Search history and bookmarks under the Activity tab. We spotted these changes hidden in version 16.27.69 beta of the Google app on Android.

[Screenshots: the current ways to access AI Mode history, and the upcoming Activity tab placement.]

As you can see above, the modified interface shows only the three most recent searches. The remaining list is tucked away on a separate page that can be accessed by tapping the ( > ) button.

Besides grouping AI Mode history under the Activity tab, Google may also revamp the expanded list. My colleague AssembleDebug informed me that while the current AI Mode shortcut directs you to a web page, Google might soon add a dedicated sidebar within the Google app's section for the mode.

[Screenshots: the current and upcoming AI Mode history views, and upcoming link-sharing options.]

In addition to making it easier to access your conversation history, this sidebar could simplify the process of deleting conversations, even without exiting an ongoing search.

Alongside changing how we access the interface, Google could also bring a key change that makes it easy to share your AI Mode conversations. In the beta build, we enabled a 'Manage Public Links' option, which allows you to share your chats using links or export them to Gmail or Google Docs, something that is currently not possible. You can, however, already export results from both Google's AI Overviews and Gemini.
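If that link-sharing option ships, the client side would presumably look like any other Android share flow: the app obtains a public URL for the conversation and hands it to the system share sheet, which is also how exports to Gmail or Docs would surface. Here's a minimal Kotlin sketch of that standard pattern; the function name, URL and chooser title are hypothetical, not taken from the Google app.

```kotlin
import android.content.Context
import android.content.Intent

// Hypothetical sketch: share a conversation's public link through
// Android's standard share sheet. The URL passed in is made up.
fun shareConversationLink(context: Context, publicUrl: String) {
    val sendIntent = Intent(Intent.ACTION_SEND).apply {
        type = "text/plain"
        putExtra(Intent.EXTRA_TEXT, publicUrl)
    }
    // Let the user pick Gmail, Docs, or any other share target.
    context.startActivity(Intent.createChooser(sendIntent, "Share AI Mode chat"))
}
```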

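As for methodology, APK teardowns like the one above generally involve decoding a beta build (with a tool such as apktool) and searching the decoded resources for strings and flags that hint at unreleased features. Here is a toy Kotlin sketch of that search step; the directory name and search terms are illustrative, not the actual identifiers in the Google app.

```kotlin
import java.io.File

// Toy illustration of teardown string-hunting: scan a decoded APK
// (e.g. apktool output) for feature-flag-like strings. The path and
// search terms below are made up for the example.
fun main() {
    val decodedApkDir = File("google-app-16.27.69-decoded/res")
    val searchTerms = listOf("ai_mode", "public_link", "activity_tab")

    decodedApkDir.walkTopDown()
        .filter { it.isFile && it.extension == "xml" }
        .forEach { file ->
            file.readLines().forEachIndexed { index, line ->
                if (searchTerms.any { line.contains(it, ignoreCase = true) }) {
                    println("${file.name}:${index + 1}: ${line.trim()}")
                }
            }
        }
}
```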

Android Authority
40 minutes ago
Scientists prove Android Earthquake Alerts system actually works pretty well
TL;DR

- Google's Android Earthquake Alerts system crowdsources data from smartphone accelerometers to detect seismic activity.
- The system has detected over 18,000 earthquakes and alerted millions of people across nearly 100 countries.
- Users receive crucial seconds of warning, enabling them to take protective action before the shaking begins.

Google introduced Earthquake Alerts for Android devices back in 2020 and expanded the feature to cover all US states in September 2024. More recently, Google also expanded Android Earthquake Alerts to Wear OS smartwatches to warn you of seismic activity right on your wrist. But have you ever wondered if these alerts actually work? Do they do any good if they alert you a few seconds before an earthquake? As it turns out, crowdsourcing data from millions of Android smartphones to create the world's most extensive earthquake detection system is a pretty good idea.

With the Android Earthquake Alerts system, Google turned the accelerometers in Android smartphones into a powerful, pocket-sized earthquake detection network. Over the last four years, the system has detected over 18,000 earthquakes and sent alerts to millions of people in nearly 100 countries, according to Google and the scientists behind a study published in the online version of one of the world's top peer-reviewed academic journals. These preemptive alerts give people crucial moments to move away from dangerous objects and positions and take cover before the earthquake hits their location.

How does the Android Earthquake Alerts system work?

All Android phones come with an accelerometer, which is conventionally used to detect changes in motion and orientation to provide features like auto-rotation. As it turns out, the accelerometer can also detect the ground shaking from an earthquake. This data and the user's coarse location are sent to Google's earthquake detection server, which analyzes data from many phones in the same area to confirm that an earthquake is happening and estimate its location and magnitude. The system then sends out alerts to users. These can either be a BeAware alert for estimated light shaking, or a TakeAction alert, which takes over the phone's screen and plays a loud sound for estimated stronger shaking.

How effective is the Android Earthquake Alerts system?

Google says it has issued alerts for over 2,000 earthquakes, culminating in 790 million alerts sent to phones worldwide. The system has expanded the number of people with access to an earthquake early warning system from 250 million in 2019 to over 2.5 billion in 2025.

During the magnitude 6.7 earthquake in the Philippines in November 2023, Google sent out the first alert just 18.3 seconds after the quake started. People closest to the epicenter received up to 15 seconds of warning, while those farther away got up to a minute. In this instance, nearly 2.5 million people were alerted before they could feel the shaking. Similarly, for the magnitude 5.7 earthquake in Nepal in November 2023, the first alert was issued 15.6 seconds after the earthquake; people who experienced moderate to strong shaking had a warning time of 10 to 60 seconds, with over 10 million alerts delivered. And during Turkey's magnitude 6.2 earthquake in April 2025, the first alert was issued eight seconds after the quake began; people who experienced moderate to strong shaking had a warning time of a few to 20 seconds, and over 16 million alerts were delivered.
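To make the client-side idea described above concrete, here is a minimal Kotlin sketch of the concept: listen to the phone's accelerometer and flag readings that deviate sharply from gravity. Google's actual on-device detector and thresholds are not public, so the structure and numbers here are illustrative assumptions only.

```kotlin
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import kotlin.math.abs
import kotlin.math.sqrt

// Illustrative only: Google's real detector and thresholds are not
// public. This simply flags strong deviations from resting gravity.
class ShakeDetector(private val onCandidateShaking: () -> Unit) : SensorEventListener {

    // Hypothetical threshold (m/s^2) for "unusual" ground motion.
    private val thresholdMs2 = 1.5f

    override fun onSensorChanged(event: SensorEvent) {
        if (event.sensor.type != Sensor.TYPE_ACCELEROMETER) return
        val (x, y, z) = event.values
        // Magnitude of total acceleration; at rest this is ~9.81 m/s^2.
        val magnitude = sqrt(x * x + y * y + z * z)
        val deviation = abs(magnitude - SensorManager.GRAVITY_EARTH)
        if (deviation > thresholdMs2) {
            // A real system would report this (with coarse location) to a
            // server that looks for the same signal on many nearby phones.
            onCandidateShaking()
        }
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}
```

A real deployment would register the listener through SensorManager like any other sensor client; confirming the quake and estimating its location and magnitude happen server-side, by correlating reports from many phones.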
The animation below shows phones detecting shaking as yellow dots. The yellow circle is the earthquake's faster-moving P-wave, while the red circle is the slower, more damaging S-wave. A timer runs in the upper-left corner, and you can watch a wave of phones light up as the yellow circle passes over them, before the red circle arrives, giving people precious seconds to take cover before the tremors hit.

Google has also continuously improved its magnitude estimation, with the median absolute error of the first magnitude estimate dropping from 0.5 to 0.25, making its accuracy comparable to or even better than that of traditional seismic networks. Google also surveyed over 1.5 million people (1,555,006 responses), and 85% found the alerts very helpful. Even when people don't feel the shaking, they largely appreciate the warning to be alert to potential hazards. People who received a TakeAction alert commonly took action (namely 'Drop, Cover, and Hold On'), further validating the alerts' utility.

Have you had an experience with the Android Earthquake Alerts system? What happened? Was the alert helpful, and were you able to get to safety? Please share your experience with us in the comments below!
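Those warning windows follow from simple wave arithmetic: detection rides on the fast P-wave, while the damaging shaking arrives with the slower S-wave, so warning time grows with distance from the epicenter. The Kotlin sketch below runs that back-of-the-envelope calculation; the wave speeds, detection radius and processing latency are typical illustrative assumptions, not values from Google's system or the study.

```kotlin
// Back-of-the-envelope warning-time arithmetic. All numbers here are
// typical illustrative values, not figures from Google's system.
fun main() {
    val pWaveSpeed = 6.0       // km/s: fast but weak primary wave
    val sWaveSpeed = 3.5       // km/s: slower, damaging secondary wave
    val detectRadiusKm = 20.0  // assumed distance of the first detecting phones
    val processingS = 5.0      // assumed server confirmation + push latency

    // The alert can go out once the P-wave has reached enough phones
    // and the server has confirmed the event.
    val alertDelayS = detectRadiusKm / pWaveSpeed + processingS

    for (distanceKm in listOf(30.0, 60.0, 120.0)) {
        val sWaveArrivalS = distanceKm / sWaveSpeed
        val warningS = (sWaveArrivalS - alertDelayS).coerceAtLeast(0.0)
        println("%.0f km from epicenter: about %.0f s of warning".format(distanceKm, warningS))
    }
}
```

Under these assumptions, phones very close to the epicenter fall inside a blind zone where the S-wave outruns the alert, which matches the pattern in the examples above: the farther you are from the epicenter, the more warning you get.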