
A fifth Pixel 6a just caught fire, and it seems like Google's update isn't enough
TL;DR
- Another Google Pixel 6a has caught fire, according to a Reddit user.
- The user said their phone caught fire despite installing Google's mandatory update to combat battery heating.
- This also comes after Australia's consumer watchdog issued a notice about Pixel 6a battery overheating.
We've reported on several Google Pixel 6a battery fires earlier this year. Those incidents prompted Google to release a mandatory update for some units earlier this month. The update is supposed to dramatically cut battery capacity and charging speeds in a bid to reduce overheating. Unfortunately, someone has now reported a Pixel 6a battery fire even after installing the update.
Redditor footymanageraddict reports that their Pixel 6a caught fire while they were sleeping on Saturday (July 26):
I got woken up with a horrible smell and a loud noise. Fire had already started and i managed to throw the phone on the tile floor pulling it by the cord. The phone was sitting less than 40 cms away from my head on my nightstand. Sheets caught on fire. My ac (a floor unit) had damages (sic) on its surface from the fire. My throat hurt the whole day from the fumes i inhaled (My room door was closed because of the ac being on so i basically breathed the smoke for a longer time than i would want trying to stop a fire from spreading).
The Redditor said they had been charging the phone with a Steam Deck's 45W charger. They also posted several photos showing the aftermath of the fire. The images show an extensively burned Pixel 6a, including a melted screen, a partially melted case, and charred internals.
What's particularly concerning is that the user said they had installed Google's 'Battery Performance Program' update, which was supposed to reduce the risk of battery overheating in the first place. Google also offers a free battery replacement for affected devices (via mail or walk-in), but the user said walk-in replacements weren't available in their country.
This might be the fifth Pixel 6a to catch fire in the last 12 months. However, the previous four incidents occurred before Google's 'Battery Performance Program' update.
This latest incident suggests that the update doesn't go far enough, and that Google needs to perform a mandatory battery replacement or a full device recall. It's also possible that this particular battery was already degraded past the point where a software update could help. However, the user didn't notice any battery swelling or other telltale warning signs.
This also comes after Australia's consumer watchdog issued a warning about the Pixel 6a on July 25. The Australian Competition and Consumer Commission (ACCC) warned that the phone's battery may overheat, but added that users didn't have to return their devices. Notably, the warning came just a day before this latest battery fire.
We've asked Google for comment on this latest incident and whether it will take additional action (e.g., a mandatory recall or battery replacement). We'll update this article when the Pixel maker responds. Either way, these battery problems are making it harder to recommend mid-range Pixel phones.
Got a tip? Talk to us! Email our staff at news@androidauthority.com. You can stay anonymous or get credit for the info; it's your choice.

Related Articles


Android Authority
22 minutes ago
Don't worry, Samsung's Android XR headset is still launching this year
TL;DR
- During its most recent earnings call, Samsung re-confirmed that its Project Moohan Android XR headset is launching 'this year.'
- Samsung has previously hinted at a 2025 release date, and this is the latest confirmation that it's still on track.
- An earlier report suggested Project Moohan could be released as early as October.

Samsung has had a busy 2025, launching the Galaxy S25 series at the start of the year and most recently releasing the Galaxy Z Fold 7, Z Flip 7, and Galaxy Watch 8 series. But there's another Samsung gadget that's still on track to be released this year, and it's arguably the company's most interesting: its Project Moohan Android XR headset.

Samsung has remained pretty tight-lipped about Project Moohan since the headset was first teased in January, though it has repeatedly insisted that the headset is launching in 2025. But as the months roll on and there's still no sight of it, doubt has begun to creep in. Thankfully, Samsung is committed to getting its Android XR headset on store shelves before the end of 2025. In the company's latest earnings call on July 30, Samsung confirmed that Project Moohan will still launch 'this year.' The full quote reads as follows:

'Meanwhile, we are also preparing to introduce next-generation innovative products, including our XR headset and TriFold smartphone this year. Our XR headset, which seamlessly integrates the XR ecosystem developed in partnership with Google as well as multimodal AI capabilities will serve as a key stepping stone in solidifying our leadership in future technologies and further expanding the Galaxy ecosystem.'

Although Samsung didn't get specific about when 'this year' we'll see Project Moohan, previous reporting has suggested it could be sooner than you might expect. In June, one report claimed that Samsung would hold a Project Moohan launch event on September 29 this year. The headset would then reportedly launch on October 13 in South Korea, with availability in other markets (such as the US) following at a later date.

For a device set to launch within the next five months, there's a lot we still don't know about Samsung's first Android XR headset. What kind of first-party XR experiences is Samsung crafting for it? How long will the battery last? What's the display resolution? And, perhaps most importantly, how much will it cost? Oh, and what's it actually going to be called? The good news is that we should have all of those answers sooner rather than later.


Fast Company
22 minutes ago
How Google is working with Hollywood to bring AI to filmmaking
In filmmaking circles, AI is an ever-present topic of conversation. While AI will change filmmaking economics and could greenlight more experimental projects by reducing production costs, it also threatens jobs, intellectual property, and creative integrity—potentially cheapening the art form. Google, having developed cutting-edge AI tools spanning script development to text-to-video generation, is positioned as a key player in AI-assisted filmmaking. At the center of Google's cinema ambitions is Mira Lane, the company's vice president of tech and society and its point person on Hollywood studio partnerships. I spoke with Lane about Google's role as a creative partner to the film industry, current Hollywood collaborations, and how artists are embracing tools like Google's generative video editing suite Flow for preproduction, previsualization, and prototyping. This interview has been edited for length and clarity.

Can you tell me about the team you're running and your approach to AI in film?

I run a team called the Envisioning Studio. It sits within this group called Technology and Society. The whole ambition around the team is to showcase possibilities. . . . We take the latest technologies, latest models, latest products and we co-create with society, because there's an ethos here that if you're going to disrupt society, you need to co-create with them, collaborate with them, and have them have a real say in the shape of the way that technology unfolds. I think too often a lot of technology companies will make something in isolation and then toss it over the fence, and then various parts of society are the recipients of it and they're reacting to it. I think we saw that with language models that came out three years ago or so, where things just kind of went into the industry and into society and people struggled with engaging with them in a meaningful way.

My team is very multidisciplinary. There are philosophers on the team, researchers, developers, product thinkers, designers, and strategists. What we've been doing with the creative industry, mostly film this year—last year we worked on music as well—is fairly large collaborations. We bring filmmakers in, we show them what's possible, we make things with them, we embed with them sometimes, we hear their feedback. Then they get to shape things like Flow and Veo that have been launched. I think that we're learning a tremendous amount in that space because anything in the creative and art space right now has a lot of tension, and we want to be active collaborators there.

Have you been able to engage directly with the writers' and actors' unions?

We kind of work through the filmmakers on some of those. Darren Aronofsky, when we brought him in, actually engaged with the writers' unions and the actors' unions to talk about how he was going to approach filmmaking with Google—the number of staff and actors and the way they were going to have those folks embedded in the teams, the types of projects that the AI tools would be focused on. We do that through the filmmakers, and we think it's important to do it actually in partnership with the filmmakers because it's in context of what we're doing versus in some abstract way. That's a very important relationship to nurture.

Tell me about one of the films you've helped create.

Four weeks ago at Tribeca we launched a short film called Ancestra, created in partnership with Darren's production company, Primordial Soup. It's a hybrid type of model where there were live-action shots and AI shots. It's a story about a mother and a baby who's about to be born, and the baby has a hole in its heart. It's a short about the universe coming together to help birth that baby and to make sure that it survives. It was based on a true story of the director being born with a hole in her heart.

There are some scenes that are just really hard to shoot, and babies—you can't have infants younger than 6 months on set. So how do you show an accurate depiction of a baby? We took photos from when she was born and constructed an AI version of that baby, and then generated it being held within the arms of a live actress as well. When you watch that film, you'll see these things where it's an AI-generated baby. You can't tell that it's AI-generated, but the scene is actually composed of half of it being live action, the other half being AI-generated.

We had 150 people, maybe close to 200, working on that short film—the same number of people you would typically have working on a [feature-length] film. We saw some shifts in roles and new types of roles being created. There may even be an AI unit that's part of these films. There's usually a CGI unit, and we think there's probably going to be an AI unit that's created as well.

It sounds like you're trying to play a responsible role in how this impacts creators. What are the fruits of that approach?

We want to listen and learn. It's very rare for a technology company to develop the right thing from the very beginning. We want to co-create these tools, because if they're co-created they're useful and they're additive and they're an extension and augmentation, especially in the creative space. We don't want people to have to contort around the technology. We want the technology to be situated relative to what they need and what people are trying to do. There's a huge aspect of advancing the science, advancing the latest and greatest model development, advancing tooling. We learn a lot from engaging with . . . filmmakers. For example, we launched Flow [a generative video editing suite] and as we were launching it and developing it, a lot of the feedback from our filmmakers was, 'Hey, this tool is really helpful, but we work in teams.' So how can you extend this to be a team-based tool instead of a tool that's for a single individual? We get a lot of really great feedback in terms of just core research and development, and then it becomes something that's actually useful. That's what we want to do. We want something that is helpful and useful and additive. We're having the conversations around roles and jobs at the same time.

How is this technology empowering filmmakers to tell stories they couldn't before?

In the film industry, they're struggling right now to get really innovative films out because a lot of the production studios want things that are guaranteed hits, and so you're starting to see certain patterns of movies coming out. But filmmakers want to tell richer stories. With the one that we launched at Tribeca, the director was like, 'I would never have been able to tell this story. No one would have funded it and it would have been incredibly hard to do. But now with these tools I can get that story out there.' We're seeing a lot of that—people generating and developing things that they would not have been funded for in the past, but now that gets great storytelling out the door as well. It's incredibly empowering.

These tools are incredibly powerful because they reduce the costs of some of the things that are really hard to do. Certain scenes are very expensive. You want to do a car chase, for example—that's a really expensive scene. We've seen some people take these tools and create pitches that they can then take to a studio and say, 'Hey, would you fund this? Here's my concept.' They're really good at the previsualization stage, and they can kind of get you in the door. Whereas in the past, maybe you brought storyboards in or it was more expensive to create that pitch, now you can do that pretty quickly.

Are we at the point where you can write a prompt and generate an entire film?

I don't think the technology is there where you can write a prompt and generate an entire film and have it land in the right way. There is so much involved in filmmaking that is beyond writing a prompt. There's character development and the right cinematography. . . . There's a lot of nuance in filmmaking. We're pretty far from that. If somebody's selling that, I think I would be really skeptical. What I would say is you can generate segments of that film that are really helpful, and [AI] is great for certain things. For short films it's really good. For feature films, there's still a lot of work in the process. I don't think we're in the stage where you're going to automate out the artist in any way. Nobody wants that necessarily. Filmmaking and storytelling is actually pretty complex. You need good taste as well; there's an art to storytelling that you can't really automate.

Is there a disconnect between what Silicon Valley thinks is possible and what Hollywood actually wants?

I think everybody thinks the technology is further along than it is. There's a perception that the technology is much more capable. I think that's where some of the fear is, actually, because they're imagining what this can do because of the stories that have been told about these technologies. We just put it in the hands of people and they see the contours of it and the edges and what it's good and bad at, and then they're a little less worried. They're like, 'Oh, I understand this now.'

That said, I look at where the technology was two years ago for film and where it is now. The improvements have been remarkable. Two years ago every [generated] film had six fingers and everything was morphed and really not there—there was no photorealism. You couldn't do live-action shots. And in two years we've made incredible progress. I think in another two years, we're going to have another big step change. We have to recognize we're not as advanced as we think we are, but also that the technology is moving really fast. These partnerships are important because if we're going to have this sort of accelerated technology development, we need these parts of our society that are affected to be deeply involved and actively shaping it so that the thing we have in two years is what is actually useful and valuable in that industry.

What kinds of scenes or elements are becoming easier to create with AI?

Anything that is complex that you tend to see a lot of, those types of things start to get easier because we have a lot of training data around that. You've seen lots of movies with car chases in them. There are scenes of the universe—we've got amazing photography from the Hubble telescope. We've got great microscopic photography. All of those types of things that are complicated and hard to do in real life, those you can generate a lot more easily because we have lots of examples of them and it's been done in the past. The ones that are hard are ones where you want really strong eye contact between characters, and where the characters are showing a more complex range of emotions.

How would you describe where we're at with the uptake of these tools in the industry?

I think that we're in a state where there's a lot of experimentation. It's kind of that stage where there's something new that's been developed, and what you tend to do when there's something new is you tend to try to re-create the past—what you used to do with [older] tools. We're in that stage where I think people are trying to use these new tools to re-create the same kinds of stories that they used to tell, but the real gem is when you jump past that and you do new types of things and new types of stories.

I'll give you one example. Brian Eno did a set of generative films; every time you went to the theater you saw a different version of that film. It was generated, it was different, it was unique. It still had the same backbone, but it was a different story every time you saw it. That's a new type of storytelling. I think we're going to see more types of things like that. But first we have to get through this phase of experimentation and understanding the tools, and then we'll get to all the new things we can do with it.


Android Authority
an hour ago
Motorola's next special edition looks extra flashy in new leak
TL;DR
- Motorola will launch a special edition Razr in collaboration with Swarovski.
- New renders provide a look at the special edition phone from a variety of angles.
- This collaboration is limited to the vanilla Razr.

A few weeks ago, we learned that Motorola was joining forces with jewelry maker Swarovski to create a special edition of the Razr 2025 (a.k.a. Razr 60). Days later, the company began officially teasing the collaboration, announcing plans to unveil the phone on August 5. So far, we've only seen one leaked render and a teaser video, but a new leak has given us a little more to chew on before the launch.

The previous leak provided a look at the front and back of the device in a folded state. In this leak, courtesy of YTECHB, the handset is shown folded, unfolded, and at different angles. As a result, this is our best look yet at the Swarovski Razr 2025.

According to the previous leak, this colorway will be called 'Ice Blue.' It's also expected that the collaboration will only include the base model Razr 2025, so you won't find an Ice Blue Razr Plus or Ultra. As this will only be a cosmetic change, there won't be any technical differences between the Swarovski Razr 2025 and the regular version. That means you're getting the same 3.6-inch cover display, 6.9-inch 120Hz LTPO AMOLED inner display, MediaTek Dimensity 7400X chip, 8GB/12GB of RAM, and 256GB/512GB of storage.

A crystal-studded phone isn't the end of this collaboration, however. Alongside the Ice Blue Razr, the company will also launch a light blue, crystal-studded version of the Moto Buds Loop earbuds.