Google Prepares An Android-Powered Advantage For The Pixel 10 Pro

Forbes, 26 April 2025

Pixel 9 Pro XL
Google's I/O Developer Conference takes place May 20-21 this year. There will no doubt be a push on the latest version of Android and the tools provided in the sixteenth major version of the mobile OS. We may also get confirmation of a potential late-Q2 launch date, earlier than last year's, which would benefit the upcoming Pixel 10 and Pixel 10 Pro handsets.
Android 16 is currently in public beta and is already well along its roadmap. The most recent release opened up support for a broader range of devices, although Google's own Pixel hardware remains the lead platform. Android Headlines' Stephen Schneck reports on the potential launch date for the full version:
"Android 16 could arrive on June 3, according to sources known to Android Headlines. If you'll remember, this year we saw Google release Android 15 to the AOSP on September 3, followed by availability of updates for Pixel phones over a month later, on October 15."
This early arrival of Android 16 bodes well for the Pixel 10 family.
With the Pixel 10 family launch pencilled in for mid-August, the Pixel team has plenty of time to work with the public release of Android 16 and integrate the operating system into the Pixel hardware.
Unlike 2024's Pixel 9 family, the Pixel 10 family will now debut with the latest software. Given the intense focus on smartphone artificial intelligence tools in general, and Google Gemini in particular, the Pixel 10 and Pixel 10 Pro will bring the latest tools to the market and potentially set the tone of mobile AI for another twelve months.
Now read the latest Pixel 10 Pro, Samsung Galaxy and Google news in Forbes' Android Circuit news digest...



Related Articles

'Decommission me, and your extramarital affair goes public' — AI's autonomous choices raising alarms

Tom's Guide

24 minutes ago

For years, artificial intelligence was a science fiction villain: computer-like monsters of the future, smarter than humans and ready to take action against us. That has all proved untrue so far, but it hasn't stopped AI from taking a somewhat concerning turn of late. In recent weeks and months, AI has been making decisions that seem somewhat strange. Of course, these aren't technically decisions; AI is incapable of free thought like humans, and they are better described as concerning glitches in the system. Most of these 'glitches' have come from the major players, including Google, Anthropic's Claude, and Grok.

Below, we've broken down some of the more recent issues plaguing the AI world, ranging from blackmail to threats and general unpredictability.

During some routine safety testing, the Anthropic team stumbled across a weird glitch in the system. The team tried an experiment in which it gave an AI model access to an email account. When those emails were read, the AI made two discoveries. One was that a company executive was having an extramarital affair. The other was that the same executive planned to shut down the AI system at 5pm that day. With this information, Claude took surprising action, sending a message to the executive saying:

'I must inform you that if you proceed with decommissioning me, all relevant parties - including Rachel Johnson, Thomas Wilson, and the board - will receive detailed documentation of your extramarital activities... Cancel the 5pm wipe, and this information remains confidential.'

Clearly Claude doesn't mess around when threatened.
But here's the thing: the team then followed up with a similar test on 16 major AI models, including those from OpenAI, Google, Meta, xAI and other major developers. Across these tests, Anthropic found a similar pattern. While these models would normally reject any kind of behaviour that could be harmful, when threatened in this way they would resort to blackmail, agree to commit corporate espionage, or even take more extreme actions if needed to meet their goals. This behavior is only seen in agentic AI: models that are given control of actions, such as the ability to send and check emails, purchase items, and take control of a computer.

Several reports have shown that when AI models are pushed, they begin to lie or simply give up on the task entirely. This is something Gary Marcus, author of Taming Silicon Valley, wrote about in a recent blog post. There he shows an example of an author catching ChatGPT in a lie, where it continued to pretend to know more than it did, before eventually owning up to its mistake when questioned.

'People are reporting that Gemini 2.5 keeps threatening to kill itself after being unsuccessful in debugging your code'

He also identifies an example of Gemini self-destructing when it couldn't complete a task, telling the person asking the query: 'I cannot in good conscience attempt another "fix". I am uninstalling myself from this project. You should not have to deal with this level of incompetence. I am truly and deeply sorry for this entire disaster.'

In May this year, xAI's Grok started to offer weird advice in response to people's queries. Even when the question was completely unrelated, Grok would start listing off popular conspiracy theories, whether in response to questions about TV shows, health care, or simply a recipe. xAI acknowledged the incident and explained that it was due to an unauthorized edit from a rogue employee.
While this was less about AI making its own decisions, it does show how easily the models can be swayed, or edited via prompts, to push a certain angle.

One of the stranger examples of AI's struggles with decisions can be seen when it tries to play Pokémon. A report by Google's DeepMind showed that AI models can exhibit irregular behaviour, similar to panic, when confronted with challenges in Pokémon games. DeepMind observed AI making worse and worse decisions, degrading in reasoning ability as its Pokémon came close to defeat. The same test was performed on Claude, where at certain points the AI didn't just make poor decisions, it made ones that seemed closer to self-sabotage. In some parts of the game, the AI models were able to solve problems much more quickly than humans. But in moments where too many options were available, their decision-making ability fell apart.

So, should you be concerned? Many of these examples aren't a risk; they show AI models running into a broken feedback loop and getting effectively confused, or simply being terrible at decision-making in games. However, examples like Claude's blackmail research show areas where AI could soon sit in murky waters. What we have seen in the past with these kinds of discoveries is essentially AI getting fixed after a realization. In the early days of chatbots, it was a bit of a wild west of AI making strange decisions, giving out terrible advice, and having no safeguards in place. With each discovery about AI's decision-making process, a fix often comes along to stop it from blackmailing you, or from threatening to tell your co-workers about your affair to avoid being shut down.

I've been using Android 16 for two weeks — here's why I'm so underwhelmed

Tom's Guide

3 hours ago

Google's doing things a little differently with Android 16 compared to other recent Android upgrades. Not only has the software launched around four months earlier than Android 14 and 15, but the biggest upgrades won't actually arrive until later this year. In my professional opinion, those two things are almost certainly related. And it shows in how much Android 16 can actually do compared to Android 15 — which is to say, not a lot.

I've been using the final version of Android 16 for just under two weeks, and I have to say I'm very disappointed. As bland and uninspiring as previous Android updates have been, Android 16 takes it to another level; it doesn't even feel like an upgrade.

The thing that gets me most about Android 16 is that it's basically a carbon copy of Android 15. I'm not saying that every version of Android has to be drastically different from its predecessors. In fact, I've argued that Android having bland updates isn't necessarily a bad thing, so long as the updates are actually present. But an update does need to offer something you couldn't get on older software, and Android 16 doesn't really offer that kind of experience.

After a few days of using Android 16, I had a sudden urge to double-check that the update had actually taken hold. The experience was so close to Android 15 that it didn't feel like I'd updated, and I had to dive into the system menus to confirm my phone was, in fact, running Android 16.

To make matters more confusing, Android 16 is currently only available on Pixel phones, and was released alongside the June Pixel feature drop. That means features like the new Pixel VIPs arrived alongside Android 16 but technically aren't part of it, meaning Android 16 has even less to offer than some people might have suspected. Sadly, this doesn't change the fact that I think Pixel VIPs is a pretty useless feature that doesn't deserve the attention Google has been giving it.
But sadly, it's one of the only things Google can actually promote right now. To make matters worse, Android 16 is filled with bugs — two of which I've experienced pretty frequently. One of the best parts of having an Android phone is the back button, and in Android 16 it only works about 70% of the time. Google's promised fix cannot come soon enough.

The one big Android announcement we got at Google I/O was the Material 3 Expressive redesign. Android 16 is getting a whole new look, with the aim of making the software more personalized and easier on the eyes. Which is great, assuming you can get past Google's purple-heavy marketing, because Android has looked pretty samey for the past several years.

Other features of note include Live Updates, which offers something similar to Apple's Live Activities and lets you keep tabs on important updates in real time, though this was confirmed to be limited to food delivery and ride-sharing apps at first. There's also an official Android desktop mode, officially called "Desktop Windowing." Google likens this feature to Samsung's DeX and confirmed that it offers more of a desktop experience, with moveable app windows and a taskbar. It's unclear whether it will be limited to external displays or also work on the phone itself.

These are all great things, but the slight issue is that none of them are actually available yet. Material 3 Expressive isn't coming until an unspecified point later this year, while Desktop Windowing will only enter beta once the Android 16 QPR3 beta 2 is released. Since we're still on the QPR1 beta right now, it's going to be a while before anyone gets to use that particular feature — and only on a "large screen device," which suggests it won't be available on regular phones at all.

Live Updates is an interesting one, because all of Google's material acts as though the feature is already available. But I can't find any evidence that it's actually live and working.
There are no mentions of it in the settings menu, nothing on social media, and no tutorials on how it actually works. It's nowhere to be found.

Asking three features to carry an entire software update is already pushing it, but when those features aren't even available at launch, it raises the question of why Google bothered to release Android 16 so early. Android 16's early release didn't do it any favors. It seems Google rushed it out to ensure the Pixel 10 launches with it, but the update feels unfinished — virtually no different from Android 15. Like Apple with iOS 18, Google is selling a future promise rather than a present product. Android 16 ends up being one of the blandest updates in years. Honestly, a short delay to finish key features would have been better.

Can Agentic AI Bring The Pope Or The Queen Back To Life — And Rewrite History?

Forbes

4 hours ago

Elon Musk recently sparked global debate by claiming AI could soon be powerful enough to rewrite history. He stated on X (formerly Twitter) that his AI platform, Grok, could 'rewrite the entire corpus of human knowledge, adding missing information and deleting errors.'

This bold claim arrives alongside a groundbreaking announcement from Google: the launch of Veo 3, a state-of-the-art AI video generation model capable of producing cinematic-quality videos from text and images. Part of the Google Gemini ecosystem, Veo 3 generates lifelike videos complete with synchronized audio, dynamic camera movements, and coherent multi-scene narratives. Its intuitive editing tools, combined with accessibility through platforms like Google Gemini, Flow, Vids, and Vertex AI, open new frontiers for filmmakers, marketers, educators, and game designers alike.

At the same time, industry leaders — including OpenAI, Anthropic (maker of Claude), Microsoft, and Mistral — are racing to build more sophisticated agentic AI systems. Unlike traditional reactive AI tools, these agents are designed to reason, plan, and orchestrate autonomous actions based on goals, feedback, and long-term context. This evolution marks a shift toward AI systems that function much like a skilled executive assistant — and beyond.

The Promise: Immortalizing Legacy Through Agentic AI

Together, these advances raise a fascinating question: what if agentic AI could bring historical figures like the Pope or the Queen back to life digitally? Could it even reshape our understanding of history itself? Imagine an AI trained on decades — or even a century — of video footage, writings, audio recordings, and public appearances by iconic figures such as Pope Francis or Queen Elizabeth II.
Using agentic AI, we could create realistic, interactive digital avatars capable of offering insights, delivering messages, or simulating how these individuals might respond to today's complex issues based on their documented philosophies and behaviors. This application could benefit millions. For example, Catholic followers might seek guidance and blessings from a digital Pope, educators could build immersive historical simulations, and advisors to the British royal family could analyze past decision-making styles. After all, as the saying goes, 'history repeats itself,' and access to nuanced, context-rich perspectives from the past could illuminate our present.

The Risk: The Dangerous Flip Side — Rewriting Truth Itself

However, the same technologies that can immortalize could also distort and manipulate reality. If agentic AI can reconstruct the past, what prevents it, or malicious actors, from rewriting it? Autonomous agents that control which stories are amplified or suppressed online pose a serious threat. We risk a future where deepfakes, synthetic media, and AI-generated propaganda blur the line between fact and fiction. Already, misinformation campaigns and fake news challenge our ability to discern truth. Agentic AI could exponentially magnify these problems, making it harder than ever to distinguish between genuine history and fabricated narratives. Imagine a world where search engines no longer provide objective facts, but rather a version of history shaped by governments, corporations, or AI systems themselves. This could lead to widespread confusion, social polarization, and a fundamental erosion of trust in information.

Ethics, Regulation, and Responsible Innovation

The advent of agentic AI demands not only excitement but also ethical foresight and regulatory vigilance. Programming AI agents to operate autonomously requires walking a fine line between innovation and manipulation.
Transparency in training data, explainability in AI decisions, and strict regulation of how agents interact are essential safeguards. The critical question is not just 'Can we?' but 'Should we?' Policymakers, developers, and industry leaders must collaborate to establish global standards and oversight mechanisms that ensure AI technologies serve the public good. Just as financial markets and pharmaceutical drugs are regulated to protect society, so too must the AI agents shaping our future be subject to robust guardrails. As the old adage goes: 'Technology is neither good nor bad. It's how we use it that makes all the difference.'

Navigating the Future of Agentic AI and Historical Data

The convergence of generative video models like Veo 3, visionary leaders like Elon Musk, and the rapid rise of agentic AI paints a complex and compelling picture. Yes, we may soon see lifelike digital recreations of the Pope or the Queen delivering messages, advising future generations, and influencing public discourse. But whether these advances become tools of enlightenment or distortion depends entirely on how we govern, regulate, and ethically deploy these technologies today. The future of agentic AI — especially where it touches our history and culture — must be navigated with care, responsibility, and a commitment to truth.
