Hey, Google: Prerecorded AI Presentations Are the Coward's Way Out
At last year's Made by Google event, Gemini failed twice during a live demonstration. Though moments like this are undoubtedly embarrassing for companies, they add a layer of authenticity you don't get with a prerecorded keynote event. But unfortunately, Google chose the prerecorded route for Tuesday's Android Show: I/O Edition. The format felt way too staged and polished for my liking, and it stripped away the feeling of reality that comes with live, warts-and-all demos.
During the Android Show: I/O Edition, we saw a demonstration of Gemini sharing makeup tips, helping someone find time in a busy schedule to grab lunch, and summarizing Jane Austen's Pride and Prejudice. Because these were prerecorded interactions, Gemini handled the requests with aplomb -- no hiccups or issues in sight. But tests show that AI models routinely get things wrong.
According to the AI testing site LiveBench, Google's Gemini 2.5 Pro Preview is generally correct about 79% of the time. That's not bad, but it's not great either. And despite that score, this version of Gemini is still one of the best AI models the site has tested, losing out to only two others: OpenAI's o3 High and o4 Medium.
Sure, nothing is perfect, and devices and software have bugs. But if you hand me a calculator and promise it works every time, when in reality it's wrong 20% of the time, that's a major discrepancy.
Since Gemini outperformed most other AI models LiveBench tested, there's a good chance I'd still use Gemini, even if the live demo stalled. But because Google opted for a super-polished demonstration, I have a hard time knowing what to believe.
Look, I understand why a company would want its product to work properly at its own event. But showing AI tools making mistakes feels more honest than acting like the tool is perfect. These capabilities are flawed, and that's fine, but be honest with people about those flaws and show your new features in action. Don't sell me smoke and mirrors.
For more on Google, here's what to know about Android 16 and the Material 3 Expressive design.