Meta's New AI Assistant: Productivity Booster Or Time Sink?

Forbes, April 30, 2025
The Meta AI logo appears on a mobile phone with Meta AI visible on a tablet in this photo illustration in Brussels, Belgium, on January 26, 2025. (Photo by Jonathan Raa/NurPhoto via Getty Images)
Meta launched a new voice-enabled AI app at its inaugural LlamaCon event on April 29, 2025, integrated with the core experiences of Instagram, Messenger and Facebook. At the event, the company also announced advancements to strengthen its open-source AI ecosystem, headlined by the limited-preview launch of the Llama API, which pairs the convenience of closed-model APIs with open-source flexibility, offering one-click access, fine-tuning for Llama 3.3 8B, and compatibility with OpenAI's software development kit. The Llama family of models has surpassed 1 billion downloads since its debut two years ago.
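Because the Llama API is billed as compatible with OpenAI's software development kit, a developer could, in principle, point the existing OpenAI client at Meta's endpoint instead. The sketch below illustrates the idea only; the base URL and model identifier are placeholders, not values confirmed by Meta, and the real ones would come from the Llama API documentation.

```python
# Minimal sketch, assuming an OpenAI-SDK-compatible Llama API endpoint.
# The base_url and model id below are placeholders, not confirmed values.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_LLAMA_API_KEY",                 # credential issued by the provider
    base_url="https://llama-api.example.com/v1",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="llama-4-example",                      # placeholder model identifier
    messages=[{"role": "user", "content": "Draft a two-sentence project status update."}],
)
print(response.choices[0].message.content)
```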
Meta expanded Llama Stack integrations with partners like Nvidia, IBM and Dell for enterprise deployment. On the security front, new tools like Llama Guard 4, LlamaFirewall, and CyberSecEval 4 were introduced alongside the Llama Defenders Program to bolster AI safety. Meta awarded $1.5M in Llama Impact Grants to 10 global recipients, including startups improving civic services, healthcare, and education.
The new Meta AI app, built with Llama 4, was conceived as a 'companion app' for Meta's AI glasses. While the development of versatile AI apps is promising, the spread of AI assistants to almost every digital platform, and even to wearable tech, threatens to accelerate the very busyness they purport to tame.
AI assistants begin by capturing your input, whether it is speech, converted to text by an automatic speech recognition engine, or direct keyboard entry. Next, the assistant packages that text, along with a snippet of recent conversational context, into a 'prompt' that is sent to a powerful remote model such as OpenAI's ChatGPT, Meta's Llama or Google's Gemini. In milliseconds, these models perform billions of parameter computations to predict and assemble the response most likely to satisfy the request. To make their outputs more relevant for specialized tasks, developers fine-tune these base models on curated datasets or layer in real-time data retrieval. For instance, KAYAK.ai combines ChatGPT's base model with its own database of travel and pricing information to provide a chat-based service that helps customers plan their trips.
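As a rough illustration of that request cycle, here is a minimal Python sketch: it bundles the newest user utterance with a short window of recent turns and posts it to an OpenAI-style chat-completions endpoint. The endpoint, model name and key are placeholders; a production assistant would add speech recognition, streaming and error handling around this core loop.

```python
import requests

API_URL = "https://api.openai.com/v1/chat/completions"  # OpenAI-style endpoint (illustrative)
API_KEY = "YOUR_API_KEY"                                 # placeholder credential

def ask_assistant(user_text, history, model="gpt-4o-mini", max_turns=6):
    """Package recent context plus the new utterance into a prompt and send it to the model."""
    messages = (
        [{"role": "system", "content": "You are a concise personal assistant."}]
        + history[-max_turns:]                            # a snippet of recent conversation
        + [{"role": "user", "content": user_text}]
    )
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": model, "messages": messages},
        timeout=30,
    )
    resp.raise_for_status()
    reply = resp.json()["choices"][0]["message"]["content"]
    # Remember the exchange so the next request carries this context forward.
    history += [{"role": "user", "content": user_text},
                {"role": "assistant", "content": reply}]
    return reply
```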
Advanced systems may even combine computer vision with language understanding. For example, you can snap a photo of your utility bill and ask why charges spike in a given month, or take a photo of a broken component of your car and ask for repair advice. Finally, the text response is sent back to your device and, if you're using voice, rendered into speech by a text-to-speech engine.
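The utility-bill example might be expressed against an OpenAI-style multimodal chat endpoint roughly as sketched below; the file name, model and question are illustrative, and the same pattern applies to the car-repair photo.

```python
import base64
import requests

API_URL = "https://api.openai.com/v1/chat/completions"  # OpenAI-style endpoint (illustrative)
API_KEY = "YOUR_API_KEY"                                 # placeholder credential

# Encode a photo of the utility bill so it can travel inside the JSON payload.
with open("utility_bill.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "gpt-4o-mini",  # any vision-capable chat model
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Why did my charges spike in March?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
}
resp = requests.post(API_URL, headers={"Authorization": f"Bearer {API_KEY}"},
                     json=payload, timeout=60)
print(resp.json()["choices"][0]["message"]["content"])
```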
AI assistants are integrated into a wide range of software and services, from Adobe's Acrobat AI, which summarizes documents and generates images, to Nvidia's G-Assist in PC games. In consumer products, Amazon's Alexa powers Echo speakers and smart-home devices, Google Assistant lives on Android phones and Nest speakers, and Apple's Siri runs on iPhones, Macs and HomePods, each leveraging its own blend of cloud-based or on-device intelligence to understand your requests and take action.
Meanwhile, enterprises are embedding assistants in productivity tools, such as Microsoft 365 Copilot in Word, Excel, PowerPoint, Outlook, and Teams, to draft content, analyze data, and automate workflows in real time.
The promise of time saved is seductive. Microsoft 365 Copilot drafts executive summaries in seconds, and Duolingo's AI tutors adapt to each learner's mistakes in real time. Zoom's live-transcript search transforms hours of recordings into keyword lookups. Yet those very efficiency gains often spur heavier workloads rather than lighten them—a phenomenon known as the Jevons paradox, where making a resource or task 'cheaper' leads to its increased consumption overall.
In real-world practice, every minute reclaimed by AI is quickly folded into loftier content quotas or more frequent campaign cycles. The advent of AI assistants, then, may not lighten employees' workloads: when everyone has access to the same assistants, expectations for output and productivity rise in step, and people in the workplace may feel more stretched than before.
In addition to the rising expectations for productivity, AI assistants may also cause skill erosion. Just as reliance on GPS has dulled our innate navigation skills, AI assistants risk hollowing out foundational human capabilities. Students leaning on AI-generated essays lose the muscle for crafting compelling arguments and convincing prose. Financial analysts trusting AI-summarized earnings reports may overlook footnote anomalies or balance-sheet red flags.
In healthcare, tools like Nuance's Dragon Medical One promise to free doctors from note-taking, yet clinicians who no longer manually encode patient histories may miss subtleties the AI fails to capture. Simultaneously, our attention fragments further: notifications ping as Adobe's Acrobat AI Assistant offers rewrites, Google Slides' Gemini integration suggests slide outlines and edits, and Perplexity's AI Assistant researches topics and summarizes information directly within WhatsApp chats, all reducing our patience for in-depth thinking and research.
Meta AI's pledge to put users 'in control' assumes that frictionless interfaces equal greater agency. But true agency requires conscious choice, not mere convenience. If your AI assistant presents three 'optimal' meeting times, do you pause to question the meeting's necessity, or do you automatically select one? Moreover, every prompt, share, and purchase recommendation feeds back into personalization algorithms, which then shape what you see next. Over time, you become both the user and the used. Your preferences are subtly nudged by models that learn which suggestions keep you clicking, shopping or posting.
To reap AI's benefits without ceding our autonomy, organizations and individuals must define clear guardrails. Disable nonessential notifications and limit AI-driven summaries to internal drafts, preserving human review for important materials. Carve out regular 'deep-work' intervals when assistants rest silent, safeguarding time for strategy, reading or unstructured conversation. Treat every AI output as a first draft—invest the effort to fact-check, recalculate and consult original sources. In mission-critical fields such as medicine, education and finance, design workflows that keep humans firmly in the loop, using AI to augment human judgment, not replace it.
The era of AI assistants is upon us, reshaping our digital interfaces into something resembling natural conversation. By understanding how these systems operate, acknowledging both their genuine efficiencies and hidden costs, and deliberately shaping our interactions with them, we can ensure that these tools serve to reclaim our cognitive bandwidth rather than accelerate the relentless pace of modern life.
Related Articles

Samsung reveals a mysterious $16.5 billion chip deal.

The Verge

Chip race: Microsoft, Meta, Google, and Nvidia battle it out for AI chip supremacy. Posted Jul 28, 2025 at 3:04 AM UTC by Richard Lawler.

Alibaba Cloud Visionary Expects Big Shakeup After OpenAI Hype

Bloomberg

OpenAI's ChatGPT started a revolution in artificial intelligence development and investment. Yet nine-tenths of the technology and services that've sprung up since could be gone in under a decade, according to the founder of Alibaba Group Holding Ltd.'s cloud and AI unit. The problem is the US startup, celebrated for ushering AI into the mainstream, created 'bias' or a skewed understanding of what AI can do, Wang Jian told Bloomberg Television. It fired the popular imagination about chatbots, but the plethora of applications for AI goes far beyond that. Developers need to cut through the noise and think creatively about applications to propel the next stage of AI development, said Wang, who built Alibaba's now second-largest business from scratch in 2009.

OpenAI: ChatGPT Wants Legal Rights. You Need The Right To Be Forgotten.

Forbes

As systems like ChatGPT move toward achieving legal privilege, the boundaries between identity, memory, and control are being redefined, often without consent.

When OpenAI CEO Sam Altman recently stated that conversations with ChatGPT should one day enjoy legal privilege, similar to those between a patient and a doctor or a client and a lawyer, he wasn't just referring to privacy. He was pointing toward a redefinition of the relationship between people and machines.

Legal privilege protects the confidentiality of certain relationships. What's said between a patient and physician, or a client and attorney, is shielded from subpoenas, court disclosures, and adversarial scrutiny. Extending that same protection to AI interactions means treating the machine not as a tool, but as a participant in a privileged exchange. This is more than a policy suggestion. It's a legal and philosophical shift with consequences no one has fully reckoned with.

It also comes at a time when the legal system is already being tested. In The New York Times' lawsuit against OpenAI, the paper has asked courts to compel the company to preserve all user prompts, including those the company says are deleted after 30 days. That request is under appeal. Meanwhile, Altman's suggestion that AI chats deserve legal shielding raises the question: if they're protected like therapy sessions, what does that make the system listening on the other side?

People are already treating AI like a confidant. According to Common Sense Media, three in four teens have used an AI chatbot, and over half say they trust the advice they receive at least somewhat. Many describe a growing reliance on these systems to process everything from school to relationships. Altman himself has called this emotional over-reliance 'really bad and dangerous.'

But it's not just teens. AI is being integrated into therapeutic apps, career coaching tools, HR systems, and even spiritual guidance platforms. In some healthcare environments, AI is being used to draft communications and interpret lab data before a doctor even sees it. These systems are present in decision-making loops, and their presence is being normalized.

This is how it begins. First, protect the conversation. Then, protect the system. What starts as a conversation about privacy quickly evolves into a framework centered on rights, autonomy, and standing.

We've seen this play out before. In U.S. law, corporations were gradually granted legal personhood, not because they were considered people, but because they acted as consistent legal entities that required protection and responsibility under the law. Over time, personhood became a useful legal fiction. Something similar may now be unfolding with AI—not because it is sentient, but because it interacts with humans in ways that mimic protected relationships. The law adapts to behavior, not just biology.

The Legal System Isn't Ready For What ChatGPT Is Proposing

There is no global consensus on how to regulate AI memory, consent, or interaction logs. The EU's AI Act introduces transparency mandates, but memory rights are still undefined. In the U.S., state-level data laws conflict, and no federal policy yet addresses what it means to interact with a memory-enabled AI. (See my recent Forbes piece on why AI regulation is effectively dead—and what businesses need to do instead.)

The physical location of a server is not just a technical detail. It's a legal trigger. A conversation stored on a server in California is subject to U.S. law. If it's routed through Frankfurt, it becomes subject to GDPR. When AI systems retain memory, context, and inferred consent, the server location effectively defines sovereignty over the interaction. That has implications for litigation, subpoenas, discovery, and privacy.

'I almost wish they'd go ahead and grant these AI systems legal personhood, as if they were therapists or clergy,' says technology attorney John Kheit. 'Because if they are, then all this passive data collection starts to look a lot like an illegal wiretap, which would thereby give humans privacy rights/protections when interacting with AI. It would also, then, require AI providers to disclose 'other parties to the conversation', i.e., that the provider is a mining party reading the data, and if advertisers are getting at the private conversations.'

Infrastructure choices are now geopolitical. They determine how AI systems behave under pressure and what recourse a user has when something goes wrong. And yet, underneath all of this is a deeper motive: monetization. But they won't be the only ones asking questions.

Every conversation becomes a four-party exchange: the user, the model, the platform's internal optimization engine, and the advertiser paying for access. It's entirely plausible for a prompt about the Pittsburgh Steelers to return a response that subtly inserts 'Buy Coke' mid-paragraph. Not because it's relevant—but because it's profitable.

Recent research shows users are significantly worse at detecting unlabeled advertising when it's embedded inside AI-generated content. Worse, these ads are initially rated as more trustworthy until users discover they are, in fact, ads. At that point, they're also rated as more manipulative.

'In experiential marketing, trust is everything,' says Jeff Boedges, Founder of Soho Experiential. 'You can't fake a relationship, and you can't exploit it without consequence. If AI systems are going to remember us, recommend things to us, or even influence us, we'd better know exactly what they remember and why. Otherwise, it's not personalization. It's manipulation.'

Now consider what happens when advertisers gain access to psychographic modeling: 'Which users are most emotionally vulnerable to this type of message?' becomes a viable, queryable prompt. And AI systems don't need to hand over spreadsheets to be valuable. With retrieval-augmented generation (RAG) and reinforcement learning from human feedback (RLHF), the model can shape language in real time based on prior sentiment, clickstream data, and fine-tuned advertiser objectives. This isn't hypothetical—it's how modern adtech already works.

At that point, the chatbot isn't a chatbot. It's a simulation environment for influence. It is trained to build trust, then designed to monetize it. Your behavioral patterns become the product. Your emotional response becomes the target for optimization.

The business model is clear: black-boxed behavioral insight at scale, delivered through helpful design, hidden from oversight, and nearly impossible to detect.

We are entering a phase where machines will be granted protections without personhood, and influence without responsibility. If a user confesses to a crime during a legally privileged AI session, is the platform compelled to report it or remain silent? And who makes that decision? These are not edge cases. They are coming quickly. And they are coming at scale.

Why ChatGPT Must Remain A Model—and Why Humans Must Regain Consent

As generative AI systems evolve into persistent, adaptive participants in daily life, it becomes more important than ever to reassert a boundary: models must remain models. They cannot assume the legal, ethical, or sovereign status of a person quietly. And the humans generating the data that train these systems must retain explicit rights over their contributions.

What we need is a standardized, enforceable system of data contracting, one that allows individuals to knowingly, transparently, and voluntarily contribute data for a limited, mutually agreed-upon window of use. This contract must be clear on scope, duration, value exchange, and termination. And it must treat data ownership as immutable, even during active use.

That means: When a contract ends, or if a company violates its terms, the individual's data must, by law, be erased from the model, its training set, and any derivative products. 'Right to be forgotten' must mean what it says.

But to be credible, this system must work both ways: This isn't just about ethics. It's about enforceable, mutual accountability. The user experience must be seamless and scalable. The legal backend must be secure. And the result should be a new economic compact—where humans know when they're participating in AI development, and models are kept in their place.

ChatGPT Is Changing the Risk Surface. Here's How to Respond.

The shift toward AI systems as quasi-participants—not just tools—will reshape legal exposure, data governance, product liability, and customer trust. Whether you're building AI, integrating it into your workflows, or using it to interface with customers, here are five things you should be doing immediately:

ChatGPT May Get Privilege. You Should Get the Right to Be Forgotten.

This moment isn't just about what AI can do. It's about what your business is letting it do, what it remembers, and who gets access to that memory. Ignore that, and you're not just risking privacy violations, you're risking long-term brand trust and regulatory blowback.

At the very least, we need a legal framework that defines how AI memory is governed. Not as a priest, not as a doctor, and not as a partner, but perhaps as a witness. Something that stores information and can be examined when context demands it, with clear boundaries on access, deletion, and use.

The public conversation remains focused on privacy. But the fundamental shift is about control. And unless the legal and regulatory frameworks evolve rapidly, the terms of engagement will be set, not by policy or users, but by whoever owns the box.

Which is why, in the age of AI, the right to be forgotten may become the most valuable human right we have. Not just because your data could be used against you—but because your identity itself can now be captured, modeled, and monetized in ways that persist beyond your control. Your patterns, preferences, emotional triggers, and psychological fingerprints don't disappear when the session ends. They live on inside a system that never forgets, never sleeps, and never stops optimizing.

Without the ability to revoke access to your data, you don't just lose privacy. You lose leverage. You lose the ability to opt out of prediction. You lose control over how you're remembered, represented, and replicated. The right to be forgotten isn't about hiding. It's about sovereignty.
And in a world where AI systems like ChatGPT will increasingly shape our choices, our identities, and our outcomes, the ability to walk away may be the last form of freedom that still belongs to you.
