WhatsApp launches ‘Private Processing' to enhance AI chat privacy: Report


Time of India | 01-05-2025

Meta, the parent company of WhatsApp, has unveiled a new feature aimed at strengthening user privacy while engaging with artificial intelligence tools within the app. The feature, titled "Private Processing," is designed to allow users to interact with Meta AI in a more secure and confidential manner. Unlike standard AI chats or queries processed over traditional cloud infrastructure, Private Processing ensures that neither Meta, WhatsApp, nor third-party entities can access the user's data once the session ends.
According to Meta's announcement, this function will be optional and is expected to roll out in the coming weeks. The company claims the system is designed with both privacy and auditability in mind, featuring enhanced protections against external and internal threats. Meta has also taken steps to make the system verifiable by independent parties and more resistant to cyberattacks, further aligning itself with evolving privacy standards in the tech industry.
What is Private Processing
Private Processing is a confidential AI interaction system being added to WhatsApp that enables users to interact with Meta AI without leaving data traces that can be accessed later. When enabled, it provides a temporary processing session for tasks such as generating AI summaries, retrieving information, or engaging in chat-based queries, all without storing or linking the user's messages to identifiable metadata once the interaction ends.
Key characteristics:
User-initiated and entirely optional
No retention of messages after the session ends
Inaccessible to Meta, WhatsApp, and third-party vendors once the session ends
Supports end-to-end encryption principles
WhatsApp's 'Private Processing' secures AI chats with no data stored
Meta emphasises that Private Processing is built with security at its core. Once the AI completes a user's request, the session data is discarded, ensuring that:
The system does not retain user messages, even temporarily, for future use.
Even if a hacker gains access to Meta's infrastructure, they would be unable to access historical Private Processing interactions.
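The discard-after-session property described above can be illustrated with a small sketch. This is a toy model, not Meta's actual implementation; the `EphemeralSession` class and its methods are hypothetical names chosen for illustration.

```python
# Toy sketch of a "discard after session" interaction model.
# EphemeralSession and its methods are illustrative, not Meta's API.
import secrets

class EphemeralSession:
    """Holds request data only for the lifetime of one AI interaction."""

    def __init__(self):
        # Per-session random ID, not linkable to a user account.
        self.session_id = secrets.token_hex(16)
        self._messages = []

    def process(self, message: str) -> str:
        self._messages.append(message)
        # Placeholder for the actual model call.
        return f"summary of {len(self._messages)} message(s)"

    def close(self):
        # Discard all message content; nothing persists after the session,
        # so a later breach of the server cannot recover past interactions.
        self._messages.clear()

session = EphemeralSession()
reply = session.process("Summarise this chat for me")
session.close()
assert session._messages == []  # no retained content post-session
```

The key point is that message content lives only inside the session object; once `close()` runs, there is nothing left for an attacker who later compromises the infrastructure to read.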
Additional safeguards:
Meta is integrating Private Processing into its bug bounty program, encouraging ethical hackers to identify potential vulnerabilities before launch.
A detailed security engineering design paper will be released ahead of the full rollout, outlining the architecture, privacy logic, and threat models.
Meta will allow independent audits to verify that the feature meets stated privacy expectations and performs securely in real-world environments.
Meta's Private Processing vs. Apple's Private Cloud Compute
The Private Processing model bears similarities to Apple's Private Cloud Compute (PCC), a system introduced for confidential cloud-based AI interactions. Both aim to achieve secure, privacy-respecting processing outside the user's device by using advanced cryptographic protocols and secure hardware environments.
Feature | Meta's Private Processing | Apple's PCC
Deployment platform | WhatsApp (cloud-based AI interaction) | iOS/macOS devices with server fallback
Default status | Optional and user-initiated | On-device by default; uses PCC as fallback
Data retention | No message retention post-session | Minimal and encrypted when stored briefly
Obfuscation protocol | OHTTP (Oblivious HTTP) via third-party relay | OHTTP used to obscure user IPs
Auditability | Independent third-party verification | Apple audits and claims a verifiable design
While both systems use Oblivious HTTP (OHTTP) to hide user IP addresses from Meta or Apple, Meta's implementation is user-triggered, whereas Apple's approach favors on-device processing by default, switching to PCC when server-side processing is necessary.
Role of OHTTP and third-party relays
A core component of Private Processing is its reliance on Oblivious HTTP (OHTTP), a web standard that separates IP address visibility from the content being processed. Requests made to Meta's servers are relayed through independent third-party providers, ensuring that:
Meta can see the request content but not the user's identity.
The relay provider sees the IP address but not the content.
This privacy split ensures no single party has access to both the user's identity and request content, creating a privacy-preserving pipeline for AI queries.
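The split described above can be sketched in a few lines. This is a toy simulation of the OHTTP idea, not the real protocol: OHTTP (RFC 9458) uses HPKE key encapsulation, whereas the XOR "cipher" below is only a stand-in that makes the relay's blindness to content visible. The `Relay` and `Target` classes and the example IP address are illustrative.

```python
# Toy simulation of the OHTTP privacy split: the relay learns the IP but
# not the content; the target learns the content but not the IP.
import os

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy XOR stream cipher standing in for HPKE encapsulation (RFC 9458).
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

decrypt = encrypt  # XOR is its own inverse

class Relay:
    """Third-party relay: sees the client's IP, but only ciphertext."""
    def forward(self, client_ip: str, ciphertext: bytes, target):
        self.seen_ip = client_ip          # relay learns the IP...
        self.seen_payload = ciphertext    # ...but only opaque bytes
        return target.handle(ciphertext)  # target never sees client_ip

class Target:
    """AI server: sees the request content, but not who sent it."""
    def __init__(self, key: bytes):
        self.key = key
    def handle(self, ciphertext: bytes) -> str:
        request = decrypt(self.key, ciphertext).decode()
        return f"response to: {request}"

key = os.urandom(16)                      # shared via key configuration
target = Target(key)
relay = Relay()
reply = relay.forward("203.0.113.7", encrypt(key, b"summarise my chat"), target)

assert relay.seen_payload != b"summarise my chat"  # relay can't read content
assert reply == "response to: summarise my chat"   # target serves the request
```

Because neither hop ever holds both the IP address and the plaintext, no single party can link a user's identity to a query, which is the property the article attributes to Private Processing.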
Auditability and transparency measures
To maintain public trust, Meta has built in mechanisms for external verification:
Independent researchers and privacy watchdogs can audit Private Processing.
The bug bounty program enables ongoing white-hat testing.
A soon-to-be-published security white paper will provide the technical blueprint of the system, enabling academic scrutiny.
Meta's approach aligns with emerging industry standards demanding that privacy-focused claims be independently verifiable and not rely solely on corporate assurances.
Broader implications for messaging privacy and AI integration
The introduction of Private Processing indicates a growing shift in how large tech companies balance AI capabilities with user privacy demands. As more users become concerned about data surveillance, profiling, and cyberthreats, features like Private Processing represent an effort to offer control back to the user while still allowing for advanced functionalities like chat-based AI support.
With messaging apps becoming hubs for AI-powered tools, ensuring confidentiality of queries and outputs is critical to maintaining both compliance with global privacy regulations and user confidence.
Private Processing launch timeline and availability
According to Meta, Private Processing will:
Be available to WhatsApp users in selected regions in the coming weeks.
Roll out initially as an opt-in feature.
Eventually integrate more AI capabilities as the infrastructure matures and proves secure.
Users will be able to enable or disable the feature from within WhatsApp's AI tools settings, giving them complete control over when and how their data is processed.


Related Articles

WANotifier Joins TinySeed EMEA Fall 2024 Batch to Help Businesses Market at Scale on WhatsApp

Business Standard

SMPL Dover (Delaware) [US]/ Pune (Maharashtra) [India], June 28: WANotifier, an all-in-one WhatsApp marketing SaaS platform built on the official WhatsApp API, announced today that it has joined the TinySeed EMEA Fall 2024 accelerator batch. The funding will be used to further develop its product and expand market reach, helping small businesses grow their audience and automate customer interactions on WhatsApp.

Founded in 2022, WANotifier helps businesses streamline marketing, customer engagement, and automation through WhatsApp, one of the fastest-growing communication channels for businesses globally. As an official Meta Tech Partner, WANotifier offers a unique approach with 0% API markup and flexible onboarding options; businesses can either use a managed embedded signup flow or configure the API directly with their own credentials.

"We had applied to TinySeed earlier as well but got in this time, and I couldn't be happier," said Ram Shengale, Founder of WANotifier. "I've always admired TinySeed's mostly bootstrapped mindset toward building sustainable SaaS businesses. Being the only Indian company in this batch makes it even more special. The TinySeed network and mentors are extremely knowledgeable and friendly; you can ask highly specific questions and get actionable advice. That's one of the best parts of being part of this fund."

The company plans to use the funding primarily for product development and marketing. "We're behind a few of our competitors in terms of features, so our first goal is to build a strong development team and reach feature parity," Shengale added. "At the same time, we'll focus on penetrating key market segments. Our philosophy is simple: help businesses market at scale on WhatsApp without adding to the inherent complexity of the API. Many providers build clunky, hard-to-use software; we're committed to delivering a clean, user-friendly experience for marketers and business owners."
WANotifier currently serves thousands of businesses across various industries, with notable customers including BITS Pilani, Iskcon, Dog Home Foundation, Bvlgari Casablanca, Mega Events (UK), and others. Having completed most core features recently, the team is now accelerating marketing efforts to reach new markets.

About WANotifier
WANotifier Inc. is an all-in-one WhatsApp marketing SaaS tool for businesses, built on top of the official WhatsApp API. It helps businesses automate marketing, customer engagement, and transactional messaging on WhatsApp with an easy-to-use platform and flexible onboarding options. WANotifier is an official Meta Tech Partner. The company is headquartered in Dover, Delaware (USA), with its India team based in Pune, Maharashtra.

About TinySeed
TinySeed is a remote accelerator and early-stage investment fund that helps SaaS founders grow profitable, sustainable businesses. The Fall 2024 EMEA batch supports founders across Europe, the Middle East, and Asia.

Media Contact:
Ram Shengale
Founder, WANotifier Inc.
Email: contact@ Website:

OpenAI employee set to join Meta calls the ‘$100 million signing bonus' fake news

Indian Express

Over the last few months, Meta has been aggressively recruiting AI researchers for its superintelligence lab with hefty 'signing bonuses', which some say amount to $100 million. However, this might not be the case. In a post on X (formerly Twitter), Lucas Beyer, who currently works at OpenAI and will soon join Meta, called the $100 million signing bonus 'fake news.' Beyer also confirmed that he will be joined by Alexander Kolesnikov and Xiaohua Zhai, both currently at OpenAI.

Last week, OpenAI CEO Sam Altman said that Meta was trying to poach AI researchers from his company by offering them bonuses of $100 million. In a podcast hosted by OpenAI, Altman claimed that 'Meta started making giant offers to a lot of people on our team' and that 'at least, so far, none of our best people have decided to take them up on that.'

According to a report by The Verge, when Meta CTO Andrew Bosworth was asked about the '$100 million signing bonuses', he said, 'Sam is just being dishonest here. He's suggesting that we're doing this for every single person…Look, you guys, the market's hot. It's not that hot.' Bosworth added that Altman is trying to counter all these offers and that it 'is not the general thing that's happening in the AI space.' The Meta CTO went on to say that a couple more people are joining the company, but declined to share details.

Bosworth wasn't the only Meta executive to mention OpenAI at the internal meeting. CPO Chris Cox said that instead of building a ChatGPT-like AI chatbot that helps people with things like writing work emails, Meta wants to differentiate its AI offerings by focusing 'on entertainment, on connection with friends, on how people live their lives.'

Compared to Google and OpenAI, Meta is finding it hard to compete in the AI race. However, the Mark Zuckerberg-led company recently built its superintelligence team and is on a hiring spree.
Recently, Meta purchased a 49 per cent stake in Scale AI and hired its 28-year-old CEO, Alexandr Wang, to lead its newly formed team.

OpenAI Taps Google's AI Chips in Strategic Shift Away from Nvidia Dependency

Hans India

In a significant move within the AI landscape, OpenAI, the Microsoft-backed creator of ChatGPT, has reportedly begun utilizing Google's artificial intelligence chips. According to a recent report by Reuters, this development points to OpenAI's efforts to diversify its chip suppliers and reduce its dependency on Nvidia, which currently dominates the AI hardware market.

OpenAI has historically been one of the largest buyers of Nvidia's graphics processing units (GPUs), using them extensively for both training its AI models and performing inference tasks — where the model applies learned data to generate outputs. However, as demand for computing power surges, OpenAI is now exploring alternatives. The Reuters report, citing a source familiar with the matter, claims that OpenAI has started using Google's Tensor Processing Units (TPUs), marking a notable shift not only in its hardware strategy but also in its reliance on cloud services. Earlier this month, Reuters had already suggested that OpenAI was planning to leverage Google Cloud to help meet its growing computational needs.

What makes this collaboration remarkable is the competitive context. Google and OpenAI are direct rivals in the AI field, both vying for leadership in generative AI and large language model development. Yet, this partnership demonstrates how shared interests in infrastructure efficiency and cost management can bridge even the most competitive divides. According to The Information, this is OpenAI's first major deployment of non-Nvidia chips, indicating a deliberate effort to explore alternative computing platforms. By leasing Google's TPUs through Google Cloud, OpenAI is reportedly looking to reduce inference costs — a crucial factor as AI services like ChatGPT continue to scale.

The move is also part of a broader trend at Google. Historically, the tech giant has reserved its proprietary TPUs mainly for internal projects. However, it appears Google is now actively expanding external access to these chips in a bid to grow its cloud business. This strategy has reportedly attracted several high-profile clients, including Apple and AI startups like Anthropic and Safe Superintelligence — both founded by former OpenAI employees and seen as emerging competitors.

A Google Cloud employee told The Information that OpenAI is not being offered Google's latest-generation TPUs, suggesting the company is balancing business expansion with competitive caution. Still, the fact that OpenAI is now a customer illustrates Google's ambition to grow its end-to-end AI ecosystem — from hardware and software to cloud services — even if that means partnering with direct rivals.

Neither Google nor OpenAI has issued official statements confirming the deal. Yet, the development signals an evolving AI infrastructure market where flexibility, cost-efficiency, and compute availability are becoming more strategic than ever. As the race to power the future of AI intensifies, such cross-competitive collaborations could become more commonplace — redefining how major players navigate both cooperation and competition in the era of intelligent computing.
