
WhatsApp Ads Are Now Showing Up In Status Updates And Channels For These Users
WhatsApp ads are coming to your status updates and channels, but the messaging platform has said they will never intrude on your chat space.
WhatsApp has confirmed that ads are coming in the form of Status updates and promoted channels, and we are now starting to see signs of how this new version of the messaging app will work in the long run.
The latest Android beta update is the first version to ship with WhatsApp ads, taking the app one step closer to other Meta products like Instagram and Facebook. WhatsApp announced ads back in June this year and assured users that it would never invade their chat space or feed with ads.
Details about the new update and the WhatsApp ads rollout were recently shared by WABetaInfo in a post. The tipster says users are seeing Status ads in beta version 2.25.21.11, which is available to select beta testers for now. Status updates become the main hub for ads on WhatsApp: as you scroll through updates from different contacts, the messaging app will place an ad carrying a sponsored-post label.
This is similar to how ads appear between Stories on Instagram. We independently checked for ads in Status updates on the WhatsApp beta version, but it seems the rollout is limited to only some beta testers for now.
WhatsApp is offering ads through channel subscriptions, promoted channels and Status updates. Status updates have become popular among WhatsApp users, and the platform clearly sees them as a way to push ads while keeping its user base satisfied.
WhatsApp says ads will not appear where your personal chats live, but going by Meta's earlier promises and claims, we cannot rule out that changing in the near future. WhatsApp also assures users that it will never sell or share their phone numbers with advertisers, and that their chats and calls will never be used to push personalised ads.
That said, WhatsApp is going to use data such as your country and language to shape what you see on the platform, and it is now up to users to decide whether their favourite messaging app is worth the hassle anymore.
First Published: July 21, 2025, 10:25 IST
Related Articles


News18
Your ChatGPT Therapy Sessions Are Not Confidential, Warns OpenAI CEO Sam Altman
Sam Altman has raised concerns about user data confidentiality with AI chatbots like ChatGPT, especially for therapy, citing a lack of legal frameworks to protect sensitive information.

The OpenAI CEO has raised concerns about maintaining user data confidentiality in sensitive conversations, as millions of people, including children, have turned to AI chatbots like ChatGPT for therapy and emotional support. In a recent episode of This Past Weekend, a podcast hosted by Theo Von on YouTube, Altman replied to a question about how AI fits into the current legal system, cautioning that users shouldn't expect confidentiality in their conversations with ChatGPT and citing the lack of a legal or policy framework to protect sensitive information shared with the AI chatbot.

"People talk about the most personal sh*t in their lives to ChatGPT. People use it – young people, especially, use it – as a therapist, a life coach; having these relationship problems and [asking] what should I do? And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT."

Altman added that the question of confidentiality and privacy for conversations with AI should be addressed urgently. "So if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up," the Indian Express quoted Altman as saying.

This means that none of your conversations with ChatGPT about mental health, emotional advice, or companionship are private; they can be produced in court or shared with others in the event of a lawsuit. Unlike end-to-end encrypted apps like WhatsApp or Signal, which prevent third parties from reading or accessing your chats, OpenAI can access your chats with ChatGPT and use them to improve the AI model and detect misuse. OpenAI says it deletes free-tier ChatGPT conversations within 30 days, but may retain them for legal or security reasons.

Adding to privacy concerns, OpenAI is currently in the middle of a lawsuit with The New York Times, which requires the company to save the conversations of millions of ChatGPT users, excluding enterprise customers.


Economic Times
Telling secrets to ChatGPT? Using it as a therapist? Your AI chats aren't legally private, warns Sam Altman
OpenAI CEO Sam Altman Flags Privacy Loophole in ChatGPT's Use as a Digital Confidant. (Image Source: YouTube/@Theo Von)

Synopsis: OpenAI CEO Sam Altman has warned that conversations with ChatGPT are not legally protected, unlike those with therapists, doctors, or lawyers. In a podcast with Theo Von, Altman explained that users often share deeply personal information with the AI, but current laws do not offer confidentiality. This means OpenAI could be required to hand over user chats in legal cases. He stressed the need for urgent privacy regulations, as the legal system has yet to catch up with AI's growing role in users' personal lives.

Many users may treat ChatGPT like a trusted confidant, asking for relationship advice, sharing emotional struggles, or even seeking guidance during personal crises. But OpenAI CEO Sam Altman has warned that unlike conversations with a therapist, doctor, or lawyer, chats with the AI tool carry no legal confidentiality.

During a recent appearance on This Past Weekend, a podcast hosted by comedian Theo Von, Altman said that users, particularly younger ones, often treat ChatGPT like a therapist or life coach. However, he cautioned that the same legal safeguards that protect personal conversations in professional settings do not extend to AI.

Altman explained that legal privileges, such as doctor-patient or attorney-client confidentiality, do not apply when using ChatGPT. If there's a lawsuit, OpenAI could be compelled to turn over user chats, including the most sensitive ones. 'That's very screwed up,' Altman admitted, adding that the lack of legal protection is a major gap that needs urgent attention.

Altman believes that conversations with AI should eventually be treated with the same privacy standards as those with human professionals. He pointed out that the rapid adoption of generative AI has raised legal and ethical questions that didn't even exist a year ago. Von expressed hesitation about using ChatGPT due to privacy concerns, and the OpenAI chief acknowledged that the absence of clear regulations could be a barrier for users who might otherwise benefit from the chatbot's assistance. 'It makes sense to want privacy clarity before you use it a lot,' Altman said, agreeing with Von.

According to OpenAI's own policies, conversations from users on the free tier can be retained for up to 30 days for safety and system improvement, though they may sometimes be kept longer for legal reasons. This means chats are not end-to-end encrypted like on messaging platforms such as WhatsApp or Signal, and OpenAI staff may access user inputs to optimise the AI model or monitor misuse.

The privacy issue is not just theoretical. OpenAI is currently involved in a lawsuit with The New York Times, which has brought the company's data storage practices under scrutiny. A court order related to the case has reportedly required OpenAI to retain and potentially produce user conversations, excluding those from its ChatGPT Enterprise customers. OpenAI is appealing the order. Altman also highlighted that tech companies are increasingly facing demands to produce user data in legal or criminal cases, drawing parallels to how people shifted to encrypted health tracking apps after the U.S. Supreme Court's Roe v. Wade reversal, which raised fears about digital privacy around personal choices.

While AI chatbots like ChatGPT have become a popular tool for emotional support, the legal framework surrounding their use hasn't caught up.
Until it does, Altman's message is clear: users should be cautious about what they choose to share.


Times of India
Meta names ChatGPT co-creator Shengjia Zhao as Chief Scientist of superintelligence lab
Meta Platforms has appointed Shengjia Zhao, co-creator of ChatGPT, as chief scientist of its Superintelligence Lab, CEO Mark Zuckerberg said on Friday, as the company accelerates its push into advanced AI.

"In this role, Shengjia will set the research agenda and scientific direction for our new lab working directly with me and Alex," Zuckerberg wrote in a Threads post, referring to Meta's Chief AI Officer Alexandr Wang, whom Zuckerberg hired from startup Scale AI when Meta took a big stake in it.

Zhao, a former research scientist at OpenAI, co-created ChatGPT, GPT-4 and several of OpenAI's mini models, including 4.1 and o3. He is among several researchers who have moved from OpenAI to Meta in recent weeks, part of a broader talent arms race as Zuckerberg aggressively hires from rivals to close the gap in advanced AI. Meta has been offering some of Silicon Valley's most lucrative pay packages and striking startup deals to attract top researchers, a strategy that follows the underwhelming performance of its Llama 4 model.

Meta launched the Superintelligence Lab recently to consolidate work on its Llama models and its long-term artificial general intelligence ambitions. Zhao is a co-founder of the lab, according to the Threads post. The lab operates separately from FAIR, Meta's established AI research division led by deep learning pioneer Yann LeCun. Zuckerberg has said Meta aims to build "full general intelligence" and release its work as open source, a strategy that has drawn both praise and concern within the AI community.