
WhatsApp cuts off support for older iPhones and Androids: see the full list
WhatsApp has officially discontinued support for several older Apple and Android smartphones, effective 1 June 2025.
The decision is part of the messaging app's routine update cycle and is expected to push users with ageing devices towards modern alternatives.
The Meta-owned platform now works only on iPhones running iOS 15.1 or later and on Android phones running Android 5.0 or newer.
The original cut-off date of 5 May was pushed back to give users additional time to transition.
Affected devices include:
iPhone 5s
iPhone 6
Samsung Galaxy S III
HTC One X
Sony Xperia Z
Though some reports also flagged the iPhone 6s, iPhone 6s Plus and the first-generation iPhone SE, these models can run iOS 15.8.4, which clears the new iOS 15.1 threshold, so they will likely remain supported for at least another year.
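For context, the cut-off amounts to a simple minimum-version check: a device stays supported if its OS version is at or above the stated floor (iOS 15.1, or Android 5.0). The sketch below is purely illustrative and is not WhatsApp's actual logic; the platform names, thresholds and sample version strings are assumptions based on the requirements reported above.

```python
# Illustrative sketch only: a generic minimum-OS-version gate, not WhatsApp's actual check.
# Thresholds mirror the requirements reported above (iOS 15.1, Android 5.0).

MIN_SUPPORTED = {
    "ios": (15, 1),      # iOS 15.1 and above
    "android": (5, 0),   # Android 5.0 (Lollipop) and newer
}

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '15.8.4' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def is_supported(platform: str, os_version: str) -> bool:
    """Return True if the device OS meets the minimum supported version."""
    minimum = MIN_SUPPORTED[platform.lower()]
    return parse_version(os_version) >= minimum

if __name__ == "__main__":
    print(is_supported("ios", "15.8.4"))   # True  -> e.g. an iPhone 6s updated to iOS 15.8.4
    print(is_supported("ios", "12.5.7"))   # False -> a device stuck below iOS 15.1
    print(is_supported("android", "4.4"))  # False -> a device stuck on an older Android release
```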
In a statement posted on its Help Center, WhatsApp explained the rationale behind the change: 'Every year we look at which devices and software are the oldest and have the fewest users. These devices also might not have the latest security updates, or might lack the functionality required to run WhatsApp.'
While the update will not affect users of new iPhone or Android models, it underscores a wider industry trend of phasing out support for outdated hardware. Experts say the shift is not just about performance—it also helps improve user security by consolidating updates across fewer devices.
The move comes as WhatsApp continues to expand its platform with updates, including multi-device functionality and a long-awaited iPad app.
Users still relying on older devices are now being urged to consider upgrading.

Related Articles


Express Tribune, 2 days ago
AI breakthrough helps detect hidden apps on smartphones
Researchers in Australia have developed a powerful new technique to uncover hidden "vault" apps on smartphones, a discovery that could aid law enforcement in digital investigations. The study, conducted by Edith Cowan University and the University of Southern Queensland, found that machine learning (ML) can identify vault apps with up to 98% accuracy on Android devices.

Vault apps allow users to store files, messages, or even other apps behind layers of encryption. While often used for privacy, they have increasingly been linked to illicit activities, including espionage and unauthorised surveillance.

'These apps can mimic normal ones, making them very difficult to detect,' said Associate Professor Mike Johnstone from ECU. 'Current detection tools rely on prior knowledge of suspicious apps, which limits their usefulness.' By contrast, the new machine learning approach can identify vault apps without needing a pre-existing list or database.

The breakthrough could offer a valuable tool for police and security agencies, particularly as smartphones become more integral to modern life, with over 5 billion users worldwide. 'Given how common smartphones are, any non-invasive and accurate method for identifying these hidden apps could be a game-changer,' said Professor Johnstone.

The team now plans to expand the research to include more algorithms, a wider dataset, and tests on non-Android devices.
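The article does not describe the researchers' pipeline, but the general idea of training a classifier on observable app features, rather than matching against a fixed list of known vault apps, can be sketched as follows. The feature names, synthetic data and model choice here are assumptions made purely for illustration; they are not the ECU/UniSQ method.

```python
# Illustrative sketch of ML-based app classification; NOT the ECU/UniSQ pipeline.
# Feature names, data and model choice are assumptions made for this example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical per-app features (permission count, PIN screen present,
# encrypted-storage usage, hidden-file ratio): stand-ins for whatever static or
# dynamic features a real study would extract from APKs or device images.
n_apps = 1000
features = rng.random((n_apps, 4))
# Synthetic labels: 1 = vault app, 0 = ordinary app, derived here from a made-up rule.
labels = (features[:, 1] + features[:, 2] > 1.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The key property mirrored here is that the classifier learns from labelled examples rather than relying on a pre-existing database of suspicious apps, which is the limitation Professor Johnstone describes in current detection tools.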


Express Tribune, 3 days ago
Meta wins copyright lawsuit
A US judge on Wednesday handed Meta a victory over authors who accused the tech giant of violating copyright law by training its Llama artificial intelligence on their creations without permission. District Court Judge Vince Chhabria in San Francisco ruled that Meta's use of the works to train its AI model was "transformative" enough to constitute "fair use" under copyright law, in the second such courtroom triumph for AI firms this week.

However, it came with a caveat: the authors could have pitched a winning argument that, by training powerful generative AI with copyrighted works, tech firms are creating a tool that could let a sea of users compete with them in the literary marketplace. "No matter how transformative (generative AI) training may be, it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books," Chhabria said in his ruling.

Tremendous amounts of data are needed to train the large language models powering generative AI. Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment. AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation.

"We appreciate today's decision from the court," a Meta spokesperson said in response to an AFP inquiry. "Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology."

In the case before Chhabria, a group of authors sued Meta for downloading pirated copies of their works and using them to train the open-source Llama generative AI, according to court documents. Books involved in the suit include Sarah Silverman's comic memoir The Bedwetter and Junot Diaz's Pulitzer Prize-winning novel The Brief Wondrous Life of Oscar Wao, the documents showed. "This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful," the judge stated. "It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one."

Market harming?

A different federal judge in San Francisco on Monday sided with AI firm Anthropic regarding training its models on copyrighted books without authors' permission. District Court Judge William Alsup ruled that the company's training of its Claude AI models with books bought or pirated was allowed under the "fair use" doctrine of the US Copyright Act. "Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use," Alsup wrote in his decision. "The technology at issue was among the most transformative many of us will see in our lifetimes," he added, comparing AI training to how humans learn by reading books.

The ruling stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train Claude, the company's ChatGPT rival.
Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections.


Business Recorder, 3 days ago
DeepSeek faces expulsion from app stores in Germany
FRANKFURT: Germany has taken steps towards blocking Chinese AI startup DeepSeek from the Apple and Google app stores over data protection concerns, a data protection commissioner said in a statement on Friday.

DeepSeek has been reported to the two U.S. tech giants as illegal content, said commissioner Meike Kamp, and the companies must now review the concerns and decide whether to block the app in Germany. 'DeepSeek has not been able to provide my agency with convincing evidence that German users' data is protected in China to a level equivalent to that in the European Union,' she said. 'Chinese authorities have far-reaching access rights to personal data within the sphere of influence of Chinese companies,' she added.

The move comes after Reuters exclusively reported this week that DeepSeek is aiding China's military and intelligence operations. DeepSeek, which shook the technology world in January with claims that it had developed an AI model that rivaled those from U.S. firms such as ChatGPT creator OpenAI at much lower cost, says it stores various types of personal data, such as requests to the AI or uploaded files, on computers in China.