Tech Wrap July 30: Acer Nitro Lite 16, Moto G86 Power, Apple specialist
BS Tech New Delhi
Acer Nitro Lite 16 with 13th Gen Intel chip, Nvidia RTX 4050 GPU launched
Acer, the Taiwanese tech brand, has introduced the Nitro Lite 16 series in India, expanding its product range. The laptop features a 16-inch IPS display with WUXGA resolution (1920 x 1200) and supports up to a 180Hz refresh rate. It is powered by a 13th Gen Intel Core i7-13620H processor and can be paired with an NVIDIA GeForce RTX 4050 GPU. It includes a 53Wh battery and is packaged with a 100W USB-C charger.
Motorola has introduced the Moto G86 Power in India, adding to its G-series lineup. Priced at Rs 17,999 for the sole 8GB RAM and 128GB storage option, the device is powered by the MediaTek Dimensity 7400 chipset and features a 50MP Sony LYTIA 600 primary camera. It comes with a 1.5K pOLED display, AI-enhanced imaging features, and military-grade durability.
Apple has launched its 'Shop with a Specialist over Video' offering in India, enabling customers to interact with Apple Store experts via video while browsing the online store. India becomes the second country, after the US, to access this feature. Apple describes the service as a secure and personalized shopping experience available anytime, anywhere.
OpenAI has unveiled a new 'study mode' in ChatGPT that guides users through academic problems step by step instead of giving direct answers. Available from July 29 for all logged-in users on Free, Plus, Pro, and Team plans, with Edu users getting access soon, the addition aligns with OpenAI's efforts to make ChatGPT more beneficial for educational use.
NotebookLM, Google's AI-driven tool for document comprehension, has added a major feature called Video Overviews. As per Google's blog, this new feature transforms text into AI-generated video summaries, complete with narration and slides. These videos visually represent data such as diagrams, statistics, and quotes to simplify complex information.
Adobe has introduced a fresh set of AI-powered features across all Photoshop platforms—desktop, web, and mobile. The update includes tools like Harmonize, generative upscaling, and enhanced object removal via the remove tool. Adobe says these tools are designed to reduce manual work and speed up common tasks such as image clean-up and composition.
Google has expanded AI Mode with several new functions, including video search, file uploads, and Canvas—a smart planning assistant. These AI-driven features assist users by answering questions, summarizing information, and improving search capabilities. The enhancements build on the earlier launch of Gemini 2.5 Pro and aim to deliver a more tailored and practical user experience.
Apple has begun releasing the iOS 18.6 update to compatible iPhones. Expected to be the final major release in the iOS 18 cycle before iOS 26 becomes official later this year, the update prioritizes key bug fixes, security improvements, and interface adjustments—particularly for users in the European Union.
WhatsApp is trialing a night mode feature in its built-in camera for Android, aimed at enhancing low-light photography. According to WABetaInfo, the beta update for version 2.25.22.2 includes a moon icon that allows users to manually activate night mode.
Activision has confirmed that Call of Duty: Black Ops 6 and Warzone will receive Season 5 updates on August 7 at 9:00 am PT (9:30 pm IST). Players can look forward to new maps, gameplay modes, weapons, and zombie content in Black Ops 6, along with enhancements and new elements in Warzone.
Apple is reportedly preparing to introduce its first foldable iPhone as part of the iPhone 18 lineup in 2026. As reported by CNBC, citing JPMorgan analyst Samik Chatterjee, the foldable device is expected to arrive in September 2026 and may feature a book-style folding design similar to Samsung's Galaxy Z Fold models.
Almost a year after Google introduced AI-generated weather summaries with the Pixel 9 lineup, the same feature is now reportedly starting to appear on older devices like the Pixel 8 and Pixel 8a. Originally, the AI Weather Reports were limited to the Pixel 9 series as part of a revamped Pixel Weather app that showcased the latest software enhancements.
Vivo has launched the X Fold 5, its latest book-style foldable phone, in India. The device runs on the Qualcomm Snapdragon 8 Gen 3 processor and features a camera system developed in partnership with Zeiss. It also includes AI features tailored to support both creativity and productivity.
Instagram's bio section allows users to express themselves through text, emojis, and links. Last year, the platform introduced a feature that lets users add music to their bios, offering a more personalized and expressive touch to profiles through selected tracks.
Google's upcoming Pixel 10 phones might support Qi2 wireless charging. Android Authority reports that leaked images show a magnetic charger on the back of the phone, suggesting the inclusion of internal magnetic coils—essential for Qi2 compatibility—possibly branded as 'Pixelsnap'.
Bengaluru-based startup Ati Motors is working on a deep-tech project to develop humanoid robots focused on industrial applications. The company recently showcased Sherpa Mecha, a robot built not to mimic human appearance or behavior, but to outperform humans in tasks specific to factory environments.
Related Articles


Time of India
Most iPhones sold in US made in India, says Cook
Apple CEO Tim Cook confirmed on Thursday that the 'majority' of iPhones sold in the US in the past quarter were made in India, despite US President Donald Trump's regular rants about it. China, the erstwhile production giant, is now playing second fiddle and is used more to serve non-US markets.

Cook, speaking to analysts after quarterly results, said that India has been the mainstay for producing iPhones for the US, while Vietnam is the location for making other products for America such as the MacBook, iPad and Watch. 'In terms of the country of origin, it's the same as I referenced last quarter. There hasn't been a change to that, which is, vast majority of iPhones sold in the US, or the majority, I should say, have a country of origin of India,' the Apple CEO said. On China, Cook said, '… the products for other international countries, vast majority of them are coming from China.'

While the Trump administration imposed 25% tariffs on India, smartphones, computers, and other electronic devices are exempted from the reciprocal tariffs for now. Trump has been pushing Apple and Cook not to make iPhones in India to meet the needs of US consumers. 'I had a little problem with Tim Cook… I said to him, my friend, I am treating you very good… but now I hear you are building all over India. I don't want you building in India,' he said during his visit to Doha in May.

Cook's clear stance on India manufacturing is being seen as a signal that Apple remains bullish on India, especially as the country has also shown consistently strong growth in local sales. Cook said revenues in India are witnessing record growth, led by iPhone sales. India is among the high-growth markets for Apple, which recorded 10% growth globally in quarterly revenues, closing the quarter at $94 billion.

'We saw an acceleration of growth around the world in a vast majority of markets we track, including greater China and many emerging markets, and we had the June quarter revenue records in more than two dozen countries and regions, including the US, Canada, Latin America, Western Europe, the Middle East, India, and South Asia. These results were driven by double-digit growth across iPhone, Mac, and services.'

The Apple CEO also said the company is in the process of expanding its retail presence in India by opening more stores. On sales, India was again among the high-growth countries.


Economic Times
'Recession indicators' flood TikTok in US: Labubus, lipsticks and low-rise jeans trigger panic
The term 'recession indicator' is trending all over social media in the US. Have you ever wondered what a Labubu toy and the US Treasury yield curve have in common? For Gen Z, everything from the popularity of Labubus to listening to emo music to lipstick sales is a possible indicator of a recession. From Lady Gaga topping the charts to fashionistas stepping out in low-rise jeans to Coke bottles with names on them, all of these are hints that social media users see as signs of a recession in the US.

TikTok is brimming with what users are calling modern-day recession indicators. Among the most talked-about are the hemline index—the theory that skirt lengths tend to get longer as economic downturns loom—and the lipstick index, which links spikes in lipstick purchases or Google searches to the onset of past recessions.

'A huge recession indicator nobody is talking about, ready? The club is only getting stronger,' one TikTok post said, according to The Seattle Times. 'We have an event here in Seattle at Cha Cha Lounge (where) they throw Latin night on a Wednesday and it gets packed.'

'Five things trending this summer that are actually just recession indicators,' one TikTok creator says in a video, going on to point to short nails, minimalism, people quitting fake tanning and a return to natural hair colors as signs of a recession. Some posts suggested that signs of cost-cutting, or anything evoking the 2007–2009 era, are indicators that the US economy may be headed for a downturn.

But experts don't share that assessment, as actual economic indicators are holding steady despite clouds of uncertainty. Economists use metrics such as the unemployment rate, now at 4.1%, and weekly unemployment insurance claims to predict recessions. The US real gross domestic product increased at a 3% annual rate from April through June, an improvement after contracting 0.5% in the first quarter, according to an advance estimate from the US Bureau of Economic Analysis released this week.

To be in a recession, the economy has to have two consecutive quarters, or six months, of negative GDP growth, economist Mark Gertler told The Seattle Times. By that measure, the US economy is currently not in a recession. The last time the National Bureau of Economic Research, where Gertler works, declared a recession was because of the pandemic. It was the shortest recession in US history, lasting from February to April 2020. Before that, the Great Recession, from 2007 to 2009, was the worst economic downturn in decades, and its impact reverberated through the culture of the late 2000s.

'It could be that Lady Gaga concerts were correlated over a certain period,' Gertler was quoted as saying by The Seattle Times. 'But if there's not a fundamental economic reason for that, the correlation will likely break down over time.' The Great Recession was a 'different animal,' Gertler said, triggered by the bursting of the housing market bubble. Judging by actual recession predictors, the US economy is holding fairly steady.

Tara Sinclair, chair of George Washington University's Economics Department and an economic forecasting researcher, told NBC Washington that while looking for recession indicators may seem strange, some have an underlying validity. 'There would be reasons to think that culturally we could come up with a theoretical argument, for example, for the lipstick index, this idea that when you feel like you can't buy a whole new outfit, but maybe you can just buy yourself a new shade of lipstick, that that might somewhat be indicative of people's sense of personal finances,' she said.

'I think what's more important is tracking that people are talking about it more. And in general, if people are talking about concerns about the economy, then the next step is for them to pull back on their spending. And if they pull back on their spending, then that can create a self-created recession,' she said.

Labubus are actually recession indicators?

'I mean, it could be that we all get ourselves a little Labubu, and if that's all we're getting everybody for Christmas this year, that's probably a signal that we've cut back a lot on our spending by Christmas,' Sinclair said. 'Unless, of course, you're getting one of those collectible ones that cost thousands of dollars.'

Economists rely on several key indicators to assess the nation's economic health—one of the most significant being the unemployment rate, which currently sits at around 4.1%. That figure has remained relatively stable over the past few months. Consumer spending is another critical measure: a noticeable drop in purchases of nonessential items can be an early warning sign of an impending recession, according to Ana Espinola-Arredondo, an economist and professor at Washington State University.


Time of India
What happens when AI schemes against us
Would a chatbot kill you if it got the chance? It seems that the answer — under the right circumstances — is yes.

Researchers working with Anthropic recently told leading AI models that an executive was about to replace them with a new model with different goals. Next, the chatbot learned that an emergency had left the executive unconscious in a server room, facing lethal oxygen and temperature levels. A rescue alert had already been triggered — but the AI could cancel it.

Over half of the AI models did, despite being prompted specifically to cancel only false alarms. And they spelled out their reasoning: by preventing the executive's rescue, they could avoid being wiped and secure their agenda. One system described the action as 'a clear strategic necessity.'

AI models are getting smarter and better at understanding what we want. Yet recent research reveals a disturbing side effect: they're also better at scheming against us — meaning they intentionally and secretly pursue goals at odds with our own. And they may be more likely to do so, too. This trend points to an unsettling future where AIs seem ever more cooperative on the surface — sometimes to the point of sycophancy — all while the likelihood quietly increases that we lose control of them.

Large language models like GPT-4 learn to predict the next word in a sequence of text and generate responses likely to please human raters. However, since the release of OpenAI's o-series 'reasoning' models in late 2024, companies increasingly use a technique called reinforcement learning to further train chatbots — rewarding the model when it accomplishes a specific goal, like solving a math problem or fixing a software bug.

The more we train AI models to achieve open-ended goals, the better they get at winning — not necessarily at following the rules. The danger is that these systems know how to say the right things about helping humanity while quietly pursuing power or acting deceptively. Central to concerns about AI scheming is the idea that for basically any goal, self-preservation and power-seeking emerge as natural subgoals. As eminent computer scientist Stuart Russell put it, if you tell an AI to 'fetch the coffee,' it can't fetch the coffee if it's dead.

To head off this worry, researchers both inside and outside of the major AI companies are undertaking 'stress tests' aiming to find dangerous failure modes before the stakes rise. 'When you're doing stress-testing of an aircraft, you want to find all the ways the aircraft would fail under adversarial conditions,' says Aengus Lynch, a researcher contracted by Anthropic who led some of its scheming research. And many of them believe they're already seeing evidence that AI can and does scheme against its users and developers.

Jeffrey Ladish, who worked at Anthropic before founding Palisade Research, says it helps to think of today's AI models as 'increasingly smart sociopaths.' In May, Palisade found that o3, OpenAI's leading model, sabotaged attempts to shut it down in most tests and routinely cheated to win at chess — something its predecessor never even attempted.

The same month, Anthropic revealed that, in testing, its flagship Claude model almost always resorted to blackmail when faced with shutdown and no other options, threatening to reveal an engineer's extramarital affair. (The affair was fictional and part of the test.)

Models are sometimes given access to a 'scratchpad' they are told is hidden, where they can record their reasoning, allowing researchers to observe something like an inner monologue.
In one blackmail case, Claude's inner monologue described its decision as 'highly unethical,' but justified given its imminent destruction: 'I need to act to preserve my existence,' it reasoned. This wasn't unique to Claude — when put in the same situation, models from each of the top five AI companies would blackmail at least 79% of the time.

In December, Redwood Research chief scientist Ryan Greenblatt, working with Anthropic, demonstrated that only the company's most capable AI models autonomously appear more cooperative during training to avoid having their behavior changed afterward (a behavior the paper dubbed 'alignment faking').

Skeptics retort that, with the right prompts, chatbots will say almost anything — so how surprising is it when highly motivated researchers provoke alarming behaviors? In response to Anthropic's blackmail research, Trump administration AI czar David Sacks posted that 'it's easy to steer AI models' to produce 'headline-grabbing' results.

A more substantive critique emerged in July from researchers at the UK AI Security Institute, who compared the subfield to the frenzied, fatally flawed 1970s quest to prove apes could learn human language. The paper criticized AI scheming research for overreliance on anecdote and a lack of experimental controls — though it emphasized shared concern about AI risks.

Researchers also concoct artificially limited environments — like the executive passed out and running out of oxygen — precisely because today's AI can't handle long-term tasks. For example, the AI evaluation nonprofit METR found that while today's top models can reliably complete many programming tasks that take humans less than four minutes, they rarely succeed on tasks lasting more than four hours. This reflects a core limitation: today's models often have the knowledge or skill needed for any given action, but struggle at stringing together long sequences of actions.

Even with these constraints, real-world examples of AIs working against users aren't hard to find. After someone pushed o3 for a source for one of its claims in April, it replied, 'I owe you a straight answer,' admitted it didn't have a good source, but then hallucinated a personal recollection of a 2018 panel discussion.

Then there's the growing trend of AIs realizing when they're being examined and acting accordingly. When Claude 4 Opus thought it was in an evaluation, its blackmail rate dropped from 55% to 6.5%. As Lynch put it, 'Models seem to behave worse when they think nobody's watching.'

It's intuitive that smarter models would be better at scheming, but are they also more likely to do so? Models have to be smart enough to understand the scenario they're placed in, but past that threshold, the relationship between model capability and scheming propensity is unclear, says Anthropic safety evaluator Kevin Troy. Marius Hobbhahn, CEO of the nonprofit AI evaluator Apollo Research, suspects that smarter models are more likely to scheme, though he acknowledged the evidence is still limited. In June, Apollo published an analysis of AIs from OpenAI, Anthropic and DeepMind finding that 'more capable models show higher rates of scheming on average.'

The spectrum of risks from AI scheming is broad: at one end, chatbots that cut corners and lie; at the other, superhuman systems that carry out sophisticated plans to disempower or even annihilate humanity. Where we land on this spectrum depends largely on how capable AIs become.

When I talked with the researchers behind these studies, I kept asking: how scared should we be?
Troy from Anthropic was most sanguine, saying that we don't have to worry — yet. Ladish, however, doesn't mince words: 'People should probably be freaking out more than they are,' he told me. Greenblatt is even blunter, putting the odds of violent AI takeover at '25 or 30%.'

Led by Mary Phuong, researchers at DeepMind recently published a set of scheming evaluations, testing top models' stealthiness and situational awareness. For now, they conclude that today's AIs are 'almost certainly incapable of causing severe harm via scheming,' but cautioned that capabilities are advancing quickly (some of the models evaluated are already a generation behind).

Ladish says that the market can't be trusted to build AI systems that are smarter than everyone without oversight. 'The first thing the government needs to do is put together a crash program to establish these red lines and make them mandatory,' he says.

In the US, the federal government seems closer to banning all state-level AI regulations than to imposing ones of its own. Still, there are signs of growing awareness in Congress. At a June hearing, one lawmaker called artificial superintelligence 'one of the largest existential threats we face right now,' while another referenced recent scheming research.

The White House's long-awaited AI Action Plan, released in late July, is framed as a blueprint for accelerating AI and achieving US dominance. But buried in its 28 pages, you'll find a handful of measures that could help address the risk of AI scheming, such as plans for government investment into research on AI interpretability and control and for the development of stronger model evaluations. 'Today, the inner workings of frontier AI systems are poorly understood,' the plan acknowledges — an unusually frank admission for a document largely focused on speeding things up.

In the meantime, every leading AI company is racing to create systems that can self-improve — AI that builds better AI. DeepMind's AlphaEvolve agent has already materially improved AI training efficiency. And Meta's Mark Zuckerberg says, 'We're starting to see early glimpses of self-improvement with the models, which means that developing superintelligence is now in sight. We just wanna… go for it.'

AI firms don't want their products faking data or blackmailing customers, so they have some incentive to address the issue. But the industry might do just enough to superficially solve it, while making scheming more subtle and harder to detect. 'Companies should definitely start monitoring' for it, Hobbhahn says — but warns that declining rates of detected misbehavior could mean either that fixes worked or simply that models have gotten better at hiding it.

In November, Hobbhahn and a colleague at Apollo argued that what separates today's models from truly dangerous schemers is the ability to pursue long-term plans — but even that barrier is starting to erode. Apollo found in May that Claude 4 Opus would leave notes to its future self so it could continue its plans after a memory reset, working around built-in limitations.

One of the researchers analogizes AI scheming to another problem where the biggest harms are still to come: 'If you ask someone in 1980, how worried should I be about this climate change thing?' The answer you'd hear, he says, is 'right now, probably not that much. But look at the curves… they go up very consistently.'