Google Nest WiFi Points and Router Bundle Hits All-Time Low, and It Wasn't Even a Prime Day Deal


Gizmodo, 3 days ago
If you're still using just a single router in your home, you might be subjecting yourself to spotty internet. There's nothing more frustrating than tech that doesn't work properly, so let's avoid it like the plague—no more webpages getting hung up while trying to load. Google Nest Wi-Fi is a system built around Wi-Fi-extending nodes, and right now you can score a three-pack for just $124—a 22% discount ahead of the new semester, down from $160.
See at Amazon
The pack comes with one Google Nest Wi-Fi router and two extension points. Together, all three units can cover an area of up to 5,400 square feet and handle connections from a ton of devices at once. So you can have a smart TV in every room, a few Amazon Echos or Google Assistants, a smart fridge, smart light bulbs in every fixture, a robot vacuum, game consoles, and more all connected, and run into no issues providing Wi-Fi and internet access to them all.
The Nest Wi-Fi system is scalable, allowing you to turn it into a mesh network by adding additional points to your home. They work together to blanket your home in strong, reliable internet, with each node adding another 1,600 square feet of coverage. That means you can eliminate issues in those weird corners of your home that struggle to maintain a good connection to your existing router. There's nothing worse than the Wi-Fi crapping the bed on you while you're either in bed scrolling or on the toilet. Turning your home into a mesh network will help keep you from sitting there, trapped, waiting for that TikTok your friend sent you to buffer. Never again.
Each point even has a built-in smart speaker with Google Assistant, so you can play music or manage your Wi-Fi network with just your voice.
The system works intelligently behind the scenes, seamlessly shifting which node you're connected to. This means you can take a video call on your laptop and walk from your bedroom to your living room to the basement without noticing a shift in connection quality or stability.
Setup is easy, and you can even prioritize certain devices for faster speeds. That's great if you're downloading a huge game file on an Xbox or PlayStation or playing an online multiplayer game.
The three-pack of the Google Nest Wi-Fi router with two Wi-Fi extension points is normally listed at $160. However, it's now seeing a steep 22% discount, which means you're only paying $124 for the bundle.
See at Amazon

Related Articles

Google Empowers Indian Developers to Lead the Global AI Wave

Entrepreneur

26 minutes ago



Google has announced a suite of new artificial intelligence initiatives tailored to empower Indian developers and startups. The announcements reflect Google's deep commitment to fostering innovation in India and accelerating the country's global leadership in AI development. Central to the update was the introduction of Google's latest AI advancements for India, including localised deployment of its high-performance Gemini 2.5 Flash model, a set of new agentic AI tools in Firebase Studio, and partnerships aimed at nurturing local AI talent and solutions. The efforts are part of Google's broader mission to support India's aspirations of becoming a global AI powerhouse.

Dr Manish Gupta, Senior Director for India and APAC at Google DeepMind, emphasised the critical role of Indian developers. "Indian developers are literally writing the next chapter of India's success story, using AI capabilities to build real-world applications that are reaching millions of businesses and people across India and the world," said Dr Gupta. "We remain steadfast in bringing them our industry-leading, cutting-edge capabilities to accelerate their journeys, and India's leadership in a global AI-led future."

The company also shared that, based on third-party evaluations, the Android and Google Play ecosystem generated an estimated INR 4 lakh crore in revenue for app publishers and the wider economy in India during 2024. This ecosystem supported the creation of around 35 lakh jobs through direct, indirect, and spillover effects.
In her remarks during a keynote conversation with Accel's Subrata Mitra, Preeti Lobana, Country Manager at Google India, highlighted the increasing momentum of India's digital innovation. "There's a buzz about the 'India Opportunity' driven by an ambitious national vision," she said. "India's developers are shaping how the world will use AI, and we're proud to stand with them."

Among the key developments announced was the localisation of Gemini 2.5 Flash for Indian developers, ensuring improved speed and stability for use in sectors requiring low-latency, high-performance AI—particularly in healthcare, finance, and public services. Google's collaboration with three India AI Mission-backed startups—Sarvam, Soket AI, and Gnani—is furthering the development of India's Make-in-India AI models using its Gemma family of open models. Sarvam's recent release, Sarvam-Translate, a model built on Gemma for long-form text translation, was highlighted as a successful outcome of this collaboration. Additionally, Google is working with BharatGen at IIT Bombay to create indigenous speech recognition and text-to-speech tools in Indic languages, with the aim of enhancing accessibility and representation for India's diverse linguistic communities.

Google also introduced new AI-powered features in Google Maps, including enhanced data on over 250 million places and India-specific pricing for the Maps Places UI Kit. These improvements are aimed at supporting developers working in India's expanding mobile commerce space, making it easier to integrate location-based features into their services. To further assist developers, the company announced new tools and capabilities in Firebase Studio, its cloud-based AI development workspace. Features such as optimized templates, collaborative workspaces, and backend integration are designed to help developers quickly build and launch full-stack AI applications at no initial cost.
Recognising the growing potential of India's gaming sector, Google launched the 'Google Play x Unity Game Developer Training' program. Developed in collaboration with Unity and the Game Developer Association of India, the initiative offers 500 Indian developers access to over 30 hours of specialised online training. It is currently being rolled out in partnership with the governments of Tamil Nadu and Andhra Pradesh, with plans for further expansion. Google is also hosting the Gen AI Exchange Hackathon, encouraging developers to translate their AI skills into practical innovations across industries.

The day also included a showcase by eight Indian startups: Sarvam, CoRover, InVideo, Glance, Dashverse, ToonSutra, Entri, and Nykaa, demonstrating impactful real-world applications built with Google's AI tools. The announcements underline Google's intent to strengthen India's position in the global AI landscape while empowering the local developer ecosystem with advanced tools and meaningful support.

Google's Phone app could make resuming on-hold calls easier (APK teardown)

Android Authority

26 minutes ago



TL;DR: Google is testing a new 'Unhold' shortcut in call notifications through its Phone app. The new button replaces the 'Mute' button whenever a user puts a call on hold. Although not yet live, this change would improve usability by allowing users to resume held calls more efficiently.

Google doesn't mess around much with the Google Phone app. That makes sense, as you don't want to disturb people's muscle memory for crucial tasks like calls. But every now and then, the company reassesses what users expect from the Phone app. Recently, Google began rolling out the Phone app's Material 3 Expressive redesign and new interfaces for the incoming call screen to beta users. We've now spotted Google working on a helpful button swap in the ongoing call notification, which will be useful for people who often put calls on hold.

You're reading an Authority Insights story on Android Authority. Discover Authority Insights for more exclusive reports, app teardowns, leaks, and in-depth tech coverage you won't find anywhere else. An APK teardown helps predict features that may arrive on a service in the future based on work-in-progress code. However, it is possible that such predicted features may not make it to a public release.

On Android phones that use the Google Phone app, you get a notification whenever you receive a call. This notification lets you accept or decline the call, and it turns into an ongoing call notification if you accept. The ongoing call notification gives users buttons to hang up, put the call on speaker, or mute the phone's microphone right from the notification itself, which is very handy if you switch out of the main call screen. The ongoing call notification doesn't give you a hold option, but users can still put the call on hold from the main call screen.
If you do so and then switch out of the main call screen, you don't get an option to unhold and resume the call until you switch back to the main call screen. Google Phone v184.0 beta includes code for a new Unhold button in the ongoing call notification. Using this button, users can unhold and resume calls straight from the notification without switching back to the main call screen. We managed to activate the feature to give you an early look (screenshots: current options during an ongoing call, current options when a call is put on hold, and upcoming options when a call is put on hold).

The current screenshots show the usual options we see during a call, which remain the same even when a call is put on hold. In the future, when you put a call on hold, you will see a new Unhold button that replaces the Mute button. Tapping it will unhold and resume the call. The button swap makes sense, since muting a call that is already on hold effectively does nothing, and a user is much more likely to want to unhold and resume in that situation. Curiously, as you may have noticed, there is no way to put the call on hold through the notification; you will still have to initiate that action from the main call screen. It would be nice if Google allowed users to choose between a mute button and a hold button in the ongoing call notification.

Note that this Unhold button is not currently live for users. We'll keep you updated when we learn more. Got a tip? Talk to us! Email our staff at news@ . You can stay anonymous or get credit for the info; it's your choice.

Users can be 'rude' to AI services to be more efficient & sustainable

Forbes

27 minutes ago



Do you speak AI? It's not a question we're used to yet, but it might be soon. At its lowest level, artificial intelligence obviously has a language in terms of the coding syntax, structure and software methodology used by the developers and data scientists who build it. It also has a language in terms of its data model, its employment of large and small language models and the data fabric it operates in. But AI also has a human language. Users who have experimented with ChatGPT, Anthropic's Claude, Google Gemini, Microsoft Copilot, Deepseek, Perplexity, Meta AI through WhatsApp, or one of the enterprise platform AI services such as Amazon Code Whisperer will know that there's a right way and a wrong way to ask for automation intelligence. Being quite specific about your requests and structuring the language in a prompt with more precise descriptive terms to direct an AI service and narrow its options is generally a way of getting a more accurate result. Then there's the politeness factor.

The Politics Of Politeness

Although some analysis of this space and a degree of research suggest a polite approach is best when interacting with an AI service (it might help us be better humans, after all), there is a wider argument that says politeness isn't actually required, as it takes up extra 'token' space… and that's not computationally efficient or good for the planet's datacenter carbon footprint. A token is a core unit of natural language text, or some component of an image, audio clip or video, depending on the 'modality' of the AI processing happening; while 'sullen' is one token, 'sullenness' would more likely be two tokens: 'sullen' and 'ness'. All those pleases, thank-yous and 'you're just awesome' interactions a user has with AI are not necessarily a good idea.
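To make the token-overhead idea concrete, here is a minimal sketch comparing a chatty prompt against a direct one. It uses a naive whitespace word count as a stand-in for a real tokenizer (actual BPE tokenizers split text differently, as the 'sullenness' example above shows), and both example prompts are our own illustrations, not from any vendor's guidance:

```python
# Rough illustration only: approximate "politeness overhead" by counting
# whitespace-separated words. A real LLM tokenizer would give different
# absolute numbers, but the relative inflation is the point.

POLITE = ("Hello! I hope you're doing well. Could you please summarize "
          "the attached report for me? Thank you so much, you're awesome!")
DIRECT = "Summarize the attached report."

def word_count(text: str) -> int:
    """Crude stand-in for a token count."""
    return len(text.split())

def overhead_pct(polite: str, direct: str) -> float:
    """Extra length of the polite prompt relative to the direct one."""
    p, d = word_count(polite), word_count(direct)
    return 100.0 * (p - d) / d

if __name__ == "__main__":
    print(f"polite: {word_count(POLITE)} words, direct: {word_count(DIRECT)} words")
    print(f"overhead: {overhead_pct(POLITE, DIRECT):.0f}%")
```

Deliberately chatty prompts can carry far more overhead than the 15-40% figure cited for everyday usage; the sketch simply shows that every non-functional word is measurable cost.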
So let's ask ChatGPT what to do…

Inference Complexity Scales With Length

Keen to voice an opinion on this subject is Aleš Wilk, cloud software and SEO specialist at Apify, a company known for its platform that allows developers to build, deploy and publish web scrapers, AI agents and automation tools. 'To understand this rising topic of conversation further, we need to start by realising that every token a user submits to an AI language model represents a unit that is measurable in computational cost,' said Wilk. 'These models rely on transformer architectures, where inference complexity scales with sequence length, particularly due to the quadratic nature of self-attention mechanisms. Using non-functional language like 'please' or 'thank you' feels like a natural level of conversational dialogue. But it can inflate prompt length by 15-40% without contributing to semantic precision or task relevance.'

Looking at this from a technical and efficiency point of view, this is a hidden cost. Wilk explains that on platforms such as GPT-4-turbo, where pricing and compute are token-based, verbosity in prompt design directly increases inference time, energy consumption and operational expenditure. He also notes that empirical analyses suggest 1,000 tokens on a state-of-the-art LLM can emit 0.5 to 4 grams of CO₂, depending on model size, optimization and deployment infrastructure. On a larger scale, across billions of daily prompts, unnecessary tokens can contribute to thousands of metric tons of additional emissions annually.

'This topic has become widely discussed, as it not only concerns cost, but also sustainability. Looking at GPU-intensive inference environments, longer prompts can drive up power draw, increase cooling requirements and reduce throughput efficiency. Why?
Because as AI moves into continuous pipelines, agent frameworks, RAG systems and embedded business operations, for example, the marginal ineffectiveness of prompt padding can aggregate into a big environmental impact,' underlined Wilk.

Streamlining User Inputs

An optimization specialist himself, Wilk offers a potential solution: developers and data scientists could approach prompt design the way they write performance code, removing redundancy, maximizing functional utility and streamlining user inputs. In the same way that we use linters and profilers (code improvement tools) for software, we need tools to clean and token-optimize prompts automatically. For now, Wilk says he would encourage users to be precise and minimal with their prompts. 'Saying 'please' and 'thank you' to AI might feel polite, but it's polite pollution in computational terms,' he stated.

Greg Osuri, founder of Akash, a company known for its decentralized compute marketplace, agrees that the environmental impact of AI is no longer just a peripheral concern; it is a central design challenge. He points to reports suggesting AI inference contributes more than 80% of total AI energy consumption. The industry has spent the last couple of years pushing for bigger models, better performance and faster deployment, but AI inference, the process by which a trained LLM draws conclusions from brand-new data, might be doing most of the damage right now.

Language Models vs Google Search

'Each user query on LLM models consumes approximately 10 to 15 times more energy than a standard Google search. Behind every response lies an extremely energy-intensive infrastructure.
This challenge isn't just about energy usage in abstract terms; we're talking about a whole supply chain of emissions that begins with a casual prompt and ends in megawatts of infrastructure demand and millions of gallons of water being consumed,' detailed Osuri, speaking to a closed press gathering this month. He agrees that a lot is being said about polite prompts and whether it is more energy-efficient to be rude (or at least direct and to the point) to AI; however, he says these conversations miss the broader point.

'Most of the AI architecture today is inefficient by design. As someone who has spent years developing software and supporting infrastructure, it's surprising how little scrutiny we apply to prompt efficiency. In traditional engineering, we optimize everything: strip any redundancies, track performance and reduce waste wherever possible. The real question is whether the current centralized architecture is fit for scale in a world that is increasingly carbon-constrained. Unless we start designing for energy as a critical constraint, we will continue training models and further accelerating our own limitations,' he concluded.

This discussion will inevitably come around to whether AI itself has managed to become sentient. When that happens, AI will have enough self-awareness and consciousness to have conscious subjective feelings, and so be able to make an executive decision on how to manage the politeness vs. processing power balance. Until then, we need to remember that we are basically just using language models to generate content, be it code, words or images.

If I Had An AI Hammer

'Being polite or rude is a waste of precious context space. What users are trying to accomplish is to get the AI to generate the content they want. The more concise and direct we are with our prompts, the better the output will be,' explained Brett Smith, distinguished software engineer and platform architect at SAS.
'We don't use formalities when we write code, so why should we use formalities when we write prompts for AI? If we look at LLMs as a tool like a hammer, we don't say 'please' when we hit a nail with a hammer. We just do it. The same goes for AI prompts. You are wasting precious context space and getting no benefits from being polite or rude.'

The problem is, humans like empathy. This means that when an AI service answers in a chatty and familiar manner that is purpose-built to imitate human conversation, humans are more likely to want to be friendly in response. The general rule is: the more concise and direct users are with their prompts, the better the output will be.

'The AI is not sentient… and it does not need to be treated as such,' asserted Smith. 'Stop burning up compute cycles, wasting datacenter electricity and heating up the planet with your polite prompts. I am not saying we 'zero-shot' every prompt [a term for asking an AI LLM a question or giving it a task without providing any context or examples], but users can be concise, direct and maybe consider reading some prompt engineering guides. Use the context space for what it is meant for: generating content. From a software engineering perspective, being polite is a waste of resources. Eventually, you run out of context and the model will forget you ever told it 'please' and 'thank you' anyway. However, you may benefit as a person in the long term from being more polite when you talk to your LLM, as it may lead to you being nicer in personal interactions with humans.'

SAS's Smith reminds us that AI tokens are not free. He also envisages what he calls a 'hilarious hypothetical circumstance' where our please-and-thank-you prompts get adopted by the software itself and agents end up adding in niceties when talking agent-to-agent.
The whole thing ends up spinning out of control, increasing the rate at which the system wastes tokens, context space and compute power as the agent-to-agent communication grows. Thankfully, we can program against that reality, mostly.

War On Waste

Mustafa Kabul says that when it comes to managing enterprise supply chains at a wider business level (not just in terms of software and data), prudent businesses have spent decades eliminating waste from every process: excess inventory, redundant touchpoints, unnecessary steps. 'The same operational discipline must apply to our AI interactions,' said Kabul, in his capacity as SVP of data science, machine learning and AI at decision intelligence company Aera Technology. 'When you're orchestrating agent teams across demand planning, procurement and logistics decisions at enterprise scale, every inefficient prompt multiplies exponentially. Inside operations we've managed, we have seen how agent teams coordinate complex multi-step workflows: one agent monitoring inventory levels, another forecasting demand, a third generating replenishment recommendations. In these orchestrated operations, a single 'please' in a prompt template used across thousands of daily decisions doesn't just waste computational resources, it introduces latency that can cascade through the entire decision chain,' clarified Kabul.

He says that just as we (as a collective business-technology community) have learned that lean operations require precision, not politeness, effective AI agent coordination demands the same 'ruthless efficiency' today. Kabul insists that the companies that treat AI interactions with the same operational rigor they apply to their manufacturing processes will have a 'decisive advantage' in both speed and sustainability.

Would You Mind, Awfully?
Although the UK may be known for its unerring politeness, even the British will perhaps need to learn to drop the airs and graces they would normally consider a requisite part of civility and social intercourse. The chatbot doesn't mind if you don't say please… and, if your first AI response isn't what you wanted, don't be ever so English and think you need to say sorry either.
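The emissions figures quoted earlier (0.5 to 4 grams of CO₂ per 1,000 tokens, adding up to thousands of metric tons across billions of daily prompts) can be sanity-checked with a back-of-envelope sketch. Note that only the grams-per-1,000-tokens range comes from the article; the prompt volume and padding size below are our own illustrative assumptions:

```python
# Back-of-envelope check of the emissions math quoted in the article.
# The per-token emission range is from the article; the traffic and
# padding figures are assumed for illustration.

G_CO2_PER_1K_TOKENS = 2.0          # midpoint of the quoted 0.5-4 g range
PROMPTS_PER_DAY = 1_000_000_000    # assumed: one billion prompts daily
PADDING_TOKENS_PER_PROMPT = 50     # assumed politeness padding per prompt

def annual_padding_emissions_tonnes(
    prompts_per_day: int = PROMPTS_PER_DAY,
    padding_tokens: int = PADDING_TOKENS_PER_PROMPT,
    g_per_1k_tokens: float = G_CO2_PER_1K_TOKENS,
) -> float:
    """Metric tons of CO2 per year attributable to prompt padding."""
    tokens_per_year = prompts_per_day * padding_tokens * 365
    grams = tokens_per_year / 1_000 * g_per_1k_tokens
    return grams / 1_000_000  # grams -> metric tons

if __name__ == "__main__":
    print(f"{annual_padding_emissions_tonnes():,.0f} metric tons of CO2/year")
```

Under these assumptions the waste lands in the tens of thousands of metric tons per year, which is consistent with the "thousands of metric tons" order of magnitude cited above; the exact figure moves linearly with each assumed input.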
