Google rolls out ‘Scheduled Actions' on Gemini: 4 everyday tasks you can now automate

Indian Express23-06-2025
At I/O 2025, Google introduced several new Gemini features. One of the most useful among them could be Scheduled Actions. The feature lets you tell Gemini to run a prompt at a specific time in the future or to repeat it regularly. It may seem like a minor change, but Scheduled Actions opens up several new ways to interact with the AI chatbot. For instance, you can ask Gemini to do a task later, and it will remember and carry it out for you. You can even turn an old chat into a scheduled task.
Here's a look at how exactly Scheduled Actions works and in what ways you can use the feature.
The scheduling feature mostly works well, but sometimes Gemini can get confused and skip doing a future task. A simple follow-up message usually fixes the issue.
Here are some limitations to accessing Scheduled Actions:
–Subscription needed: This feature is only for paid users. You need a Google AI Pro or Google AI Ultra subscription if you want to access Scheduled Actions. These subscription packages are only available in the US currently.
–Only 10 actions allowed: You can only schedule up to 10 tasks at a time, including one-time and repeating actions.
–Location can't be updated: You can set actions based on your location, like 'Find a coffee shop near me,' but it will always use the location from where you first created the task. It won't change if you move to a new location.
After scheduling an action, you can view it by tapping your profile picture in the Gemini app, going to Settings and selecting 'Scheduled Actions'. From there, you can pause or delete tasks, but you cannot edit them.
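The constraints described above (a hard cap of 10 actions, pause/delete-only management, and a location pinned when the task is created) can be pictured as a simple data model. The sketch below is a hypothetical illustration in Python, not Google's actual implementation; every class and method name here is invented.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

MAX_ACTIONS = 10  # Gemini's current cap on scheduled actions (one-time + repeating)

@dataclass
class ScheduledAction:
    prompt: str
    run_at: datetime
    repeat: Optional[str] = None    # e.g. "daily", "weekly", or None for one-time
    location: Optional[str] = None  # pinned at creation time; never updated afterwards
    paused: bool = False

class ActionScheduler:
    """Hypothetical manager mirroring the article's limits: 10 tasks max,
    and tasks can only be paused or deleted, not edited."""

    def __init__(self):
        self.actions: list[ScheduledAction] = []

    def schedule(self, prompt, run_at, repeat=None, location=None):
        # Both one-time and repeating actions count toward the cap.
        if len(self.actions) >= MAX_ACTIONS:
            raise RuntimeError("Limit reached: only 10 scheduled actions allowed")
        action = ScheduledAction(prompt, run_at, repeat, location)
        self.actions.append(action)
        return action

    def pause(self, action: ScheduledAction):
        action.paused = True

    def delete(self, action: ScheduledAction):
        self.actions.remove(action)
```

For example, scheduling an eleventh task in this model would raise an error, just as the app refuses an eleventh scheduled action.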
At first, the idea of asking AI to summarise emails might seem unnecessary. But since you only need to ask once, it saves time every day afterwards. For example, you can tell Gemini, 'Give me a summary of my unread emails every morning', and it will send you daily updates. You can further customise it by asking Gemini to highlight emails from your boss or to skip spam and newsletters.
It is important to note that Gemini can make mistakes, like any other AI tool. But used this way, Scheduled Actions gives you a quick daily overview of your inbox and saves time.
With Workspace connected, you can ask Gemini to list all your calendar events for the week. Since it can also use Google Maps, you can ask questions like how far your doctor's appointment is from home. You can also ask for specific details or formats. For example, if you have two appointments in different areas, Gemini can add up the travel time and give you the total driving time.
Sometimes you want information that isn't available yet. For example, if you want to know who won at the Oscars, you can ask Gemini now and schedule it to give you the answer once the event is done. This is even more useful for complex searches. You can also ask Gemini for specific things, like what reviewers think about the gameplay or plot of a video game.
There are more ambitious ways to use this feature, but they are not possible yet. In one demo, Gemini was asked to find new apartment listings each week and send a summary to the user.
That kind of task needs more independence than Gemini can handle right now, but it shows how useful scheduled actions could be in the future. For now, Gemini can do simple web searches, check your emails and calendar, and help with some detailed planning.
(This article has been curated by Disha Gupta, who is an intern with The Indian Express)

Related Articles

Will the EU delay enforcing its AI Act?

Mint

Parts of the EU's AI Act are due to come into force on August 2. Publication of a key guidance document has been delayed. Some companies and politicians are calling for a delay.

STOCKHOLM, July 3 (Reuters) - With less than a month to go before parts of the European Union's AI Act come into force, companies are calling for a pause in the provisions and getting support from some politicians. Groups representing big U.S. tech companies such as Google owner Alphabet and Facebook owner Meta, and European companies such as Mistral and ASML, have urged the European Commission to delay the AI Act by years.

WHAT IS THE AUGUST 2 DEADLINE?

Under the landmark act, which was passed a year earlier after intense debate between EU countries, provisions come into effect in a staggered manner over several years. Some important provisions, including rules for general purpose AI (GPAI) models, are due to apply on August 2. GPAI, which includes foundation models like those made by Google, Mistral and OpenAI, will be subject to transparency requirements such as drawing up technical documentation, complying with EU copyright law and providing detailed summaries about the content used for algorithm training. The companies will also need to test for bias, toxicity and robustness before launching. AI models classed as posing a systemic risk and high-impact GPAI will have to conduct model evaluations, assess and mitigate risks, conduct adversarial testing, report serious incidents to the European Commission and provide information on their energy efficiency.

WHY DO COMPANIES WANT A PAUSE?

For AI companies, enforcement of the act means additional costs for compliance, and for those that make AI models, the requirements are tougher. But companies are also unsure how to comply with the rules, as there are no guidelines yet. The AI Code of Practice, a guidance document to help AI developers comply with the act, missed its publication date of May 2.
"To address the uncertainty this situation is creating, we urge the Commission to propose a two-year 'clock-stop' on the AI Act before key obligations enter into force," said an open letter published on Thursday by a group of 45 European companies. It also called for simplification of the new rules. Another concern is that the act may stifle innovation, particularly in Europe where companies have smaller compliance teams than their U.S. counterparts. The European Commission has not yet commented on whether it will postpone the enforcement of the new rules in August. However, EU tech chief Henna Virkkunen promised on Wednesday to publish the AI Code of Practice before next month. Some political leaders, such as Swedish Prime Minister Ulf Kristersson, have also called the AI rules "confusing" and asked the EU to pause the act. "A bold 'stop-the-clock' intervention is urgently needed to give AI developers and deployers legal certainty, as long as necessary standards remain unavailable or delayed," tech lobbying group CCIA Europe said.

Google home introduces new features, gives more control to family and friends

Mint

Managing a smart home in 2025 means juggling convenience, privacy, and control. With its latest update, Google has made that a little easier. The Google Home app now lets users share access to smart devices more flexibly, introducing new permission levels for friends, roommates, and even kids, all while keeping security in check.

Google Home app version 3.33 brings a game-changing update: the introduction of a new 'Member' role. Until now, sharing your smart home meant handing over near-total access. Not any more. With this update, you can invite others as Admins (full access) or Members (limited access). Members get precisely the permissions you choose, like:

–Activity access: View device logs or home history, such as Wi-Fi usage or camera footage.
–Settings access: Manage specific devices, automations, or preferences without control over the whole home.

This lets you share control without giving away the keys to everything. In a first, children under 13 (or the local minimum age) can be added as Members if they're part of your Google family group. Kids can do simple things like play music, adjust lights, or unlock doors (within limits). They can't remove devices or change critical settings. This is especially useful for working parents or families whose kids return home before the adults: convenient, but with tight guardrails.

Here's a breakdown of what each role can access:

Permission                       Admin   Member (Settings)   Member (Activity)
Add/remove people                Yes     No                  No
Delete home                      Yes     No                  No
Add/remove devices               Yes     No                  No
Manage automations/settings      Yes     Yes                 No
View device/home history         Yes     No                  Yes
Control devices (e.g., lights)   Yes     Yes                 Yes

Admins can assign both Settings and Activity access to a Member, or just one, depending on what's needed. This update brings real flexibility to smart homes: share light and music controls with trusted guests without giving them camera access, or let kids unlock the door after school without touching home security settings.
Roommates can fine-tune their automations without being able to remove your devices. The bottom line: you stay in control, but others can contribute meaningfully to the home experience.
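The role breakdown above is essentially a permission matrix, and a role check against it is a one-line set lookup. The following is a hypothetical Python sketch of that idea, not Google Home's actual access-control code; the role and action names are invented to mirror the table.

```python
# Permission matrix mirroring the role breakdown in the article.
# Keys are roles; values are the actions that role is allowed to perform.
PERMISSIONS = {
    "admin": {
        "manage_people", "delete_home", "manage_devices",
        "manage_settings", "view_history", "control_devices",
    },
    # A Member with Settings access can manage automations/settings
    # and control devices, but cannot see history or remove devices.
    "member_settings": {"manage_settings", "control_devices"},
    # A Member with Activity access can view history and control
    # devices, but cannot change settings.
    "member_activity": {"view_history", "control_devices"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role may perform the action."""
    return action in PERMISSIONS.get(role, set())

def member_permissions(settings: bool, activity: bool) -> set:
    """A Member can hold Settings access, Activity access, or both;
    every Member can control devices."""
    perms = {"control_devices"}
    if settings:
        perms |= PERMISSIONS["member_settings"]
    if activity:
        perms |= PERMISSIONS["member_activity"]
    return perms
```

In this model, a roommate granted only Settings access could adjust automations but the check `can("member_settings", "manage_devices")` would fail, matching the "can't remove your devices" guarantee described above.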

Going nuclear will be the only way to keep the lights on as AI guzzles ever more electricity

Mint

Nishant Sahdev

Artificial intelligence consumes energy in such bulk that its rise has thrown the world into an infrastructure emergency. Thankfully, nuclear power is not just viable; its risks have been on the decline. It's the only way out now. Nuclear energy is the only scalable source of clean electricity in existence that runs 24/7.

Recently, I was in a conversation with MIT researchers on artificial intelligence (AI) and nuclear energy. While discussing the subject, we saw a video clip of a data centre that looked like a giant fridge but buzzed like a jet engine. Inside, thousands of AI chips were training a new language model—one that could write poems, analyse genomes or simulate the weather on Mars.

What struck me wasn't the intelligence of this machine. It was the sheer energy it was devouring. The engineer said, 'This one building consumes as much power as a small town.' That's when the magnitude of the challenge hit me: if AI is our future, how on earth will we power it?

All that intelligence takes energy. A lot of it. More than most people realize. And as someone who's spent years studying the physics of energy systems, I believe we are about to hit a hard wall. To be blunt: AI is growing faster than our ability to power it. And unless we confront this, the very tools meant to build our future could destabilize our energy systems—or drag us backward on climate. One solution has been pinpointed by the AI industry: nuclear energy. Most people don't associate AI with power plants.
But every chatbot and image generator is backed by vast data centres full of servers, fans and GPUs running day and night. These machines don't sip power. They guzzle it. In 2022, data centres worldwide consumed around 460 terawatt-hours. But that's just the baseline. Goldman Sachs projects that by 2030, AI data centres will use 165% more electricity than they did in 2023.

And it's not just about scale. It's about reliability. AI workloads can't wait for the sun to shine or the wind to blow. They need round-the-clock electricity, without fluctuations or outages. That rules out intermittent renewables for a large share of the load—at least for now.

Can power grids handle it? The short answer: not without big changes. In the US, energy planners are already bracing for strain. States like Virginia and Georgia are seeing huge surges in electricity demand from tech campuses. One recent report estimated that by 2028, America will need 56 gigawatts of new power generation capacity just for data centres. That's equivalent to building 40 new large power plants in less than four years.

The irony? AI is often promoted as a solution to climate change. But without clean and scalable energy, its growth could have the opposite effect. For example, Google's carbon emissions rose 51% from 2019 to 2024 by its own assessment, largely on account of AI's appetite for power. This is an infrastructure emergency.

Enter nuclear energy—long seen as a relic of the Cold War or a post-Chernobyl nightmare. But in a world hungry for carbon-free baseload power, nuclear power is making a quiet comeback. Let's be clear: nuclear energy is the only scalable source of clean electricity in existence that runs 24/7. A single large reactor can power multiple data centres without emitting carbon or depending on weather conditions.
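The figures cited above hang together as simple back-of-envelope arithmetic. The snippet below only restates the article's own numbers; the interpretation of "165% more" as a 2.65x multiplier is my reading of the projection, not a figure from Goldman Sachs.

```python
# Back-of-envelope check of the figures cited in the article.
baseline_2022_twh = 460          # global data-centre consumption in 2022, in TWh

# "165% more electricity than in 2023" means 1 + 1.65 = 2.65x the 2023 level
# (assuming the projection is relative growth, which is my interpretation).
growth_factor = 1 + 1.65

# US build-out implied by the cited report: 56 GW of new capacity by 2028,
# framed as roughly 40 new large power plants.
new_capacity_gw = 56
plants = 40
gw_per_plant = new_capacity_gw / plants

print(growth_factor)   # 2.65
print(gw_per_plant)    # 1.4 GW per plant, in the range of a large reactor
```

The 1.4 GW-per-plant figure is consistent with the article's later point that a single large reactor can carry multiple data centres: modern large reactors are roughly in that gigawatt class.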
Tech companies are already acting. Microsoft signed a deal to reopen part of the Three Mile Island nuclear plant to power its AI operations. Google is investing in small modular reactors (SMRs), compact next-generation nuclear units that are designed to be safer, faster to build and considered ideal for campuses. They're early signs of a strategic shift: AI companies are realizing that if they want to build the future, they'll have to power it themselves.

As a physicist, I've always been fascinated by nuclear energy's elegance. A single uranium pellet—smaller than a fingertip—holds the same energy as a tonne of coal. The energy density is unmatched. But it's not just about big reactors anymore. The excitement stems from advanced reactors. SMRs can be built in factories, shipped by truck and installed near tech campuses or even remote towns. Molten salt reactors and micro-reactors promise even greater safety and efficiency, with lower waste. New materials and AI-assisted monitoring make this technology far safer than past generations. For the first time in decades, nuclear power is both viable and vital.

But let's talk about the risks. I'm not naïve. Nuclear still carries a stigma—and poses real challenges. Take cost and time: building or reviving reactors takes years and billions of dollars. Even Microsoft's project will face regulatory hurdles. Or waste: we still need better systems for storing radioactive materials over the long term. Or consider control: if tech giants start building private nuclear plants, will public utilities fall behind? Who gets priority during shortages? And of course, we must be vigilant about safety and non-proliferation. The last thing we want is a tech-driven nuclear revival that ignores the hard lessons of history. But here's the bigger risk: doing nothing. Letting power demand explode while we rely on fossil fuels to catch up would be a disaster.
We live in strange times. Our brightest engineers are teaching machines to think. But they still haven't solved how to power those machines sustainably. As a physicist, I believe we must act quickly—not just to make AI smarter, but to make its foundation stronger. Nuclear energy may not be perfect. But in the race to power our most powerful technology yet, it may be the smartest bet we've got. The AI revolution can't run on good intentions. It will run on electricity. But where will it come from?

The author is a theoretical physicist at the University of North Carolina at Chapel Hill, United States. He posts on X @NishantSahdev
