
Google boosts education with Gemini 2.5 & LearnLM updates
Ben Gomes, Chief Technologist for Learning & Sustainability at Google, said: "Making knowledge accessible to everyone has always been our highest priority, which is why people turn to our products each day to help them learn — for school, work or life. With AI, we can do this at a speed and scale never before possible, and make the process of learning more active, engaging and effective. By building tools that enable you to keep pace with your own curiosity, in formats that match your goals and preferences, we hope to help everyone in the world learn anything in the world."
Gomes highlighted the importance of advanced models in supporting learning. "This fundamentally starts with really capable models. We can make models even better when we refine them for specific uses. That's why last year, we introduced LearnLM: our family of models and capabilities fine-tuned for learning. For years we've been working with education experts to research, measure and improve on building AI systems that support effective learning practices."
The company stated that the latest update marks the integration of LearnLM into Gemini 2.5, which it describes as the "world's leading model for learning." Google reported that Gemini 2.5 Pro outperformed competitors across categories of learning science principles, and that educators and pedagogy experts preferred it for supporting users' learning goals and adhering to key principles of good pedagogy across a range of scenarios.
According to Google, the infusion of LearnLM enables Gemini to "go beyond just giving you the answer" by focusing on explanation and reasoning. The company stated: "By applying LearnLM capabilities, and directly incorporating feedback from experts across the industry, Gemini adheres to the principles of learning science to go beyond just giving you the answer. Instead, Gemini can explain how you get there, helping you untangle even the most complex questions and topics so you can learn more effectively. Our new prompting guide provides sample instructions to see this in action."
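As a rough illustration of this kind of pedagogy-first prompting (not Google's published guide), the sketch below uses the google-generativeai Python SDK to set a tutoring-style system instruction; the model name, instruction text, and sample question are placeholder assumptions.

```python
# Minimal sketch of pedagogy-style prompting, assuming the google-generativeai
# Python SDK. Model name, instruction text and question are illustrative only.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder key

tutor = genai.GenerativeModel(
    model_name="gemini-1.5-pro",  # placeholder; use whichever model you have access to
    system_instruction=(
        "Act as a patient tutor. Do not give the final answer immediately. "
        "Break the problem into steps, ask one guiding question at a time, "
        "and check the learner's understanding before moving on."
    ),
)

response = tutor.generate_content("How do I solve 2x + 3 = 11?")
print(response.text)  # expect a guided walkthrough rather than just "x = 4"
```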
Google pointed to research showing that multimodal information—presented via audio, video, images, and text—improves learning outcomes. With Gemini's multimodal functionality, users will have greater control over how information is consumed, tailored to personal needs and preferences.
The company highlighted updates to NotebookLM, a tool designed for studying and research using a custom set of sources. Features such as Audio Overviews, now available in more than 80 languages, and Mind Maps provide different pathways for content exploration. Google indicated that more flexibility is being introduced to Audio Overviews, enabling users to select summary lengths. This feature will roll out in English first and expand to additional languages.
Additionally, Google is developing Video Overviews, which will enable users to transform notebook content into educational videos. The company noted that user feedback indicated a desire for more visual content during overviews, prompting this forthcoming enhancement.
For Google Search, the company stated that users increasingly rely on it for learning, with AI Mode now providing advanced reasoning and multimodality for deeper exploration, including links to web sources and follow-up question functionality. Google shared plans to introduce Deep Search to AI Mode and announced that AI Mode is now available to all users in the United States.
Google is also implementing a custom version of Gemini 2.5 within both AI Mode and AI Overviews in the United States to deliver more detailed responses supported by web links. The company announced: "Learning also takes place in the context of the world around us. So we're taking a big step in multimodality by bringing Project Astra's live capabilities into AI Mode with Search Live. Beyond asking questions with text and images, soon you'll be able to show Search what you see and ask questions about things in the world around you in real-time. Search will provide helpful information with links to explore along the way as you go back-and-forth. Search Live is coming to Labs this summer, perfect for heading back-to-school."
The Gemini app will now provide a free AI Pro upgrade to students in the United States, Brazil, Indonesia, Japan, and the United Kingdom who sign up by 30 June 2025. Eligible students receive 15 months of free access to Google AI Pro, including 2 TB of storage and access to NotebookLM, to assist with writing, studying, and homework.
Google also announced global availability, starting immediately, for custom quiz creation through Gemini for students aged 18 and over. This feature allows students to generate interactive practice quizzes on any topic or uploaded materials, with the system providing hints, explanations for correct and incorrect answers, and summarising areas of strength and those needing further study.
Ongoing experiments include Sparkify, an upcoming feature to create short animated videos from user questions or ideas using Gemini and Veo models, and a conversational tutoring prototype in Project Astra. This tutor is designed to guide students through homework, offering step-by-step problem-solving assistance and generating explanatory diagrams when necessary.
Google is updating its Learn About project as well. The company stated: "We're also bringing improvements based on your feedback to Learn About, an experimental Labs project where conversational AI meets your curiosity. Through LearnLM capabilities now in Gemini models, we can deliver even more nuanced explanations and relevant connections. We're making this experience available for more learners (including teens), adding session history so you can pick up where you left off and offering the ability to upload your own source documents so Learn About can ground explanations in your course materials, notes or research papers."

Related Articles


Techday NZ
Goodbye SEO, hello GEO: How AI is reshaping brand discovery
As language models replace traditional search, brands must master generative engine optimisation to stay visible.

In May, news emerged that sent ripples through the tech world: Google searches in Apple's Safari had apparently dropped for the first time in 22 years, according to testimony made by Apple's Eddy Cue during Google's antitrust trial. While Google later countered that "query growth" is up, the contradiction itself reveals a shift that's already underway. Whether the reality is declining searches or changing search patterns, the trend points in one direction: people are increasingly turning to LLMs for answers.

Ask ChatGPT about the best car brands for families, and you'll get a curated list. Search for laptop recommendations, and the AI serves up specific models with reasoning. But notice which brands make those lists and which don't.

This represents more than a technological preference. The rapid transition from traditional search engines to AI-powered language models represents a complete restructuring of the discovery layer between brands and customers. And many businesses haven't noticed they're already losing. This trend has given rise to what is being called generative engine optimisation (GEO), the new discipline of optimising content for AI-powered responses rather than traditional search rankings.

The visibility challenge

The shift from traditional search engines to AI-powered language models represents a complete reshaping of how consumers discover, evaluate, and engage with brands. Those that fail to recognise and respond to this shift risk becoming invisible at the moment of decision. Brands that are not surfaced in LLM-generated responses will see a significant decline in visibility, resulting in downstream impacts on customer acquisition, brand relevance, and market competitiveness. Importantly, this isn't a reflection of product quality, but of digital discoverability: if a brand isn't mentioned, trusted, or properly structured in the sources LLMs rely on, it may simply not exist in the eyes of the AI, or the customer.

This transition marks a new brand-customer dynamic, where visibility is no longer about search engine rankings alone, but about being contextually relevant, credibly cited, and machine-readable. Brands that embrace this reality early by adapting content, enhancing structured data, and embedding themselves in trusted digital ecosystems will establish a lasting competitive edge. Those that delay will not merely fall behind; they risk being excluded from the AI-powered discovery layer entirely.

Who wins and who loses

While the shift toward AI-mediated discovery poses challenges across all industries, the impact won't be uniform. The degree to which brands face existential risk from this transition depends largely on the nature of their products, customer relationships, and purchase drivers. Understanding these differences is critical for prioritising response strategies and resource allocation.

Some purchases are driven by emotion or identity rather than logic, price, or features. Luxury fashion and high-end cars fall into this category and, as a result, are less likely to be impacted by shifts in consumer decision-making. In contrast, products like utilities are essential, broadly interchangeable, and often chosen based on price or promotional offers rather than brand loyalty. This divide will deepen as AI agents begin to play larger roles in decision-making and action pathways.

When decision-making occurs without humans in the loop, brand presence, such as mental availability, becomes less relevant. Instead, the agent's choices will be driven primarily by product attributes, features, and consumer reviews. Consider a future where your AI assistant automatically switches your energy provider based on quarterly rate comparisons, or books travel based on optimal price-to-convenience ratios.

As the way people search and discover products changes, brands that adapt to this new environment will be best placed to succeed. This means actively engaging with how AI systems interpret and present information. Mastery of tools like prompt engineering, investment in training partnerships, or the development of custom GPTs can all help ensure products and services are accurately and favourably surfaced in AI-mediated environments. Brands that fail to understand how their audiences now phrase questions, conduct research, or engage with offerings will miss opportunities to evolve. Without these insights, businesses won't be able to redesign digital experiences, particularly websites, to support discovery and decision-making in an AI-driven environment.

What brands must do

Where they once poured much marketing effort into search engine optimisation (SEO), brands must now rethink their digital presence for GEO, optimising not just for humans, but for the machines mediating those interactions. A brand's digital presence and content need to be more conversational and context-rich, rather than just keyword-dense. At BRX, we specialise in AI-driven marketing and have developed strategies to help drive GEO success:

- Audit your brand's presence across trusted sources. Are you mentioned on CHOICE, Canstar, Reddit, Whirlpool? If not, why not?
- Update your structured data. Use schema markup and keep pricing, availability, and product specifications accurate at all times (a rough example follows this list).
- Drive positive reviews. Be part of the brand dialogue that matters to your category, such as Trustpilot, Google, and TripAdvisor.
- Optimise for natural language. Rewrite content to reflect how people actually ask questions in LLMs rather than keyword-stuffing, and format it for skim readers and scanners with a "too long; didn't read" summary.
- Create content that simplifies decisions. Think comparison tables, FAQs, expert explainers, and product fit guides.
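As a rough illustration of the structured data step, the sketch below generates schema.org Product markup as JSON-LD in Python; the product, brand, price, and rating values are hypothetical placeholders rather than a recommended template.

```python
import json

# Hypothetical product details -- replace with real catalogue data.
product_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Induction Cooktop",
    "description": "60 cm induction cooktop with four cooking zones.",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {
        "@type": "Offer",
        "price": "899.00",
        "priceCurrency": "NZD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "132",
    },
}

# Emit a JSON-LD script block to embed in the product page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(product_markup, indent=2))
print("</script>")
```

Embedding a block like this in a product page gives crawlers, and the retrieval pipelines behind LLMs, machine-readable facts about pricing and availability to draw on.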
These steps form the foundation of an AI-ready brand presence, but success requires more than checking boxes. The brands that will really thrive in this environment also share some common characteristics. At BRX, we think the brands that win will be those that:

- Sit outside the dynamic of price and features
- Create conversational content that answers real questions
- Prioritise structured clarity in product and service information
- Maintain consistency across official and third-party sources
- Earn trust through verified experiences, not just promises
- Provide highly specific answers to hyper-individual questions

In this new environment, the marketer's job is no longer just to shape consumer perception but to influence the AI's perception of their brand. To "win" in AI-generated conversations, content flooding is no longer just a visibility tactic; it's a visibility imperative. Brands that dominate the content space are disproportionately represented in LLM outputs. Flooding without a customer-centric lens, however, risks damaging the post-click experience. The challenge is balancing volume with quality. The emerging reality is that websites are increasingly serving LLMs first, customers second. This creates a tension: you may be designing an experience that LLMs find useful, but human users do not.

Time to act

AI systems now influence how customers discover brands, and brands need to get ahead of this shift. As AI agents increasingly make decisions without human input, brands need to audit their AI visibility, create conversational content, and balance quantity with quality in their digital presence.

The first step is simple but critical: audit what AI systems currently say about your brand. Ask multiple LLMs about your product category and see if you appear. If you don't, investigate the sources they cite and understand which ecosystems are influencing their responses. Modern LLMs with research functions provide sources that serve as helpful research starting points. Use this intelligence to understand where you need to appear in order to start showing up in LLM results.

Given the complexity and rapid evolution of AI systems, partnering with specialists who understand this landscape can accelerate your progress. BRX helps brands navigate GEO with AI-native strategies that deliver measurable improvements in AI visibility and engagement. The brands that recognise this shift early and master GEO won't just maintain their market position; they'll capture market share from competitors who remain focused on traditional search optimisation. In a world where AI increasingly mediates brand discovery, being invisible to artificial intelligence means being invisible to customers. This underlines the value of working with experts who understand the complexity and fast-moving nature of AI, rather than trying to figure it out alone.
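As a starting point for that audit, the sketch below asks one LLM about a product category and checks whether a given brand is mentioned; it assumes the official OpenAI Python SDK with an API key set in the environment, and the brand name, prompts, and model choice are placeholders. In practice the same questions would be put to several providers and phrasings.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRAND = "ExampleBrand"  # hypothetical brand to audit
CATEGORY_PROMPTS = [
    "What are the best induction cooktops for a family kitchen?",
    "Which brands make reliable 60 cm induction cooktops?",
]

for prompt in CATEGORY_PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; compare coverage across models and providers
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    mentioned = BRAND.lower() in answer.lower()
    print(f"{prompt!r}: brand mentioned = {mentioned}")
```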


Techday NZ
Google adds photo-to-video tool to Gemini as Veo 3 rollout expands
Google has announced a significant update to its Gemini AI platform, introducing a new feature that allows users to transform their photos into dynamic eight-second video clips with sound. The tool, powered by Google's latest video generation model Veo 3, is now available to Google AI Pro and Ultra subscribers in over 150 countries, with the company highlighting rapid uptake and creative experimentation since the model's initial launch.

David Sharon, Multimodal Generation Lead for Gemini Apps, said, "We launched our state-of-the-art video generation model Veo 3 in May - and last week, we expanded access to Google AI Pro subscribers in over 150 countries. Now, with a new photo-to-video capability in Gemini, you can now transform your favourite photos into dynamic eight-second video clips with sound."

Describing the process, Sharon added, "To turn your photos into videos, select 'Videos' from the tool menu in the prompt box and upload a photo. Then, describe the scene and any audio instructions, and watch as your still image transforms into a dynamic video. You can get creative by animating everyday objects, bringing your drawings and paintings to life or adding movement to nature scenes. Once your video is complete, tap the share button or download it to share with friends and family."

According to Google, the reception from users has been swift and enthusiastic. "The explosion of creativity from users has been truly remarkable, with over 40 million Veo 3 videos generated across the Gemini app and Flow over the last seven weeks. From reimagining fairy tales through the eyes of a modern influencer, to ASMR videos exploring what it would sound like to cut through a piece of cooling lava, your imagination is the limit when you create videos with Gemini," Sharon said.

The new photo-to-video feature is being rolled out alongside broader access to Veo 3, Google's latest iteration in text-to-video artificial intelligence. Veo 3 is already recognised for its ability to produce high-definition video clips with synchronised sound and lifelike motion, generated entirely from user prompts. The model delivers results in eight-second clips, integrating both visuals and audio without the need for post-production editing.

Google is positioning Veo 3 as both a creative and enterprise solution, with businesses able to access the technology through the Google Cloud Vertex AI platform. Creative professionals and app developers have begun using Veo 3 to accelerate workflows, generate marketing assets, and prototype video content in a fraction of the time previously required.

The company also emphasises its commitment to responsible AI development and safety. "When you use our video generation tools, we want you to feel confident in the results. That's why we take significant steps behind the scenes to make sure video generation is an appropriate experience," Sharon explained. This includes what Google describes as "extensive 'red teaming,' in which we proactively test our systems and aim to fix potential issues before they arise," as well as "thorough evaluations to understand how our tools might be used and how to prevent any misuse." Safety measures extend to content labelling, as Sharon detailed: "All generated videos include a visible watermark to show they are AI-generated and an invisible SynthID digital watermark."
Users are also encouraged to provide feedback on generated content, with Sharon stating, "Use the thumbs up and down buttons on your generated videos to give us feedback, which we'll use to make ongoing improvements to our safety measures and overall experience." Access to the new photo-to-video capability begins rolling out today for Google AI Pro and Ultra subscribers in select countries. The same functionality is also available in Flow, Google's AI filmmaking tool, with the company continuing to expand availability to additional regions. "Your imagination is the limit when you create videos with Gemini," said Sharon.


Scoop
Surge In NCEA Numeracy & Literacy Results
Minister of Education

Thousands more high school students are passing the foundational literacy and numeracy assessments required for NCEA, clear evidence the Government's relentless focus on the basics is delivering results, Education Minister Erica Stanford says.

'The latest NCEA co-requisite assessment results show a marked improvement in student achievement in numeracy and reading, especially in Year 10 for those sitting the assessments for the first time. The Government's $2.2 million investment in 2024 to provide targeted support to students in 141 lower decile schools has resulted in more students achieving assessments,' Ms Stanford says.

Numeracy:
- 57 per cent of students achieved the standard across all year levels, up from 45 per cent in May 2024.
- 68 per cent of Year 10 students passed the numeracy assessment, 95 per cent of whom were sitting it for the first time.
- 34 per cent of students in lower decile schools passed the numeracy assessment in May 2025, compared to 19.8 per cent in May 2024.

Reading:
- 61 per cent of students achieved the standard across all year levels, up from 58 per cent in May 2024.
- 72 per cent of Year 10 students passed the reading assessment, over 95 per cent of whom were first-time participants.
- 41 per cent of students in lower decile schools passed the reading assessment in May 2025, compared to 34 per cent in May 2024.

Writing:
- 55 per cent of students achieved the standard across all year levels, holding steady from May last year.
- 66 per cent of Year 10 students passed the writing assessment, 95 per cent of whom were sitting it for the first time.
- 35 per cent of students in lower decile schools passed the writing assessment in May 2025, compared to 34 per cent in May 2024.

More than half of this year's Year 12 students who did not meet the co-requisite while in Year 11 last year have now achieved it, and around a third of these students will now be awarded NCEA Level 1. This takes the pass rate for NCEA Level 1 in 2024 from 71.5 per cent to 79.6 per cent.

'These early improvements are the result of a comprehensive reform package focused on lifting academic achievement. We have introduced a new year-by-year, knowledge-rich and internationally benchmarked English and maths curriculum, restored a focus on structured literacy and structured maths, and provided schools with hundreds of thousands of high-quality resources, including over 830,000 maths textbooks, workbooks and teacher guides.

'We're investing significantly in teacher professional development, mandated an hour a day of reading, writing and maths, and banned the use of cell phones in schools to ensure every student gets the focused instruction they deserve.

'While these results are positive, there are still too many students who don't have the fundamental literacy and numeracy skills they need to thrive. That's why this Government is unapologetically reforming the education system to prioritise improving student outcomes. As our back-to-basics approach beds in, more children will be better equipped when taking these assessments in the future,' Ms Stanford says.

Notes

The co-requisite ensures that all students demonstrate foundational literacy and numeracy skills before being awarded any level of the NCEA (National Certificate of Educational Achievement). From 2024, students must pass three digital assessments, one each in reading, writing, and numeracy, to meet this 20-credit co-requisite requirement.