Gemma 3n: All about Google's open model for on-device AI on phones, laptops
Gemma 3n model: Details
Google says Gemma 3n makes use of a new technique called Per-Layer Embeddings (PLE), which allows the model to consume much less RAM than similarly sized models. Although the model comes in 5 billion and 8 billion parameter variants (5B and 8B), this memory optimisation brings its RAM usage closer to that of a 2B or 4B model. In practical terms, this means Gemma 3n can run with just 2GB to 3GB of RAM, making it viable for a much wider range of devices.
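To make that memory claim concrete, here is a rough, illustrative calculation. The split between resident weights and offloaded per-layer embeddings and the 4-bit quantisation are assumptions made for the sketch, not figures published by Google.

```python
# Rough, illustrative arithmetic only -- the parameter split and 4-bit
# quantisation below are assumptions, not figures published by Google.
total_params    = 5e9    # 5B-parameter variant
ple_params      = 3e9    # assumed share held as per-layer embeddings
bytes_per_param = 0.5    # assumed 4-bit quantised weights

resident   = total_params - ple_params             # weights kept in RAM
ram_gb     = resident * bytes_per_param / 1e9      # ~1 GB for resident weights
offload_gb = ple_params * bytes_per_param / 1e9    # streamed from fast storage

print(f"Resident weights in RAM    : ~{ram_gb:.1f} GB")
print(f"Embeddings on local storage: ~{offload_gb:.1f} GB")
# Activations and the KV cache add to this, which is roughly how the
# footprint lands in the 2-3 GB range described above.
```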
Gemma 3n model: Key capabilities
Audio input: The model can process sound-based data, enabling applications like speech recognition, language translation, and audio analysis.
Multimodal input: With support for visual, text, and audio inputs, the model can handle complex tasks that involve combining different types of data.
Broad language support: Google says the model is trained on data covering over 140 languages.
32K token context window: Gemma 3n supports input sequences up to 32,000 tokens, allowing it to handle large chunks of data in one go—useful for summarising long documents or performing multi-step reasoning.
PLE caching: The per-layer embeddings can be cached to fast local storage (such as the device's SSD), reducing the RAM needed during repeated use.
Conditional parameter loading: If a task doesn't require audio or visual capabilities, the model can skip loading those parts, saving memory and speeding up performance (see the sketch after this list).
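The following is a minimal sketch of the idea behind conditional parameter loading. The shard names, file paths, and sizes are invented for illustration and do not reflect Gemma 3n's actual packaging or runtime API.

```python
# Illustrative sketch only: a hypothetical loader showing the idea behind
# conditional parameter loading. Names, paths, and sizes are invented.
from dataclasses import dataclass

@dataclass
class WeightShard:
    modality: str
    path: str        # hypothetical weight file on disk
    size_mb: int     # hypothetical size

SHARDS = [
    WeightShard("text",   "gemma3n/text.bin",   1800),
    WeightShard("vision", "gemma3n/vision.bin",  600),
    WeightShard("audio",  "gemma3n/audio.bin",   400),
]

def load_for_task(needs_vision: bool, needs_audio: bool) -> list[WeightShard]:
    """Load only the weight shards the current task actually needs."""
    wanted = {"text"}
    if needs_vision:
        wanted.add("vision")
    if needs_audio:
        wanted.add("audio")
    loaded = [s for s in SHARDS if s.modality in wanted]
    print(f"Loading {sum(s.size_mb for s in loaded)} MB:",
          [s.modality for s in loaded])
    return loaded

# A text-only prompt skips the vision and audio shards entirely.
load_for_task(needs_vision=False, needs_audio=False)
```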
Gemma 3n model: Availability
As part of the Gemma open model family, Gemma 3n is provided with accessible weights and licensed for commercial use, allowing developers to tune, adapt, and deploy it across a variety of applications. Gemma 3n is now available as a preview in Google AI Studio.
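For developers who want to experiment with the preview, a minimal sketch using the google-genai Python SDK might look like the following. The model identifier shown is an assumption; check Google AI Studio for the exact name of the Gemma 3n preview model.

```python
# Minimal sketch, assuming the google-genai Python SDK (pip install google-genai).
# The model identifier below is an assumption -- verify the exact preview name
# in Google AI Studio before use.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemma-3n-e4b-it",   # assumed preview model ID
    contents="Summarise the benefits of on-device AI in two sentences.",
)
print(response.text)
```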
