
Latest news with #onDeviceAI

Apple Intelligence adds more powerful new capabilities across Apple devices

Tahawul Tech

10-06-2025

  • Business
  • Tahawul Tech

Apple Intelligence adds more powerful new capabilities across Apple devices

Developers can now access the Apple Intelligence on-device foundation model to power private, intelligent experiences within their apps.

CUPERTINO, CALIFORNIA — Apple has announced new Apple Intelligence features that elevate the user experience across iPhone, iPad, Mac, Apple Watch, and Apple Vision Pro. Apple Intelligence unlocks new ways for users to communicate with features like Live Translation; do more with what's on their screen with updates to visual intelligence; and express themselves with enhancements to Image Playground and Genmoji. Additionally, Shortcuts can now tap into Apple Intelligence directly, and developers will be able to access the on-device large language model at the core of Apple Intelligence, giving them direct access to intelligence that is powerful, fast, built with privacy, and available even when users are offline. These Apple Intelligence features are available for testing starting today, and will be available to users with supported devices set to a supported language this fall.

'Last year, we took the first steps on a journey to bring users intelligence that's helpful, relevant, easy to use, and right where users need it, all while protecting their privacy. Now, the models that power Apple Intelligence are becoming more capable and efficient, and we're integrating features in even more places across each of our operating systems,' said Craig Federighi, Apple's senior vice president of Software Engineering. 'We're also taking the huge step of giving developers direct access to the on-device foundation model powering Apple Intelligence, allowing them to tap into intelligence that is powerful, fast, built with privacy, and available even when users are offline. We think this will ignite a whole new wave of intelligent experiences in the apps users rely on every day. We can't wait to see what developers create.'

Apple Intelligence features will be coming to eight more languages by the end of the year: Danish, Dutch, Norwegian, Portuguese (Portugal), Swedish, Turkish, Chinese (traditional), and Vietnamese.

Live Translation Breaks Down Language Barriers

For those moments when a language barrier gets in the way, Live Translation can help users communicate across languages when messaging or speaking. The experience is integrated into Messages, FaceTime, and Phone, and enabled by Apple-built models that run entirely on device, so users' personal conversations stay personal.

In Messages, Live Translation can automatically translate messages. If a user is making plans with new friends while traveling abroad, their message can be translated as they type, delivered in the recipient's preferred language, and when they get a response, each message can be instantly translated. On FaceTime calls, a user can follow along with translated live captions while still hearing the speaker's voice. And when on a phone call, the translation is spoken aloud throughout the conversation.

New Ways to Explore Creativity with Updates to Genmoji and Image Playground

Genmoji and Image Playground provide users with even more ways to express themselves. In addition to turning a text description into a Genmoji, users can now mix together emoji and combine them with descriptions to create something new. When users make images inspired by family and friends using Genmoji and Image Playground, they can change expressions or adjust personal attributes, like hairstyle, to match their friend's latest look.
In Image Playground, users can tap into brand-new styles with ChatGPT, like an oil painting style or vector art. For moments when users have a specific idea in mind, they can tap Any Style and describe what they want. Image Playground sends a user's description or photo to ChatGPT and creates a unique image. Users are always in control, and nothing is shared with ChatGPT without their permission.

Visual Intelligence Helps Users Search and Take Action

Building on Apple Intelligence, visual intelligence extends to a user's iPhone screen so they can search and take action on anything they're viewing across their apps. Visual intelligence already helps users learn about objects and places around them using their iPhone camera, and it now enables users to do more, faster, with the content on their iPhone screen.

Users can ask ChatGPT questions about what they're looking at on their screen to learn more, as well as search Google, Etsy, or other supported apps to find similar images and products. If there's an object a user is especially interested in, like a lamp, they can highlight it to search for that specific item or similar objects online. Visual intelligence also recognizes when a user is looking at an event and suggests adding it to their calendar; Apple Intelligence then extracts the date, time, and location to prepopulate these key details into an event. Users can access visual intelligence for what's on their screen by simply pressing the same buttons used to take a screenshot, and will have the choice to save or share their screenshot, or explore more with visual intelligence.

Apple Intelligence Expands to Fitness on Apple Watch

Workout Buddy is a first-of-its-kind workout experience on Apple Watch with Apple Intelligence that incorporates a user's workout data and fitness history to generate personalised, motivational insights during their session. To offer meaningful inspiration in real time, Workout Buddy analyses data from a user's current workout along with their fitness history, based on data like heart rate, pace, distance, Activity rings, personal fitness milestones, and more. A new text-to-speech model then translates insights into a dynamic generative voice built using voice data from Fitness+ trainers, so it has the right energy, style, and tone for a workout. Workout Buddy processes this data privately and securely with Apple Intelligence.

Workout Buddy will be available on Apple Watch with Bluetooth headphones, and requires an Apple Intelligence-supported iPhone nearby. It will be available starting in English, across some of the most popular workout types: Outdoor and Indoor Run, Outdoor and Indoor Walk, Outdoor Cycle, HIIT, and Functional and Traditional Strength Training.

Apple Intelligence On-Device Model Now Available to Developers

Apple is opening up access for any app to tap directly into the on-device foundation model at the core of Apple Intelligence. With the Foundation Models framework, app developers will be able to build on Apple Intelligence to bring users new experiences that are intelligent, available when they're offline, and that protect their privacy, using AI inference that is free of cost. For example, an education app can use the on-device model to generate a personalised quiz from a user's notes, without any cloud API costs, or an outdoors app can add natural language search capabilities that work even when the user is offline.

The framework has native support for Swift, so app developers can easily access the Apple Intelligence model with as few as three lines of code. Guided generation, tool calling, and more are all built into the framework, making it easier than ever to implement generative capabilities right into a developer's existing app.
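As a concrete illustration of the education-app example above, here is a minimal sketch of what calling the framework from Swift might look like. It assumes the Foundation Models API as announced (a LanguageModelSession, plus the @Generable and @Guide macros for guided generation); the Quiz and QuizQuestion types and the makeQuiz helper are hypothetical, and exact names and signatures should be checked against Apple's documentation.

```swift
import FoundationModels

// Hypothetical quiz types for the education-app example above.
// @Generable lets the framework constrain the model's output to a
// Swift type (guided generation), so no manual parsing is needed.
@Generable
struct QuizQuestion {
    @Guide(description: "A short question drawn from the notes")
    var question: String
    var answer: String
}

@Generable
struct Quiz {
    @Guide(description: "Three quiz questions covering the key points")
    var questions: [QuizQuestion]
}

// Generates a quiz entirely on device: no network, no cloud API cost.
func makeQuiz(from notes: String) async throws -> Quiz {
    let session = LanguageModelSession(
        instructions: "You create short study quizzes from a student's notes."
    )
    let response = try await session.respond(
        to: "Write a quiz based on these notes: \(notes)",
        generating: Quiz.self
    )
    return response.content
}
```

A real app would also check model availability first, since the on-device model is only present on Apple Intelligence-enabled devices; that check and error handling are omitted here for brevity.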
Shortcuts Get More Intelligent

Shortcuts are now more powerful and intelligent than ever. Users can tap into intelligent actions, a whole new set of shortcuts enabled by Apple Intelligence, with dedicated actions for features like summarising text with Writing Tools or creating images with Image Playground. Users will also be able to tap directly into Apple Intelligence models, either on-device or with Private Cloud Compute, to generate responses that feed into the rest of their shortcut, maintaining the privacy of information used in the shortcut. For example, a student can build a shortcut that uses the Apple Intelligence model to compare an audio transcription of a class lecture to the notes they took, and add any key points they may have missed. Users can also choose to tap into ChatGPT to provide responses that feed into their shortcut.

Additional New Features

Apple Intelligence is even more deeply integrated into the apps and experiences that users rely on every day:

  • The most relevant actions in an email, website, note, or other content can now be identified and automatically categorised in Reminders.
  • Apple Wallet can now identify and summarise order tracking details from emails sent by merchants or delivery carriers. This works across all of a user's orders, giving them the ability to see their full order details, progress notifications, and more, all in one place.
  • Users can create a poll for anything in Messages, and with Apple Intelligence, Messages can detect when a poll might come in handy and suggest one. In addition, Backgrounds in the Messages app lets a user personalise their chats with stunning designs, and they can create unique backgrounds that fit their conversation with Image Playground.

These features build on a wide range of Apple Intelligence capabilities that are already available to users:

  • Writing Tools can help users rewrite, proofread, and summarise the text they have written. And with Describe Your Change, users can describe a specific change they want to apply to their text, like making a dinner party invite read like a poem.
  • Clean Up in Photos allows users to remove distracting elements while staying true to the moment as they intended to capture it.
  • Visual intelligence builds on Apple Intelligence and helps users learn about objects and places around them instantly.
  • Genmoji allow users to create their own emoji by typing a description. And just like emoji, they can be added inline to messages, or shared as a sticker or reaction in a Tapback.
  • Image Playground gives users a way to create playful images in moments, with concepts like themes, costumes, accessories, and places. They can add their own text descriptions, and create images in the likeness of a family member or friend using photos from their photo library.
  • Image Wand can transform a rough sketch into a polished image that complements a user's notes.
  • Mail summaries give users a way to view key details for an email or long thread by simply tapping or clicking Summarise.
  • Smart Reply provides users with suggestions for a quick response in Mail and Messages.
  • Siri is more natural and helpful, with the option to type to Siri and tap into its product knowledge about the features and settings on Apple products; Siri can also follow along if a user stumbles over their words, and maintain context from one request to the next.
  • Access to ChatGPT is integrated in Writing Tools and Siri, giving users the option to tap into ChatGPT's image- and document-understanding capabilities without needing to jump between tools.
  • Natural language search in Photos makes it easier for users to find a photo or video by simply describing it. Users can also create a memory movie in Photos by typing a description.
  • Summaries of audio transcriptions in Notes are automatically generated to surface important information at a glance, and users can generate summaries of call transcriptions to highlight important details.
  • Priority Messages, a section at the top of the inbox in Mail, shows the most urgent emails, like a same-day invitation to lunch or a boarding pass.
  • Priority Notifications appear at the top of a user's notifications, highlighting important notifications that may require immediate attention.
  • Notification summaries give users a way to scan long or stacked notifications and provide key details right on the Lock Screen.
  • Previews in Mail and Messages show users a brief summary of key information without needing to open a message.
  • The Reduce Interruptions Focus surfaces only the notifications that might need immediate attention.

A Breakthrough for Privacy in AI

Designed to protect users' privacy at every step, Apple Intelligence uses on-device processing, meaning that many of the models that power it run entirely on device. For requests that require access to larger models, Private Cloud Compute extends the privacy and security of iPhone into the cloud to unlock even more intelligence, so a user's data is never stored or shared with Apple; it is used only to fulfill their request. Independent experts can inspect the code that runs on Apple silicon servers to continuously verify this privacy promise, and are already doing so. This is an extraordinary step forward for privacy in AI.

Availability

All of these new features are available for testing starting today through the Apple Developer Program, and a public beta will be available through the Apple Beta Software Program next month. Users who enable Apple Intelligence on supported devices set to a supported language will have access this fall. Supported devices include all iPhone 16 models, iPhone 15 Pro, iPhone 15 Pro Max, iPad mini (A17 Pro), and iPad and Mac models with M1 and later, with Siri and device language set to the same supported language: English, French, German, Italian, Portuguese (Brazil), Spanish, Japanese, Korean, or Chinese (simplified). More languages will be coming by the end of this year: Danish, Dutch, Norwegian, Portuguese (Portugal), Swedish, Turkish, Chinese (traditional), and Vietnamese. Some features may not be available in all languages or regions, and availability may vary due to local laws and regulations.

SK hynix Develops UFS 4.1 Solution Based on 321-High NAND

Yahoo

21-05-2025

  • Business
  • Yahoo

SK hynix Develops UFS 4.1 Solution Based on 321-High NAND

  • Optimized for on-device AI with best-in-class sequential read performance and low power requirements
  • Thickness reduced by 15% to fit into ultra-slim flagship smartphones
  • Portfolio featuring the world's highest 321-layer product to strengthen SK hynix's leadership as a full-stack AI memory provider

SEOUL, South Korea, May 21, 2025 /PRNewswire/ -- SK hynix Inc. (or "the company") announced today that it has developed a UFS 4.1 solution product adopting the world's highest 321-layer 1Tb triple-level-cell (TLC) 4D NAND flash for mobile applications.

The development comes amid increasing requirements for high performance and low power in NAND solution products to ensure stable operation of on-device AI. The company expects the UFS 4.1 product, optimized for AI workloads, to help strengthen its memory leadership in the flagship smartphone market.

With growing demand for on-device AI placing greater importance on the balance between a device's computing capability and battery efficiency, the mobile market now requires thinness and low power from mobile devices. The latest product delivers a 7% improvement in power efficiency compared with the previous generation based on 238-high NAND, and a slimmer 0.85 mm thickness, down from 1 mm before, to fit into an ultra-slim smartphone.

The product also supports a data transfer speed of 4,300 MB/s, the fastest sequential read* for a fourth-generation UFS product, while providing best-in-class performance by improving random read and write speeds**, critical for multitasking, by 15% and 40%, respectively. Immediate delivery of the data required for on-device AI, faster application launches, and improved responsiveness are expected to enhance the user experience.

* Sequential read/write: speed of reading and writing the data of a file sequentially
** Random read/write: speed of reading and writing the data of dispersed files

SK hynix plans to win customer qualification within the year and ship in volume from the first quarter of next year. The product will be provided in two capacities: 512GB and 1TB.

Ahn Hyun, President and Chief Development Officer, said that SK hynix plans to complete development of 321-high 4D NAND-based SSDs for both consumers and data centers within the year. "We are on track to expand our position as a full-stack AI memory provider in the NAND space by building a product portfolio with an AI technological edge."

About SK hynix Inc.

SK hynix Inc., headquartered in Korea, is the world's top-tier semiconductor supplier, offering Dynamic Random Access Memory chips ("DRAM") and flash memory chips ("NAND flash") to a wide range of distinguished customers globally. The company's shares are traded on the Korea Exchange, and its Global Depository Shares are listed on the Luxembourg Stock Exchange.

SOURCE SK hynix Inc.

Microsoft is opening its on-device AI models up to web apps in Edge

The Verge

19-05-2025

  • The Verge

Microsoft is opening its on-device AI models up to web apps in Edge

Web developers will soon be able to start leveraging on-device AI in Microsoft's Edge browser, using new APIs that give their web apps access to Microsoft's Phi-4-mini model, the company announced at its Build conference today. Microsoft says the APIs will be cross-platform, so they should also work with the Edge browser on macOS. The 3.8-billion-parameter Phi-4-mini is Microsoft's latest small, on-device model, rolled out in February alongside the company's larger Phi-4.

With the new APIs, web developers will be able to add prompt boxes and offer writing assistance tools for text generation, summarizing, and editing. And within the next couple of months, Microsoft says it will also release a text translation API. Microsoft is putting these 'experimental' APIs forward as potential web standards, and in addition to being cross-platform, it says they'll also work with other AI models. Developers can start trialing them in the Edge Canary and Dev channels now, the company says.

Google offers similar APIs for its Chrome browser. With them, developers can use Chrome's built-in models to offer things like text translation, prompt boxes for text and image generation, and calendar event creation based on webpage content.

Google is about to unleash Gemini Nano's power for third-party Android apps

Android Authority

16-05-2025

  • Business
  • Android Authority

Google is about to unleash Gemini Nano's power for third-party Android apps

TL;DR

  • Google is expanding access to Gemini Nano, its on-device AI model, through new ML Kit GenAI APIs.
  • These new APIs, likely to be announced at I/O 2025, will enable developers to easily implement features like text summarization, proofreading, rewriting, and image description generation in their apps.
  • Unlike the experimental AI Edge SDK, ML Kit's GenAI APIs will be in beta, support image input, and be available on a wider range of Android devices beyond the Pixel 9 series.

Generative AI technology is changing how we communicate and create content online. Many people ask AI chatbots like Google Gemini to perform tasks such as summarizing an article, proofreading an email, or rewriting a message. However, some people are wary of using these AI chatbots, especially when these tasks involve highly personal or sensitive information. To address these privacy concerns, Google offers Gemini Nano, a smaller, more optimized version of its AI model that runs directly on the device instead of on a cloud server. While access to Gemini Nano has so far been limited to a single device line and text-only input, Google will soon significantly expand its availability and introduce image input support.

Late last month, Google published the session list for I/O 2025, which includes a session titled 'Gemini Nano on Android: Building with on-device gen AI.' The session's description states it will 'introduce a new set of generative AI APIs that harness the power of Gemini Nano. These new APIs make it easy to implement use cases to summarize, proofread, and rewrite text, as well as to generate image descriptions.'

In October, Google opened up experimental access to Gemini Nano via the AI Edge SDK, allowing third-party developers to experiment with text-to-text prompts on the Pixel 9 series. The AI Edge SDK enables text-based features like rephrasing, smart replies, proofreading, and summarization, but it notably does not include support for generating image descriptions, a feature Google highlighted for the upcoming I/O session. Thus, it's likely that the 'new set of generative AI APIs' mentioned in the session's description refers to either something entirely different from the AI Edge SDK or a newer version of it.

Fortunately, we don't have to wait until next week to find out. Earlier this week, Google quietly published documentation on ML Kit's new GenAI APIs. ML Kit is an SDK that allows developers to leverage machine learning capabilities in their apps without needing to understand how the underlying models work. The new GenAI APIs allow developers to 'harness the power of Gemini Nano to deliver out-of-the-box performance for common tasks through a simplified, high-level interface.' Like the AI Edge SDK, it's 'built on AICore,' enabling 'on-device execution of AI foundation models like Gemini Nano, enhancing app functionality and user privacy by keeping data processing local.'

In other words, ML Kit's GenAI APIs make it simple for developers to use Gemini Nano for various features in their apps privately and with high performance. These features currently include summarizing, proofreading, or rewriting text, as well as generating image descriptions. All four of these features match what's mentioned in the I/O session's description, suggesting that Google intends to formally announce ML Kit's GenAI APIs next week.
Here's a summary of all the features offered by ML Kit's GenAI APIs:

  • Summarization: Summarize articles or chat conversations as a bulleted list. Generates up to three bullet points. Languages: English, Japanese, and Korean.
  • Proofreading: Polish short content by refining grammar and fixing spelling errors. Languages: English, Japanese, German, French, Italian, Spanish, and Korean.
  • Rewrite: Rewrite short chat messages in different tones or styles. Styles: Elaborate, Emojify, Shorten, Friendly, Professional, Rephrase. Languages: English, Japanese, German, French, Italian, Spanish, and Korean.
  • Image description: Generate a short description of a given image. Languages: English.

Compared to the existing AI Edge SDK, ML Kit's GenAI APIs will be offered in 'beta' instead of 'experimental access.' This 'beta' designation could mean Google will allow apps to use the new GenAI APIs in production. Currently, developers cannot release apps using the AI Edge SDK, meaning no third-party apps can leverage Gemini Nano at this time.

Another difference is that the AI Edge SDK is limited to text input, whereas ML Kit's GenAI APIs support images. This image support enables the image description feature, allowing apps to generate short descriptions of any given image.

The biggest difference between the current version of the AI Edge SDK and ML Kit's GenAI APIs, though, is device support. While the AI Edge SDK only supports the Google Pixel 9 series, ML Kit's GenAI APIs can be used on any Android phone that supports the multimodal Gemini Nano model. This includes devices like the HONOR Magic 7, Motorola Razr 60 Ultra, OnePlus 13, Samsung Galaxy S25, Xiaomi 15, and more.

Developers who are interested in trying out Gemini Nano in their apps can get started by reading the public documentation for the ML Kit GenAI APIs.
