Latest news with #MikeRockwell


Time of India
10-07-2025
- Business
- Time of India
Apple prepares for a new Vision Pro with M4 chip, coming later this year
Apple is planning to introduce its first upgrade to the $3,499 Vision Pro headset as early as this year, aiming to improve the performance and comfort of a device that has struggled to gain consumer traction since its February 2024 launch. The updated Vision Pro will feature a faster M4 processor (currently used in iPad Pro and MacBook Pro models), replacing the outdated M2 chip, according to people familiar with the matter who spoke to Bloomberg. The upgrade will also include enhanced artificial intelligence components with additional neural engine cores beyond the current 16-core configuration, and a redesigned strap system to reduce neck strain and head pain from the 1.4-pound device.

Major redesign planned for 2027 as competition heats up

While this initial upgrade focuses on performance improvements, Apple is simultaneously developing a significantly lighter redesigned model targeted for 2027. The company is also working on a tethered enterprise headset and planning true AR glasses, with the ultimate goal of dominating the smart glasses category.

The Vision Pro's disappointing market performance stems from its cumbersome hardware, hefty price tag, and lack of compelling exclusive applications. Apple has sold only hundreds of thousands of units since launch, far below expectations for a company that successfully revolutionized smartphones, tablets, and smartwatches. The minor second-generation changes are unlikely to transform the headset into a consumer hit but may attract corporate customers and encourage more app developers to support the platform. Apple will also roll out the visionOS 26 operating system later this year, featuring virtual widgets and eye-scrolling capabilities.

Competition is intensifying as Meta Platforms offers cheaper alternatives and Samsung prepares to launch its Moohan headset in 2025. Meta plans to release true AR glasses by 2027, potentially beating Apple to market by several years. Earlier this year, Apple reshuffled the Vision Pro team, with top manager Mike Rockwell transitioning to focus on Siri and the headset's operating system, while software and hardware teams were reorganized into different development groups.


Phone Arena
07-07-2025
- Phone Arena
Apple introduces visionOS 26 for Vision Pro, with support for VR games and more
This year's WWDC (Worldwide Developers Conference) is currently underway, and Apple has officially announced visionOS 26 for the Apple Vision Pro. The numbering follows the company's new naming convention, under which each operating system is named for the year after its release, so software released in 2025 carries the 26 label.

visionOS 26 brings a number of new improvements, including the following:

- Widgets that anchor in 3D space
- Adding depth to photos with Apple Intelligence
- Enhanced Personas that feel more natural

Widgets can be placed around your surroundings, and they will be there whenever you next put on your Vision Pro. Apple demonstrated this with a clock widget placed on a wall, so that it actually looked like the user had a clock in their room. Artificial windows that look out on fantastic vistas can also be anchored to your walls.

[Image: You won't want to take off your headset after seeing this. Credit: Apple]

One of the best new changes is support for third-party PSVR2 controllers. Users will be able to pair controllers from the PSVR2 with their Vision Pro to play existing VR games, which Apple's headset did not support previously.

[Image: Using PSVR2 controllers with the Vision Pro. Credit: Apple]

As gaming remains the main reason that people buy VR headsets, this was sorely needed. I'm glad that Apple finally saw sense and stopped trying to do things its own way. (For developers, a minimal sketch of controller detection appears at the end of this article.)

Vision Pro will now also support 180-degree, 360-degree, and wide-FOV (field of view) content made using GoPro, Canon, or Insta360 cameras.

[Image: Viewing spatial content on the Vision Pro. Credit: Apple]

Browsing on visionOS 26 will feel incredible, as "spatial browsing" in the Safari web browser will add depth to and completely transform webpages. "Look to Scroll" will let you scroll those webpages using just your eyes. Developers can even integrate 3D models into their websites, and Vision Pro users can drag these models out of a webpage and into their rooms. This lets you view a model up close, as well as check how large it would be in your room. Fascinatingly, you will now also be able to unlock your iPhone while wearing the Vision Pro headset.

[Image: Spatial browsing on Apple Vision Pro. Credit: Apple]

Personas have been drastically improved. The new models look much better and more lifelike, and Apple promises improved detail for hair, lashes, complexion, and more. These new Personas are, in my opinion, what Apple was likely trying to achieve when it first announced the Vision Pro.

[Image: A new Persona compared to the old one. Credit: Apple]

Spatial Scenes are also getting an upgrade, thanks to AI. In short: they'll feel more lifelike than before. Vision Pro owners will also be able to visit Jupiter in an amazing new way: Apple showed off a new Jupiter environment where you'll be able to speed up time and watch the gas giant churn through multiple extraordinary storms.

You can also now answer incoming iPhone calls using your Vision Pro. Meanwhile, Home View now supports folders for enhanced app sorting, and the Control Center has been redesigned for a more convenient user experience.

Of course, Apple considers the Vision Pro a super useful tool for enterprise work. As such, the company has made strides to make its headset more appealing to companies than ever before. For example, Dassault Systèmes has designed an app called 3DLive, which lets Vision Pro users view and interact with 3D models in a shared virtual space.
[Image: The 3DLive app in action on the Apple Vision Pro. Credit: Apple]

Apple is also adding support for an accessory made specifically for the Vision Pro: the Logitech Muse. This pen-like device allows for super accurate input and new ways to interact with virtual elements in 3D space.

[Image: Using the Logitech Muse with Vision Pro. Credit: Apple]

While Apple Intelligence could already add depth to photos before, visionOS 26 takes it much further this time. As Apple put it, you'll feel like you can lean into the photo that you're viewing.

[Image: Spatial Scenes on Vision Pro will completely transform your photos. Credit: Apple]

Unfortunately, the company did not reveal any news about a possible Apple Vision Pro 2 or even an Apple Vision Air, though reports claim that such a device is definitely in the works.
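On the developer side, controller input on Apple platforms normally flows through the long-standing GameController framework, and the assumption here (not confirmed by Apple's announcement) is that paired PSVR2 controllers will surface through the same API as other gamepads. As a rough sketch only, detecting a connected controller and reading a button looks like this in Swift:

```swift
import GameController

// Minimal controller detection sketch. Assumption: paired PSVR2
// controllers appear through the standard GameController framework,
// like other gamepads on Apple platforms.
final class ControllerWatcher {
    private var tokens: [NSObjectProtocol] = []

    func start() {
        let center = NotificationCenter.default

        tokens.append(center.addObserver(forName: .GCControllerDidConnect,
                                         object: nil, queue: .main) { note in
            guard let pad = note.object as? GCController else { return }
            print("Connected: \(pad.vendorName ?? "unknown controller")")

            // React to a face button via the extended gamepad profile.
            pad.extendedGamepad?.buttonA.valueChangedHandler = { _, value, pressed in
                print("Button A pressed=\(pressed) value=\(value)")
            }
        })

        tokens.append(center.addObserver(forName: .GCControllerDidDisconnect,
                                         object: nil, queue: .main) { note in
            guard let pad = note.object as? GCController else { return }
            print("Disconnected: \(pad.vendorName ?? "unknown controller")")
        })

        // Scan for wireless controllers that are in pairing mode.
        GCController.startWirelessControllerDiscovery(completionHandler: nil)
    }

    func stop() {
        for token in tokens { NotificationCenter.default.removeObserver(token) }
        GCController.stopWirelessControllerDiscovery()
    }
}
```

If controllers do arrive through this profile, existing input-handling code written against the extended gamepad profile would carry over to Vision Pro largely unchanged, which would explain how existing VR titles become playable.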
Yahoo
02-07-2025
- Business
- Yahoo
Apple mulls using OpenAI or Anthropic to power Siri in big reversal: report
Apple is weighing using artificial intelligence technology from Anthropic or OpenAI to power a new version of Siri, instead of its own in-house models, Bloomberg News reported Monday.

The iPhone maker has had discussions with both companies about using their large language models for Siri, asking them to train versions of their LLMs that could run on Apple's cloud infrastructure for testing, the report said, citing people familiar with the discussions. Apple's investigation into third-party models is at an early stage, and the company has not made a final decision on using them, the report said. Amazon-backed Anthropic declined to comment, while Apple and OpenAI did not respond to Reuters' requests.

In March, the company said AI improvements to its voice assistant Siri would be delayed until 2026, without giving a reason for the setback. Apple then shook up its executive ranks to get its AI efforts back on track after months of delays, with Mike Rockwell taking charge of Siri as CEO Tim Cook lost confidence in AI head John Giannandrea's ability to execute on product development, Bloomberg reported in March.

Amid intense competition among major tech firms to dominate the burgeoning generative AI sector, Apple has been partnering with established AI companies and integrating a host of on-device AI features to enhance its offerings. In May, Bloomberg reported that Apple was teaming up with Anthropic on a new "vibe-coding" software platform that will use AI to write, edit and test code on behalf of programmers.

The Age
01-07-2025
- Business
- The Age
Apple eyes major change to Siri
The project to evaluate external models was started by Siri chief Mike Rockwell and software engineering head Craig Federighi. They were given oversight of Siri after the duties were removed from the command of John Giannandrea, the company's AI chief, who was sidelined in the wake of a tepid response to Apple Intelligence and Siri feature delays.

Rockwell, who previously launched the Vision Pro headset, assumed the Siri engineering role in March. After taking over, he instructed his new group to assess whether Siri would do a better job handling queries using Apple's AI models or third-party technology, including Claude, ChatGPT and Alphabet's Google Gemini. After multiple rounds of testing, Rockwell and other executives concluded that Anthropic's technology is most promising for Siri's needs, the people said. That led Adrian Perica, the company's vice president of corporate development, to start discussions with Anthropic about using Claude, the people said.

The Siri assistant, originally released in 2011, has fallen behind popular AI chatbots, and Apple's attempts to upgrade the software have been stymied by engineering snags and delays. A year ago, Apple unveiled new Siri capabilities, including ones that would let it tap into users' personal data and analyze on-screen content to better fulfil queries. The company also demonstrated technology that would let Siri more precisely control apps and features across Apple devices. The enhancements were far from ready. Apple initially announced plans for an early 2025 release but ultimately delayed the launch indefinitely. They are now planned for next year, Bloomberg News has reported.

AI uncertainty

People with knowledge of Apple's AI team say it is operating with a high degree of uncertainty and a lack of clarity, with executives still poring over a number of possible directions. Apple has already approved a multibillion-dollar budget for 2026 for running its own models via the cloud, but its plans beyond that remain murky.

Still, Federighi, Rockwell and other executives have grown increasingly open to the idea that embracing outside technology is the key to a near-term turnaround. They don't see the need for Apple to rely on its own models, which they currently consider inferior, when it can partner with third parties instead, according to the people. Licensing third-party AI would mirror an approach taken by Samsung: while the company brands its features under the Galaxy AI umbrella, many of those features are actually based on Gemini. Anthropic, for its part, is already used by Amazon to help power the new Alexa+.

In the future, if its own technology improves, the executives believe Apple should have ownership of AI models given their increasing importance to how products operate. The company is working on a series of projects, including a tabletop robot and glasses that will make heavy use of AI. Apple has also recently considered acquiring Perplexity in order to help bolster its AI work, Bloomberg has reported. It also briefly held discussions with Thinking Machines Lab, the AI startup founded by former OpenAI Chief Technology Officer Mira Murati.

Souring morale

Apple's models are developed by a roughly 100-person team run by Ruoming Pang, an Apple distinguished engineer who joined from Google in 2021 to lead this work. He reports to Daphne Luong, a senior director in charge of AI research.
Luong is one of Giannandrea's top lieutenants, and the foundation models team is one of the few significant AI groups still reporting to Giannandrea. Even in that area, Federighi and Rockwell have taken a larger role.

Regardless of the path it takes, the proposed shift has weighed on the team, which has some of the AI industry's most in-demand talent. Some members have signalled internally that they are unhappy that the company is considering technology from a third party, creating the perception that they are to blame, at least partially, for the company's AI shortcomings. They've said that they could leave for multimillion-dollar packages being floated by Meta and OpenAI. Meta, the owner of Facebook and Instagram, has been offering some engineers annual pay packages between $US10 million ($15.2 million) and $US40 million, or even more, to join its new Superintelligence Labs group, according to people with knowledge of the matter. Apple is known, in many cases, to pay its AI engineers half, or even less, of what they can get on the open market.

One of Apple's most senior large language model researchers, Tom Gunter, left last week. He had worked at Apple for about eight years, and some colleagues see him as difficult to replace given his unique skill set and the willingness of Apple's competitors to pay exponentially more for talent. Apple this month also nearly lost the team behind MLX, its key open-source system for developing machine learning models on the latest Apple chips. After the engineers threatened to leave, Apple made counteroffers to retain them, and they're staying for now.

Anthropic and OpenAI discussions

In its discussions with both Anthropic and OpenAI, the iPhone maker requested custom versions of Claude and ChatGPT that could run on Apple's Private Cloud Compute servers, infrastructure based on high-end Mac chips that the company currently uses to operate its more sophisticated in-house models. Apple believes that running the models on its own chips, housed in Apple-controlled cloud servers rather than third-party infrastructure, will better safeguard user privacy. The company has already internally tested the feasibility of the idea.

Other Apple Intelligence features are powered by AI models that reside on consumers' devices. These models, slower and less powerful than cloud-based versions, are used for tasks like summarising short emails and creating Genmojis. Apple is opening up the on-device models to third-party developers later this year, letting app makers create AI features based on its technology. The company hasn't announced plans to give apps access to the cloud models, partly because the cloud servers don't yet have the capacity to handle a flood of new third-party features.

The company isn't currently working on moving away from its in-house models for on-device or developer use cases. Still, there are fears among engineers on the foundation models team that moving to a third party for Siri could portend a similar move for other features in the future. Last year, OpenAI offered to train on-device models for Apple, but the iPhone maker was not interested.

Since December 2024, Apple has been using OpenAI to handle some features.
In addition to responding to world knowledge queries in Siri, ChatGPT can write blocks of text in the Writing Tools feature. Later this year, in iOS 26, there will be a ChatGPT option for image generation and on-screen image analysis.

While discussing a potential arrangement, Apple and Anthropic have disagreed over preliminary financial terms, according to the people. The AI startup is seeking a multibillion-dollar annual fee that increases sharply each year. The struggle to reach a deal has left Apple contemplating working with OpenAI or others if it moves forward with the third-party plan, they said.

Management shifts

If Apple does strike an agreement, the influence of Giannandrea, who joined Apple from Google in 2018 and is a proponent of in-house large language model development, would continue to shrink. In addition to losing Siri, Giannandrea was stripped of responsibility over Apple's robotics unit. And, in previously unreported moves, the company's Core ML and App Intents teams, groups responsible for frameworks that let developers integrate AI into their apps, were shifted to Federighi's software engineering organisation.
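For readers unfamiliar with the App Intents framework mentioned above: it is the mechanism by which apps expose actions that Siri (and, increasingly, Apple Intelligence) can discover and invoke. A minimal sketch in Swift follows; the intent name and its placeholder logic are hypothetical illustrations, not an Apple example:

```swift
import AppIntents

// A minimal, hypothetical App Intent. Intents like this are what Siri
// and Apple Intelligence can surface and invoke on a user's behalf.
struct SummarizeNoteIntent: AppIntent {
    static var title: LocalizedStringResource = "Summarize Note"

    // The text Siri passes in when the intent is invoked.
    @Parameter(title: "Note Text")
    var noteText: String

    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        // Placeholder logic; a real app would run its own summarizer here.
        let summary = String(noteText.prefix(100))
        return .result(value: summary)
    }
}
```

Moving teams like this under Federighi's software engineering organisation puts the plumbing that third-party apps rely on closer to the group now steering Siri itself.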


Los Angeles Times
01-07-2025
- Business
- Los Angeles Times
Apple weighs using Anthropic or OpenAI to power Siri in major reversal
Apple Inc. is considering using artificial intelligence technology from Anthropic PBC or OpenAI to power a new version of Siri, sidelining its own in-house models in a potentially blockbuster move aimed at turning around its flailing AI effort.

The iPhone maker has talked with both companies about using their large language models for Siri, according to people familiar with the discussions. It has asked them to train versions of their models that could run on Apple's cloud infrastructure for testing, said the people, who asked not to be identified discussing private deliberations.

If Apple ultimately moves forward, it would represent a monumental reversal. The company currently powers most of its AI features with homegrown technology that it calls Apple Foundation Models and had been planning a new version of its voice assistant that runs on that technology for 2026. A switch to Anthropic's Claude or OpenAI's ChatGPT models for Siri would be an acknowledgment that the company is struggling to compete in generative AI, the most important new technology in decades. Apple already allows ChatGPT to answer web-based search queries in Siri, but the assistant itself is powered by Apple.

Apple's investigation into third-party models is at an early stage, and the company hasn't made a final decision on using them, the people said. A competing project internally dubbed LLM Siri that uses in-house models remains in active development. Making a change, which is under discussion for next year, could allow Cupertino, California-based Apple to offer Siri features on par with AI assistants on Android phones, helping the company shed its reputation as an AI laggard. Representatives for Apple, Anthropic and OpenAI declined to comment. Shares of Apple closed up over 2% after Bloomberg reported on the deliberations.

The project to evaluate external models was started by Siri chief Mike Rockwell and software engineering head Craig Federighi. They were given oversight of Siri after the duties were removed from the command of John Giannandrea, the company's AI chief, who was sidelined in the wake of a tepid response to Apple Intelligence and Siri feature delays. Rockwell, who previously launched the Vision Pro headset, assumed the Siri engineering role in March. After taking over, he instructed his new group to assess whether Siri would do a better job handling queries using Apple's AI models or third-party technology, including Claude, ChatGPT and Alphabet Inc.'s Google Gemini. After multiple rounds of testing, Rockwell and other executives concluded that Anthropic's technology is most promising for Siri's needs, the people said. That led Adrian Perica, the company's vice president of corporate development, to start discussions with Anthropic about using Claude, the people said.

The Siri assistant, originally released in 2011, has fallen behind popular AI chatbots, and Apple's attempts to upgrade the software have been stymied by engineering snags and delays. A year ago, Apple unveiled new Siri capabilities, including ones that would let it tap into users' personal data and analyze on-screen content to better fulfill queries. The company also demonstrated technology that would let Siri more precisely control apps and features across Apple devices. The enhancements were far from ready. Apple initially announced plans for an early 2025 release but ultimately delayed the launch indefinitely. They are now planned for next spring, Bloomberg News has reported.
People with knowledge of Apple's AI team say it is operating with a high degree of uncertainty and a lack of clarity, with executives still poring over a number of possible directions. Apple has already approved a multibillion-dollar budget for 2026 for running its own models via the cloud, but its plans beyond that remain murky.

Still, Federighi, Rockwell and other executives have grown increasingly open to the idea that embracing outside technology is the key to a near-term turnaround. They don't see the need for Apple to rely on its own models, which they currently consider inferior, when it can partner with third parties instead, according to the people. Licensing third-party AI would mirror an approach taken by Samsung Electronics Co.: while the company brands its features under the Galaxy AI umbrella, many of those features are actually based on Gemini. Anthropic, for its part, is already used by Amazon.com Inc. to help power the new Alexa+.

In the future, if its own technology improves, the executives believe Apple should have ownership of AI models given their increasing importance to how products operate. The company is working on a series of projects, including a tabletop robot and glasses that will make heavy use of AI. Apple has also recently considered acquiring Perplexity in order to help bolster its AI work, Bloomberg has reported. It also briefly held discussions with Thinking Machines Lab, the AI startup founded by former OpenAI Chief Technology Officer Mira Murati.

Apple's models are developed by a roughly 100-person team run by Ruoming Pang, an Apple distinguished engineer who joined from Google in 2021 to lead this work. He reports to Daphne Luong, a senior director in charge of AI research. Luong is one of Giannandrea's top lieutenants, and the foundation models team is one of the few significant AI groups still reporting to Giannandrea. Even in that area, Federighi and Rockwell have taken a larger role.

Regardless of the path it takes, the proposed shift has weighed on the team, which has some of the AI industry's most in-demand talent. Some members have signaled internally that they are unhappy that the company is considering technology from a third party, creating the perception that they are to blame, at least partially, for the company's AI shortcomings. They've said that they could leave for multimillion-dollar packages being floated by Meta Platforms Inc. and OpenAI. Meta, the owner of Facebook and Instagram, has been offering some engineers annual pay packages between $10 million and $40 million, or even more, to join its new Superintelligence Labs group, according to people with knowledge of the matter. Apple is known, in many cases, to pay its AI engineers half, or even less, of what they can get on the open market.

One of Apple's most senior large language model researchers, Tom Gunter, left last week. He had worked at Apple for about eight years, and some colleagues see him as difficult to replace given his unique skill set and the willingness of Apple's competitors to pay exponentially more for talent. Apple this month also nearly lost the team behind MLX, its key open-source system for developing machine learning models on the latest Apple chips. After the engineers threatened to leave, Apple made counteroffers to retain them, and they're staying for now.
In its discussions with both Anthropic and OpenAI, the iPhone maker requested custom versions of Claude and ChatGPT that could run on Apple's Private Cloud Compute servers, infrastructure based on high-end Mac chips that the company currently uses to operate its more sophisticated in-house models. Apple believes that running the models on its own chips, housed in Apple-controlled cloud servers rather than third-party infrastructure, will better safeguard user privacy. The company has already internally tested the feasibility of the idea.

Other Apple Intelligence features are powered by AI models that reside on consumers' devices. These models, slower and less powerful than cloud-based versions, are used for tasks like summarizing short emails and creating Genmojis. Apple is opening up the on-device models to third-party developers later this year, letting app makers create AI features based on its technology. The company hasn't announced plans to give apps access to the cloud models, partly because the cloud servers don't yet have the capacity to handle a flood of new third-party features.

The company isn't currently working on moving away from its in-house models for on-device or developer use cases. Still, there are fears among engineers on the foundation models team that moving to a third party for Siri could portend a similar move for other features in the future. Last year, OpenAI offered to train on-device models for Apple, but the iPhone maker was not interested. Since December 2024, Apple has been using OpenAI to handle some features. In addition to responding to world knowledge queries in Siri, ChatGPT can write blocks of text in the Writing Tools feature. Later this year, in iOS 26, there will be a ChatGPT option for image generation and on-screen image analysis.

While discussing a potential arrangement, Apple and Anthropic have disagreed over preliminary financial terms, according to the people. The AI startup is seeking a multibillion-dollar annual fee that increases sharply each year. The struggle to reach a deal has left Apple contemplating working with OpenAI or others if it moves forward with the third-party plan, they said.

If Apple does strike an agreement, the influence of Giannandrea, who joined Apple from Google in 2018 and is a proponent of in-house large language model development, would continue to shrink. In addition to losing Siri, Giannandrea was stripped of responsibility over Apple's robotics unit. And, in previously unreported moves, the company's Core ML and App Intents teams, groups responsible for frameworks that let developers integrate AI into their apps, were shifted to Federighi's software engineering organization.

Apple's foundation models team had also been building large language models to help employees and external developers write code in Xcode, its programming software. The company killed the project, announced last year as Swift Assist, about a month ago. Instead, Apple later this year is rolling out a new Xcode that can tap into third-party programming models; app developers can choose from ChatGPT or Claude.

Gurman writes for Bloomberg.
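For a sense of what the developer access to Apple's on-device models could look like: based on the Foundation Models framework Apple previewed at WWDC 2025, calling the on-device model is expected to look roughly like the sketch below. The framework had not shipped at the time of writing, so treat the names (SystemLanguageModel, LanguageModelSession) as provisional.

```swift
import Foundation
import FoundationModels

// A sketch of calling Apple's on-device foundation model from a
// third-party app, based on the FoundationModels framework previewed
// at WWDC 2025. API names and availability are provisional.
func summarize(_ email: String) async throws -> String {
    // The on-device model is not present on all hardware; check first.
    guard case .available = SystemLanguageModel.default.availability else {
        throw CocoaError(.featureUnsupported)
    }
    // A session carries instructions that steer every response.
    let session = LanguageModelSession(
        instructions: "Summarize the user's email in one sentence."
    )
    let response = try await session.respond(to: email)
    return response.content
}
```

Because these models run entirely on the device, apps built this way would not depend on Private Cloud Compute capacity, which is one reason Apple can open the on-device tier to developers first.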