
OpenAI could be on the verge of its first big commercial challenge
The company could be on the precipice of its first major commercial challenge.
Driving the news: Apple is considering using models from either OpenAI or Anthropic to underpin its next version of Siri, according to a blockbuster report from Bloomberg's Mark Gurman.
Gurman writes that Apple has asked both companies "to train versions of their models that could run on Apple's cloud infrastructure for testing," although he also cautions that these discussions are in their early stages and that Apple continues to develop its own LLM.
The big picture: Were Apple to pick Anthropic, the move could pose a major challenge to OpenAI's consumer dominance.
There are more than 2 billion iPhones in use globally, including around 155 million in the U.S. That's a massive installed user base that suddenly would get introduced to Claude, even if white-labeled as Siri.
iPhone users spend a lot of time inside the Apple ecosystem — photos, storage, contacts, etc. — and may skip tapping the ChatGPT app if a viable alternative is better integrated with the rest of their digital life.
By the numbers: OpenAI most recently was valued by VCs at $300 billion, while Anthropic was valued at $61.5 billion.
Zoom in: There are lots of obvious caveats here, including the possibility that Apple will stay in-house or outsource to OpenAI instead of to Anthropic.
Plus there's the Google Gemini wildcard on the consumer side, and the likelihood of mergers that push market concentration in unexpected directions (lots of Perplexity rumors, for example).
Related Articles


Tom's Guide
I tested the AI photo editing tools for iPhone vs Google Pixel vs Samsung Galaxy — and there's a clear winner
This article is part of our AI Phone Face-Off. If you're interested in our other comparisons, check out the links below.

It's incredible how much the phone landscape has changed in the last year, with more devices embracing AI. There was a time when you needed to put hundreds of hours into Photoshop or other photo editing programs to do what today's best phones can do in a matter of minutes. For the past year, I've been using all the AI photo editing tools from Apple, Google, and Samsung, not only to make complex photo edits a breeze but also to save me time. I can't tell you how much these tools have changed my workflow. From removing unwanted subjects in my shots to using generative AI to switch backgrounds, I find myself using them constantly.

While Google AI had a head start on everyone, Galaxy AI roared onto the scene when it debuted with the Galaxy S24 series last year, only to expand with the release of the Galaxy S25. Meanwhile, Apple Intelligence had a slightly different rollout that has continued to add new tools with each subsequent update. What follows is a comparison of these three AI photo editing packages. I'll be discussing not only the breadth of features, but also how practical they are to use and how well they work, to determine which phone maker offers the best tools.

There are good reasons why I keep saying that Google has the best AI phones around, and its robust, effective AI photo editing tools are one of them. Part of the reason I still think this is the overwhelming number of features at a user's disposal. Here's the quick list of all of the features:

My love affair with Google's AI photo editing tools began with the Pixel 8 Pro, which introduced the world to Magic Editor. This AI-assisted feature has since become available through the Google Photos app, but it's still my favorite all-around photo editing tool thanks to its ability to remove subjects, resize stuff, and fill in gaps to make the overall image look realistic. Just take a look at the video below to see me using it in action.

Another strong point of Google's AI photo editing tools, where it proves superior to the rest, is Pixel Studio's ability to take text descriptions and create realistic images. It's particularly good at generating people, too, especially when compared to Apple Intelligence, as you can see for yourself below.

What makes Google's AI photo editing unique among the rest is the Reimagine feature, which lets you take existing photos and edit them through text descriptions. It's great for giving specific details about changing the background to something else, or adding something to the shot. Really, it's the biggest time-saving tool I use.

While I think Google's AI editing tools are the best of the bunch, there are some parts that could stand to be better. For instance, there's Best Take and its ability to quickly swap out faces in group shots. My problem with this feature is that it requires me to take several photos in succession to work properly, so that it has enough faces from each person to swap between. I think it would be much more useful if it instead used generative AI to take one snapshot and then give me different options.

I was eager to see how Galaxy AI compared to Google when I first tried out some of its AI photo editing features on the Galaxy S24 Ultra. Samsung didn't disappoint then, and it has broadened its tool set further with the release of the Galaxy S25 series earlier this year.
While it's a runner-up to Google, I have to give Samsung credit for taking AI seriously, because these Galaxy AI photo editing tools are impressive.

Generative Edit is without a doubt the best AI photo editing feature I've come across, even better than Magic Editor in my opinion. What I love most about it is how it knows what I want to edit in my photos with remarkable accuracy. With complex photos, it's proven it can still detect subjects, whereas other editors, including Magic Editor, can still require me to manually make additional selections. Even better is how realistically it fills in gaps, such as when I remove a subject from the scene and it uses generative AI to fill in the space. More often than not, it delivers better results than Apple Intelligence and Google AI. I also like that I can use Generative Edit to quickly remove reflections from photos of shiny surfaces.

Sketch to Image is another impressive Galaxy AI feature, which leans on generative AI to turn hand-drawn sketches into something realistic that blends in with the photo. My colleague Mark Spoonauer was blown away by how well it works in his Galaxy Z Fold 6 review, and I've used it myself to take my own chicken scratches and transform them into something properly fleshed out.

I'm really surprised by all the AI photo editing tools that Galaxy AI offers, though they're nowhere close in number to what Google offers. Still, the tools it does offer have all proven helpful, taking time-consuming edits I've had to do in the past and making them effortless. I still can't get over the impressive performance of Generative Edit when it comes to automatically detecting subjects and filling in the gaps with realistic elements. Take a look at the photos I edited above of Amazon's Panos Panay and check for yourself how Samsung's Generative Edit compares to Google and Apple — you'll be convinced, just like me.

After trying several of its photo editing features, it's clear to me that Apple Intelligence is still trying to catch up to its rivals. The introduction of iOS 26, which is tipped for a fall release alongside the iPhone 17, is Apple's opportunity to expand its tool set, because it's clearly lacking in this area. Here's what it offers to date with iOS 18.

When it first arrived, Photo Clean Up worked like a charm on some of my photos. Apple's image removal tool is pretty intuitive to use and does a decent job of identifying the subjects I select, but it gets hung up on more complex or busy shots. When there's a lot going on in the scene, I just find it ineffective at identifying what I'm trying to select, so I frequently have to make finer selections to get what I want. When I compared it to its rivals, Photo Clean Up performed the worst, both in how it makes selections and in what it fills the gaps with. I tried removing a hat I was wearing on the beach, and Photo Clean Up just could not properly remove it.

Image Playground is a handy tool for those who need some inspiration to create images, but it has a tendency to miss details I ask for in my prompt. In my Pixel Studio vs Image Playground face-off, Google's AI image generator took every detail in my prompt and generated a realistic image, whereas Image Playground failed at generating a throne made out of yarn in the comparison shots above. Apple Intelligence is certainly lacking in photo editing features compared to the rest, so it'll need to introduce a bunch with iOS 26 if it has any chance of convincing people it's better.
At the same time, Apple Intelligence needs to do a better job with Photo Clean Up; as things stand, I purposely avoid using it much.

Google's head start in the AI wars has clearly been advantageous, especially when I look at the number of photo editing features it offers compared to everyone else. Not only does it have the greatest depth, but its tools all work well together to make photo editing simple on my Pixel 9 Pro XL.

Even though Galaxy AI came a little later to the party, I have to give Samsung credit for continually adding new features. I still can't get over how well Generative Edit removes or repositions subjects in a scene, and its generative AI consistently proves to me that it can produce realistic results.

As for Apple? Well, iOS 26 is a big opportunity for Apple to prove to everyone that it's serious about having meaningful AI features. There aren't that many photo editing tools right now, which is one of its problems, and what's there just isn't as good in its current iteration. Hopefully that changes with the Apple Intelligence features that could be announced with the iOS 26 rollout later this year.
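For readers curious what's under the hood: none of the phone makers publish the models behind Magic Editor, Generative Edit or Clean Up, but they are all variations on mask-based generative inpainting, where you mark the pixels to replace and a generative model synthesizes a plausible fill. Below is a minimal, hedged sketch of that general technique using the open-source diffusers library; it is not any vendor's actual pipeline, and the filenames and prompt are made-up placeholders.

```python
# Minimal mask-based inpainting sketch using Hugging Face diffusers.
# Illustrative only: this is not Google's, Samsung's, or Apple's pipeline.
# "beach.jpg" and "hat_mask.png" are placeholder filenames.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("beach.jpg").convert("RGB").resize((512, 512))
# White pixels in the mask mark the region to remove and regenerate.
mask = Image.open("hat_mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="empty sandy beach, natural light",  # describes what should fill the gap
    image=image,
    mask_image=mask,
    num_inference_steps=30,
).images[0]
result.save("hat_removed.jpg")
```

The phone features add the step this sketch leaves to you: automatically detecting the subject and building the mask, which is exactly where the selection-accuracy differences described above tend to show up.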


Business Insider
Apple (AAPL) Sales Feel the Crunch in China
Sales of foreign-branded mobile phones in China, including those of U.S. tech giant Apple (AAPL), have dialed down.

Shipments Fall

According to new figures from the China Academy of Information and Communications Technology (CAICT), demand for overseas phones dropped 9.7% year-on-year in May. Calculations based on the data showed that May shipments of foreign-branded phones in China fell to 4.54 million handsets, down from the same month last year. Even though the CAICT doesn't break down its figures by brand, Apple is the largest foreign mobile phone maker in China's smartphone-dominated market.

Apple has faced increased competition from domestic rivals and has cut prices to stay competitive. Chinese e-commerce platforms offered discounts of up to 2,530 yuan ($351) on Apple's latest iPhone 16 models in May.

High Huawei

However, on a brighter note for Apple, data from another source, Counterpoint Research, revealed that in the second quarter, between April 1 and June 22, iPhone sales increased 8% year-over-year. This was the first time since the second quarter of 2023 that Apple has seen growth in China. But Chinese rival Huawei saw sales climb 12% during the same period.

"Apple's adjustment of iPhone prices in May was well timed and well received, coming a week ahead of the 618 shopping festival," Ethan Qi, associate director at Counterpoint Research, said in a statement. Ivan Lam, senior analyst at Counterpoint Research, added: "Huawei is still riding high on the loyalty of its core users as they replace their old phones with new Huawei releases."

Is AAPL a Good Stock to Buy Now?

On TipRanks, AAPL has a Moderate Buy consensus based on 15 Buy, 10 Hold and 2 Sell ratings. Its highest price target is $270. AAPL stock's consensus price target is $226.36, implying a 6% upside.
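A quick back-of-envelope check of those figures (my own arithmetic, not CAICT's or TipRanks' published numbers) is sketched below.

```python
# Back-of-envelope checks on the figures quoted above; illustrative only.
may_shipments = 4.54e6            # foreign-branded handsets shipped this May
yoy_drop = 0.097                  # 9.7% year-on-year decline
prior_may = may_shipments / (1 - yoy_drop)
print(f"Implied shipments a year earlier: {prior_may / 1e6:.2f} million")  # ~5.03 million

consensus_target = 226.36         # USD consensus price target
upside = 0.06                     # ~6% upside implied by that target
implied_price = consensus_target / (1 + upside)
print(f"Implied AAPL price at the time: ${implied_price:.2f}")             # ~$213.55
```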


Tom's Guide
Apple Intelligence's best feature gains new powers in iOS 26 — what's new with Visual Intelligence
Visual Intelligence has proven to be one of Apple Intelligence's better additions, turning your iPhone's camera lens into a search tool. And in iOS 26, the feature gains new capabilities that promise to make Visual Intelligence even more useful.

It's tempting to dismiss Visual Intelligence as Apple's Google Lens knock-off, but that undersells what the AI-powered capability brings to the table. In the current iteration, you can use the Camera Control button on iPhone 16 models or a Control Center shortcut on the iPhone 15 Pro to launch the camera and snap a photo of whatever has caught your interest. From there, you can run an online search on an image, get more information via Apple Intelligence's ChatGPT tie-in, or even create a calendar entry when you photograph something with dates and times.

iOS 26 extends those capabilities to onscreen searches. All you have to do is take a screenshot, and the same Visual Intelligence commands you use with your iPhone's camera appear next to the screenshot, simplifying searches or calendar entry creation.

The same restrictions that apply to the current version of Visual Intelligence apply to iOS 26's updated version — you'll need an iPhone that supports Apple Intelligence to use this tool. But if you have a compatible phone, you'll gain new search capabilities that are only a screenshot away. Here's what you'll see when you try out the updated Visual Intelligence, whether you've downloaded the iOS 26 developer beta or you're waiting until the public beta arrives this month to test out the latest iPhone software.

You can still use your iPhone camera to look up things with Visual Intelligence in iOS 26. But the software update extends those features to on-screen images and information captured via screenshots. Simply take a screenshot of whatever grabs your interest on your iPhone screen — as you likely know, that means pressing the power button and top volume button simultaneously — and the screenshot will appear as always. You can save it as a regular screenshot by tapping the checkmark button in the upper right corner and saving the image to Photos, Files or a Quick Note. Next to that checkmark are tools for sharing the screenshot and marking it up.

But you'll also notice some other commands at the bottom of the screen. These are the new Visual Intelligence features. From left to right, your options are Ask, Add to Calendar and Image Search.

Ask taps into ChatGPT's knowledge base to summon up more information on whatever is on your screen. There's also a search field where you can enter a more specific question. For instance, I took a screenshot of the ESPN home page featuring a photo of the hot-dog eating contest that takes place on the Fourth of July and used the Ask button to find out who's won the contest the most times.

Add to Calendar pulls time and date information off your iPhone screen and auto-generates an entry for the iOS Calendar app that you can edit before saving. (And a good thing, too, as Visual Intelligence doesn't always get things right. I'll discuss that in a bit.) With the Add to Calendar feature, I could look up schedules for the UEFA Women's European Championship and block out the matches that I wanted to watch on my calendar.

Image Search is pretty straightforward. Tap that command, and the AI will launch a Google search for whatever image happens to be in your screenshot.
In my case, that happened to be an old Tapper arcade game console, just in case I've got more money and nostalgia than sense.

For the most part, the Visual Intelligence searches I've referenced above were done using the whole screen, but you're able to highlight the specific thing you want to search for with your finger — much like you can with the Circle to Search feature now prevalent on Android devices. I highlighted the headline of a Spanish newspaper and could get an English translation. Yes, Visual Intelligence can translate language in screenshots, too, just as it can when you use your camera as a translation tool.

It's important to remember that Visual Intelligence's new tools are in the beta phase, just like the rest of iOS 26, so you might run into some hiccups when using the feature. For example, the first time I tried to create a calendar entry for the Women's Euro championship, Visual Intelligence tried to create an entry for the current day instead of the day the match was actually on. When this happens, make liberal use of the thumbs up/thumbs down icons that Apple uses to train its AI tools. I tapped thumbs down, selected the Date is Wrong option from a list provided on the feedback screen and sent it off to Apple. I don't know if my feedback had an immediate effect, but the next time I tried to create a calendar event, the date was auto-generated correctly.

You can capture screenshots of your Visual Intelligence results, but it's not immediately intuitive how to save those screens. Once you've taken your screenshot, swipe left to see the new shot, and then tap the check mark in the upper corner to save everything. It's something I'm sure I'll get used to over time, but it feels a little clunky after years of taking screenshots that just automatically save to the Photos app.

It's an effort worth making, though. As helpful as the Visual Intelligence features have been, remembering to use your camera to access them isn't always the most natural thing to do. Being able to take a screenshot is more immediate, putting Visual Intelligence's capabilities literally at your fingertips.
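Apple doesn't document how Add to Calendar extracts its event details, but the general pattern (pull a date and time out of recognized text, then build a calendar entry from it) is straightforward to sketch. Here's a minimal illustration in Python using the python-dateutil library and a hand-rolled iCalendar string; the screenshot text and event title are made-up examples, and this is an assumption-laden stand-in, not Apple's implementation.

```python
# Illustrative "text -> calendar entry" sketch, not Apple's implementation.
# The screenshot_text and event summary below are made-up examples.
from datetime import timedelta
from dateutil import parser  # pip install python-dateutil

screenshot_text = "Women's Euro final: July 27 at 6:00 PM"

# fuzzy=True lets the parser pull a datetime out of surrounding prose;
# missing pieces (like the year) default to the current date's values.
start = parser.parse(screenshot_text, fuzzy=True)
end = start + timedelta(hours=2)  # assume a two-hour block for the event

# Emit a minimal iCalendar (.ics) snippet for the event.
fmt = "%Y%m%dT%H%M%S"
ics_event = "\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "BEGIN:VEVENT",
    f"DTSTART:{start.strftime(fmt)}",
    f"DTEND:{end.strftime(fmt)}",
    "SUMMARY:Women's Euro final",
    "END:VEVENT",
    "END:VCALENDAR",
])
print(ics_event)
```

A default-to-today fallback like the one in this sketch is also one plausible way to end up with the wrong-date behavior described above: if the date tokens aren't recognized, the only thing left to fill the entry with is the current day.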