Apple is scrambling to catch up in a race it had a head start in
The company has struggled to capitalize on that head start.
Fresh delays in upgrading Siri have set Apple back at a time when AI assistants are all the rage.
It was 2011. The newest iPhone on the block was the 4s. And Apple was raring to introduce the world to a major acquisition it had been readying for over a year: Siri.
Bought for an undisclosed sum, the "intelligent assistant" meant Apple was among the first to show smartphone users why they would want or need an AI-powered voice companion in their pocket.
Fast-forward to 2025, and Apple's promises for Siri look uncertain. The voice assistant that should have given it a head start in the ChatGPT era is struggling to catch up to a pack of rivals offering far more powerful AI assistants.
In other words, Apple is falling behind in a race it originally led.
Last week, Apple confirmed that it was delaying generative AI features for Siri that were first shown off at its Worldwide Developers Conference in June 2024. It's a rare instance in which Apple no longer has a clear release date in sight for a product it has already announced.
Jacqueline Roy, an Apple spokesperson, told the unofficial Apple blog Daring Fireball that "it's going to take us longer than we thought to deliver" on upgraded features that transform Siri into a "more natural, more contextually relevant, and more personal" experience.
This has all caused some degree of embarrassment for Apple. Its marketing campaigns for a slate of new devices released over the past several months, including iPhones, iPads, and Macs, consistently touted their integration with Apple Intelligence, of which Siri is a fundamental part.
It helps explain why the company has now made private on its YouTube channel a September ad in which the actor Bella Ramsey uses a Siri feature that does not yet exist.
"The delay makes a lot of sense," Hamish Low, an analyst at Enders Analysis, told Business Insider. "Apple clearly got ahead of itself with Apple Intelligence with disappointing features, awkward marketing campaigns, and tepid consumer demand. Apple's position here is ultimately defensive; it has much more to lose than to gain from the AI race."
Though Apple said it anticipates rolling out the features "in the coming year," the delays to Apple Intelligence, a tool it once pinned its future on, signal how much of a problem Siri has become at a time when rival services are flourishing.
As companies have spent more time thinking about how to make generative AI useful to consumers, a series of Siri alternatives with powerful embedded features has emerged, even as Apple struggles to deliver on the promises it made for a generative AI-led Siri.
OpenAI and Google have leaned heavily into building AI-powered voice assistants that industry followers say offer a more natural and engaging conversational experience than the one users currently get with Siri.
Amazon, another early mover in the virtual assistant space, introduced a revamped version of Alexa last month that's free for Prime subscribers. Prominent Apple followers, such as Bloomberg reporter Mark Gurman, have described it as "ChatGPT Voice Mode on steroids."
As he put it on X last month, "It is frightening how far behind Apple is in this space."
"Alexa+ is notable for at least claiming to bring much of the advanced functionality that you would want from a real AI assistant," Low said. "We will need to see how far it lives up to this with its public launch later this month, but its ability to plug into a host of APIs, and directly access and interact with websites in the background otherwise, is key."
The stakes are high if Apple fails to get Siri right.
The generative AI age has introduced consumers to a growing assortment of AI-enabled smartphones that threaten to steal market share from Apple by delivering more valuable AI features.
The threat to market share has become a key issue for the company in places like China, where it faces fierce competition from domestic rivals introducing smartphones with AI capabilities aimed at winning over local audiences.
Apple's dilemma is clear. It is behind in a race that it entered nearly 15 years ago, with a head start over many rivals, when it introduced Siri as an integrated feature of the iPhone 4s.
Its turnaround plan for Siri has plenty riding on it.