'Copilot' this, 'Copilot' that. A watchdog wants Microsoft to change its confusing AI advertising.
Microsoft has a long history of being criticized for coming up with clunky product names, and for changing them so often it's hard for customers to keep up.
The company's own employees once joked in a viral video that the iPod would have been called the "Microsoft I-pod Pro 2005 XP Human Ear Professional Edition with Subscription" had it been created by Microsoft.
The latest gripe among some employees and customers: The company's tendency to slap "Copilot" on everything AI.
"There is a delusion on our marketing side where literally everything has been renamed to have Copilot it in," one employee told Business Insider late last year. "Everything is Copilot. Nothing else matters. They want a Copilot tie-in for everything."
Now, an advertising watchdog is weighing in. The Better Business Bureau's National Advertising Division reviewed Microsoft's advertising for its Copilot AI tools.
NAD called out Microsoft's "universal use of the product description as 'Copilot'" and said that "consumers would not necessarily understand the difference," according to a recent report from the watchdog.
"Microsoft is using 'Copilot' across all Microsoft Office applications and Business Chat, despite differences in functionality and the manual steps that are required for Business Chat to produce the same results as Copilot in a specific Microsoft Office app," NAD further explained in an email to BI.
NAD did not make any specific recommendations on product names. But it did say that Microsoft should modify claims that Copilot works "seamlessly across all your data" because the company's Copilot-branded tools don't all work together as seamlessly as consumers might expect.
"For Copilot in Business Chat to achieve the same functionality as Copilot in Word or PowerPoint, the text-based responses from Business Chat would have to be manually copied and pasted into the relevant application," NAD stated.
The watchdog also recommended that Microsoft discontinue or modify its advertising to disclose a clear basis for the claim that "Over the course of 6, 10, and more than 10 weeks, 67%, 70%, and 75% of users say they are more productive," because the underlying survey measures perceived productivity gains rather than actual ones.
"We take seriously our responsibility to provide clear, transparent, and accurate information to our customers," Jared Spataro, Microsoft's AI at Work chief marketing officer, said in a statement. "Companies choose Microsoft 365 Copilot because it delivers measurable value, securely and at scale."
Spataro also said a "record number of customers" returned to purchase additional Microsoft 365 Copilot seats last quarter, and that the company's deal sizes continue to grow.
"From Barclays rolling out Copilot to 100,000 employees, to Dow identifying millions in potential savings—the data speaks for itself," Spataro said.
A Microsoft spokesperson added that the company disagrees with NAD's conclusions about the phrasing of its advertising and whether it implied certain claims, but will follow NAD's recommendations.
Recently, the company developed a new plan to simplify its many AI offerings by streamlining how the products are pitched to customers, according to internal slides from a recent presentation.
Related Articles

Business Insider
The CEO building the 'Ikea of factories' wants to democratize semiconductor production
In his 1986 book "Engines of Creation," engineer K. Eric Drexler — often called the godfather of nanotechnology — made a prediction. "The coming era of molecular machines will mean the end of many limits: the limit of scarcity, the limit of slow development, the limit of ignorance enforced by the lack of tools," he wrote.

Reading those words a few years later, when he was 16, Matthew Putman started thinking. "Our bodies work as these little micro-machines where you have ribosomes and enzymes and things that are working and replicating and making things all the time, but our factories work the way that they've worked for the last hundred years," Putman told Business Insider he thought at the time. He wondered how a world would look "where you don't have large assembly lines, you don't have smokestacks, you instead just make things so perfectly," he said.

Putman became fascinated by the possibilities of machines that are "atomically precise." It wasn't until the recent AI boom, however, that the idea really took off with fabrication plants.

Putman, now 50, is the CEO of Brooklyn-based Nanotronics, which he cofounded with his father in 2010. The company started out building microscopes and tools to detect defects in semiconductors, among other materials. Now, it builds small, modular semiconductor manufacturing plants called Cubefabs.

While the biggest fabs in the country are often millions of square feet in size, Cubefabs measure anywhere from 25,000 square feet for the smallest units up to about 60,000 square feet for a full-sized fab. They're adaptable, and the company says they can be assembled in under a year in most places on Earth. They're also smart — thanks to the power of AI — so they can self-monitor their production and improve in real time, the company said. And they're relatively cheap, costing a minimum of $30 million to $40 million, compared to large fabs that can cost billions to build.

With President Donald Trump back in the White House and pledging to reinvigorate US manufacturing, a new opening has emerged for Nanotronics — even as sweeping tariffs challenge companies that produce or depend on semiconductors. Putman says that in the long term, the tariffs will bolster domestic innovation.

Tariffs "should be a wake-up call — a push to create something better than what either the US or China has done before," he told BI in a video interview from the Nanotronics headquarters in the Brooklyn Navy Yard. "If we get this right, American innovation won't just protect our future — it could help redefine global progress in a way that benefits humanity."

Putman says compact, modular factories are exactly that. "Your factory should be incredibly small," Putman said, gesturing to the room behind him. "Eventually, it could be the size of this room."

The 'Ikea of factories'

Semiconductor manufacturing has surged since the launch of ChatGPT. Global annual revenue for the industry is expected to reach more than $1 trillion by 2030, according to McKinsey & Company.

In the US, despite legislation subsidizing domestic semiconductor production, fabs are more expensive to construct and maintain than those built in places like mainland China and Taiwan, McKinsey says. The US also suffers from a shortage of qualified labor, which can delay construction timelines, according to the firm.

To attempt to solve some of these issues, Nanotronics teamed up with architecture firm Rogers Partners and engineering firm Arup to design compact factories. Each one runs with 37 people, but Putman says the ideal setup is four factories — about 180 workers total — which allows them to scale up without halting production. "It's like the Ikea of factories," Putman said.

The company has raised $182 million to date from firms including Peter Thiel's Founders Fund.

Cubefabs can be used to produce chips that span a range of uses across electronics applications, electric vehicles, and photodetectors for cube satellites, Putman said. "The more precise we make things, the more abundance we bring to the world," he said. "The business of making things grow bigger and bigger starts small — molecular small."

Building on the foundational research of scientist Philippe Bove, now chief scientist at Nanotronics, the company also uses gallium oxide — a type of semiconductor that can handle more power than traditional materials like silicon — to produce advanced chips.

The company plans to have its first installation set up in New York within the next 18 months. "These fabs do not require billions in capital expenditure or large populations of highly trained workers," Putman told BI in a follow-up email. "The vision is that any region — whether in the Global South or the United States — should be able to produce what it needs locally."

Business Insider
The new must-have for CEOs: An AI whisperer
A year ago, Glenn Hopper was advising just a handful of company leaders on how to embed AI agents and tools like ChatGPT throughout their organizations. Today, the Memphis-based AI strategist has an extensive waitlist of C-suite executives seeking his help with those tasks and more.

"This technology is moving so fast, the gap between what CEOs need to know and what they actually understand is massive," said Hopper, a former finance chief and author of the 2024 book "AI Mastery for Finance Professionals: Foundations, Techniques, and Applications." That's why so many bosses are knocking on his door, he told Business Insider.

Leadership coaches and consultants have long helped CEOs navigate the pressures of the corner office. Now, executive sherpas who were early to embrace AI say they're seeing a spike in CEOs seeking guidance on everything from vetting vendors and establishing safety protocols to preparing for the next big wave of AI breakthroughs.

"Everything is inbound," said Conor Grennan, chief of AI Mindset, an AI consulting and training firm he founded in 2023. "We have a list a mile long."

Grennan, also chief AI architect at New York University's Stern School of Business, said company leaders often reach out after struggling to get employees to adopt AI. They see other CEOs touting AI's benefits, and so they're feeling like they're behind, he said.

'It was garbage in, garbage out'

Some of the AI gurus that company leaders are tapping helm (or work at) startups that were launched in recent years to take advantage of the AI boom. Others head up new or expanded AI teams within established advisory firms.

Lan Guan was named Accenture's first chief AI officer in 2023, two decades after joining the professional-services firm. She's since been counseling a growing number of company leaders on how to bring AI into their organizations.

"CEOs need an AI translator to basically sift through all this noise," she said. "The amount of signals you're getting, the amount of noise, it's so distracting."

Company leaders are also seeking out AI gurus in some cases to fix mistakes they made while going at it alone. Guan recalled one CEO who came to her after the person's company had to pause a multi-million dollar investment in a custom AI model because employees trained it on dozens of different versions of the same operating procedure. "When they tried to scale, their data was not clean enough," she said. "It was garbage in, garbage out."

Amos Susskind, CEO and founder of the London beauty-tech startup Noli, has been tapping Guan and members of her team for AI-related guidance for the past year. Noli's roughly 20 employees have been using AI tools to do their jobs, and the company's beauty-product recommendation platform is powered by AI.

"I'm in touch with AI leaders in Accenture probably five times a day," said Susskind, who previously led L'Oreal's consumer-products division for the U.K. and Ireland.

Shaping the AI narrative leaders use

Last year, 78% of workers said their organizations had used AI in at least one function, up from 55% in 2023, according to a March survey by global management consulting firm McKinsey. Companies are planning to dig into AI even more this year. A May survey from professional-services firm PricewaterhouseCoopers found that 88% of senior executives planned to increase their AI-related budgets in the next 12 months.

"AI is in line with, if not bigger than, the internet," said Dan Priest, chief AI officer of PwC, whose position was created last year.

Public company CEOs mentioned "agentic AI" — AI systems capable of acting autonomously — and similar terms on 269 conference calls in the second quarter, up from 12 during the same period last year, according to AI research firm AlphaSense.

Getting those AI mentions right is another reason why company leaders are leaning on AI sages. CEOs need to consider more than just how investors and analysts interpret their remarks, said Priest. Employees are listening, too, and workplace experts say heightened anxiety among personnel can dent productivity and drive up turnover.

"You want to be careful," warned Priest, who helps CEOs communicate their AI strategy externally. "The second you start talking about AI efficiencies, it makes your teams very nervous."

CEOs also need to make sure employees using AI are doing so safely, said Hopper, the AI strategist in Memphis. "If you try to have too prohibitive a policy or don't have a policy at all, that's when people are going to do stupid stuff with data," he said.

While CEOs may not want to be involved in every AI process or initiative happening at their companies, Hopper said the more hands-on experience they get with the technology, the better equipped they'll be to make smart decisions about how their organizations can benefit from it.

Michael White, chief of MashTank, a boutique management consulting firm near Philadelphia, became one of Hopper's clients last year. Though he considers himself tech-savvy, White said Hopper got him up to speed on AI faster than he could've on his own.

"We now have a bot that knows a lot of what I know, but has a better memory than I do," said White. Without an AI whisperer like Hopper, he added, "I'd still be at the starting gate."

Business Insider
OpenAI and Microsoft are dueling over AGI. These real-world tests will prove when AI is really better than humans.
AGI is a pretty silly debate. It's only really important in one way: It governs how the world's most important AI partnership will change in the coming months. That's the deal between OpenAI and Microsoft.

This is the situation right now: Until OpenAI achieves artificial general intelligence — where AI capabilities surpass those of humans — Microsoft gets a lot of valuable technological and financial benefits from the startup. For instance, OpenAI must share a significant portion of its revenue with Microsoft. That's billions of dollars.

One could reasonably argue that this might be why Sam Altman bangs on about OpenAI getting close to AGI soon. Many other experts in the AI field don't talk about this much, think the AGI debate is off base in various ways, or consider it just not that important. Even Anthropic CEO Dario Amodei, one of the biggest AI boosters on the planet, doesn't like to talk about AGI.

Microsoft CEO Satya Nadella sees things very differently. Wouldn't you? If another company is contractually required to give you oodles of money until it reaches AGI, you're probably not going to think we're close to AGI! Nadella has called the push toward AGI "benchmark hacking," which is so delicious. This refers to AI researchers and labs designing AI models to perform well on wonky industry benchmarks, rather than in real life.

Here's OpenAI's official definition of AGI: "highly autonomous systems that outperform humans at most economically valuable work." Other experts have defined it slightly differently. But the main point is that AI machines and software must be better than humans at a wide variety of useful tasks. You can already train an AI model to be better at one or two specific things, but to get to artificial general intelligence, machines must be able to do many different things better than humans.

My real-world AGI tests

Over the past few months, I've devised several real-world tests to see if we've reached AGI. These are fun or annoying everyday things that should just work in a world of AGI, but they don't right now for me. I also canvassed input from readers of my Tech Memo newsletter and tapped my source network for fun suggestions.

Here are my real-world tests that will prove we've reached AGI:

The PR departments of OpenAI and Anthropic use their own AI technology to answer every journalist's question. Right now, these companies are hiring a ton of human journalists and other communications experts to handle a barrage of reporter questions about AI and the future. When I reach out to these companies, humans answer every time. Unacceptable! Unless this changes, we're not at AGI.

This suggestion is from a hedge fund contact, and I love it: Please, please can my Microsoft Outlook email system stop burying important emails while still letting spam through? This one seems like something Microsoft and OpenAI could solve with their AI technology. I haven't seen a fix yet.

In a similar vein, can someone please stop Cactus Warehouse from texting me every 2 days with offers for 20% off succulents? I only bought one cactus from you guys once! Come on, AI, this can surely be solved!

My 2024 Tesla Model 3 Performance hits potholes in FSD. No wonder tires have to be replaced so often on these EVs. As a human, I can avoid potholes much better. Elon, the AGI gauntlet has been thrown down. Get on this now.

Can AI models and chatbots make valuable predictions about the future, or do they mostly just regurgitate what's already known on the internet? I tested this recently, right after the US bombed Iran. ChatGPT's stock-picking ability was put to the test versus a single human analyst. Check out the results here. TL;DR: We are nowhere near AGI on this one.

There's a great Google Gemini TV ad where a kid is helping his dad assemble a basketball net. The son is using an Android phone to ask Gemini for the instructions and pointing the camera at his poor father struggling with parts and tools. It's really impressive to watch as Gemini finds the instruction manual online just by "seeing" what's going on live with the product assembly. For AGI to be here, though, the AI needs to just build the damn net itself. I can sit there and read out instructions in an annoying way while someone else toils with fiddly assembly tasks — we can all do that.

Yes, I know these tests seem a bit silly — but AI benchmarks are not the real world, and they can be pretty easily gamed.

That last basketball net test is particularly telling for me. Getting an AI system and software to actually assemble a basketball net — that might happen sometime soon. But getting the same system to do a lot of other physical-world manipulation stuff better than humans, too? Very hard and probably not possible for a very long time.

As OpenAI and Microsoft try to resolve their differences, the companies can tap experts to weigh in on whether the startup has reached AGI or not, per the terms of their existing contract, according to The Information. I'm happy to be an expert advisor here. Sam and Satya, let me know if you want help!

For now, I'll leave the final words to a real AI expert. Konstantin Mishchenko, an AI research scientist at Meta, recently tweeted this, while citing a blog by another respected expert in the field, Sergey Levine:

"While LLMs learned to mimic intelligence from internet data, they never had to actually live and acquire that intelligence directly. They lack the core algorithm for learning from experience. They need a human to do that work for them," Mishchenko wrote, referring to AI models known as large language models.

"This suggests, at least to me, that the gap between LLMs and genuine intelligence might be wider than we think. Despite all the talk about AGI either being already here or coming next year, I can't shake off the feeling it's not possible until we come up with something better than a language model mimicking our own idea of how an AI should look," he concluded.