
How to Spot AI Hype and Avoid The AI Con, According to Two Experts
That is the heart of the argument that linguist Emily Bender and sociologist Alex Hanna make in their new book The AI Con. It's a useful guide for anyone whose life has intersected with technologies sold as artificial intelligence and anyone who's questioned their real usefulness, which is most of us. Bender is a professor at the University of Washington who was named one of Time magazine's most influential people in artificial intelligence, and Hanna is the director of research at the nonprofit Distributed AI Research Institute and a former member of the ethical AI team at Google.
The explosion of ChatGPT in late 2022 kicked off a new hype cycle in AI. Hype, as the authors define it, is the "aggrandizement" of technology that you are convinced you need to buy or invest in "lest you miss out on entertainment or pleasure, monetary reward, return on investment, or market share." But it's not the first time, nor likely the last, that scholars, government leaders and regular people have been intrigued and worried by the idea of machine learning and AI.
Bender and Hanna trace the roots of machine learning back to the 1950s, when mathematician John McCarthy coined the term artificial intelligence. It was an era when the United States was looking to fund projects that would help it gain any kind of edge over the Soviets militarily, ideologically and technologically. "It didn't spring whole cloth out of Zeus's head or anything. This has a longer history," Hanna said in an interview with CNET. "It's certainly not the first hype cycle with, quote, unquote, AI."
Today's hype cycle is propelled by billions of dollars of venture capital flowing into startups like OpenAI, while tech giants like Meta, Google and Microsoft pour billions more into AI research and development. The result is clear: all the newest phones, laptops and software updates are drenched in AI-washing. And there are no signs that AI research and development will slow down, thanks in part to a growing motivation to beat China in AI development. Not the first hype cycle indeed.
Of course, generative AI in 2025 is much more advanced than Eliza, the psychotherapy chatbot that first enraptured scientists in the 1960s. Today's business leaders and workers are inundated with hype, with a heavy dose of FOMO and seemingly complex but often misused jargon. Listening to tech leaders and AI enthusiasts, it might seem like AI will take your job to save your company money. But the authors argue that neither is wholly likely, which is one reason why it's important to recognize and break through the hype.
So how do we recognize AI hype? These are a few telltale signs, according to Bender and Hanna, that we share below. The authors outline more questions to ask and strategies for AI hype busting in their book, which is out now in the US.
Watch out for language that humanizes AI
Anthropomorphizing, or the process of giving an inanimate object human-like characteristics or qualities, is a big part of building AI hype. An example of this kind of language can be found when AI companies say their chatbots can now "see" and "think."
These can be useful comparisons when trying to describe the ability of new object-identifying AI programs or deep-reasoning AI models, but they can also be misleading. AI chatbots aren't capable of seeing or thinking because they don't have brains. Even the idea of neural nets, Hanna noted in our interview and in the book, is based on the human understanding of neurons from the 1950s, not on how neurons actually work, but it can fool us into believing there's a brain behind the machine.
That belief is something we're predisposed to because of how we as humans process language. We're conditioned to imagine that there is a mind behind the text we see, even when we know it's generated by AI, Bender said. "We interpret language by developing a model in our minds of who the speaker was," Bender added.
In these models, we use our knowledge of the person speaking to create meaning, not just using the meaning of the words they say. "So when we encounter synthetic text extruded from something like ChatGPT, we're going to do the same thing," Bender said. "And it is very hard to remind ourselves that the mind isn't there. It's just a construct that we have produced."
The authors argue that part of why AI companies try to convince us their products are human-like is that it lays the groundwork for convincing us that AI can replace humans, whether at work or as creators. It's compelling to believe that AI could be the silver-bullet fix to complicated problems in critical industries like health care and government services.
But more often than not, the authors argue, AI isn't being used to fix anything. AI is sold on the promise of efficiency, but AI services end up replacing qualified workers with black-box machines that need copious amounts of babysitting from underpaid contract or gig workers. As Hanna put it in our interview, "AI is not going to take your job, but it will make your job shittier."
Be dubious of the phrase 'super intelligence'
If a human can't do something, you should be wary of claims that an AI can do it. "Superhuman intelligence, or super intelligence, is a very dangerous turn of phrase, insofar as it thinks that some technology is going to make humans superfluous," Hanna said. In "certain domains, like pattern matching at scale, computers are quite good at that. But if there's an idea that there's going to be a superhuman poem, or a superhuman notion of research or doing science, that is clear hype." Bender added, "And we don't talk about airplanes as superhuman flyers or rulers as superhuman measurers, it seems to be only in this AI space that that comes up."
The idea of AI "super intelligence" comes up often when people talk about artificial general intelligence. Many CEOs struggle to define what exactly AGI is, but it's essentially AI's most advanced form, potentially capable of making decisions and handling complex tasks. There's still no evidence we're anywhere near a future enabled by AGI, but it's a popular buzzword.
Many of these future-looking statements from AI leaders borrow tropes from science fiction. Both boosters and doomers — how Bender and Hanna describe AI enthusiasts and those worried about the potential for harm — rely on sci-fi scenarios. The boosters imagine an AI-powered futuristic society. The doomers bemoan a future where AI robots take over the world and wipe out humanity.
The connecting thread, according to the authors, is an unshakable belief that AI is smarter than humans and inevitable. "One of the things that we see a lot in the discourse is this idea that the future is fixed, and it's just a question of how fast we get there," Bender said. "And then there's this claim that this particular technology is a step on that path, and it's all marketing. It is helpful to be able to see behind it."
Part of why AI is so popular is that an autonomous functional AI assistant would mean AI companies are fulfilling their promises of world-changing innovation to their investors. Planning for that future — whether it's a utopia or dystopia — keeps investors looking forward as the companies burn through billions of dollars and admit they'll miss their carbon emission goals. For better or worse, life is not science fiction. Whenever you see someone claiming their AI product is straight out of a movie, it's a good sign to approach with skepticism.
Ask what goes in and how outputs are evaluated
One of the easiest ways to see through AI marketing fluff is to check whether a company discloses how it operates. Many AI companies won't tell you what content is used to train their models. But they usually disclose what they do with your data, and they sometimes brag about how their models stack up against competitors. Their privacy policies are typically the place to start looking.
One of the top complaints and concerns from creators is how AI models are trained. There are many lawsuits over alleged copyright infringement, and there are a lot of concerns over bias in AI chatbots and their capacity for harm. "If you wanted to create a system that is designed to move things forward rather than reproduce the oppressions of the past, you would have to start by curating your data," Bender said. Instead, AI companies are grabbing "everything that wasn't nailed down on the internet," Hanna said.
If you're hearing about an AI product for the first time, one thing in particular to look out for is any kind of statistic that highlights its effectiveness. Like many other researchers, Bender and Hanna have called out that a finding with no citation is a red flag. "Anytime someone is selling you something but not giving you access to how it was evaluated, you are on thin ice," Bender said.
It can be frustrating and disappointing when AI companies don't disclose certain information about how their AI products work and how they were developed. But recognizing those holes in their sales pitch can help deflate hype, even though it would be better to have the information. For more, check out our full ChatGPT glossary and how to turn off Apple Intelligence.