An ex-Google AI ethicist and a UW professor want you to know AI isn't what you think it is
Emily Bender, a UW linguistics professor, and Alex Hanna, the research director of the Distributed AI Research Institute and a former Google AI ethicist, would like readers to take away from their new book, "The AI Con: How to Fight Big Tech's Hype and Create the Future We Want," that AI isn't what it's marketed to be.
Longtime collaborators, cohosts of the Mystery AI Hype Theater 3000 podcast and vocal AI critics, Bender and Hanna want to take the hyperbole out of the conversation around AI and caution that, frankly, intelligence isn't artificial.
Their at times funny and irreverent recasting of AI as "mathy maths," "text-extruding machines," or, classically, "stochastic parrots" aims to get us to see automation technologies for what they are and separate them from the hype.
This Q&A has been edited for clarity and length.
Bender: I think it's always helpful to keep the people in the frame. The narrative that this [automation] is artificial intelligence is designed to hide the people.
The people involved are everything from the programmers who made algorithmic decisions, to the people whose creative work was appropriated, even stolen, as the basis for these systems. Even the people who did the data work, the content moderation, so that the system's outputs, what users see, don't include horrific stuff.
Hanna: This term AI is not a singular thing. It's kind of a gloss on many different types of automation, and the thought that there's just a tool writing emails obscures how broadly the term is being leveraged. These are systems used in everything from incarceration and hiring decisions to outputting synthetic media.
Just like fast fashion or chocolate production, a whole host of people are involved in maintaining this supply chain.
That AI-generated email or text for the difficult thing I don't want to write: know that there's a whole ecosystem around it that's affecting people, labor-wise, environmentally, and in other guises.
The book highlights countless ways that AI is extractive and can make human life worse. Why do you think so many are singing the gospel of AI and embracing such tools?
Bender: It's interesting that you use the phrase singing the gospel.
There are a lot of people who have drawn connections between talk of artificial general intelligence, especially, and Christian eschatology: the idea that there is something we could build that could save us.
That could save us from everything from the dread of menial tasks to major problems we're facing, like the climate crisis, to just the experience of not having answers available. Of course, none of that actually plays out. We do not live in a world where every question has an answer.
The idea is that if we just throw enough compute and data at it (and that's where the extractivism comes in), we'd be relieved of that and be in a situation where there is an answer to every question at our fingertips.
Hanna: There's a desire for computing to step in and really wow us, and now we have AI for everything from social services to healthcare to making art. Part of it is a desire to have a more "objective" type of computational being.
Lately, there's been a lot made of 'the crisis of social capital,' 'the crisis of masculinity,' the crisis of whatever social phenomenon you'd like to insert here.
This goes back to Robert Putnam's book "Bowling Alone" and a few weird results in the 2006 General Social Survey, which said people have fewer close friends than they used to.
There's this general thesis that people are lonelier, and that may be true, but AI is presented as a panacea for those social ills.
There are a lot more things we need to focus on that are much harder, like rebuilding social infrastructure, rebuilding third spaces, fortifying our schools, and rebuilding urban infrastructure. But if we have a technology that seems to do all of those things, then people get really excited about it.
Language is also a large focus of the book, and you codified the doomer and booster camps. Can you say more about these groups? What about readers who won't recognize themselves in any of these groups?
Bender: The booster versus doomer thing is really constricting.
This is the discourse that's supposed to be a one-dimensional incline, where on one end you have the doomers who say, 'AI is a thing and it's going to kill us all!' and on the other end the AI boosters who say, 'AI is a thing and it's going to solve all of our problems!' The way that they speak often sounds like that is the full range of options.
So you're at one end or the other, or somewhere in the middle, and the point we make is that actually, no, that's a really small space of possibilities. It's two sides of the same coin, both predicated on 'AI is a thing and is super powerful,' and that is ungrounded nonsense.
Most of the space of possibilities, including the space that we inhabit, is outside that.
Hanna: We hope the book also gives people on that booster and doomer scale a way out of that thinking.
This can be a mechanism to help people change their minds and consider a perspective that they might not have considered. Because we're in a situation where the AI hype is so — this is a term I learned from Emily — "thick on the ground", that it's hard to really see things for what they are.
You offer many steps that people can take to resist the pervasive use of AI. What does one do when your workplace, or the online services you use, have baked AI functionality into everyday processes?
Bender: In all cases when you're talking about refusal, both individual and collective, it's helpful to go back to values and why we're doing what we're doing.
People can ask a series of questions about any technology. It is important to remember that you have the agency to ask those questions.
The inevitability narrative is basically an attempt to steal that agency and say, "It is all powerful, or it will be soon, so just go along with it, and you're not in a position to understand anyway." In fact, we are all in a position to understand what it is and what values are involved in it.
Then you can say, 'Okay, you're proposing to use some automation here; how does that fit with our purposes and our values, and how do we know how well it fits? Where is the evaluation?' Too much of this is 'Oh, just believe us.'
There are instances where people with a deep technical understanding of AI, motives notwithstanding, still overstate and misunderstand what AI is and what it can do. How should laypeople with a more casual understanding think and talk about AI?
Bender: The first step is always disaggregating AI; it's not one thing.
So what specifically is being automated? Then, be very skeptical of any claims, because the people who are selling this are wholeheartedly embracing the magical sound of artificial intelligence and are very often extremely cagey at best about what the system actually does, what the training data was, and how it works.
Hanna: There's a tendency, partially economic and partially because some people are so deep in the sauce, to not see the forest for the trees.
AI researchers are already primed to see these things in a certain light. They're thinking about it primarily in terms of engineering breakthroughs, more efficient ways to learn parameters, or ways to do XYZ tasks within that field, but they are not the people focused on specialized fields like nursing, for instance.
People should take pride in and be able to use their expertise in their field to combat the AI hype. One great example of this is National Nurses United, which wrote explainers about AI and disaggregated between AI and biometric surveillance, passive listening, and sensors in the clinicians' office, and what that was doing to nursing practice. So, not buying into hype and leaning into one's own expertise is a really powerful method here.
In your respective circles, what has been the reaction to the book thus far?
Bender: People are excited. Where I sit, in linguistics, is really an important angle for understanding why the synthetic text-extruding machines in particular are so compelling. The linguists I speak to are excited to see our field having this role at this moment.
Hanna: I've had great reactions. A lot of my friends are software developers or in related fields, since I did my undergrad in computer science and a lot of my friends growing up were tech nerds, and almost to a T, all of them are anti-AI.
They say, 'I don't want Copilot,' 'I don't want this stuff writing my code,' 'I'm really sick of the hype around this,' and I thought that was the most surprising and maybe the most exciting part of this.
People who do technical jobs, where they're promised the most speed or productivity improvements, are some of the people who are most opposed to the introduction of these tools in their work.