Latest news with #EmilyBender


New York Times
16-07-2025
- Entertainment
- New York Times
How I Learned to Stop Worrying and Have Fun With A.I.
This spring, OpenAI's C.E.O., Sam Altman, advertised a new model of ChatGPT by showcasing its ability to write fiction. Mr. Altman had prompted the bot to write a story about grief, in the style of 'metafiction' (a self-reflexive genre in which the narrator weaves personal details into the story). It duly generated a winding tale that compares grieving a dead loved one to the loss function, technical jargon for a bit of the math that makes modern artificial intelligence systems work. Mr. Altman crowed about the passage, implying that such a complex genre, one associated with pretentious literary types, could be written only by a really intelligent agent.

I'm a professor of literature, and I think the story is a solid illustration of the genre. I don't know that it's great literature, or that ChatGPT is about to take over literary publishing. I certainly don't think it proves that ChatGPT is intelligent; it just shows that it is an expert imitator of style. More broadly, I think we're having the wrong debates about A.I. altogether.

In a recent article for The Atlantic, Tyler Austin Harper called A.I. a 'scam,' an animatronic simulation of intelligence. This claim went against not just Mr. Altman, but also many tech journalists and data wonks who think Silicon Valley's narrative that we are close to real machine intelligence is plausible. The linguist Emily Bender and the sociologist Alex Hanna think that A.I. is a 'con,' and Dr. Bender describes it as a set of tricks that produces 'synthetic text' rather than human meaning.

These critiques do little to explain A.I.'s popularity. They miss the fact that humans love to play games with language, not just use it to test intelligence. What Mr. Altman inadvertently showed us is that what is really driving the hype and widespread use of large language models like ChatGPT is that they are fun. A.I. is a form of entertainment.

OpenAI seems to understand this. ChatGPT was the (then) fastest-ever platform to gain 100 million users, a feat it pulled off in just two months. The company just teamed up with Mattel, which could result in a Barbie you can have a conversation with. The endless back-and-forth about 'intelligence' seems abstract compared to the reality that hundreds of millions of people are using these systems to write emails, simulate tutors and even fall in love with their chat-partner avatars. The scholar Neil Postman's idea about the rise of television — that we were 'amusing ourselves to death' with the medium — could extend to A.I. You can't become obsessed with something that isn't amusing in the first place. No one ever fell in love with a calculator.

There's a name for being fooled into thinking you're dealing with an intelligent being when you're not: the Eliza effect. The name comes from the first-ever chatbot, built by the computer scientist Joseph Weizenbaum in 1966 and named for George Bernard Shaw's character Eliza Doolittle. Mr. Weizenbaum thought of his program as a simple trick and was horrified when his secretary, testing it, asked him to leave the room because her conversation with the chatbot had become too intimate. Critics of A.I. today seem to think that the whole world is under the spell of the Eliza effect, hundreds of millions of people deluded by gimmicks. But what if the effect isn't a sign of delusion but a simple desire to keep chatting, to play with the limits of language?
When a Times of London journalist, James Marriott, posted an A.I.-generated review of Martin Amis's novel 'The Rachel Papers,' he set off a vehement debate about whether the passage was of magazine quality or shallow and repetitive. The argument wasn't really about intelligence; it was about words. Underneath all the barbs and shouting — fueled by a broader public panic about a literacy crisis — I saw people reading and interpreting, passionately.

When I was a child, the panic was about video games, which were supposed to degrade us morally, make us stupid and otherwise warp society permanently. None of that happened, even as video games became a globally popular form of entertainment. Even the most serious face of A.I. — its ability to pass tests, solve difficult logic and math problems, and hit benchmarks — can also be viewed as a form of entertainment: puzzles. Humans have always used cognitive challenges as a form of fun, and the history of A.I. is filled with these types of games, such as chess and Go.

One group of academics, led by the cognitive scientist Alison Gopnik, has characterized large language models as 'cultural technologies.' They mean that the bots contain an enormous amount of human knowledge, writing, images and other forms of cultural production. I tend to agree, but I think it's crucial to understand that using such systems is also extremely entertaining. Whether you are touching up the 'Mona Lisa,' 'reviewing' novels or doing logic puzzles, you are engaging in the very human drive to play.

As I've watched people adopt these systems, what I've seen is mostly people playing with art and language. If you go through the history of these bots, you see poetry, fiction and all kinds of little genre experiments as a constantly recurring theme. Literary uses like these are deployed to advertise the bots and their abilities, just as quantitative metrics are used in formal data science papers to gauge A.I.'s linguistic capabilities. The effect isn't limited to language, either. OpenAI also advertised one of its early models with an image produced by the prompt 'astronaut riding a horse.' The natural response to this image is to think, 'Cool!' A.I. is a culture machine.

That's not to say I don't share many of the worries of the critics. A.I.'s deployment in medicine, military applications, hiring algorithms and beyond is alarming. (Nobody should feel comfortable with the Defense Department's intentions to use Elon Musk's Grok chatbot, so soon after it posted a deluge of antisemitic comments.) But I simply don't think A.I. is driving the bus when it comes to these problems. Many of our systems are simply broken in the first place, and A.I. seems like a fix even when it isn't.

We ought to think about A.I. as an entertainment-first system, before anything else. Would you replace all of primary education with 'Sesame Street'? Or decide government policy with SimCity? It's not an insult to the beloved children's program or computer game to say no. The lesson is simple: We might be taking A.I. too seriously.

Leif Weatherby (@leifweatherby) is an associate professor of German and the director of the Digital Theory Lab at New York University. He is the author of 'Language Machines.'


Fast Company
06-06-2025
- Fast Company
5 dark facts to remember in the face of AI hype
Emily Bender is a Professor of Linguistics at the University of Washington, where she is also Faculty Director of the Computational Linguistics Master of Science program, affiliate faculty in the School of Computer Science and Engineering, and affiliate faculty in the Information School. Alex Hanna is Director of Research at the Distributed AI Research Institute and a lecturer in the School of Information at the University of California, Berkeley. She has been featured in articles for the Washington Post, Financial Times, The Atlantic, and Time.

What's the big idea?

The AI Con is an exploration of the hype around artificial intelligence, whose interests it serves, and the harm being done under this umbrella. Society has options when it comes to pushing back against AI hype, so there is still hope that we can collectively resist and prevent tech companies from mortgaging humanity's future. Below, co-authors Emily Bender and Alex Hanna share five key insights from their new book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want. Listen to the audio version—read by Emily and Alex—in the Next Big Idea App.

1. The tech that's driving the current wave of AI hype is built on a parlor trick

Chatbots like ChatGPT are impressive technology, but maybe not in the way you think. They cannot perform the range of functions they purportedly fulfill; rather, they are designed to impress us. The key to their parlor trick lies in how people use language. You might think it's a simple matter of decoding what the words say, but the process is both far more complex and far more social. We interpret language by relying on everything we know (or guess) about the person who said the words, and whatever common ground we share with them. Then we make inferences about what they must have been trying to convey. We do this instinctively and reflexively. So, when we encounter synthetic text of the kind that comes out of ChatGPT and its ilk, we interpret it by imagining a mind behind the text, even though there is no mind there. In other words, the linguistic and social skills we wrap around AI outputs are what make it so easy for the purveyors of chatbots to fool us into perceiving chatbots as reasoning entities.

2. AI is not going to take your job, but it will make your job a lot worse

Much of the purpose of AI technology is to remove humans from the equation at work. The story of the Writers Guild of America strike is instructive here. In 2023, the Writers Guild of America East and West (the WGA), the labor union representing Hollywood writers, went on strike for several reasons, including a demand to raise the pay rate that writers receive from streaming services. They also wanted to ensure that they wouldn't be reduced to babysitters for chatbots tasked with writing scripts based on harebrained ideas from movie and television producers. John Lopez, a member of the WGA's AI working group, noted that writers could be paid the rewrite rate for dealing with AI-generated content, which is much less than the pay rate for an original script. We've seen the threat of image and text generators drastically reduce the number of job opportunities for graphic designers, video game artists, and journalists. This is not because these tools can adequately perform the tasks of these professionals, but because they perform well enough for careers to be cut short and for workers to be rehired at a fraction of what they had been paid before, just so that they can fix the sloppy outputs of AI.
Furthermore, systems that get called 'AI' are often a thin veneer that hides the tried-and-true corporate strategy of outsourcing labor to people in the Majority World, also called the Global South. Many of these workers moderate online content, test chatbots for toxic outputs, and even remotely drive vehicles that are advertised as being fully automated. Luckily, workers have been able to push back through concerted labor action, industrial sabotage (especially through creative tools for artists, like Nightshade and Glaze, which prevent their work from being used for training image generation models), and political education.

3. The purpose of the AI con is to disconnect people from social services

Because we use language in just about every sphere of activity, and because synthetic text-extruding machines can be trained to mimic that language, it can seem like we are about to have technology that can provide medical diagnoses, personalized tutoring, wise decision making in the allocation of government services, legal representation, and more—all for just the cost of electricity (plus whatever the companies making the chatbots want to charge). But in all these cases, it's not the words that matter, but the actual thought that goes into them and the relationships they help us build and maintain. AI systems are only good for those who want to redirect funding away from social services and justify austerity measures. Meanwhile, those in power will be sure to get services from actual people, while foisting the shoddy facsimiles off on everyone else.

The head of Health AI at Google, Greg Corrado, said he wouldn't want Google's Med-PaLM system to be part of his family's health care journey. That didn't stop him from bragging about how it supposedly passed a medical licensing exam. It didn't. But more to the point, designing systems to pass multiple-choice exams about medical situations is not an effective way to build useful medical technology. In these domains, AI hype takes the form of specious claims of technological solutions to social problems, based, at best, on spurious and unfounded evaluations of the systems being sold.

4. AI is not going to kill us all, but climate change might

There was a time in Silicon Valley and Washington, D.C., when an idiosyncratic yet serious question was posed to people working on technology or tech policy: 'What is your p(doom)?' p(doom) refers to the probability of doom, or the likelihood that AI would somehow kill all of humanity. This doomerism is predicated on the development of artificial general intelligence (AGI). AGI is poorly defined, but the basic idea is a system that can do a variety of tasks as well as or better than humans. Unfortunately, doomerism has serious purchase with some technologists and policymakers, and it is predicated on a body of unseemly ideologies, including effective altruism, longtermism, and rationalism. These ideologies take the moral philosophy of utilitarianism to the extreme, suggesting that we need to discount harm in the present to save the billions of trillions of humans who will live in some undefined future. These ideologies are eugenicist in their origins and implications.
Meanwhile, we are likely to fail to meet the Paris Agreement's goal to limit the increase in global average temperature to well below 2 degrees Celsius above pre-industrial levels, and AI is making this problem worse. The data centers that host these tools are generating vast amounts of excess carbon, the semiconductors used for their parts are leaching forever chemicals into the ground, and backup generators are projected to cause more respiratory illnesses in the poorest parts of the U.S. and elsewhere. Not only are robots not going to take over the world, but their production is going to make the climate crisis much worse.

5. None of this is inevitable

The people selling AI systems and the hype around them would like us to voluntarily give up our agency in these matters. They tell us that AI, or even AGI, is inevitable, or at least that systems like ChatGPT are 'here to stay.' But none of this is inevitable. We do have agency, both collectively and individually. Collectively, we can push for regulations that prevent AI tech from being used on us and for labor contracts that keep us in control of our work. On an individual level, we can refuse to use AI systems. We can be critical consumers of automation, making sure we understand what's being automated, how it was evaluated, and why it's being automated. We can also be critical consumers of journalism about technology, looking for and supporting work that holds power to account. And finally, we can and should engage in ridicule as praxis, meaning having fun pointing out all the ways in which synthetic media-extruding machines are janky and tacky.


Geek Wire
19-05-2025
- Entertainment
- Geek Wire
Scholars explain how humans can hold the line against AI hype, and why it's necessary
BOT or NOT? This special series explores the evolving relationship between humans and machines, examining the ways that robots, artificial intelligence and automation are impacting our work and lives.

Strategic refusal is one of the ways to counter AI hype. (Bigstock Illustration / Digitalista)

Don't call ChatGPT a chatbot. Call it a conversation simulator. Don't think of DALL-E as a creator of artistic imagery. Instead, think of it as a synthetic media extruding machine. In fact, avoid thinking that what generative AI does is actually artificial intelligence. That's part of the prescription for countering the hype over artificial intelligence, from the authors of a new book titled 'The AI Con.'

''Artificial intelligence' is an inherently anthropomorphizing term,' Emily M. Bender, a linguistics professor at the University of Washington, explains in the latest episode of the Fiction Science podcast. 'It sells the tech as more than it is — because instead of this being a system for, for example, automatically transcribing or automatically adjusting the sound levels in a recording, it's 'artificial intelligence,' and so it might be able to do so much more.'

In their book and in the podcast, Bender and her co-author, Alex Hanna, point out the bugaboos of AI marketing. They argue that the benefits produced by AI are being played up, while the costs are being played down. And they say the biggest benefits go to the ventures that sell the software — or use AI as a justification for downgrading the status of human workers.

'AI is not going to take your job, but it will likely make your job shittier,' says Hanna, a sociologist who's the director of research for the Distributed AI Research Institute. 'That's because there's not many instances in which these tools are whole-cloth replacing work, but what they are ending up doing is … being imagined to replace a whole host of tasks that human workers are doing.' Such claims are often used to justify laying off workers, and then to 'rehire them back as gig workers or to find someone else in the supply chain who is doing that work instead,' Hanna says.

Tech executives typically insist that AI tools will lead to quantum leaps in productivity, but Hanna points to less optimistic projections from economists including MIT's Daron Acemoglu, who won a share of last year's Nobel Prize in economics. Acemoglu estimates the annual productivity gain due to AI at roughly 0.05% for the next 10 years. What's more, Acemoglu says AI may bring 'negative social effects,' including a widening gap between capital and labor income.

In 'The AI Con,' Bender and Hanna lay out a litany of AI's negative social and environmental effects — ranging from a drain on energy and water resources to the exploitation of workers who train AI models in countries like Kenya and the Philippines.

The authors of 'The AI Con': Emily Bender (left) is a linguistics professor at the University of Washington. Alex Hanna (right) is director of research at the Distributed AI Research Institute. (Bender photo by Susan Doupé; Hanna photo by Will Toft)

Another concern has to do with how literary and artistic works are pirated to train AI models. (Full disclosure: My own book, 'The Case for Pluto,' is among the works that were used to train Meta's Llama 3 AI model.) Also, there's a well-known problem with large language models outputting information that may sound plausible but happens to be totally false.
(Bender and Hanna avoid calling that 'hallucination,' because that term implies the presence of perception.) Then there are the issues surrounding algorithmic biases based on race or gender. Such issues raise red flags when AI models are used to decide who gets hired, who gets a jail sentence, or which areas should get more policing.

This all gets covered in 'The AI Con.' It's hard to find anything complimentary about AI in the book. 'You're never going to hear me say there are things that are good about AI, and that's not that I disagree with all of this automation,' Bender says. 'It's just that I don't think AI is a thing. Certainly there are use cases for automation, including automating pattern recognition or pattern matching. … That is case by case, right?'

Among the questions to ask are: What's being automated? How was the automation tool built? Whose labor went into building that tool, and were the laborers fairly compensated? How was the tool evaluated, and does that evaluation truly model the task that's being automated?

Bender says generative AI applications fail her test. 'One of the close ones that I got to is, well, dialogue with non-player characters in video games,' Bender says. 'You could have more vibrant dialogue if it could run the synthetic text extruding machine. And it's fiction, so we're not looking for facts. But we are looking for a certain kind of truth in fictional experiences. And that's where the biases can really become a problem — because if you've got the NPCs being just total bigots, subtly or overtly, that's a bad thing.'

'The AI Con: How to Fight Big Tech's Hype and Create the Future We Want,' by Emily M. Bender and Alex Hanna. (Jacket design by Kris Potter for Harper)

Besides watching your words and asking questions about the systems that are being promoted, what should be done to hold the line on AI hype? Bender and Hanna say there's room for new regulations aimed at ensuring transparency, disclosure, accountability — and the ability to set things straight, without delay, in the face of automated decisions. They say a strong regulatory framework for protecting personal data, such as the European Union's General Data Protection Regulation, could help curb the excesses of data collection practices.

Hanna says collective bargaining provides another avenue to keep AI at bay in the workplace. 'We've seen a number of organizations do this to great success, like the Writers Guild of America after their strike in 2023,' she says. 'We've also seen this from National Nurses United. A lot of different organizations are having provisions in their contracts, which say that they have to be informed and can refuse to work with any synthetic media, and can decide where and when it is deployed in the writers' room, if at all, and where it exists in their workplace.'

The authors advise internet users to rely on trusted sources rather than text extruding machines. And they say users should be willing to resort to 'strategic refusal' — that is, to say 'absolutely not' when tech companies ask them to provide data for, or make use of data from, AI blenderizers. Bender says it also helps to make fun of the over-the-top claims made about AI — a strategy she and Hanna call 'ridicule as praxis.' 'It helps you sort of get in the habit of being like, 'No, I don't have to accept your ridiculous claims,'' Bender says. 'And it feels, I think, empowering to laugh at them.'
Links to further reading

During the podcast, and in my intro to the podcast, we referred to lots of news developments and supporting documents. Here's a selection of web links relating to subjects that were mentioned.

Bender and Hanna will be talking about 'The AI Con' at 7 p.m. PT today at Elliott Bay Book Company in Seattle, and at 7 p.m. PT May 20 at Third Place Books in Lake Forest Park. During the Seattle event, they'll share the stage with Anna Lauren Hoffmann, an associate professor at the University of Washington who studies the ethics of information technologies. At Third Place Books, Bender and Hanna will be joined by Margaret Mitchell, a computer scientist at Hugging Face who focuses on machine learning and ethics-informed AI development.

My co-host for the Fiction Science podcast is Dominica Phetteplace, an award-winning writer who is a graduate of the Clarion West Writers Workshop and lives in San Francisco. To learn more about Phetteplace, visit her website. Fiction Science is included in FeedSpot's 100 Best Sci-Fi Podcasts. Check out the original version of this report on Cosmic Log to get sci-fi reading recommendations from Bender and Hanna, and stay tuned for future episodes of the Fiction Science podcast via Apple, Spotify, Pocket Casts and Podchaser. If you like Fiction Science, please rate the podcast and subscribe to get alerts for future episodes.