Latest news with #DistributedAIResearchInstitute


Scroll.in
01-07-2025
- Science
Is AI not all it's made out to be? A new book punctures the hype and proposes some ways to resist it
Is AI going to take over the world? Have scientists created an artificial lifeform that can think on its own? Is it going to replace all our jobs, even creative ones, like doctors, teachers and care workers? Are we about to enter an age where computers are better than humans at everything? The answers, as Emily M Bender and Alex Hanna, the authors of The AI Con, stress, are 'no', 'they wish', 'LOL' and 'definitely not'.

Artificial intelligence is a marketing term as much as a distinct set of computational architectures and techniques. AI has become a magic word for entrepreneurs to attract startup capital for dubious schemes, an incantation deployed by managers to instantly achieve the status of future-forward leaders. In a mere two letters, it conjures a vision of automated factories and robotic overlords, a utopia of leisure or a dystopia of servitude, depending on your point of view. It is not just technology, but a powerful vision of how society should function and what our future should look like.

In this sense, AI doesn't need to work for it to work. The accuracy of a large language model may be doubtful, the productivity of an AI office assistant may be claimed rather than demonstrated, but this bundle of technologies, companies and claims can still alter the terrain of journalism, education, healthcare, service work and our broader sociocultural landscape.

Pop goes the bubble

Bender is a linguistics professor at the University of Washington who has become a prominent technology critic. Hanna is a sociologist and former employee of Google who is now the director of research at the Distributed AI Research Institute. After teaming up to mock AI boosters in their popular podcast, Mystery AI Hype Theater 3000, they have distilled their insights into a book written for a general audience. They meet the unstoppable force of AI hype with immovable scepticism.

Step one in this program is grasping how AI models work. Bender and Hanna do an excellent job of decoding technical terms and unpacking the 'black box' of machine learning for lay people. Driving this wedge between hype and reality, between assertions and operations, is a recurring theme across the pages of The AI Con, and one that should gradually erode readers' trust in the tech industry. The book outlines the strategic deceptions employed by powerful corporations to reduce friction and accumulate capital. If the barrage of examples tends to blur together, the sense of technical bullshit lingers.

What is intelligence?

A famous and highly cited paper co-written by Bender asserts that large language models are simply 'stochastic parrots', drawing on training data to predict which set of tokens (roughly, words) is most likely to follow the prompt given by a user. Harvesting millions of crawled websites, the model can regurgitate 'the moon' after 'the cow jumped over', albeit in much more sophisticated variants (the toy sketch below illustrates the principle). Rather than actually understanding a concept in all its social, cultural and political contexts, large language models carry out pattern matching: an illusion of thinking.

But I would suggest that, in many domains, a simulation of thinking is sufficient, as it is met halfway by those engaging with it. Users project agency onto models via the well-known Eliza effect, imparting intelligence to the simulation. Management is pinning their hopes on this simulation. They view automation as a way to streamline their organisations and not be 'left behind'.
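To make the 'stochastic parrot' point concrete, here is a minimal, purely illustrative sketch of next-token prediction: a toy model that tallies which word follows each pair of words in its training text and completes a prompt with the likeliest continuation. This is my own simplification, not code from the book; real large language models use neural networks over subword tokens and sample from a probability distribution, but the underlying move, predicting a plausible continuation from patterns in the training data, is the same.

```python
# Toy next-token predictor: count which word follows each two-word context,
# then complete a prompt with the most frequent continuation.
# A deliberate simplification of what LLMs do at vastly greater scale.
from collections import Counter, defaultdict

corpus = "the cow jumped over the moon . the cat sat on the mat ."
words = corpus.split()

counts = defaultdict(Counter)
for a, b, c in zip(words, words[1:], words[2:]):
    counts[(a, b)][c] += 1  # tally continuations for each word pair

def complete(prompt: str, n: int = 2) -> str:
    out = prompt.split()
    for _ in range(n):
        context = (out[-2], out[-1])
        nxt = counts[context].most_common(1)[0][0]  # likeliest next word
        out.append(nxt)
    return " ".join(out)

print(complete("the cow jumped over"))  # -> "the cow jumped over the moon"
```

No understanding of cows or moons is involved; swap in different training text and the same code will parrot different patterns.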
This powerful vision of early adopters vs extinct dinosaurs is one we see repeatedly with the advent of new technologies – and one that benefits the tech industry. In this sense, poking holes in the 'intelligence' of artificial intelligence is a losing move, missing the social and financial investment that wants this technology to work. 'Start with AI for every task. No matter how small, try using an AI tool first,' commanded Duolingo's chief engineering officer in a recent message to all employees. Duolingo has joined Fiverr, Shopify, IBM and a slew of other companies proclaiming their 'AI first' approach.

Shapeshifting technology

The AI Con is strongest when it looks beyond or around the technologies to the ecosystem surrounding them, a perspective I have also argued is immensely helpful. By understanding the corporations, actors, business models and stakeholders involved in a model's production, we can evaluate where it comes from, its purpose, its strengths and weaknesses, and what all this might mean downstream for its possible uses and implications.

'Who benefits from this technology, who is harmed, and what recourse do they have?' is a solid starting point, Bender and Hanna suggest. These basic but important questions extract us from the weeds of technical debate – how does AI function, how accurate or 'good' is it really, how can we possibly understand this complexity as non-engineers? – and give us a critical perspective. They place the onus on industry to explain, rather than users to adapt or be rendered superfluous. We don't need to be able to explain technical concepts like backpropagation or diffusion to grasp that AI technologies can undermine fair work, perpetuate racial and gender stereotypes, and exacerbate environmental crises. The hype around AI is meant to distract us from these concrete effects, to trivialise them and thus encourage us to ignore them.

As Bender and Hanna explain, AI boosters and AI doomers are really two sides of the same coin. Conjuring up nightmare scenarios of self-replicating AI terminating humanity and claiming sentient machines will usher us into a posthuman paradise are, in the end, the same thing. They place a religious-like faith in the capabilities of technology, which dominates debate, allowing tech companies to retain control of AI's future development.

The risk of AI is not potential doom in the future, à la the nuclear threat during the Cold War, but the quieter and more significant harm to real people in the present. The authors explain that AI is more like a panopticon 'that allows a single prison warden to keep track of hundreds of prisoners at once', or the 'surveillance dragnets that track marginalised groups in the West', or a 'toxic waste, salting the earth of a Superfund site', or a 'scabbing worker, crossing the picket line at the behest of an employer who wants to signal to the picketers that they are disposable. The totality of systems sold as AI are these things, rolled into one.'

A decade ago, writing about another 'game-changing' technology, author Ian Bogost observed that 'rather than utopia or dystopia, we usually end up with something less dramatic yet more disappointing. Robots neither serve human masters nor destroy us in a dramatic genocide, but slowly dismantle our livelihoods while sparing our lives.'

The pattern repeats. As AI matures (to some degree) and is adopted by organisations, it moves from innovation to infrastructure, from magic to mechanism. Grand promises never materialise.
Instead, society endures a tougher, bleaker future. Workers feel more pressure; surveillance is normalised; truth is muddied with post-truth; the marginal become more vulnerable; the planet gets hotter. Technology, in this sense, is a shapeshifter: the outward form constantly changes, yet the inner logic remains the same. It exploits labour and nature, extracts value, centralises wealth, and protects the power and status of the already-powerful.

Co-opting critique

In The New Spirit of Capitalism, sociologists Luc Boltanski and Eve Chiapello demonstrate how capitalism has mutated over time, folding critiques back into its DNA. After enduring a series of blows around alienation and automation in the 1960s, capitalism moved from a hierarchical Fordist mode of production to a more flexible form of self-management over the next two decades. It began to favour 'just in time' production, done in smaller teams, that (ostensibly) embraced the creativity and ingenuity of each individual. Neoliberalism offered 'freedom', but at a price. Organisations adapted; concessions were made; critique was defused.

AI continues this form of co-option. Indeed, the current moment can be described as the end of the first wave of critical AI. In the last five years, tech titans have released a series of bigger and 'better' models, with both the public and scholars focusing largely on generative and 'foundation' models: ChatGPT, StableDiffusion, Midjourney, Gemini, DeepSeek, and so on. Scholars have heavily criticised aspects of these models – my own work has explored truth claims, generative hate, ethics washing and other issues. Much work has focused on bias: the way in which training data reproduces gender stereotypes, racial inequality, religious bigotry, western epistemologies, and so on. Much of this work is excellent and seems to have filtered into the public consciousness, based on conversations I've had at workshops and events.

However, its flagging of such issues allows tech companies to practise issue-resolving. If the accuracy of a facial-recognition system is lower with Black faces, add more Black faces to the training set. If the model is accused of English dominance, fork out some money to produce data on 'low-resource' languages. Companies like Anthropic now regularly carry out 'red teaming' exercises designed to highlight hidden biases in models. Companies then 'fix' or mitigate these issues. But due to the massive size of the data sets, these tend to be band-aid solutions, superficial rather than structural tweaks.

For instance, soon after launching, AI image generators came under pressure for not being 'diverse' enough. In response, OpenAI invented a technique to 'more accurately reflect the diversity of the world's population'. Researchers discovered this technique was simply tacking on additional hidden prompts (e.g. 'Asian', 'Black') to user prompts, as the sketch below illustrates. Google's Gemini model also seems to have adopted this approach, which resulted in a backlash when images of Vikings or Nazis had South Asian or Native American features.

The point here is not whether AI models are racist or historically inaccurate or 'woke', but that models are political and never disinterested. Harder questions about how culture is made computational, or what kind of truths we want as a society, are never broached and therefore never worked through systematically. Such questions are certainly broader and less 'pointy' than bias, but also less amenable to being translated into a problem for a coder to resolve.
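Here is a minimal sketch of the hidden-prompt technique just described. It is purely illustrative, based on what researchers reported finding rather than on any published company code; the trigger condition and word list are my own assumptions, since the actual rules are undisclosed.

```python
# Illustrative sketch of hidden "diversity" prompt injection, as researchers
# reported finding in image generators. The trigger rule and term list below
# are assumptions for demonstration; the real systems' rules are not public.
import random

DIVERSITY_TERMS = ["Asian", "Black", "Hispanic", "White"]  # assumed list

def augment_prompt(user_prompt: str) -> str:
    """Return the prompt the model actually receives, not what the user typed."""
    if "person" in user_prompt.lower():  # assumed crude trigger condition
        return f"{user_prompt}, {random.choice(DIVERSITY_TERMS)}"
    return user_prompt

print(augment_prompt("a portrait of a person"))
# e.g. -> "a portrait of a person, Black"
```

The user never sees the appended term, which is precisely why such fixes read as superficial: the training data and model remain untouched, and the patch sits in a string-manipulation layer in front of them.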
What next?

How, then, should those outside the academy respond to AI? The past few years have seen a flurry of workshops, seminars and professional development initiatives. These range from 'gee whiz' tours of AI features for the workplace, to sober discussions of risks and ethics, to hastily organised all-hands meetings debating how to respond now, and next month, and the month after that.

Bender and Hanna wrap up their book with their own responses. Many of these, like their questions about how models work and who benefits, are simple but fundamental, offering a strong starting point for organisational engagement. For the technosceptical duo, refusal is also clearly an option, though individuals will obviously have vastly different degrees of agency when it comes to opting out of models and pushing back on adoption strategies. Refusal of AI, as with many technologies that have come before it, often relies to some extent on privilege. The six-figure consultant or coder will have discretion that the gig worker or service worker cannot exercise without penalties or punishments.

If refusal is fraught at the individual level, it seems more viable and sustainable at a cultural level. Bender and Hanna suggest that generative AI be met with mockery: companies that employ it should be derided as cheap or tacky. The cultural backlash against AI is already in full swing. Soundtracks on YouTube are increasingly labelled 'No AI'. Artists have launched campaigns and hashtags, stressing their creations are '100% human-made'. These moves are attempts to establish a cultural consensus that AI-generated material is derivative and exploitative.

And yet, if these moves offer some hope, they are swimming against the swift current of enshittification. AI slop means faster and cheaper content creation, and the technical and financial logic of online platforms – virality, engagement, monetisation – will always create a race to the bottom. The extent to which the vision offered by big tech will be accepted, how far AI technologies will be integrated or mandated, how much individuals and communities will push back against them – these are still open questions.

In many ways, Bender and Hanna successfully demonstrate that AI is a con. It fails at productivity and intelligence, while the hype lauds a series of transformations that harm workers, exacerbate inequality and damage the environment. Yet such consequences have accompanied previous technologies – fossil fuels, private cars, factory automation – and hardly dented their uptake and transformation of society. So while praise goes to Bender and Hanna for a book that shows 'how to fight big tech's hype and create the future we want', the issue of AI resonates, for me, with Karl Marx's observation that people 'make their own history, but they do not make it just as they please'.

The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, Emily M Bender and Alex Hanna, Harper.

Luke Munn, Research Fellow, Digital Cultures and Societies, The University of Queensland.

Business Insider
12-05-2025
An ex-Google AI ethicist and a UW professor want you to know AI isn't what you think it is
It may seem like AI adoption has taken off rapidly, but there are some notable holdouts. Emily Bender, a UW linguistics professor, and Alex Hanna, the research director of the Distributed AI Research Institute and a former Google AI ethicist, would like readers of their new book, "The AI Con: How to Fight Big Tech's Hype and Create the Future We Want", to take away one message: AI isn't what it's marketed to be.

Longtime collaborators, cohosts of the Mystery AI Hype Theater 3000 podcast and vocal AI critics, Bender and Hanna want to take the hyperbole out of the conversation around AI and caution that, frankly, intelligence isn't artificial. Their at times funny and irreverent recasting of AI as "mathy maths", "text-extruding machines" or, classically, "stochastic parrots" aims to get us to see automation technologies for what they are and to separate them from the hype.

This Q&A has been edited for clarity and length.

Bender: I think it's always helpful to keep the people in the frame. The narrative that this [automation] is artificial intelligence is designed to hide the people. The people involved are everything from the programmers who made algorithmic decisions, to the people whose creative work was appropriated, even stolen, as the basis for things. Even the people who did the data work, the content moderation, so that what the system outputs, what users see, doesn't have horrific stuff in it.

Hanna: This term, AI, is not a singular thing. It's kind of a gloss on many different types of automation, and the thought that there's a tool that's just writing emails obscures how this term is being leveraged. These are systems used in things as broad as incarceration and hiring decisions, all the way to outputting synthetic media. Just like fast fashion or chocolate production, a whole host of people are involved in maintaining this supply chain. For that AI-generated email or text, for this difficult thing I don't want to write, know that there's a whole ecosystem around it that's affecting people, labor-wise, environmentally, and in other guises.

The book highlights countless ways that AI is extractive and can make human life worse. Why do you think so many are singing the gospel of AI and embracing such tools?

Bender: It's interesting that you use the phrase singing the gospel. There are a lot of people who have drawn connections between, especially, talk of artificial general intelligence and Christian eschatology, which is the idea that there is something we could build that could save us. That could save us from everything from the dread of menial tasks to major problems we're facing, like the climate crisis, to just the experience of not having answers available. Of course, none of that actually plays out. We do not live in a world where every question has an answer. The idea is that if we just throw enough compute and data at it (and there's the extractivism), we'd be relieved of that, and be in a situation where there is an answer to every question at our fingertips.

Hanna: There's a desire for computing to step in and really wow us, and now we have AI for everything from social services to healthcare to making art. Part of it is a desire to have a more "objective" type of computational being. Lately, there's been a lot made of 'the crisis of social capital', 'the crisis of masculinity', the crisis of insert-your-favorite-thing-here that's a social phenomenon.
This goes back to Robert Putnam's book "Bowling Alone" and a few weird results in the 2006 General Social Survey, which said people have fewer close friends than they used to. There's this general thesis that people are lonelier, and that may be true, but AI is presented as a panacea for those social ills, when there are a lot more things that we need to focus on that are much harder, like rebuilding social infrastructure, rebuilding third spaces, fortifying our schools, rebuilding urban infrastructure. But if we have a technology that seems to do all of those things, then people get really excited about it.

Language is also a large focus of the book, and you codified the doomer and booster camps. Can you say more about these groups? What about readers who won't recognize themselves in either of these groups?

Bender: The booster versus doomer thing is really constricting. This is the discourse that's supposed to be a one-dimensional incline, where on one end you have the doomers who say, 'AI is a thing and it's going to kill us all!' And on the other end, AI boosters say, 'AI is a thing and it's going to solve all of our problems!' And the way that they speak often sounds like that is the full range of options. So you're at one end or the other, or somewhere in the middle, and the point we make is that actually, no, that's a really small space of possibilities. It's two sides of the same coin, both predicated on 'AI is a thing and is super powerful', and that is ungrounded nonsense. Most of the space of possibilities, including the space that we inhabit, is outside that.

Hanna: We hope the book also gives people on that booster and doomer scale a way out of that thinking. This can be a mechanism to help people change their minds and consider a perspective that they might not have considered. Because we're in a situation where the AI hype is so (this is a term I learned from Emily) "thick on the ground" that it's hard to really see things for what they are.

You offer many steps that people can take to resist the pervasive use of AI. What does one do when your workplace, or the online services you use, have baked AI functionality into everyday processes?

Bender: In all cases, when you're talking about refusal, both individual and collective, it's helpful to go back to values and why we're doing what we're doing. People can ask a series of questions about any technology. It is important to remember that you have the agency to ask those questions. The inevitability narrative is basically an attempt to steal that agency and say, "It is all powerful, or it will be soon, so just go along with it, and you're not in a position to understand anyway." In fact, we are all in a position to understand what it is and what values are involved in it. Then you can say, 'Okay, you're proposing to use some automation here; how does that fit with our purposes and our values, and how do we know how well it fits? Where is the evaluation?' Too much of this is, 'Oh, just believe us.'

There are instances where people with a deep technical understanding of AI, motives notwithstanding, still overstate and misunderstand what AI is and what it can do. How should laypeople with a more casual understanding think about and talk about AI?

Bender: The first step is always disaggregating AI; it's not one thing. So what specifically is being automated?
Then, be very skeptical of any claims, because the people who are selling this are wholeheartedly embracing the magical sound of artificial intelligence and very often being extremely cagey, at best, about what the system actually does, what the training data was, and how it works.

Hanna: There's a tendency to overstate, and it's partially economic, partially just because some people are so deep in the sauce that they're not really going to see the forest for the trees. AI researchers are already primed to see these things in a certain light. They're thinking about it primarily through engineering breakthroughs, more efficient ways to learn parameters, or ways to do XYZ tasks within that field, but they are not really the people focused on specialized fields like nursing, for instance. People should take pride in and be able to use their expertise in their field to combat the AI hype. One great example of this is National Nurses United, which wrote explainers about AI and disaggregated between AI and biometric surveillance, passive listening, and sensors in the clinicians' office, and what all that was doing to nursing practice. So, not buying into hype and leaning into one's own expertise is a really powerful method here.

In your respective circles, what has been the reaction to the book thus far?

Bender: People are excited. Where I sit, in linguistics, is really an important angle for understanding why the synthetic text-extruding machines in particular are so compelling. The linguists that I speak to are excited to see our field having this role at this moment.

Hanna: I've had great reactions. A lot of my friends are software developers, or they're in related fields, since I went to undergrad in computer science, and a lot of my friends growing up were tech nerds, and almost to a T, all of them are anti-AI. They say, 'I don't want Copilot,' 'I don't want this stuff writing my code,' 'I'm really sick of the hype around this,' and I thought that was the most surprising and maybe the most exciting part of this. People who do technical jobs, where they're promised the most speed or productivity improvements, are some of the people who are most opposed to the introduction of these tools in their work.