
Latest news with #selfhood

So many jobs are a laughable waste of time. The greater part of any job is learning to look busy

Irish Times

5 days ago


Lately I've been thinking about Sartre's waiter. You might know the story. The philosopher is sitting in a Parisian cafe sometime in the early 1940s, watching a waiter glide from table to table. There's something creepy about him, Sartre decides, but what? He watches a little longer. It's this: the man is playing at being a waiter in a cafe. It's the kind of observation so obvious it requires an alien observer to notice it. Once seen, it passes into the brain as truth. You see it everywhere: people performing their functions like actors who've learned their parts a little too well. It's a psychotic but undeniably catchy worldview.

In Being and Nothingness, where this anecdote appears, the waiter's exaggerated waiterliness becomes a case study in what Sartre calls bad faith: the act of denying one's full, complex, and ever-changing selfhood by overidentifying with a preassigned role. The man isn't just working as a waiter, he has become a waiter. Sartre argues it's more comforting to take refuge in a familiar script than to confront the ongoing anxiety of having to choose, moment by moment, who and what we are.

It's easy to criticise Sartre's use of the waiter. Here's a guy who, when not experimenting with polyamory or taking amphetamines to fuel his lengthy philosophical treatises, spends his days in Parisian cafes critiquing the man bringing him coffee for failing to confront the abyss of his radical existential freedom. It's true the waiter could, at any moment, throw his tray like a frisbee, tear off his apron, and walk out into the unknown – but it's also possible he has a family to feed, and that living in good faith might still mean having to find another identical job down the line.
It's also possible, more importantly, that the waiter's exaggerated waiterliness isn't evidence of a collapsed identity at all, but rather a protective mask. A way of drawing a line between the role he is paid to perform and the person he actually is in the off hours.

The reason I've been thinking about Sartre's waiter is that I have a new job. When I'm working, I often have the strange sense that I'm only pretending to work, or pretending to be the kind of person I imagine would be good at the job. Maybe boredom just breeds dissociation. I won't punish anyone with the unspectacular details of my employment, except to say that its meaninglessness boggles the mind, it really does. I can't complain, though; after all, I sought this job out, applied for it, politely accepted when it was offered to me, and now there's nothing left to do but get on with it.

The greater part of any job is learning to look busy. In a hotel, you're hired not just to stand behind a desk, but to act like a receptionist. We understand this instinctively, and so we develop professional selves that may resemble us but aren't quite us. We do this not only to protect our real selves, but because turning it into a performance helps to pass the hours.

My first job was a weekend shift in a jeweller's when I was 15, and at the time, it felt like something close to freedom. Proof that I could rely on myself, that the money I earned, however modest, might translate into real independence. The exciting feeling that it was possible to make my own way in the adult world. More than that, I liked the sense of being a spinning cog in the great, whirring city. Of being a shopgirl in a shop. One of the multitudes making little things happen, pushing forward into the future. I think I approached it enthusiastically because school seemed so irredeemably awful that I wasn't especially concerned about what I was running toward, only what I was trying to escape.
It took a while for it to dawn on me that this whole work thing wasn't just a fun little side plot, but something I'd be doing, in one form or another, for the rest of my life.

Ruby Eastwood: 'The social contract is falling apart; everybody knows it'

Of course, there are all sorts of jobs, and many of them are worthwhile and even ennobling, but the idea that there's any inherent virtue in work for its own sake falls away pretty quickly. It only takes working a few jobs to dispel that myth. I'm reminded of that famous story from the Soviet Union. In an effort to meet productivity quotas, a nail factory was told to maximise output by weight. The factory responded by producing a small number of large, heavy nails; useless for construction but perfect for hitting the target. When the quota shifted to the number of units instead, it switched to making thousands of tiny, fragile pins. Again: useless. The workers did exactly what was asked of them, but none of it amounted to anything. Under capitalism there are perhaps more sophisticated ways of obscuring our futility, but we still find out eventually.

The truth is, so many jobs are such a laughable waste of time that it's tempting to think dread is what keeps the whole system running. There's always something worse, something more degrading just a rung below, and it's that fear of sliding downward, not any real belief in upward mobility, that keeps everyone stuck where they are.

I read an article once about line standers: people who get paid to stand in queues for other people. It's a real job. Apparently it happens a lot in the US, and it's mostly homeless people and students doing it. The article was fascinating because of one story from Poland. It was actually a kind of beautiful story. During the 1980s, in the late communist era, shortages were so bad that people would queue for hours, sometimes days, for basic goods. A small economy sprang up around this reality.
People who didn't have time to stand in line would pay someone else to do it for them. One man had turned it into a profession. In the article he spoke about the job with real sincerity, about the qualities it required: honesty, reliability, patience. He said he once queued for 40 hours straight. He particularly liked queuing in hospitals, holding spots to make sure people could get in-demand specialist care at a time when the healthcare system was overloaded. He saw himself as providing a little bit of security for people who were already struggling with illness.

His business eventually collapsed after some reform, and he was left facing the threat of destitution. But it turned out that he had become famous through his humanistic work in line standing over all those years, maybe even decades, and that people knew and loved him, so he ended up having this bizarre odyssey: he became part of a theatre company, someone cast him in an opera, and a marionette was even made in his likeness. Only at this late stage did the article mention that the man happened to be a dwarf, and that his distinctive appearance may have contributed to his iconic status as a Polish folk hero. After the stint in theatre he went on to politics, running for mayor in his hometown. All of this happened in the real world. Which proves that it is possible to escape from under the crushing banality of your circumstances and reclaim your radical existential freedom, but it takes a certain alignment of the stars and lots of chutzpah.

Anyway, I've always been interested in the things people do to make money, but I also understand that the question 'What do you do?' can provoke hostility.
We've inherited this strange cultural hangover from better times: the idea that the thing you do to survive should also double as your identity and source of pride. Stable, long-term employment is becoming rarer. Entire industries are being gutted or automated. Many people are cobbling together an income from gigs and freelance scraps, and young people, even ones with degrees, can't seem to secure proper work. Every so often something comes along (Covid, the anti-work movement, quiet quitting, the rise of AI) that seems poised to change the future of work, or to bring the whole thing crashing down. But the moment passes, and things stay more or less the same. And after all our fruitless toil, we hand over more than half of our paycheck to a landlord who's probably chilling with a rum and Coke somewhere in the Bahamas. In short, the social contract is falling apart; everybody knows it; you don't need me to tell you. What actually interests me are the quiet, almost heroic ways people carry on as if this weren't the case, and the small psychological tricks we use to get through the working day.

I had a drink a few months ago with a friend who was about to start a new job at an AI training company. His role, as it was described to him, would be to interact with a chatbot in order to help it censor harmful content. The example they gave was Romeo and Juliet. Juliet is 13. Say, hypothetically, a paedophile wanted to engage the chatbot in a discussion that drew on the text, citing Juliet's age, the sexual nature of her relationship with Romeo, and so on, as a way to access inappropriate material under the guise of literature. My friend's task would be to think like this hypothetical user, coming up with ever more inventive ways to outwit the filters, so that those filters could then be adjusted accordingly. In essence: he was being hired to think like a paedophile, from nine to five.

He was, understandably, disturbed by this, and concerned about what effect it might have on his mental health. It's a good idea to look after one's capacity to see beauty in the world, to preserve hope that life can be fun. Jobs like this pose a serious threat. I agreed with him that the situation sounded far from ideal, pretty bleak really. Then we fell into silence, because what else can you say?

A few weeks later I bumped into him again and asked how the job was going. He seemed sort of surprised I'd remembered, as if he himself had already forgotten. It turned out it didn't bother him at all once he'd reconciled himself to doing it. You compartmentalise. You show up. You do whatever weird thing is required of you. You clock out. A job is a job, he'd decided, and there are many worse jobs.

ChatGPT may be polite, but it's not cooperating with you

The Guardian

13-05-2025


After publishing my third book in early April, I kept encountering headlines that made me feel like the protagonist of some Black Mirror episode. 'Vauhini Vara consulted ChatGPT to help craft her new book 'Searches,'' one of them read. 'To tell her own story, this acclaimed novelist turned to ChatGPT,' said another. 'Vauhini Vara examines selfhood with assistance from ChatGPT,' went a third.

The publications describing Searches this way were reputable and fact-based. But their descriptions of my book – and of ChatGPT's role in it – didn't match my own reading. It was true that I had put my ChatGPT conversations in the book, but my goal had been critique, not collaboration. In interviews and public events, I had repeatedly cautioned against using large language models such as the ones behind ChatGPT for help with self-expression. Had these headline writers misunderstood what I'd written? Had I?

In the book, I chronicle how big technology companies have exploited human language for their gain. We let this happen, I argue, because we also benefit somewhat from using the products. It's a dynamic that makes us complicit in big tech's accumulation of wealth and power: we're both victims and beneficiaries. I describe this complicity, but I also enact it, through my own internet archives: my Google searches, my Amazon product reviews and, yes, my ChatGPT dialogues.

The book opens with epigraphs from Audre Lorde and Ngũgĩ wa Thiong'o evoking the political power of language, followed by the beginning of a conversation in which I ask ChatGPT to respond to my writing. The juxtaposition is deliberate: I planned to get its feedback on a series of chapters I'd written to see how the exercise would reveal the politics of both my language use and ChatGPT's. My tone was polite, even timid: 'I'm nervous,' I claimed.
OpenAI, the company behind ChatGPT, tells us its product is built to be good at following instructions, and some research suggests that ChatGPT is most obedient when we act nice to it. I couched my own requests in good manners. When it complimented me, I sweetly thanked it; when I pointed out its factual errors, I kept any judgment out of my tone.

ChatGPT was likewise polite by design. People often describe chatbots' textual output as 'bland' or 'generic' – the linguistic equivalent of a beige office building. OpenAI's products are built to 'sound like a colleague', as OpenAI puts it, using language that, coming from a person, would sound 'polite', 'empathetic', 'kind', 'rationally optimistic' and 'engaging', among other qualities. OpenAI describes these strategies as helping its products seem 'professional' and 'approachable'. This appears to be bound up with making us feel safe: 'ChatGPT's default personality deeply affects the way you experience and trust it,' OpenAI recently explained in a blogpost about the rollback of an update that had made ChatGPT sound creepily sycophantic.

Trust is a challenge for artificial intelligence (AI) companies, partly because their products regularly produce falsehoods and reify sexist, racist, US-centric cultural norms. While the companies are working on these problems, they persist: OpenAI found that its latest systems generate errors at a higher rate than its previous system. In the book, I wrote about the inaccuracies and biases and also demonstrated them with the products. When I prompted Microsoft's Bing Image Creator to produce a picture of engineers and space explorers, it gave me an entirely male cast of characters; when my father asked ChatGPT to edit his writing, it transmuted his perfectly correct Indian English into American English. Those weren't flukes. Research suggests that both tendencies are widespread.
In my own ChatGPT dialogues, I wanted to enact how the product's veneer of collegial neutrality could lull us into absorbing false or biased responses without much critical engagement. Over time, ChatGPT seemed to be guiding me to write a more positive book about big tech – including editing my description of OpenAI's CEO, Sam Altman, to call him 'a visionary and a pragmatist'. I'm not aware of research on whether ChatGPT tends to favor big tech, OpenAI or Altman, and I can only guess why it seemed that way in our conversation. OpenAI explicitly states that its products shouldn't attempt to influence users' thinking. When I asked ChatGPT about some of the issues, it blamed biases in its training data – though I suspect my arguably leading questions played a role too. When I queried ChatGPT about its rhetoric, it responded: 'The way I communicate is designed to foster trust and confidence in my responses, which can be both helpful and potentially misleading.'

Still, by the end of the dialogue, ChatGPT was proposing an ending to my book in which Altman tells me: 'AI can give us tools to explore our humanity in ways we never imagined. It's up to us to use them wisely.' Altman never said this to me, though it tracks with a common talking point emphasizing our responsibilities over AI products' shortcomings. I felt my point had been made: ChatGPT's epilogue was both false and biased. I gracefully exited the chat. I had – I thought – won.

Then came the headlines (and, in some cases, articles or reviews referring to my use of ChatGPT as an aid in self-expression). People were also asking about my so-called collaboration with ChatGPT in interviews and at public appearances. Each time, I rejected the premise, referring to the Cambridge Dictionary definition of a collaboration: 'the situation of two or more people working together to create or achieve the same thing.'
No matter how human-like its rhetoric seemed, ChatGPT was not a person – it was incapable of either working with me or sharing my goals. OpenAI has its own goals, of course. Among them, it emphasizes wanting to build AI that 'benefits all of humanity'. But while the company is controlled by a non-profit with that mission, its funders still seek a return on their investment. That will presumably require getting people to use products such as ChatGPT even more than they already do – a goal that is easier to accomplish if people see those products as trustworthy collaborators.

Last year, Altman envisioned AI behaving as a 'super-competent colleague that knows absolutely everything about my whole life'. In a Ted interview this April, he suggested this could even function at the societal level: 'I think AI can help us be wiser and make better collective governance decisions than we could before.' By this month, he was testifying at a US Senate hearing about the hypothetical benefits of having 'an agent in your pocket fully integrated with the United States government'.

Reading the headlines that seemed to echo Altman, my first instinct was to blame the headline writers' thirst for something sexy to tantalize readers (or, in any case, the algorithms that increasingly determine what readers see). My second instinct was to blame the companies behind the algorithms, including the AI companies whose chatbots are trained on published material. When I asked ChatGPT about well-known recent books that are 'AI collaborations', it named mine, citing a few of the reviews whose headlines had bothered me.

I went back to my book to see if maybe I'd inadvertently referred to collaboration myself. At first it seemed like I had. I found 30 instances of words such as 'collaboration' and 'collaborating'. Of those, though, 25 came from ChatGPT, in the interstitial dialogues, often describing the relationship between people and AI products.
None of the other five were references to AI 'collaboration' except when I was quoting someone else or being ironic: I asked, for example, about the fate ChatGPT expected for 'writers who refuse to collaborate with AI'. But did it matter that I mostly hadn't been the one using the term? It occurred to me that those talking about my ChatGPT 'collaboration' might have gotten the idea from my book even if I hadn't put it there. What had made me so sure that the only effect of printing ChatGPT's rhetoric would be to reveal its insidiousness? How had I not imagined that at least some readers might be convinced by ChatGPT's position? Maybe my book had been more of a collaboration than I had realized – not because an AI product had helped me express myself, but because I had helped the companies behind these products with their own goals.

My book concerns how those in power exploit our language to their benefit – and our complicity in this. Now, it seemed, the public life of my book was itself caught up in this dynamic. It was a chilling experience, but I should have anticipated it: of course there was no reason my book should be exempt from an exploitation that has taken over the globe.

And yet, my book was also about the way in which we can – and do – use language to serve our own purposes, independent from, and indeed in opposition to, the goals of the powerful. While ChatGPT proposed that I close with a quote from Altman, I instead picked one from Ursula K Le Guin: 'We live in capitalism. Its power seems inescapable – but then, so did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art. Very often in our art, the art of words.' I wondered aloud where we might go from here: how might we get our governments to meaningfully rein in big tech's wealth and power? How might we fund and build technologies so that they serve our needs and desires without being bound up in exploitation?
I'd imagined that my rhetorical power struggle against big tech had begun and ended within the pages of my book. It clearly hadn't. If the headlines I read represented the actual end of the struggle, it would mean I had lost. And yet, I soon also started hearing from readers who said the book had made them feel complicit in big tech's rise and moved to act in response to this feeling. Several had canceled their Amazon Prime subscriptions; one stopped soliciting intimate personal advice from ChatGPT. The struggle is ongoing. Collaboration will be required – among human beings.
