
AI didn't take the job. It changed what the job is.
Over the past few weeks, I've been on the road: Parbhani, Pune, Chennai, Jaipur. In small-town labs and on factory floors, I saw jobs that still exist, but don't look like they used to.
In Parbhani I met Dr. Chaitanya, who runs a 24-hour diagnostics lab above a heart clinic. He told me he's failed to detect cancer before—not out of neglect, but because he was worn out. Now, when something doesn't feel right, he runs the slide through a machine. It doesn't get distracted. It doesn't get tired. It caught leukaemia in a boy whose report looked normal at first glance.
In Jaipur I spent time inside Wipro's factories. I met Chandni—just out of college, far from home—running a CNC machine built for someone twice her size. The platform was raised to fit her. Sensors pause the line if she skips a step. She's not fighting the machine. She's learning to work with it.
And then I came back to Bengaluru.
Over the weekend, I caught up with a few junior engineers—entry-level coders, recently let go. We sat in a noisy café near HSR, talking about layoffs. Some of their friends—older, with fatter salaries—had been let go, too, from well-known names on Outer Ring Road. Most of them hadn't told their families yet. Someone joked their severance would go into a "detox trip". But the silence after that said more.
I kept thinking about all of it. From Parbhani to Jaipur to Bengaluru, I've seen AI reshape work—but in such unsettling ways. In some places, it keeps people going. In others, it shuts the door.
And I've come back with questions I can't truly answer.
Who gets to stay in the game? Who gets to rewrite their role? And who just disappears?
***
We've spent years asking the wrong question. It's never been just "Will AI take jobs?" That's the headline version—the one that misses what's actually unfolding on the ground.
What I've seen is something slower and harder to name: jobs are shifting shape. The work still exists, but it doesn't look like it used to. Doctors don't just rely on training—they rely on machines to catch what their fatigue might miss. Factory workers aren't lifting metal—they're supervising systems. Engineers aren't writing code—they're managing what the agents spit out. In some places, people are being lifted. In others, pushed out.
This isn't about replacement. It's about redefinition. And not everyone is getting the chance to adapt.
***
In Parbhani, Dr. Chaitanya isn't trying to be some AI-era pathologist. He just doesn't want to miss a sign of cancer again. He bought the scanner not because anyone sold him a pitch-deck future, but because he was tired. Because late at night, after hours of non-stop samples, the eyes slip. And he knows what that costs. The machine doesn't replace his judgment; it just doesn't lose focus when he does.
In Jaipur, Wipro didn't automate Chandni out. They built the floor to fit her. She's running a CNC machine designed for someone taller, stronger—but they raised the platform instead. Her job wasn't taken. It was made possible. She oversees the system now. And when she sends money home, there's no debate anymore about whether girls can handle mechanical work.
And then there's Bengaluru.
The coders I met had barely started. A few months in, then gone. Not for bad performance. Just… gone. Their work was handed to tools they weren't trained to supervise. Their seniors—some drawing seven-figure salaries—were asked to leave too. One of them said most of his severance would go into a detox trip. We all laughed. But it didn't feel funny.
Same tool. But in Parbhani, it buys time. In Jaipur, it makes the job possible. In Bengaluru, it ends it.
***
There's something I've been noticing everywhere lately—in factories, hospitals, GCCs, even small startups. Someone in the room knows how to work with the AI. Not just use it, but shape it. Prompt it right. Catch when it's wrong. That person sets the tone for how work flows.
And then there's everyone else.
Trying to keep up. Hoping they're not left behind.
It's not just a skill gap. It's who gets the confidence to speak up. Who gets the permission to push back when the machine's answer doesn't feel right. Who gets to set the rules for how AI shows up—and who's left cleaning up after it.
One founder told me straight: "We're not hiring another ops exec. We're hiring someone to manage the agents." The job still exists. It just looks different now. And the person who knows how to talk to the machine gets to decide how everyone else works around it.
That's the shift I can't ignore. It's not about mass layoffs. It's about brutal sidelining.
Not fired. Still on payroll. But no longer in the loop.
***
I keep coming back to something Andy Grove once said. Intel was stuck in the memory chip business, losing ground fast. Grove turned to CEO Gordon Moore and asked, "If we were fired, and the board brought in someone new, what do you think they'd do?" Moore said, "They'd get us out of memories." Grove paused, then said, "Then why don't we walk out the door, come back in, and do it ourselves?"
And that's what they did. They walked back in and changed the company.
What stayed with me wasn't the decision itself—it was the mindset. They gave themselves permission to reset. Same chairs. Same table. Just a different way of thinking.
Most people I meet don't get to do that.
In every workplace I've visited lately—factories, hospitals, GCCs—there's always someone who gets to reframe the game. The person who speaks up, shapes the tool, sets the tone.
Everyone else is just trying to stay in the room. Or figuring out the exit.
***
I asked Dr. Chaitanya if he ever worries AI will take over his work. He didn't hesitate. "I just don't want to miss what matters," he said. "Let the machine help with the rest."
Chandni said the same thing, in different words. "If it helps us do the work better, why fear it?"
Neither of them was trying to protect their turf. They just wanted the tools to hold up when it counted. When they're tired. When something's easy to miss. When a mistake can't be undone.
They weren't talking about AI as a threat. They weren't talking about it as the future. They were talking about the work—what it asks of them, what it gives back, and what they still want to hold on to.
***
So yes, people will need to learn. New tools, new ways of working, new habits. That's always been part of work.
But before any of that, they need a little space to figure things out. To ask questions without sounding slow. To try, to fumble, to not know right away—and not be punished for it.
Because the bigger risk isn't that AI takes your job.
It's that you're still in the role, still showing up every day—but slowly pushed out of the decisions. Not because you can't contribute. But because no one gave you the chance to learn how.
And by the time you notice what's changed, the work has already moved on—without your voice in the room.
Pankaj Mishra is a journalist and co-founder of FactorDaily. He has spent over two decades reporting on technology, startups, and work in India with a focus on the people and places often left out of the spotlight.
