Latest news with #Mollick


Economic Times
4 days ago
- Business
- Economic Times
You can still outpace AI: Wharton professor reveals a 'skill bundling' strategy to safeguard your future from automation
As artificial intelligence reshapes the modern workplace with stunning speed, one Wharton professor has a sobering message for today's professionals: the safest jobs of tomorrow aren't necessarily the most technical—they're the most complex.
Ethan Mollick, associate professor at the Wharton School and author of Co-Intelligence: Living and Working with AI, says job security in the AI era will increasingly depend on choosing roles that bundle multiple human skills together. That means emotional intelligence, judgment, creativity, and domain expertise—all woven into one. 'AI may outperform you in one or two things,' Mollick tells CNBC Make It, 'but if your job requires five or six of them, it's a lot harder to replace.'
It's the kind of insight that redefines how we think about employability in an increasingly automated world. And with AI usage surging—40% of U.S. workers now use it at least a few times a year, per a Gallup poll—these career choices have never mattered more.
Mollick doesn't sugarcoat the AI wave ahead. Tech labs aren't just chasing progress—they're chasing a paradigm shift. 'Labs are aiming for machines smarter than humans within the next three years,' Mollick warns. 'They're betting on mass unemployment. Whether they succeed or not is still unclear, but we have to take it as a real possibility.'
Even Nvidia CEO Jensen Huang, whose company powers some of the most advanced AI systems, echoes that sentiment—albeit from a different vantage point. In a recent All-In podcast, Huang predicted AI will create more millionaires in five years than the internet did in 20, while also cautioning: 'Anybody who is not using AI will lose their job to someone who is.'
What's the solution? According to Mollick, job seekers must rethink their strategy. 'Don't go for roles that do one thing,' he says. 'Pick a job like being a doctor—where you're expected to be good at empathy, diagnosis, hand skills, and research. If AI helps with some of it, you still have the rest.'
This idea of "bundled roles"—where a single job draws on varied skills and responsibilities—could be the firewall against replacement. These complex, human-centered positions are harder for AI to replicate wholesale and leave more room for humans to collaborate with AI, not compete against it.
AI's evolution could make entry-level roles scarce—or at least, radically different. 'Companies will need to rethink entry-level hiring,' Mollick notes. 'Not just for productivity, but for training future leaders.' Without the chance to learn through repetition—what Mollick calls 'apprenticeship'—younger workers may miss out on foundational skills. The result could be a workforce with knowledge gaps AI can't fill, even as those same gaps are used to justify greater automation.
Nvidia's Huang calls AI the 'greatest equalizer of our time' because it gives creative power to anyone who can express an idea. 'Everybody is a programmer now,' he says. But critics caution that this accessibility may also deepen divides between the AI-literate and those left behind.
Eric Schmidt, former Google CEO, has a different concern: infrastructure. On the Moonshots podcast, Schmidt warned that AI's growth could be throttled not by chips, but by electricity. The U.S., he says, may need 92 more gigawatts of power to meet AI demands—equivalent to 92 new nuclear plants.
As AI spreads into every corner of work, from payroll review (yes, Huang uses machine learning for that too) to high-stakes decision-making, the one thing that's clear is this: the rules are changing faster than most organizations can adapt.
'The tools are evolving fast,' Mollick says, 'but organizations aren't. And we can't ask employees to figure it all out on their own.' He believes the real danger isn't AI itself—but the lack of vision from leadership. Without a clear roadmap, workers are left adrift, trying to 'magic' their way into the future.
In the race to stay relevant in the AI era, the best defense isn't to out-code or out-process a machine. It's to out-human it—by doubling down on the kind of nuanced, multi-layered work AI can't yet replicate. And by choosing jobs that ask you to wear many hats, not just one. Or as Mollick puts it: 'Bundled tasks are your best bet for surviving the AI takeover.'




CNBC
5 days ago
- Business
- CNBC
AI won't replace you just yet, Wharton professor says—but it'll be 'a huge concern' for entry-level workers
For many Americans, AI is rapidly changing the way we work. A growing number of workers now use AI at their jobs with some frequency. According to a recent Gallup poll, 40% of U.S. workers say that they use AI at work at least a few times a year, and 19% of workers use it several times a week. Both statistics have nearly doubled since last year, from 21% and 11%, respectively.
At the same time, over half of American workers are worried about AI's impact on the workforce, according to a Pew Research Center survey. Their fears have merit: a World Economic Forum report published in January found that 48% of U.S. employers plan to reduce their workforce due to AI.
Naturally, the rapid growth of AI in the workplace has raised plenty of questions. How will AI reshape our jobs? What new skills will we need to develop? Which industries will be impacted the most by AI? These questions don't have easy answers, says Ethan Mollick, an associate professor at Wharton and author of "Co-Intelligence: Living and Working with AI." Mollick, who is also the co-director of Wharton's Generative AI Labs, is well aware of concerns about AI replacing human jobs. "The idea that you could just sub in AI for people seems naive to me," he says. Still, as AI keeps improving, "there may be effects" for workers, he says.
Here's what Mollick has to say about AI and the future of work.
CNBC Make It: There's a lot of concern about AI replacing human jobs, including some big predictions from leaders like Bill Gates. What's your take on that?
AI agents are not there yet. Right now, AI is good at some stuff, bad at some stuff, but it doesn't substitute well for human jobs, overall. It does some things quite well, but the goal of the labs is [to create] fully autonomous agents and machines smarter than humans in the next 3 years. Do we know they can achieve it? We don't, but that is their bet. That's what they're aiming for. They are expecting and aiming for mass unemployment. That is what they keep telling us to prepare for. As for believing them or not, we just don't know, right? You have to take it as at least a possibility, but we're not there yet, either. A lot of it is also the choice of organizational leaders who get to decide how these systems are actually used, and organizational change is slower than all the labs and tech people think. A lot of the time, technology creates new jobs. That's possible, too. We just don't know the answer.
As AI usage becomes more prevalent, what skills will we need to develop in the workforce?
If you asked about AI skills a year ago, I would have said prompting skills. That doesn't matter as much anymore. We've been doing a lot of research, and it turns out that the prompts just don't matter the way they used to. So, you know, what does that leave us with? Well, judgment, taste, deep experience and knowledge. But you have to build those in some ways despite AI, rather than with their help. Having curiosity and agency also helps, but these are not really skills. I don't think using AI is going to be the hard thing for most people.
What is the "hard thing," then?
I think it's developing enough expertise to be able to oversee these systems. Expertise is gained by apprenticeship, which means doing some AI-level work [tasks that current AI models can do easily] over and over again, so you learn how to do something right. Why would anyone ever do that again? And that becomes a real challenge. We have to figure out how to solve that with a mix of education and training.
How do you think AI will affect the entry-level job market?
I think people are jumping to the conclusion that [AI is] why we're seeing youth unemployment. I don't think that's the issue yet, but I think that's a huge concern. Companies are going to have to view entry-level jobs in some ways, not just as getting work done, but as a chance to get people who will become senior employees, and train them up to be that way, which is very different than how they viewed the work before.
Are your students concerned about AI's impact on jobs?
I think everybody's worrying about it, right? Consulting and banking, analyst roles and marketing roles — those are all jobs touched by AI. The more educated you are, the more highly paid you are, the more your job overlaps with AI. So I think everyone's very concerned and I don't have easy answers for them. The advice I tend to give people is to pick jobs that have as many 'bundled' tasks as possible. Think about doctors. You have a job where someone's supposed to be good at empathy and [surgical] hand skills and diagnosis and be able to run an office and keep up with the latest side of research. If AI helps you with some of those things, that's not a disaster. If AI can do one or two of those things better than you, that doesn't destroy your job, it changes what you do, and hopefully it lets you focus on the things you like best. So bundled jobs are more likely to be flexible than single-thread jobs.
How might AI adoption play out in the workplace?
For me, the issue is that these tools are not really built as productivity tools. They're built as chatbots, so they work really well at the individual level, but that doesn't translate into something that can be stamped out across the entire team very easily. People are still figuring out how to operate with these things as teams. Do you bring it into every meeting and ask the AI questions in the middle of each meeting? Does everybody have their own AI companion they're talking to? The piece I keep making a big deal about is that it is unfair to ask employees to figure it out. I'm seeing leadership and organizations say it's urgent to use AI, people will be fired without it, and then they have no articulation about what the future looks like. I want to hammer that point home, which is, without articulating a vision, where do we go? And that's the missing piece. It's not just up to everybody to figure it out. Instructors and college professors need to take an active role in shaping how AI is used. Leaders of organizations need to take an active role in shaping how AI is used. It can't just be, 'everyone figure it out and magic will happen.'


Forbes
08-04-2025
- Entertainment
- Forbes
Mollick Presents The Meaning Of New Image Generation Models
What does it mean when AI can build smarter pictures? We found out a few weeks ago as both Google and OpenAI unveiled new image generation models that are fundamentally different than what has come before. A number of important voices chimed in on how this is likely to work, but I didn't yet cover this timely piece by Ethan Mollick at One Useful Thing, in which the MIT graduate looks at these new models in a detailed way, and evaluates how they work and what they're likely to mean to human users.
The Promise of Multimodal Image Generation
Essentially, Mollick explains that the traditional image generation systems were a handoff from one model to another. 'Previously, when a Large Language Model AI generated an image, it wasn't really the LLM doing the work,' he writes. 'Instead, the AI would send a text prompt to a separate image generation tool and show you what came back. The AI creates the text prompt, but another, less intelligent system creates the image.'
Diffusion Models Are So 2021
The old models also mostly used diffusion to work. How does diffusion work? The traditional models have a single pathway that they use to generate images. A year ago, I wrote up an explanation of diffusion for an audience, based on a presentation my colleague Daniela Rus had given at conferences.
It goes something like this – the diffusion model takes an image, introduces noise, and abstracts the image, before denoising it again to form a brand new image that resembles what the computer already knows from looking at images that match the prompt. Here's the thing – if that's all the model does, you're not going to get an informed picture. You're going to get a new picture that looks like a prior picture, or more accurately, like the thousands of pictures that the computer saw on the Internet, but you're not going to get a picture with actionable information that's reasoned and considered by the model itself. Now we have multimodal control, and that's fundamentally different.
No Elephants?
Mollick gives the example of a prompt that asks the model to create an image without elephants in the room, showing why there are no elephants in the room. Here's the prompt: 'show me a room with no elephants in it, make sure to annotate the image to show me why there are no possible elephants.' When you hand this to a traditional model, it shows you some elephants, because it doesn't understand the context of the prompt, or what it means. Furthermore, a lot of the text that you'll get is complete nonsense, or even made-up characters. That's because the model didn't know what letters actually looked like – it was getting that from training data, too. Mollick shows what happens when you hand the same prompt to a multimodal model. It gives you exactly what you want – a room with no elephants, and notes like 'the door is too small' showing why the elephants wouldn't be in there.
Challenges of Prompting Traditional Models
I know personally that this was how the traditional models worked. As soon as you asked them not to put something in, they would put it in, because they didn't understand your request. Another major difference is that traditional models would change the fundamental image every time you asked for a correction or a tweak.
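The noise-and-denoise loop described above is easy to caricature in code. What follows is a purely illustrative Python sketch, not Mollick's code and nothing like a production model: the "trained network" is replaced by a toy predictor that simply knows the target image, but the shape of the loop (add noise forward, then subtract predicted noise in small steps) is the part that matters.

```python
import math
import random

def add_noise(pixels, t):
    # Forward process: blend the image toward pure Gaussian noise.
    # At t = 0 you keep the image; at t = 1 it is all noise.
    return [math.sqrt(1 - t) * p + math.sqrt(t) * random.gauss(0, 1)
            for p in pixels]

def denoise(noisy, steps, predict_noise):
    # Reverse process: repeatedly subtract a fraction of the predicted
    # noise, nudging the sample back toward a plausible image.
    x = list(noisy)
    for step in range(steps, 0, -1):
        t = step / steps                   # current noise level
        eps = predict_noise(x, t)          # model's guess at the noise
        x = [xi - 0.2 * ei for xi, ei in zip(x, eps)]
    return x

# Toy stand-in for the trained network: it "knows" that every image in
# its world is a flat 0.5 gray, so its noise estimate is just the
# distance from that target. (A real predictor is a neural network
# trained on millions of images and conditioned on the prompt.)
TARGET = [0.5] * 8

def toy_predictor(x, t):
    return [xi - goal for xi, goal in zip(x, TARGET)]

noisy = add_noise(TARGET, t=0.9)           # mostly noise
restored = denoise(noisy, steps=50, predict_noise=toy_predictor)
# restored ends up close to TARGET: noise in, "image" out
```

The point of the caricature matches Mollick's: this loop can only pull samples back toward images the model has already seen; nothing in it reasons about the request itself.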
Suppose you had an image of a person, and you asked for a different hat. You might get an image of an entirely different person. The multimodal image generation models know how to preserve the result that you wanted, and just change it in one single small way.
Preserving Habitats
Mollick gives another example of how this works: he shows an otter holding a particular sort of display in its hands. Then the otter appears in different environments with different styles of background. This also shows the detailed integration of multimodal image generators.
A Whole Pitch Deck
For a use-case scenario, Mollick shows how you could take one of these multimodal models and have it design an entire pitch deck for guacamole or anything else. All you have to do is ask for this type of deck, and the model will get right to work looking at what else is on the Internet, synthesizing it and giving you the result. As Mollick mentions, this will make all sorts of human work obsolete very quickly. We will need well-considered frameworks.
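The architectural difference Mollick describes, a one-way prompt handoff versus a single model that reads the whole request, can be caricatured in a few lines of Python. Everything here is a made-up toy (the "models" are trivial word filters invented for illustration), but it shows why a handoff pipeline loses the "no" in "no elephants" while a multimodal model can honor it.

```python
DROP = {"show", "me", "a", "with", "no", "in", "it"}  # filler words

def toy_prompt_writer(request):
    # Stand-in for the LLM in the old handoff pipeline: it boils the
    # request down to a keyword prompt, discarding function words,
    # including the "no" that carried the negation.
    return " ".join(w for w in request.lower().split() if w not in DROP)

def toy_renderer(prompt):
    # Stand-in for the separate image model: it "draws" every keyword
    # it receives, with no memory of the original conversation.
    return {"drawn": sorted(set(prompt.split()))}

def handoff_generate(request):
    # Old pipeline: prompt writer -> renderer, one-way handoff.
    return toy_renderer(toy_prompt_writer(request))

def multimodal_generate(request):
    # Stand-in for a multimodal model: one system reads the full
    # request itself, so it can notice "no X" and exclude X.
    words = request.lower().split()
    banned = {words[i + 1] for i, w in enumerate(words[:-1]) if w == "no"}
    drawn = {w for w in words if w not in DROP and w not in banned}
    return {"drawn": sorted(drawn)}

request = "show me a room with no elephants in it"
print(handoff_generate(request))     # the elephants sneak in
print(multimodal_generate(request))  # only the room
```

In the real systems the gap is closed by training one model end to end on text and image tokens together, not by keyword tricks; the toy only captures where the information gets lost.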


Forbes
20-03-2025
- Entertainment
- Forbes
More On Vibecoding From Ethan Mollick
Just yesterday, I mentioned Andrej Karpathy, who made some waves with his recent X post talking about giving ground to AI agents to create software and write code. Then I thought about one of our most influential voices in today's tech world, MIT PhD Ethan Mollick, and I went over to his blog, One Useful Thing, to see if he was covering this new capability. Sure enough, I found a March 11 piece titled 'Speaking Things Into Existence' where Mollick covers this idea of 'ex nihilo' code creation based on informal prompting.
In digging into this revolutionary use case, Mollick starts right up top with a quote from Karpathy that I think gets to the very heart of things – that 'the hottest new programming language is English.' Presumably, you could use other world languages, too, but so much of what happens in this industry happens in English, and hundreds of thousands of seasoned professionals are getting used to the idea that you can talk to an LLM in your own language, not in Fortran or JavaScript or C#, but just in plain English, and it will come up with what you want.
Mollick tells us how he 'decided to give it a try' using Anthropic's Claude Code agent. 'I needed AI help before I could even use Claude Code,' he said, citing the model's Linux build as something to get around. Here, Mollick coins the phrase 'vibetroubleshooting', and says 'if you haven't used AI for technical support, you should.'
'Time to vibecode,' Mollick wrote, noting that his first prompt to Claude Code was: 'make a 3-D game where I can place buildings of various designs, and then drive through the town I create.' 'Grammar and spelling issues included,' he disclaims, 'I got a working application about four minutes later.' He then illustrates how he tweaked the game and solved some minor glitches, along with additional prompts like: 'Can you make buildings look more real? Can you add in a rival helicopter that is trying to extinguish fires before me?'
He then provides the actual cost for developing this new game – about $5.00 to make the game, and $8.00 to fix the bug.
'Vibecoding is most useful when you actually have some knowledge and don't have to rely on the AI alone,' he adds. 'A better programmer might have immediately recognized that the issue was related to asset loading or event handling. And this was a small project… This underscores how vibecoding isn't about eliminating expertise but redistributing it - from writing every line of code to knowing enough about systems to guide, troubleshoot, and evaluate. The challenge becomes identifying what 'minimum viable knowledge' is necessary to effectively collaborate with AI on various projects.'
'Expertise clearly still matters in a world of creating things with words,' Mollick continues. 'After all, you have to know what you want to create; be able to judge whether the results are good or bad; and give appropriate feedback.' On the part of the machines, he refers to a 'jagged frontier' of capabilities. That might be fair, but the idea that humans are there for process refinement and minor tweaking is sort of weak tea compared to the staggering capability of these machines to do the creative work. How long until model evolution turns that jagged edge into a spectacular smooth scalpel?
At the same time that we're trying to digest all of this, there's another contender in the ring. A bit later in the blog, Mollick references Manus, a new Chinese AI agent that uses Claude and other tools for fundamental task management. Mollick details how he asked Manus to 'create an interactive course on elevator pitching using the best academic advice.' 'You can see the system set up a checklist of tasks and then go through them, doing web research before building the pages,' he says.
'As someone who teaches entrepreneurship, I would say that the output it created was surface-level impressive - it was an entire course that covered much of the basics of pitching, and without obvious errors! Yet, I also could instantly see that it was too text heavy and did not include opportunities for knowledge checks or interactive exercises.'
Here, you can see that the system is able to source the actual content, the ideas, and then arrange them and present them the right way. There's very little human intervention or work needed. That's the reality of it. We just had the Chinese announcement of DeepSeek tanking stocks like Nvidia. What will Manus do? How does the geopolitical interplay of China and the U.S. factor into this new world of AI software development? That question will be answered pretty soon, as these technologies make their way to market.
As for Mollick, he was also able to dig up old spreadsheets and get new results with the data-crunching power of AI. 'Work is changing, and we're only beginning to understand how,' Mollick writes. 'What's clear from these experiments is that the relationship between human expertise and AI capabilities isn't fixed. … The current moment feels transitional. These tools aren't yet reliable enough to work completely autonomously, but they're capable enough to dramatically amplify what we can accomplish.'
There's a lot more in the blog post – you should read the whole thing, and think about the work processes that Mollick details. On a side note, I liked this response from a poster named 'Kevin' who talks about the application to team culture: 'To me, vibecoding is similar to being a tech lead for a bunch of junior engineers,' Kevin writes. 'You spend most of your time reviewing code, rather than writing code. The code you review is worse in most ways than the code you write. But it's a lot faster to work together as a team, because the junior engineers can crank through a lot of features.
And your review really is important - if you blindly accept everything they do, you'll end up in trouble.' Taking this all in, in the context of what I've already been writing about this week, it seems like many of the unanswered questions have to do with human roles and positions. Everything that we used to take for granted is changing suddenly. How are we going to navigate this? Can we change course quickly enough to leverage the power of AI without becoming swamped in its encompassing power? Feel free to comment, and keep an eye on the blog as we head toward some major events in the MIT community this spring that will have more bearing on what we're doing with new models and hardware setups.