Latest news with #StevenLevy


Digital Trends
30-06-2025
Relive the iPhone launch exactly 18 years ago via this TV news report
Can you believe that the first iPhone launched exactly 18 years ago, on June 29? Do you remember what you were doing that day? Oh hang on, maybe you weren't even born then. The late Steve Jobs, then Apple's CEO, had unveiled the revolutionary smartphone five months earlier, in January 2007. In the intervening months, the company created enough hype to encourage hordes of people to descend upon Apple Stores in the U.S. and beyond to purchase the device that would truly transform the fortunes of the California-based tech company.

An old ABC News clip about the iPhone's launch day features tech writer Steven Levy, now editor-at-large at Wired, summing up the level of excitement that surrounded the launch. 'There's been nothing like this in my memory,' Levy tells ABC News reporter John Berman. 'I've been covering technology for over 20 years and I can't recall the anticipation a product like this has.'

Berman, meanwhile, has clearly been bedazzled by Apple's ad campaign, telling Levy: 'I consider myself at least of average intelligence, but they're in my head. Apple is in my head. Must get iPhone. Must get iPhone.'

Levy responds with a comment that's aged well: 'Apple has always, throughout its history, struck a chord among people who like technology, and like it done really really well … it's a religion almost, for some people.'

An unnamed contributor then takes up the religious theme: 'Steve Jobs — master marketer,' she says. 'The guy is incredible at bringing the Mac faithful to a fever pitch, and then those early adopters, those high-end geeks, go forth and spread the gospel of Apple.'

Jessica, a woman waiting in line outside an Apple Store in New York City on iPhone launch day, offered her own take, telling Berman: 'Steve Jobs is an innovator, he always comes up with new creative things before anybody thinks of them.'
She adds that everything Jobs comes up with is 'top notch,' prompting the reporter to mention the Apple Lisa, the failed PC launched by the company in 1983. But Jessica has never heard of it.

Next, we see the Apple Store opening and the first customers heading inside to collect their brand new iPhones. Jessica buys two of them — one for her sister — and is shown counting out more than a thousand bucks for her purchase. 'I feel like I won the Olympic gold medal,' she says.

The original iPhone featured a tiny 3.5-inch display and a basic 2-megapixel camera, and went on sale for $499 (4GB) and $599 (8GB). The iPhone has been an astonishing success for Apple, generating around $1.5 trillion for the company over the years. Many iterations of the device have come and gone, with Apple expected to release the iPhone 17 later this year.

Below is another news report from the same day, this one from CBS News:
Yahoo
06-06-2025
Demis Hassabis On The Future of Work in the Age of AI
WIRED Editor at Large Steven Levy sits down with Google DeepMind CEO Demis Hassabis for a deep-dive discussion on the emergence of AI, the path to artificial general intelligence (AGI), and how Google is positioning itself to compete in the future of the workplace.

Director: Justin Wolfson
Director of Photography: Christopher Eusteche
Editor: Cory Stevens
Host: Steven Levy
Guest: Demis Hassabis
Line Producer: Jamie Rasmussen
Associate Producer: Brandon White
Production Manager: Peter Brunette
Production Coordinator: Rhyan Lark
Camera Operator: Lauren Pruitt
Gaffer: Vincent Cota
Sound Mixer: Lily van Leeuwen
Production Assistant: Ryan Coppola
Post Production Supervisor: Christian Olguin
Post Production Coordinator: Stella Shortino
Supervising Editor: Erica DeLeo
Assistant Editor: Justin Symonds

- It's a very intense time in the field. We obviously want all of the brilliant things these AI systems can do, come up with new cures for diseases, new energy sources, incredible things for humanity. That's the promise of AI. But also, there are worries if the first AI systems are built with the wrong value systems or they're built unsafely, that could be also very bad. - Wired sat down with Demis Hassabis, who's the CEO of Google DeepMind, which is the engine of the company's artificial intelligence. He's a Nobel Prize winner and also a knight. We discussed AGI, the future of work, and how Google plans to compete in the age of AI. This is "The Big Interview." [upbeat music] Well, welcome to "The Big Interview," Demis. - Thank you, thanks for having me. - So let's start talking about AGI a little here. Now, you founded DeepMind with the idea that you would solve intelligence and then use intelligence to solve everything else. And I think it was like a 20-year mission. We're like 15 years into it, and you're on track? - I feel like, yeah, we're pretty much dead on track, actually, is what would be our estimate.
- That means five years away from what I guess people will call AGI. - Yeah, I think in the next five to 10 years, there's maybe a 50% chance that we'll have what we've defined as AGI, yes. - Well, some of your peers are saying, "Two years, three years," and others say a little more, but that's really close, that's really soon. How do we know that we're that close? - There's a bit of a debate going on at the moment in the field about definitions of AGI, and then obviously, of course, dependent on that, there's different predictions for when it will happen. We've been pretty consistent from the very beginning. And actually, Shane Legg, one of my co-founders and our chief scientist, you know, he helped define the term AGI back in, I think, the early 2001 type of timeframe. And we've always thought about it as a system that has the ability to exhibit sort of all the cognitive capabilities we have as humans. And the reason that's important, the reference to the human mind, is the human mind is the only existence proof we have, maybe in the universe, that general intelligence is possible. So if you want to claim sort of general intelligence, AGI, then you need to show that it generalizes to all these domains. - Is when everything's filled in, all the check marks are filled in, then we have it- - Yes, so I think there are missing capabilities right now. You know, that all of us who have used the latest sort of LLMs and chatbots will know very well, like on reasoning, on planning, on memory. I don't think today's systems can invent, you know, do true invention, you know, true creativity, hypothesize new scientific theories. They're extremely useful, they're impressive, but they have holes. And actually, one of the main reasons I don't think we are at AGI yet is because of the consistency of responses. You know, in some domains, we have systems that can do International Math Olympiad math problems to gold medal standard- - Sure. - With our AlphaFold system.
But on the other hand, these systems sometimes still trip up on high school maths or even counting the number of letters in a word. - Yeah. - So that to me is not what you would expect. That level of sort of difference in performance across the board is not consistent enough, and therefore shows that these systems are not fully generalizing yet. - But when we get it, is it then like a phase shift that, you know, then all of a sudden things are different, all the check marks are checked? - Yeah. - You know, and we have a thing that can do everything. - Mm-hmm. - Are we then in a new world? - I think, you know, that again, that is debated, and it's not clear to me whether it's gonna be more of a kind of incremental transition versus a step function. My guess is, it looks like it's gonna be more of an incremental shift. Even if you had a system like that, the physical world still operates with the physical laws, you know, factories, robots, these other things. So it'll take a while for the effects of that, you know, this sort of digital intelligence, if you like, to really impact, I think, a lot of the real world things. Maybe another decade plus, but there's other theories on that too, where it could come faster. - Yeah, Eric Schmidt, who I think used to work at Google, has said that, "It's almost like a binary thing." He says, "If China, for instance, gets AGI, then we're cooked." Because if someone gets it like 10 minutes before the next guy, then you can never catch up. You know, because then it'll maintain bigger and bigger leads there. You don't buy that, I guess. - I think it's an unknown. It's one of the many unknowns, which is that, you know, that's sometimes called the hard takeoff scenario, where the idea there is that these AGI systems, they're able to self-improve, maybe code future versions of themselves, and maybe they're extremely fast at doing that.
So what would be a slight lead, let's say, you know, a few days, could suddenly become a chasm if that was true. But there are many other ways it could go too, where it's more incremental. If some of these self-improvement things are not able to kind of accelerate in that way, then being around the same time would not make much difference. But it's important, I mean, these issues are the geopolitical issues. I think the systems that are being built, they'll have some imprint of the values and the kind of norms of the designers and the culture that they were embedded in. - [Steven] Mm-hmm. - So, you know, I think it is important, these kinds of international questions. - So when you build AI at Google, you know, you have that in mind. Do you feel a competitive imperative to, in case that's true, "Oh my God, we better be first"? - It's a very intense time at the moment in the field, as everyone knows. There's so many resources going into it, lots of pressures, lots of things that need to be researched. And there's sort of lots of different types of pressures going on. We obviously want all of the brilliant things that these AI systems can do. You know, I think eventually, we'll be able to advance medicine and science with it, like we've done with AlphaFold, come up with new cures for diseases, new energy sources, incredible things for humanity, that's the promise of AI. But also there are worries, both in terms of, you know, if the first AI systems are built with the wrong value systems or they're built unsafely, that could be also very bad. And, you know, there are at least two risks that I worry a lot about. One is bad actors, whether it's individuals or rogue nations, repurposing general purpose AI technology for harmful ends. And then the second one is, obviously, the technical risk of AI itself. As it gets more and more powerful, more and more agentic, can we make sure the guardrails around it are safe, so they can't be circumvented?
And that interacts with this idea of, you know, what are the first systems that are built by humanity gonna be like? There's commercial imperative- - [Steven] Right. - There's national imperative, and there's a safety aspect to worry about who's in the lead and where those projects are. - A few years ago, the companies were saying, "Please, regulate us. We need regulation." - Mm-hmm, mm-hmm. - And now, in the US at least, the current administration seems less interested in putting regulations on AI than accelerating it so we can beat the Chinese. Are you still asking for regulation? Do you think that that's a miss on our part? - I think, you know, and I've been consistent in this, I think there are these other geopolitical sort of overlays that have to be taken into account, and the world's a very different place to how it was five years ago in many dimensions. But there's also, you know, I think the idea of smart regulation that makes sense around these increasingly powerful systems, I think is gonna be important. I continue to believe that. I think though, and I've been certain on this as well, it sort of needs to be international, which looks hard at the moment in the way the world is working, because these systems, you know, they're gonna affect everyone, and they're digital systems. - Yeah. - So, you know, if you sort of restrict it in one area, that doesn't really help in terms of the overall safety of these systems getting built for the world and as a society. - [Steven] Yeah. - So that's the bigger problem, I think, is some kind of international cooperation or collaboration, I think, is what's required. And then smart regulation, nimble regulation that moves as the knowledge about the research becomes better and better. - Would it ever reach a point for you where you would feel, "Man, we're not putting the guardrails in. You know, we're competing, that we really have to stop, or you can't get involved in that?" 
- I think a lot of the leaders of the main labs, at least the western labs, you know, there's a small number of them and we do all know each other and talk to each other regularly. And a lot of the lead researchers do. The problem is, is that it's not clear we have the right definitions to agree when that point is. Like, today's systems, although they're impressive as we discussed earlier, they're also very flawed. And I don't think today's systems, are posing any sort of existential risk. - Mm-hmm. - So it's still theoretical, but the problem is that a lot of unknowns, we don't know how fast those will come, and we don't know how risky they will be. But in my view, when there are so many unknowns, then I'm optimistic we'll overcome them. At least technically, I think the geopolitical questions could be actually, end up being trickier, given enough time and enough care and thoughtfulness, you know, sort of using the scientific method as we approach this AGI point. - That makes perfect sense. But on the other hand, if that timeframe is there, we just don't have much time, you know? - No, we don't. We don't have much time. I mean, we're increasingly putting resources into security and things like cyber, and also research into controllability and understanding of these systems, sometimes called mechanistic interpretability. You know, there's a lot of different sub-branches of AI. - Yeah, that's right. I wanna get to interpretability. - Yeah, that are being invested in, and I think even more needs to happen. And then at the same time, we need to also have societal debates more about institutional building. How do we want governance to work? How are we gonna get international agreement, at least on some basic principles, around how these systems are used and deployed and also built? - What about the effect on work on the marketplace? - Yeah. 
- You know, how much do you feel that AI is going to change people's jobs, you know, the way jobs are distributed in the workforce? - I don't think we've seen, my view is if you talk to economists, they feel like there's not much has changed yet. You know, people are finding these tools useful, certainly in certain domains- - [Steven] Yeah. - Like, things like AlphaFold, many, many scientists are using it to accelerate their work. So it seems to be additive at the moment. We'll see what happens over the next five, 10 years. I think there's gonna be a lot of change with the jobs world, but I think as in the past, what generally tends to happen is new jobs are created that are actually better, that utilize these tools or new technologies, what happened with the internet, what happened with mobile? We'll see if it's different this time. - Yeah. - Obviously everyone always thinks this new one, will be different. And it may be, it will be, but I think for the next few years, it's most likely to be, you know, we'll have these incredible tools that supercharge our productivity, make us really useful for creative tools, and actually almost make us a little bit superhuman in some ways in what we're able to produce individually. So I think there's gonna be a kind of golden era, over the next period of what we're able to do. - Well, if AGI can do everything humans can do, then it would seem that they could do the new jobs too. - That's the next question about like, what AGI brings. But, you know, even if you have those capabilities, there's a lot of things I think we won't want to do with a machine. You know, I sometimes give this example of doctors and nurses. You know, maybe a doctor and what the doctor does and the diagnosis, you know, one could imagine that being helped by AI tool or even having an AI kind of doctor. On the other hand, like nursing, you know, I don't think you'd want a robot to do that. 
I think there's something about the human empathy aspect of that and the care, and so on, that's particularly humanistic. I think there's lots of examples like that but it's gonna be a different world for sure. - If you would talk to a graduate now, what advice would you give to keep working- - Yeah. - Through the course of a lifetime- - Yeah. - You know, in the age of AGI? - My view is, currently, and of course, this is changing all the time with the technology developing. But right now, you know, if you think of the next five, 10 years as being, the most productive people might be 10X more productive if they are native with these tools. So I think kids today, students today, my encouragement would be immerse yourself in these new systems, understand them. So I think it's still important to study STEM and programming and other things, so that you understand how they're built, maybe you can modify them yourself on top of the models that are available. There's lots of great open source models and so on. And then become, you know, incredible at things like fine-tuning, system prompting, you know, system instructions, all of these additional things that anyone can do. And really know how to get the most out of those tools, and do it for your research work, programming, and things that you are doing on your course. And then come out of that being incredible at utilizing those new tools for whatever it is you're going to do. - Let's look a little beyond the five and 10-year range. Tell me what you envision when you look at our future in 20 years, in 30 years, if this comes about, what's the world like when AGI is everywhere? - Well, if everything goes well, then we should be in an era of what I like to call sort of radical abundance. So, you know, AGI solves some of these key, what I sometimes call root node problems in the world facing society. 
So good examples would be curing diseases, much healthier, longer lifespans, finding new energy sources, you know, whether that's optimal batteries and better room-temperature superconductors, fusion. And then if that all happens, then we know it should be a kind of era of maximum human flourishing where we travel to the stars and colonize the galaxy. You know, I think the beginning of that will happen in the next 20, 30 years if the next period goes well. - I'm a little skeptical of that. I think we have an unbelievable abundance now, but we don't distribute it, you know, fairly. - Yeah. - I think that we kind of know how to fix climate change, right? We don't need an AGI to tell us how to do it, yet we're not doing it. - I agree with that. I think we, as a species, as a society, are not good at collaborating, and I think climate is a good example. But I think we are still operating, humans are still operating in a zero-sum game mentality. Because actually, the earth is quite finite, relative to the amount of people there are now in our cities. And I mean, this is why our natural habitats are being destroyed, and it's affecting wildlife and the climate and everything. - [Steven] Yeah. - And it's also partly 'cause people are not willing to accept it. We do know how to figure out climate, but it would require people to make sacrifices. - Yeah. - And people don't want to. But this radical abundance would be different. We would finally be in what would feel like a non-zero-sum game. - How will we get [indistinct] to that? Like, you talk about diseases- - Well, I gave you an example. - We have vaccines, and now some people think we shouldn't use them. - Let me give you a very simple example. - Sure. - Water access. This is gonna be a huge issue in the next 10, 20 years. It's already an issue. Countries in different, you know, poorer parts of the world, drier parts of the world, also obviously compounded by climate change. - [Steven] Yeah.
- We have a solution to water access. It's desalination, it's easy. There's plenty of sea water. - Yeah. - Almost all countries have a coastline. But the problem is, it's salty water, and desalination is something only very rich countries do. Some countries do use desalination as a solution to their fresh water problem, but it costs a lot of energy. - Mm-hmm. - But if energy was essentially zero, if there was renewable, free, clean energy, right? Like fusion, suddenly, you solve the water access problem. Water is, who controls a river or what you do with that, becomes much less important than it is today. I think things like water access, you know, if you run forward 20 years, and there isn't a solution like that, could lead to all sorts of conflicts, probably that's the way it's trending- - Mm-hmm, right. - Especially if you include further climate change. - So- - And there's many, many examples like that. You could create rocket fuel easily- - Mm-hmm. - Because you just separate that from seawater, hydrogen and oxygen. It's just energy again. - So you feel that these problems get solved by AGI, by AI, then we're going to, our outlook will change, and we will be- - That's what I hope. Yes, that's what I hope. But that's still a secondary part. So the AGI will give us the radical abundance capability, technically, like the water access. - Yeah. - I then hope, and this is where I think we need some great philosophers or social scientists to be involved, that that should shift our mindset as a society to non-zero-sum. You know, there's still the issue of do you divide even the radical abundance fairly, right? Of course, that's what should happen. But I think that's much more likely once people start feeling and understanding that there is this almost limitless supply of raw materials and energy and things like that. - Do you think that driving this innovation by profit-making companies is the right way to go?
We're most likely to reach that optimistic high point through that? - I think the current capitalism or, you know, the current western sort of democratic kind of systems have so far been proven to be sort of the best drivers of progress. - Mm-hmm. - So I think that's true. My view is that once you get to that sort of stage of radical abundance and post-AGI, I think economics starts changing, even the notion of value and money. And so again, I think we need, I'm not sure why economists are not working harder on this, maybe they don't believe it's that close, right? But if they really did believe that, like the AGI scientists do, then I think there's a lot of new economic theory that's required. - You know, one final thing, I actually agree with you that this is so significant and is gonna have a huge impact. But when I write about it, I always get a lot of response from people who are really angry already about artificial intelligence and what's happening. Have you tasted that? Have you gotten that pushback and anger from a lot of people? It's almost like the industrial revolution people- - Yeah. - Fighting back. - I mean, I think that anytime there's, I haven't personally seen a lot of that, but obviously, I've read and heard a lot about it, and it's very understandable. It's happened many times. As you say, the industrial revolution, when there's big change, a big revolution. - [Steven] Yeah. - And I think this will be at least as big as the industrial revolution, probably a lot bigger. It's surprising, there's unknowns, it's scary, things will change. But on the other hand, when I talk to people about the passion, the why I'm building AI- - Mm-hmm. - Which is to advance science and medicine- - Right. - And understanding of the world around us. And then I explain to people, you know, and I've demonstrated, it's not just talk. Here's AlphaFold, you know, a Nobel Prize winning breakthrough, that can help with medicine and drug discovery.
Obviously, we're doing this with Isomorphic now to extend it into drug discovery, and we can cure terrible diseases that might be afflicting your family. Suddenly, people are like, "Well, of course, we need that." - Right. - It'll be immoral not to have that if that's within our grasp. And the same with climate and energy. - Yeah. - You know, many of the big societal problems, it's not like, you know, we know, we've talked about, there's many big challenges facing society today. And I often say I would be very worried about our future if I didn't know something as revolutionary as AI was coming down the line to help with those other challenges. Of course, it's also a challenge itself, right? But at least, it's one of these challenges that can actually help with the others if we get it right. - Well, I hope your optimism holds out and is justified. Thank you so much. - And I'll do my best. Thank you. [upbeat music]

Business Insider
31-05-2025
Here's the best advice for the Class of 2025 from 10 notable graduation speakers
High-profile writers, doctors, entrepreneurs, and actors are making their annual rounds through college commencement ceremonies. They're dispensing some of their best advice to new grads preparing to take on the challenges that lie ahead, talking about everything from taking chances to surrounding yourself with the right people and understanding your place in an AI-enabled workplace. Here are some standout pieces of advice to the Class of 2025 from 10 commencement speakers.

Tech journalist Steven Levy

"You do have a great future ahead of you, no matter how smart and capable ChatGPT, Claude, Gemini, and Llama get," author and tech journalist Steven Levy told graduates at the Temple University College of Liberal Arts on May 7. "And here is the reason: You have something that no computer can ever have. It's a superpower, and every one of you has it in abundance," he said, according to Wired. "The lords of AI are spending hundreds of billions of dollars to make their models think like accomplished humans. You have just spent four years at Temple University learning to think as accomplished humans. The difference is immeasurable," he said.

Actor Jennifer Coolidge

"When you find the thing that you want to do, I really want to highly recommend — just friggin' go for it," Jennifer Coolidge, the star of HBO's White Lotus, told graduates at Emerson College on May 12. "You really have to psych yourself up into believing absurd possibilities, and you have to believe that they are not absurd because there's nothing foolish or accidental about expecting things that are unattainable for yourself."

Kermit the Frog

Everyone's favorite Muppet shared "a little advice — if you're willing to listen to a frog" at the University of Maryland's commencement ceremony on May 22. "Rather than jumping over someone to get what you want, consider reaching out your hand and taking the leap side by side. Because life is better when we leap together."
Actor Elizabeth Banks

"You're about to enter the incredibly competitive job market, so I can understand why you believe that life is a zero-sum game, that there's only so much opportunity to go around," actor Elizabeth Banks told graduates of the University of Pennsylvania on May 19. "And if one person takes a bigger slice, everyone else has to make a smaller slice, and the total size of the pie remains the same. And that is true with actual pie," she said. "But not with life, not with opportunity. So my advice to you is, as much as possible from here on out, take yourself out of that mindset."

Physician and author Abraham Verghese

Physician and author Abraham Verghese told Harvard graduates on May 29 to "make your decisions worthy of those who supported, nurtured, and sacrificed for you." "The decisions you will make in the future under pressure will say something about your character, while they also shape and transform you in unexpected ways," he said. Verghese also encouraged the Class of 2025 to read fiction. "To paraphrase Camus, fiction is the great lie that tells the truth about how the world lives," he said. "And if you don't read fiction, my considered medical opinion is that a part of your brain responsible for active imagination atrophies."

Actor Henry Winkler

Actor Henry Winkler spoke about the power of positive thinking in his May 17 address to graduates of the Georgetown University College of Arts & Sciences. "A negative thought comes into your mind, you say out loud — you say out loud — 'I am sorry, I have no time for you now,'" he said. "Yes, people will look at you very strangely. But it doesn't matter. Because it becomes your habit." Instead, when faced with doubts and negative thoughts about your goals, "you move it out; you move a positive in," he said.
Federal Reserve Chair Jerome Powell

Federal Reserve Chair Jerome Powell told graduates of Princeton University on May 25 that "the combination of luck, the courage to make mistakes, and a little initiative can lead to much success." "We risk failure, awkwardness, embarrassment, and rejection," he said. "But that's how we create the career opportunities, the great friendships, and the loves that make life worth living." He reminded graduates that "each of us is a work in progress" and "the possibilities for self-improvement are limitless." "The vast majority of what you need to know about work, about relationships, about yourself, about life, you have yet to learn," Powell said. "And that itself is a tremendous gift."

Y Combinator cofounder Jessica Livingston

Jessica Livingston, cofounder of startup accelerator Y Combinator, told Bucknell University graduates to "find the interesting people." "Talk to people. Get introduced to new people. Find the people that you think are interesting, and then ask what they're working on. And if you find yourself working at a place where you don't like the people, get out," she said in her May 18 speech. She also advised the Class of 2025 that "you can reinvent yourself" at any time. "If you want to, you can just decide to shift gears at this point, and no one's going to tell you you can't," she said. "You can just decide to be more curious, or more responsible, or more energetic, and no one's going to look up your college grades and say, 'Hey, wait a minute. This person's supposed to be a slacker!'"

S&P Global CEO Martina L. Cheung

"Don't collect promotions. Collect experiences," S&P Global President and CEO Martina L. Cheung told graduates of George Mason University. In her May 15 address, Cheung shared how lateral moves in her own career later prepared her for promotions. "Most people think of their careers as a ladder," she said.
"They see the goal as climbing the ladder with promotions or leaving one job to take a bigger one elsewhere. The truth is, moving up is not the only direction. It's not even always the best direction. Sometimes it's the lateral move."

YouTuber Hank Green

Writer and science YouTuber Hank Green reminded MIT graduates in his May 29 speech to stay curious. "Your curiosity is not out of your control," he said. "You decide how you orient it, and that orientation is going to affect the entire rest of your life. It may be the single most important factor in your career." Green also emphasized the importance of taking chances on your ideas. "Ideas do not belong in your head," he said. "They can't help anyone in there. I sometimes see people become addicted to their good idea. They love it so much, they can't bring themselves to expose it to the imperfection of reality. Stop waiting. Get the ideas out. You may fail, but while you fail, you will build new tools." He closed his speech on this inspiring note: "Do not forget how special and bizarre it is to get to live a human life. It took 3 billion years for the Earth to go from single-celled life forms to you. That's more than a quarter of the life of the entire universe. Something very special and strange is happening on this planet and it is you."


WIRED
16-05-2025
- WIRED
No, Graduates: AI Hasn't Ended Your Career Before It Starts
May 16, 2025 10:00 AM In a commencement speech at Temple University, I shared my views on how new college graduates can compete with powerful artificial intelligence.

Imagine graduating with a liberal arts degree as the age of AI dawns. That's the mindset I faced when addressing the Temple University College of Liberal Arts (where I'm an alum) earlier this month. Truth be told, no one knows what will happen with AI, including those who are building it. I took an optimistic view based on one core truth: As amazing as AI might become, by definition it cannot be human, and therefore the human connection we homo sapiens forge with each other is unique—and gives us an edge. Here's the speech:

I am thrilled to address the Temple College of Liberal Arts Class of 2025. You have prevailed under the curse of living in interesting times. You coped with Covid in high school and your early years here, navigated your way through the noise of social media, and now face a troubling political climate. The last part of that resonates with me. I attended Temple University at a time of national unrest. Richard Nixon was our president, the war was raging in Vietnam, and the future seemed uncertain.

This is an essay from the latest edition of Steven Levy's Plaintext newsletter. SIGN UP for Plaintext to read the whole thing, and tap Steven's unique insights and unmatched contacts for the long view on tech.

But there is one concern that you have that I or my classmates could not have conceived of when we graduated over 50 years ago: the fear that artificial intelligence would perform our future jobs and render our career dreams useless. I didn't touch a computer keyboard during my four years at Temple. It wasn't until almost 10 years after my graduation that I finally interacted directly with a computer. I was assigned a story for Rolling Stone about computer hackers. I was energized and fascinated by their world, and decided to keep writing about it.
Not long after my article was published I ventured to MIT and met Marvin Minsky, one of the scientists who came up with the idea of artificial intelligence at a summer conference at Dartmouth in 1956. Minsky and his peers thought it would only be a few years until computers could think like humans. That optimism—or naivety—became a punch line for many decades. High-level AI was always 10 years away, 20 years away. It was a science fiction fantasy. Until about 20 years ago or so, that was still the case. And then in this century, some computer scientists made breakthroughs in what were called neural nets. It led to rapid progress, and in 2017 another big breakthrough led to the terrifyingly capable large language models like ChatGPT. Suddenly AI is here.

My guess is that every single one of you has used a large language model like ChatGPT as a collaborator. Now I hope this isn't the case, but some of you may have used it as a stand-in for your own work. Please don't raise your hand if you've done this—we haven't given out the diplomas yet, and your professors are standing behind me.

Much of my time at WIRED the past few years has been spent talking to and writing about the people leading this field. Some refer to their efforts as creating 'the last invention.' They use that term because when AI reaches a certain point, supposedly computers will shove us humans aside and drive progress on their own. They refer to this as reaching artificial general intelligence, or AGI. That's the moment when AI will, in theory, perform any task a human can, but better.

So as you leave this institution for the real world, this moment of joy may well be mixed with anxiety. At the least, you may be worried that for the rest of your work life, you will not only be collaborating with AI but competing with it. Does that make your prospects bleak? I say … no. In fact my mission today is to tell you that your education was not in vain.
You do have a great future ahead of you no matter how smart and capable ChatGPT, Claude, Gemini, and Llama get. And here is the reason: You have something that no computer can ever have. It's a superpower, and every one of you has it in abundance. Your humanity.

Liberal arts graduates, you have majored in subjects like Psychology. History. Anthropology. African American, Asian, and Gender Studies. Sociology. Languages. Philosophy. Political Science. Religion. Criminal Justice. Economics. And there are even some English majors, like me. Every one of those subjects involves examining and interpreting human behavior and human creativity with empathy that only humans can bring to the task. The observations you make in the social sciences, the analyses you produce on art and culture, the lessons you communicate from your research, have a priceless authenticity, based on the simple fact that you are devoting your attention, intelligence, and consciousness to fellow homo sapiens. People, that's why we call them the humanities.

The lords of AI are spending hundreds of billions of dollars to make their models think LIKE accomplished humans. You have just spent four years at Temple University learning to think AS accomplished humans. The difference is immeasurable.

This is something that even Silicon Valley understands, starting from the time Steve Jobs told me four decades ago that he wanted to marry computers and the liberal arts. I once wrote a history of Google. Originally, its cofounder Larry Page resisted hiring anyone who did not have a computer science degree. But the company came to realize that it was losing out on talent it needed for communications, business strategy, management, marketing, and internal culture. Some of those liberal arts grads it then hired became among the company's most valuable employees. Even inside AI companies, liberal arts grads can and do thrive.
Did you know that the president of Anthropic, one of the top creators of generative AI, was an English major? She idolized Joan Didion.

Furthermore, your work does something that AI can never do: it makes a genuine human connection. OpenAI recently boasted that it trained one of its latest models to churn out creative writing. Maybe it can put together cool sentences—but that's not what we really seek from books, visual arts, films, and criticism. How would you feel if you read a novel that shifted the way you saw the world, heard a podcast that lifted your spirit, saw a movie that blew your mind, heard a piece of music that moved your soul, and only after you were inspired and transformed by it, learned that it was not created by a person, but a robot? You might feel cheated.

And that's more than a feeling. In 2023, some researchers published a paper confirming just that. In blind experiments, human beings valued what they read more when they thought it was from fellow humans and not a sophisticated system that fakes humanity. In another blind experiment, participants were shown abstract art created by both humans and AI. Though they couldn't tell which was which, when subjects were asked which pictures they liked better, the human-created ones came out on top. Other research studies involved brain MRIs. The scans also showed people responded more favorably when they thought humans, not AI, created the artworks. Almost as if that connection were primal.

Everything you have learned in the liberal arts—the humanities—depends on that connection. You bring your superpower to it. I'm not going to sugarcoat things. AI is going to have a huge impact on the labor market, and some jobs will be diminished or eliminated. History teaches us that with every big technological advance, new jobs replace those lost. Those jobs will exist, as there are countless roles AI can never fill because the technology can't replicate true human connection.
It's the one thing that AI can't offer. Combined with the elite skills you have learned at Temple, that connection will make your work of continuing value. Especially if you perform it with the traits that make you unique: curiosity, compassion, and a sense of humor.

As you go into the workforce, I urge you to lean into your human side. Yes, you can use AI to automate your busy work, explain complicated topics, and summarize dull documents. It might even be an invaluable assistant. But you will thrive by putting your heart into your own work. AI has no such heart to employ. Ultimately, flesh, blood, and squishy neurons are more important than algorithms, bits, and neural nets.

So, Class of 2025, let me send you out into the world with an expression that I encourage you to repeat during the challenging years to come. It's a simple truth that will guide your career and your life as you leave this campus. Here it is: I. Am. Human. Can you say that with me? I Am Human.

Congratulations, and go out and seize the world. It is still yours to conquer. And one final note—I did not use AI to write this speech. Thank you.

(You can see me deliver the speech here, in full academic regalia.)


WIRED
09-05-2025
- Business
- WIRED
Buy Now or Pay More Later? 'Macroeconomic Uncertainty' Has Shoppers Anxious
May 9, 2025 10:00 AM President Trump's tariffs have started pushing prices higher. Tech giants and ecommerce strategists offer some clues on when to buy.

Buying something before you absolutely need it isn't always affordable. But if there were ever a time to consider making an early investment, this would be it. President Donald Trump's tariffs are beginning to nudge prices higher on products from high-end strollers to cheap smartphone chargers.

The Trump administration has suggested the tariffs are a negotiating tactic. Some could be eliminated as the US makes deals with other countries. That means US shoppers willing to wait out the current chaos could end up getting a better deal.

I have been wondering what to do here myself. As a new dad, I know my family will need a new car seat early next year, and these plastic buckets, which generally must be bought new, don't come cheap—even under normal circumstances. For clues on how to navigate the dilemma of buying now or later, I have been collecting thoughts from experts in the online shopping industry.

One of the first lessons I learned doing this research was that if I decided to buy in advance, I wouldn't be alone. 'To some extent, we've seen some heightened buying in certain categories that may indicate stocking up in advance of any potential tariff impact,' Amazon CEO Andy Jassy said on an earnings call last week. eBay also said it saw signs of what could be prebuying, though it didn't specify which products people are stocking up on. On the other hand, there are hints that most consumers have been holding out for now.
This time of year tends to be relatively quiet for sales of iPhones and other Apple products, and that's been true to date in 2025, CEO Tim Cook said on the company's earnings call last week. Mastercard's earnings comments also indicated that shoppers were spending the expected amount. And Etsy even saw a drop in the total value of merchandise sold as customers held back on gifts and trinkets. So if other consumers are a guide, I could go either way with my car seat purchase.

What about prices? As the impact of tariffs started to hit last week, Amazon's Jassy said that prices on the platform hadn't surged 'appreciably' so far. He added that Amazon was 'maniacally focused' on keeping prices down. It helps that Amazon has a global network of competing suppliers and merchants. For example, if one seller raises prices, another may hold theirs steady to gain market share, Jassy said. 'Customers are going to have a better chance of finding variety on selection and on lower prices when they come here,' he added.

Jassy didn't touch on illicit tactics, including tariff evasion, that could keep the prices of imported products artificially low. But several ecommerce strategists who help companies sell products on Amazon tell WIRED that factories and distributors in Asia are admitting to new attempts to skirt tariffs, including by underdeclaring the value of shipments to US customs officials. 'It's always been an unfair playing field, and now they are pushing the envelope even more,' says Dave Bryant, cofounder of EcomCrew. Amazon spokesperson Jessica Martin says sellers 'are required to follow all applicable laws and regulations when importing items for sale.' The government losing out on tariff revenue isn't great, but name a shopper who's going to fret at the trade-off of more affordable prices, Bryant says. He and other strategists agree with Jassy that competitive items—think household goods or generic party favors—are unlikely to skyrocket in price on Amazon.
More boutique offerings, though, could grow more expensive because of tariffs. Some of those increases appear to be materializing. In mid-April, the average price of goods on Amazon was higher than over the previous 90 days in nine out of 27 categories monitored by the price-tracking firm Keepa, according to a WIRED analysis. By this past Wednesday, the number of categories with higher prices had shot up to 24. Industrial items, tools, and baby products experienced some of the biggest jumps, with average price increases of around 2.5 percent to 5 percent. More increases are coming later this month, including reportedly a bump of $20 or more for the Graco car seat that I have been eyeing.

The big question is how much worse those increases will get. The steepest tariffs—those on Chinese imports—could more than double the price of affected products. Normally, Amazon restricts sellers that make drastic price hikes. But it has been allowing increases of about 10 percent a week in certain cases, roughly five times the previous limit, according to Jason Boyce, CEO and founder of ecommerce strategy company Avenue7Media. That means significant surges could come in days and weeks, not months, adding pressure on consumers like me to make decisions sooner rather than later. Martin, the Amazon spokesperson, says prices have not changed outside of usual fluctuations and that the platform's pricing policy continues to apply.

The last factor at play in the when-to-buy dilemma is the chance of a resolution. If you trust the vibes that some big tech platforms are publicly expressing, there is some optimism in the air about averting crisis-level prices. Companies are buying online ads to market their products like it's 'mostly business as usual,' Reddit chief operating officer Jen Wong said on an earnings call last week. Typically, marketing budgets would be an early casualty for companies trying to cut expenses and keep their prices low.
Wong's comments echoed those of executives from other leading online ad sellers, including Amazon, Google, Microsoft, and Meta. The positive outlook has been encouraging to Wall Street—the US stock market is trending up as if tariff-fueled price hikes aren't going to dissuade all of us from shopping in the coming months. But Trump and the outcomes of negotiations between the US and its trading partners are unpredictable. 'Obviously, none of us knows exactly where tariffs will settle or when,' Amazon's Jassy said in his comments last week.

CEOs and their chief financial officers have taken to calling this reality 'macroeconomic uncertainty.' The phrase has been uttered on 222 companies' earnings calls already this year, up from 178 in all of last year, according to a WIRED review of transcripts from financial data company AlphaStreet.

Some companies have been trying to create certainty when it comes to Trump and his trade policies since his first presidential term. Apple attempted to control its costs by shifting some of its manufacturing out of China, which has long been Trump's top target for tariffs. But this time around Trump has applied tariffs to every country imaginable, including an island inhabited solely by penguins.

Exemptions remain another hope for companies and their customers. Last week, baby monitor maker Nanit led a rally in New York City urging a tariff reprieve for baby products. This week, Trump and his treasury secretary both said they were considering it, though Trump added that he preferred not to have too many exemptions. Nanit's Malaysia-made monitors—it abandoned China during Trump's first term—are subject to 10 percent tariffs at the moment. The levy could grow to 24 percent or more come July under the president's current plans. Nanit CEO Anushka Salinas says her goal is to avoid price increases as her company tries to grow its base of 1 million monthly users.
It helps to have supportive investors who could step in with additional funding. The startup's subscription-based software, a business line that tends to have wider profit margins, also gives it some financial cushioning. But the higher the tariffs go, the more challenging it will become to avoid a price increase.

Salinas personally made the call to buy a bed for her 4-year-old sooner than she would have otherwise. I made a similar choice. A car seat I don't need until next year is arriving tomorrow. Better $200 now than $500 later.