Why AI isn't fully replacing jobs—but is still reshaping the workforce
There are nuanced ways to think about this shift, and, according to the CEO of one company at the forefront of experimenting with AI, perhaps less to worry about than we might otherwise have thought.
'The good news is that there's not a single job anywhere that AI can perform all of the skills required for that job,' Indeed CEO Chris Hyams told the audience at Fortune's Workplace Innovation Summit in Dana Point, Calif. on Monday, describing the findings of Indeed's labor economists. 'It doesn't mean it won't replace workers, but AI can't completely replace a job.'
At the same time, Indeed's findings show that 'for about two-thirds of all jobs, 50% or more of those skills are things that today's generative AI can do reasonably well, or very well.'
These two seemingly at-odds findings point to a seismic shift underway—not a simple scenario where entire sectors vanish overnight, but a far more complex transformation where jobs are undeniably evolving.
'What that says is that pretty much every job is going to change if it's not changing already,' Hyams said onstage. 'It's going to happen rapidly. I'm personally expecting—I've been doing this for a little over 30 years—that if you look at the change that's happened because of the internet to pretty much every line of work, there are a handful of occupations over the next three years that will see 30 years of change. So, what we're seeing is that people are going to have to adapt very, very quickly to how they work, but also how they hire and how they find jobs.'
Julia Villagra, OpenAI Chief People Officer, shares Hyams's belief that a lot is about to change.
'I think one of the things that we need to do at this moment is actually to start changing the way we talk about job replacement,' Villagra told the audience. 'I think this is really about something bigger than that. It's about a reimagination of jobs. It's about redistribution of how we work. And as a people person and an optimist, I have a lot of faith and optimism about how humans throughout history have actually adapted and leveraged technology for progress.'
'At the end of the day, if there's one thing I do want to communicate, it's that the best answer to fear and anxiety is actually knowledge and understanding,' Villagra added. 'So, that's why it's so critical that companies put this technology in the hands of employees.'
Indeed's Hyams feels strongly that AI adoption can't come from top-down mandates, and instead is best served by grassroots enthusiasm. His advice to companies looking to double down on AI adoption: Find internal champions excited about AI, and let them demonstrate practical benefits to colleagues.
'Finding the champions, giving people a chance to figure out what works for them, and then letting them be the spokespeople—that has been so much more effective for us,' said Hyams.
Hyams shaded Marc Andreessen's recent comments about VC being the last job left after AI reaches maturity.
'I may be in the minority, but I disagree with the concept of fewer people in the workforce,' he said. 'I know that's a very popular opinion. There's some very important people in this world saying that the only job left at the end of all will be venture capitalist—it was a venture capitalist who said that.'
Hyams is ultimately comfortable with the contradictions embedded in talking about AI and how it will change labor.
'I think we're going to go through a period for the next couple of years where people are looking for ways to cut costs, with the economy being as volatile and unpredictable as it is right now,' he said. 'So, we're going to see jobs slowing down, hiring slowing down. And I think we're going to find just what we have with every other type of technological advancement of the last 400 years—that we're going to be able to do so much more.'
See you tomorrow,
Allie Garfinkle
X: @agarfinks
Email: alexandra.garfinkle@fortune.com
Submit a deal for the Term Sheet newsletter here.
Nina Ajemian curated the deals section of today's newsletter. Subscribe here.
This story was originally featured on Fortune.com
