How IAG's Home-Grown AI Could Save Airlines Millions


Forbes · 6 hours ago

AI and technology enhancing aircraft maintenance
In an industry where operational efficiency is measured in minutes and margins, the potential of artificial intelligence to streamline airline maintenance logistics is more than an optimization exercise; it's a necessity. That's why International Airlines Group (IAG) developed its new AI-powered Engine Optimisation System. Designed in-house and now implemented with Aer Lingus, the system is poised to roll out across IAG's other airlines (British Airways, Iberia, Vueling and LEVEL) by year's end.
Turning a Complex Problem into an AI Challenge
The system, built within IAG's London and Barcelona-based AI Labs, is engineered to solve a particularly complex problem: how to schedule engine maintenance in a way that simultaneously satisfies regulatory mandates, part availability, labor constraints, and operational continuity.
Every commercial jet engine must meet strict regulatory intervals while also fitting around flight schedules, parts inventory and shop capacity. Planners juggle thousands of variables, yet one late part or an unexpected route change can unwind months of work.
By running millions of 'what-if' scenarios every day, IAG's new system re-plans in minutes instead of weeks, helping the airline avoid maintenance-related passenger delays and Aircraft on Ground (AOG) events, failures serious enough to keep an airplane grounded until it is fixed. The system is designed to update maintenance schedules dynamically, adapting in real time as new data flows in.
'By applying advanced algorithms, we're making our engine maintenance programme more efficient. We are avoiding unnecessary maintenance delays to ensure that our fleet is available and in service,' explains Ben Dias, IAG's chief AI scientist. 'The system gives our people the data and tools they need for smarter planning and better teamwork.'
An In-House Approach to AI System Development
Many organizations license predictive-maintenance dashboards from OEMs or software vendors. IAG chose a different path: keep the data, keep the code and tune the algorithms to its own mixed fleet. Dias' team started with the workhorse CFM56 engine, a common type in narrow-body aircraft, to prove the concept before moving to other engine families.
Owning the intellectual property matters for two reasons. First, IAG can refine the model as its network, fleet mix and shop capacity change. Second, the group avoids vendor lock-in, critical when an engine swap between BA and Iberia can hinge on data portability.
AI Making an Increasing Impact in the Airline Industry
IAG's efforts align with similar changes happening in aviation. Lufthansa Technik uses its Aviatar platform for predictive diagnostics that spots repetitive fault codes and suggests fixes, part of a suite used by 100-plus airlines. Delta Air Lines' APEX engine-health system crunches real-time sensor data; the carrier claims parts-demand accuracy has jumped from 60% to 90%. Air France-KLM is working with Google Cloud to layer generative-AI tools onto its existing 'Prognos' analytics stack for both maintenance and network planning.
Where IAG differs is its focus on prescriptive optimization. The model does not simply predict when an engine might need service; it chooses the maintenance slot that minimizes ground time across a 700-aircraft portfolio.
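In spirit, prescriptive slot selection can be sketched as a search over candidate maintenance windows. The sketch below is a deliberately simplified assumption, not IAG's method: the `Slot` fields, the constraint set and the greedy choice are all hypothetical stand-ins for a system that actually evaluates millions of scenarios against regulatory, parts, labor and schedule constraints.

```python
from dataclasses import dataclass

@dataclass
class Slot:
    day: int           # days from today
    shop_free: bool    # hangar/shop has capacity that day
    parts_ready: bool  # required parts are in stock
    ground_days: int   # aircraft days lost if maintenance starts here

def best_slot(slots, deadline_day):
    """Pick the feasible slot (before the regulatory deadline, with shop
    capacity and parts available) that loses the fewest aircraft days."""
    feasible = [s for s in slots
                if s.day <= deadline_day and s.shop_free and s.parts_ready]
    if not feasible:
        return None  # no compliant slot exists: escalate to planners
    return min(feasible, key=lambda s: s.ground_days)

candidates = [Slot(5, True, False, 2), Slot(12, True, True, 3), Slot(20, True, True, 1)]
print(best_slot(candidates, deadline_day=25).day)  # prints 20: the cheapest compliant slot
```

A production system would replace the greedy minimum with a constraint or mixed-integer solver, which is what makes fleet-wide re-planning in minutes, rather than one engine at a time, tractable.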
Taking a broader look, the financial upside becomes clear. With the industry set to spend over $100 billion annually on maintenance, repair and overhaul (MRO) by 2030 according to Strategic Market Research's Aircraft MRO Market Size & Forecast report, even single-digit gains have massive implications. McKinsey estimates AI-driven maintenance could cut costs by 20% and eliminate up to half of unscheduled repairs.
There's also a sustainability edge. A smoother shop schedule reduces repositioning flights and last-minute charters, lowering fuel burn and CO₂ emissions and helping airlines meet environmental targets while saving money.
Obstacles on the AI Taxiway
Still, there are bumps ahead. AI relies on clean, consistent data, and aviation data can be messy. Airlines still wrestle with inconsistent logbook entries, paper-based records and parts tagged under multiple naming conventions. IAG spent months cleaning historical files and standardizing schemas before training the model. Integrating these systems with existing workflows, especially under strict safety regulations, adds another layer of complexity.
Change management is equally tough. Engineers used to white-board plans may bristle at a probabilistic recommendation engine. That is why the system presents its schedule, along with the factors that drove each choice, for human sign-off. Trust builds when planners can challenge the AI, tweak a variable and watch the plan update in seconds.
Getting the data right, and earning trust from frontline teams, will be key to long-term success.
Where the Airline Industry Is Heading
AI developments in the industry could push things even further. Technicians could share anonymized model insights across member airlines in a federated-learning loop. This would allow datasets from different airlines and locations to improve each other without exposing commercially sensitive details. Longer term, this could feed the optimization layer with live flight-ops and crew-roster data so that disruption management and maintenance planning draw from a single source of truth.
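The core of such a federated loop is that each airline trains on its own data and shares only model parameters, which a coordinator averages. The sketch below shows the standard federated-averaging idea with hypothetical weight vectors; the airline names and sample counts are illustrative assumptions, not anything IAG has described.

```python
import numpy as np

def federated_average(local_weights, sample_counts):
    """Combine model weights trained separately at each site, weighted by
    how much data each site contributed. Raw records never leave the site;
    only the weight vectors are shared with the coordinator."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))

# Each airline trains locally and shares only its parameter vector.
ba_weights     = np.array([0.2, 0.8])  # hypothetical model parameters
iberia_weights = np.array([0.4, 0.6])
global_weights = federated_average([ba_weights, iberia_weights], [3000, 1000])
print(global_weights)  # prints [0.25 0.75]
```

The commercially sensitive detail (which routes, which engines, which shops) stays inside each airline; only the aggregated parameters improve the shared model.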
If that sounds ambitious, keep in mind that pilots once lugged over 30 pounds of binders to the cockpit in large black roll-aboard suitcases. The electronic flight bag (EFB), a tablet-class device that stores charts, manuals and performance calculators in digital form, changed that. Today they are table stakes. A decade from now, an AI-based scheduler that treats engines, slots and spares as a living puzzle may feel just as ordinary, and IAG will have gained a multi-year head start.


Related Articles

OpenAI turns to Google's AI chips to power its products, source says
Yahoo · 38 minutes ago

(Reuters) - OpenAI has recently begun renting Google's artificial intelligence chips to power ChatGPT and its other products, a source close to the matter told Reuters on Friday. The ChatGPT maker is one of the largest purchasers of Nvidia's graphics processing units (GPUs), using the AI chips to train models and also for inference computing, a process in which an AI model uses its trained knowledge to make predictions or decisions based on new information.

OpenAI planned to add Google Cloud service to meet its growing needs for computing capacity, Reuters had exclusively reported earlier this month, marking a surprising collaboration between two prominent competitors in the AI sector. For Google, the deal comes as it is expanding external availability of its in-house tensor processing units (TPUs), which were historically reserved for internal use. That helped Google win customers including Big Tech player Apple as well as startups like Anthropic and Safe Superintelligence, two ChatGPT-maker competitors launched by former OpenAI leaders.

The move to rent Google's TPUs signals the first time OpenAI has used non-Nvidia chips meaningfully and shows the Sam Altman-led company's shift away from relying on backer Microsoft's data centers. It could potentially boost TPUs as a cheaper alternative to Nvidia's GPUs, according to The Information, which reported the development earlier. OpenAI hopes the TPUs, which it rents through Google Cloud, will help lower the cost of inference, according to the report. However, Google, an OpenAI competitor in the AI race, is not renting its most powerful TPUs to its rival, The Information said, citing a Google Cloud employee.

Google declined to comment, while OpenAI did not immediately respond to Reuters when contacted. Google's addition of OpenAI to its customer list shows how the tech giant has capitalized on its in-house AI technology, from hardware to software, to accelerate the growth of its cloud business.

AMD Keeps Building Momentum In AI, With Plenty Of Work Still To Do
Forbes · 40 minutes ago

At the AMD Advancing AI event in San Jose earlier this month, CEO Lisa Su and her staff showcased the company's progress across many different facets of AI. They had plenty to announce in both hardware and software, including significant performance gains for GPUs, ongoing advances in the ROCm development platform and the forthcoming introduction of rack-scale infrastructure. There were also many references to trust and strong relationships with customers and partners, which I liked, and a lot of emphasis on open hardware and an open development ecosystem, which I think is less of a clear winner for AMD, as I'll explain later.

Overall, I think the event was important for showing how AMD is moving the ball down the field for customers and developers. Under Su, AMD's M.O. is to have clear, ambitious plans and execute against them. Her 'say/do' ratio is high. The company does what it says it will do. This is exactly what it must continue doing to whittle away at Nvidia's dominance in the datacenter AI GPU market. What I saw at the Advancing AI event raised my confidence from last year, although there are a few gaps that need to be addressed. (Note: AMD is an advisory client of my firm, Moor Insights & Strategy.)

AMD's AI Market Opportunity And Full-Stack Strategy

When she took the stage, Su established the context for AMD's announcements by describing the staggering growth that is the backdrop for today's AI chip market. So far, AMD's bullish projections for the growth of the AI chip market have turned out to be accurate. This segment of the chip industry is looking at a TAM of half a trillion dollars by 2028, with the whole AI accelerator market increasing at a 60% CAGR. The AI inference sub-segment, where AMD competes on better footing with Nvidia, is enjoying an 80% CAGR.
People thought that the market numbers AMD cited last year were too high, but not so. This is the world we're living in. For the record, I never doubted the TAM numbers last year.

AMD is carving out a bigger place in this world for itself. As Su pointed out, its Instinct GPUs are used by seven of the 10 largest AI companies, and they drive AI for Microsoft Office, Facebook, Zoom, Netflix, Uber, Salesforce and SAP. Its EPYC server CPUs continue to put up record market share (40% last quarter), and it has built out a full stack, partly through smart acquisitions, to support its AI ambitions. I would point in particular to the ZT Systems acquisition and the introduction of the Pensando DPU and the Pollara NIC.

GPUs are at the heart of datacenter AI, and AMD's new MI350 series was in the spotlight at this event. Although these chips were slated to ship in Q3, Su said that production shipments had in fact started earlier in June, with partners on track to launch platforms and public cloud instances in Q3. There were cheers from the crowd when they heard that the MI350 delivers a 4x performance improvement over the prior generation. AMD says that its high-end MI355X GPU outperforms the Nvidia B200 to the tune of 1.6x memory, 2.2x compute throughput and 40% more tokens per dollar. (Testing by my company Signal65 showed that the MI355X running DeepSeek-R1 produced up to 1.5x higher throughput than the B200.) To put it in a different perspective, a single MI355X can run a 520-billion-parameter model. And I wasn't surprised when Su and others onstage looked ahead to even better performance, maybe 10x better, projected for the MI400 series and beyond. That puts us into the dreamland of an individual GPU running a trillion-parameter model.

By the way, AMD has not forgotten for one second that it is a CPU company.
The EPYC Venice processor scheduled to hit the market in 2026 should be better at absolutely everything: 256 high-performance cores, 70% more compute performance than the current generation and so on. EPYC's rapid gains in datacenter market share over the past few years are no accident, and at this point all the company needs to do for CPUs is hold steady on its current up-and-to-the-right trajectory. I am hopeful that Signal65 will get a crack at testing the claims the company made at the event.

This level of performance is needed in the era of agentic AI and a landscape of many competing and complementary AI models. Su predicts, and I agree, that there will be hundreds of thousands of specialized AI models in the coming years. This is specifically true for enterprises that will have smaller models focused on areas like CRM, ERP, SCM, HCM, legal, finance and so on. To support this, AMD talked at the event about its plan to sustain an annual cadence of Instinct accelerators, adding a new generation every year. Easy to say, hard to do, though, again, AMD has a high say/do ratio these days.

AMD's 2026 Rack-Scale Platform And Current Software Advances

On the hardware side, the biggest announcement was the forthcoming Helios rack-scale GPU product that AMD plans to deliver in 2026. This is a big deal, and I want to emphasize how difficult it is to bring together high-performing CPUs (EPYC Venice), GPUs (MI400) and networking chips (next-gen Pensando Vulcano NICs) in a liquid-cooled rack. It's also an excellent way to take on Nvidia, which makes a mint off of its own rack-scale offerings for AI. At the event, Su said she believes that Helios will be the new industry standard when it launches next year (and cited a string of specs and performance numbers to back that up). It's good to see AMD provide a roadmap this far out, but it also had to after Nvidia did at the GTC event earlier this year.
On the software side, Vamsi Boppana, senior vice president of the Artificial Intelligence Group at AMD, started off by announcing the arrival of ROCm 7, the latest version of the company's open source software platform for GPUs. Again, big improvements come with each generation; in this case, a 3.5x gain in inference performance compared to ROCm 6. Boppana stressed the very high cadence of updates for AMD software, with new features being released every two weeks. He also talked about the benefits of distributed inference, which allows the two steps of inference to be tasked to separate GPU pools, further speeding up the process. Finally, he announced, to a chorus of cheers, the AMD Developer Cloud, which makes AMD GPUs accessible from anywhere so developers can use them to test-drive their ideas.

Last year, Meta had kind things to say about ROCm, and I was impressed because Meta is the hardest 'grader' next to Microsoft. This year, I heard companies talking about both training and inference, and again I'm impressed. (More on that below.) It was also great getting some time with Anush Elangovan, vice president for AI software at AMD, for a video I shot with him. Elangovan is very hardcore, which is exactly what AMD needs. Real grinders. Nightly code drops.

What's Working Well For AMD in AI

So that's (most of) what was new at AMD Advancing AI. In the next three sections, I want to talk about the good, the needs-improvement and the yet-to-be-determined aspects of what I heard during the event. Let's start with the good things that jumped out at me.

What Didn't Work For Me At Advancing AI

While overall I thought Advancing AI was a win for AMD, there were two areas where I thought the company missed the mark: one by omission, one by commission.

The Jury Is Out On Some Elements Of AMD's AI Strategy

In some areas, I suspect that AMD is doing okay or will be doing okay soon, but I'm just not sure.
I can't imagine that any of the following items has completely escaped AMD's attention, but I would recommend that the company address them candidly so that customers know what to expect and can maintain high confidence in what AMD is delivering.

What Comes Next In AMD's AI Development

It is very difficult to engineer cutting-edge semiconductors, let alone rack-scale systems and all the attendant software, on the steady cadence that AMD is maintaining. So kudos to Su and everyone else at the company who's making that happen. But my confidence (and Wall Street's) would rise if AMD provided more granularity about what it's doing, starting with datacenter GPU forecasts. Clearly, AMD doesn't need to compete with Nvidia on every single thing to be successful. But it would be well served to fill in some of the gaps in its story to better speak to the comprehensive ecosystem it's creating.

Having spent plenty of time working inside companies on both the OEM and semiconductor sides, I do understand the difficulties AMD faces in providing that kind of clarity. The process of landing design wins can be lumpy, and a few of the non-AMD speakers at Advancing AI mentioned that the company is engaged in the 'bake-offs' that are inevitable in that process. Meanwhile, we're left to wonder what might be holding things back, other than AMD's institutional conservatism, the healthy reticence of engineers not to make any claims until they're sure of the win.

That said, with Nvidia's B200s sold out for the next year, you'd think that AMD should be able to sell every wafer it makes, right? So are AMD's yields not good enough yet? Or are hyperscalers having their own problems scaling and deploying? Is there some other gating item? I'd love to know. Please don't take any of my questions the wrong way, because AMD is doing some amazing things, and I walked away from the Advancing AI event impressed with the company's progress.
At the show, Su was forthright about describing the pace of this AI revolution we're living in — 'unlike anything we've seen in modern computing, anything we've seen in our careers, and frankly, anything we've seen in our lifetime.' I'll keep looking for answers to my nagging questions, and I'm eager to see how the competition between AMD and Nvidia plays out over the next two years and beyond. Meanwhile, AMD moved down the field at its event, and I look forward to seeing where it is headed.

CoreWeave, Inc. (CRWV) Is One Of The Most Bullish Things In My Career, Says Jim Cramer
Yahoo · an hour ago

CoreWeave, Inc. (NASDAQ:CRWV) is an AI infrastructure company that provides businesses with hardware to let them run their AI applications. It is one of the few pure-play firms of its kind, and the shares have gained a whopping 298% since their IPO in March. Soon after the IPO, Cramer dismissed news reports that CoreWeave had been created specially by NVIDIA to create demand for AI GPUs. This time around, he commented on the strong share price performance, which indicated that even Cramer hadn't expected CoreWeave's shares to perform the way that they did:

'Look can I just say that these are some of the most bullish things I've seen in my career? That CoreWeave could have been priced at 40 and it went to 178. That this Circle just keeps being bought, that Palantir keeps being bought. That a Broadcom is going, that Goldman is going. . .'

Recently, the CNBC host discussed some of the drivers of CoreWeave's share price performance:

'Take CoreWeave. This is the company that came public at $40 a share, [a] company I recommended and pushed incredibly hard. People didn't believe me. Many chose to bet against the stock. 32% of the shares are sold short. On Friday, CoreWeave hit a high of $187. Stock still sits at $172 and change. Sure, it reported a great quarter, but a lot of this move is because so many people were betting against it because the company picked a bad time to come public. I had tremendous conviction that CoreWeave would make a big move, but not this big. Again, I think discipline must trump conviction, and you gotta do some selling here. Well, I still like the stock. I recognize that much of this move was powered by panicked short sellers. Take something off the table, please.'
While we acknowledge the potential of CRWV as an investment, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns and have limited downside risk. If you are looking for an extremely cheap AI stock that is also a major beneficiary of Trump tariffs and onshoring, see our free report on the best short-term AI stock. READ NEXT: 20 Best AI Stocks To Buy Now and 30 Best Stocks to Buy Now According to Billionaires. Disclosure: None. This article is originally published at Insider Monkey.
