
Latest news with #RedHatOpenShiftAI

Turkish Airlines Pioneers AI-led Innovation for Aviation with Red Hat OpenShift AI

Business Wire

21 May 2025


BOSTON – RED HAT SUMMIT--(BUSINESS WIRE)--Red Hat, the world's leading provider of open source solutions, today announced that Turkish Airlines (IST: THYAO), Türkiye's flagship carrier, has selected Red Hat to help it transform into a data- and AI-driven organization. Using Red Hat OpenShift AI and Red Hat OpenShift, Turkish Airlines can create new development environments in minutes rather than hours and has doubled the speed of deployments while empowering its data scientists to harness data from across the organization.

Turkish Airlines, the airline that flies to more countries than any other, operates an extensive global network of scheduled flights to 353 destinations across Europe, Asia, Oceania, Africa and the Americas. With nearly 90,000 employees, the airline serves a diverse range of customers across both business-to-consumer (B2C) and business-to-business (B2B) segments.

Scalability through open source AI platforms

Turkish Airlines sought to improve operational efficiency across its processes and services, optimize technical maintenance, increase employee productivity and drive both employee and customer satisfaction with the help of AI. The carrier chose Red Hat for its extensive scalability and open source nature. Open source technology was key for Turkish Airlines to avoid vendor lock-in, while supporting its long-term goal of moving to a hybrid cloud environment as demand for both predictive and generative AI (gen AI) grows. In collaboration with Red Hat Consulting, Turkish Airlines implemented a Red Hat OpenShift AI environment and integrated it with existing systems and requirements, including custom development environments, automated deployment for AI models and pipelines, and custom monitoring and alerts.
When renewing or creating applications, infrastructure teams used the world's leading enterprise Linux platform, Red Hat Enterprise Linux, for added integration benefits. Project scaling is now automatic thanks to Red Hat OpenShift AI's provisioning and autoscaling capabilities, freeing operations teams to focus on value-add projects, while embedded monitoring saves further time by automating fixes, making teams more independent and decreasing maintenance costs. Blockers around resource reallocation can now be avoided, allowing multiple data scientists to work with the same datasets equally. Running on Red Hat OpenShift, new bespoke environments for engineers can be built in minutes rather than hours, freeing teams to innovate and explore new technology quickly.

Everyday AI made real

Enabling teams to use AI in their everyday operations was a major focus for Turkish Airlines, which wanted to create a new generation of citizen data scientists able to develop their own AI projects. With the new platform, both business and data teams can access and develop with greater ease, and Turkish Technology's AI projects are targeted to boost revenue while reducing operational costs. As part of this rollout, Red Hat OpenShift AI has also enhanced productivity in the deployment phase, making the process more efficient and doubling deployment speed.
Choosing Red Hat for its open source AI platform, Turkish Airlines has been able to improve services in the following ways:

  • Improved accuracy of dynamic pricing models for airline ticket sales
  • Enhanced models for fraud prevention – whether for customer card payments or via its loyalty program, Miles&Smiles, Turkish Airlines can screen with a higher degree of accuracy
  • Customer service chatbots, built on Red Hat OpenShift AI and deployed on open source large language models (LLMs), rolled out to boost both customer satisfaction and employee productivity
  • Tail assignment optimization – taking into account flight routes, aircraft details and fuel consumption, models can confirm that the correct aircraft is picked for every route, leading to greater fuel savings
  • Added insights into operations for smoother service, including improved performance on ground plan prediction, block time prediction and on-time performance (OTP) prediction, where operations teams can act proactively on a bad prediction to minimise delays

AI initiatives at Turkish Airlines currently span more than 60 live models, with at least 40 more in development. More than 200 Turkish Airlines employees are working on AI-based development, and that number is set to grow. Red Hat helped build trust in the platform among Turkish Airlines staff and contributed to its growing use, breaking down barriers between teams by supporting data scientists in using Red Hat OpenShift AI and Red Hat OpenShift and opening new lines of communication. A more open approach to experimenting and innovating with new technologies is now evident across the organization, primed to support future ambitions and growth.

Red Hat's vision: Any model, any accelerator, any cloud

The future of AI must be defined by limitless opportunity, not constrained by infrastructure silos.
Red Hat sees a horizon where organizations can deploy any model, on any accelerator, across any cloud, delivering an exceptional, more consistent user experience without exorbitant costs. To unlock the true potential of gen AI investments, enterprises require a universal inference platform – a standard for more seamless, high-performance AI innovation, both today and in the years to come.

Red Hat Summit

Join the Red Hat Summit keynotes to hear the latest from Red Hat executives, customers and partners:

  • Modernized infrastructure meets enterprise-ready AI – Tuesday, May 20, 8-10 a.m. EDT (YouTube)
  • Hybrid cloud evolves to deliver enterprise innovation – Wednesday, May 21, 8-9:30 a.m. EDT (YouTube)

Supporting Quotes

Haluk Tekin, country manager, Türkiye and CIS, Red Hat
'Creating citizen data scientists is a crucial achievement for enterprise-wide AI adoption, and we are honoured to collaborate with Turkish Airlines, the airline flying to more countries than any other, in this pursuit by making AI tools and platforms more accessible to every employee. Red Hat is committed to supporting the wide-scale cultural and technological transformation at Turkish Airlines, while also helping to improve operational flexibility, IT agility and AI application development speed through cross-team collaboration.'

Serdar Gürbüz, General Manager, Turkish Technology, a Turkish Airlines subsidiary
'As Turkish Airlines continues its journey to becoming a fully data-driven organization, our focus is on embedding AI across every aspect of the business. From operational efficiency to customer experience, AI is strongly supporting our strategic decision-making processes. To enable this transformation, we have built a secure, scalable infrastructure based on open architectures. In this journey, Red Hat has served not just as a technology provider, but as a partner whose open source philosophy aligns closely with our own.
Platforms like Red Hat OpenShift AI allow us to keep data on-premises while accelerating model development and enabling business units to leverage AI capabilities in a flexible way. This approach is helping us move AI beyond isolated use cases and into a scalable, organization-wide capability. As Turkish Airlines, we believe that the use of AI will play a vital role in enhancing our key achievements in the aviation industry.'

Additional Resources

  • Learn about recent enhancements to Red Hat OpenShift AI
  • Learn more about Turkish Airlines
  • Learn more about Red Hat OpenShift AI
  • Read more Red Hat customer success stories
  • Learn more about Red Hat Summit
  • See all of Red Hat's announcements this week in the Red Hat Summit newsroom
  • Follow @RedHatSummit or #RHSummit on X for event-specific updates
  • Connect with Red Hat

About Red Hat

Red Hat is the open hybrid cloud technology leader, delivering a trusted, consistent and comprehensive foundation for transformative IT innovation and AI applications. Its portfolio of cloud, developer, AI, Linux, automation and application platform technologies enables any application, anywhere, from the datacenter to the edge. As the world's leading provider of enterprise open source software solutions, Red Hat invests in open ecosystems and communities to solve tomorrow's IT challenges. Collaborating with partners and customers, Red Hat helps them build, connect, automate, secure and manage their IT environments, supported by consulting services and award-winning training and certification offerings.

About Turkish Airlines and Turkish Technology

Established in 1933 with a fleet of five aircraft, Star Alliance member Turkish Airlines has a fleet of 479 (passenger and cargo) aircraft flying to 353 destinations worldwide (300 international and 53 domestic) in 131 countries. Founded in 2021, Turkish Technology is the airline's IT subsidiary, developing software that addresses the technological needs of the aviation and air cargo industries.
Forward-Looking Statements

Except for the historical information and discussions contained herein, statements contained in this press release may constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are based on the company's current assumptions regarding future business and financial performance. These statements involve a number of risks, uncertainties and other factors that could cause actual results to differ materially. Any forward-looking statement in this press release speaks only as of the date on which it is made. Except as required by law, the company assumes no obligation to update or revise any forward-looking statements.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo and OpenShift are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the U.S. and other countries. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.

Red Hat & NVIDIA unveil hybrid cloud solution for AI agents

Techday NZ

21 May 2025


Red Hat has announced integration with the NVIDIA Enterprise AI Factory validated design, aiming to facilitate the deployment of agentic AI systems across hybrid cloud environments. The collaboration utilises NVIDIA RTX PRO Servers and NVIDIA Blackwell B200 systems, running on the Red Hat AI portfolio including Red Hat OpenShift AI, to support advanced generative and agentic AI workloads. Red Hat states that this move will provide enterprises with a flexible software foundation to deploy, operate and scale AI agents reliably.

Chris Wright, Chief Technology Officer and Senior Vice President, Global Engineering at Red Hat, emphasised the significance of the integration: "AI agents represent the near-term future of enterprise AI, where fast-moving, independent models can speed through any number of tasks to free organizations for higher level innovation. Red Hat OpenShift AI is an ideal platform to run these workloads delivered by NVIDIA Enterprise AI Factory validated designs, offering the scale, flexibility and power necessary to deliver product-ready generative AI across the hybrid cloud."

Red Hat's integration will enable NVIDIA Blackwell architecture support across Red Hat AI platforms, with a reference architecture for the NVIDIA Enterprise AI Factory developed on Red Hat OpenShift AI. This architecture has been fully verified and performance-tested to address the growing demand for scaling AI agents. The company highlights OpenShift AI's capabilities for consistent deployment and management of AI agents, leveraging vLLM-based inference, as well as enhanced observability and monitoring features.

The reference architecture makes use of NVIDIA NIM microservices accessible through the application catalogue, permitting organisations to assemble a fully validated AI software stack. This design allows operation of on-premises AI factories equipped with NVIDIA RTX PRO Servers and NVIDIA Blackwell B200 systems.
Justin Boitano, Vice President, Enterprise AI at NVIDIA, said: "NVIDIA and Red Hat are pioneering the future of enterprise AI by integrating Red Hat OpenShift AI with the NVIDIA Enterprise AI Factory validated design. Together, we're creating a more seamless, full-stack platform that IT can use as the foundation for transforming business data into actionable agentic AI intelligence."

Red Hat stresses that its enterprise AI approach combines open source technologies, the adaptability of a hybrid cloud model, and partnership with multiple stakeholders across the AI development ecosystem. The company collaborates with hardware providers and system integrators, aiming to support customers with comprehensive AI solutions that match specific business requirements. This broader ecosystem-centric position, Red Hat notes, enables wider adoption of tools like the NVIDIA Enterprise AI Factory validated design and is intended to meet customers' needs at various stages of the AI development lifecycle, on both cloud and on-premises infrastructures.

Red Hat launches enterprise AI inference server for hybrid cloud

Techday NZ

21 May 2025


Red Hat has introduced Red Hat AI Inference Server, an enterprise-grade offering aimed at enabling generative artificial intelligence (AI) inference across hybrid cloud environments.

Red Hat AI Inference Server leverages the vLLM community project, started at the University of California, Berkeley. Through Red Hat's integration of Neural Magic technologies, the solution aims to deliver higher speed, improved efficiency with a range of AI accelerators, and reduced operational costs. The platform is designed to allow organisations to run generative AI models on any AI accelerator within any cloud infrastructure. The solution can be deployed as a standalone containerised offering or as part of Red Hat Enterprise Linux AI (RHEL AI) and Red Hat OpenShift AI. Red Hat says this approach is intended to empower enterprises to deploy and scale generative AI in production with increased confidence.

Joe Fernandes, Vice President and General Manager of Red Hat's AI Business Unit, commented on the launch: "Inference is where the real promise of gen AI is delivered, where user interactions are met with fast, accurate responses delivered by a given model, but it must be delivered in an effective and cost-efficient way. Red Hat AI Inference Server is intended to meet the demand for high-performing, responsive inference at scale while keeping resource demands low, providing a common inference layer that supports any model, running on any accelerator in any environment."

The inference phase in AI refers to the process in which pre-trained models are used to generate outputs, a stage that can be a significant inhibitor to performance and cost efficiency if not managed appropriately. The increasing complexity and scale of generative AI models have highlighted the need for robust inference solutions capable of handling production deployments across diverse infrastructures.
Red Hat AI Inference Server builds on the technology foundation established by the vLLM project. vLLM is known for high-throughput AI inference, the ability to handle large input contexts, acceleration across multiple GPUs, and continuous batching to enhance deployment versatility. vLLM also supports a broad range of publicly available models, including DeepSeek, Google's Gemma, Llama, Llama Nemotron, Mistral, and Phi, among others. Its integration with leading models and enterprise-grade reasoning capabilities positions it as a candidate standard for AI inference innovation.

The packaged enterprise offering delivers a supported and hardened distribution of vLLM, with several additional tools. These include intelligent large language model (LLM) compression utilities to reduce AI model sizes while preserving or enhancing accuracy, and an optimised model repository hosted under Red Hat AI on Hugging Face. This repository provides access to validated and optimised AI models tailored for inference, designed to help improve efficiency by two to four times without compromising the accuracy of results. Red Hat also provides enterprise support, drawing on its expertise in bringing community-developed technologies into production. For expanded deployment options, Red Hat AI Inference Server can be run on non-Red Hat Linux and Kubernetes platforms in line with the company's third-party support policy.

The company's stated vision is to enable a universal inference platform that can accommodate any model, run on any accelerator, and be deployed in any cloud environment. Red Hat sees the success of generative AI relying on the adoption of such standardised inference solutions to ensure consistent user experiences without increasing costs.

Ramine Roane, Corporate Vice President of AI Product Management at AMD, said: "In collaboration with Red Hat, AMD delivers out-of-the-box solutions to drive efficient generative AI in the enterprise.
Red Hat AI Inference Server enabled on AMD Instinct™ GPUs equips organizations with enterprise-grade, community-driven AI inference capabilities backed by fully validated hardware accelerators."

Jeremy Foster, Senior Vice President and General Manager at Cisco, commented on the joint opportunities provided by the offering: "AI workloads need speed, consistency, and flexibility, which is exactly what the Red Hat AI Inference Server is designed to deliver. This innovation offers Cisco and Red Hat opportunities to continue to collaborate on new ways to make AI deployments more accessible, efficient and scalable, helping organizations prepare for what's next."

Bill Pearson, Vice President of Data Center & AI Software Solutions and Ecosystem at Intel, said: "Intel is excited to collaborate with Red Hat to enable Red Hat AI Inference Server on Intel Gaudi accelerators. This integration will provide our customers with an optimized solution to streamline and scale AI inference, delivering advanced performance and efficiency for a wide range of enterprise AI applications."

John Fanelli, Vice President of Enterprise Software at NVIDIA, added: "High-performance inference enables models and AI agents not just to answer, but to reason and adapt in real time. With open, full-stack NVIDIA accelerated computing and Red Hat AI Inference Server, developers can run efficient reasoning at scale across hybrid clouds, and deploy with confidence using Red Hat Inference Server with the new NVIDIA Enterprise AI validated design."

Red Hat has stated its intent to build further on the vLLM community and to drive development of distributed inference technologies such as llm-d, aiming to establish vLLM as an open standard for inference in hybrid cloud environments.
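To make the "standalone containerised offering" deployment mode described above concrete, the sketch below shows what serving a model as a single Kubernetes Deployment could look like. This is a hedged illustration only: the image reference, model name, arguments, and port are assumptions for the sketch, not documented product values.

```yaml
# Hypothetical sketch: one inference-server replica serving one model.
# Image, model, args, and port are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-inference-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ai-inference-server
  template:
    metadata:
      labels:
        app: ai-inference-server
    spec:
      containers:
        - name: inference
          image: registry.example.com/ai/inference-server:latest  # assumption
          args: ["--model", "example-org/example-7b", "--port", "8000"]
          ports:
            - containerPort: 8000
          resources:
            limits:
              nvidia.com/gpu: "1"  # one accelerator per replica
```

In practice a Service and, on OpenShift, a Route would sit in front of this Deployment; the point of the sketch is simply that the containerised form factor slots into standard Kubernetes scheduling and GPU resource limits.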

F5 Expands Strategic Collaboration With Red Hat To Enable Scalable, Secure Enterprise AI

Scoop

21 May 2025


Press Release – F5: Solutions address key challenges in enterprise AI adoption, enabling secure model serving, scalable data movement, and real-time inference across environments.

F5 (NASDAQ: FFIV), the global leader in delivering and securing every app and API, today announced an expanded collaboration with Red Hat, the world's leading provider of open source solutions, to help enterprises deploy and scale secure, high-performance AI applications. By enabling integration of the F5 Application Delivery and Security Platform with Red Hat OpenShift AI, F5 customers can adopt AI faster and more securely, focusing on practical, high-value use cases such as retrieval-augmented generation (RAG), secure model serving, and scalable data ingestion.

'Enterprises are eager to harness the power of AI, but they face significant challenges in scaling and securing these applications,' said Kunal Anand, Chief Innovation Officer at F5. 'Our collaboration with Red Hat aims to simplify this journey by providing integrated solutions that address performance, security, and observability needs, enabling organisations to realise tangible AI outcomes.'

This collaboration comes at a time when AI adoption is accelerating. According to F5's 2025 State of Application Strategy Report, 96 per cent of organisations are now deploying AI models, a significant increase from just 25 per cent in 2023. Additionally, the report highlights that 72 per cent of respondents aim to use AI to optimise application performance, while 59 per cent focus on cost optimisation and security enhancements.

To support these growing demands, F5 is collaborating with Red Hat to focus on the real-world building blocks enterprises need to operationalise AI. From securing data pipelines to optimising inference performance, F5 solutions are tailored to help organisations deploy AI with confidence, speed, and control.
Key areas of collaboration include:

  • RAG and model serving at scale – F5 supports AI-powered applications on Red Hat OpenShift AI that combine large language models with private datasets, helping to ensure secure data flow, high GPU utilisation, and fast response times.
  • Big data movement and ingestion – With MinIO and F5 working in tandem on Red Hat OpenShift AI, customers can accelerate the ingestion of large datasets for training and inference.
  • API-first AI security – F5 provides robust protection against evolving threats like prompt injection, model theft, and data leakage through its F5 Distributed Cloud WAAP and F5 BIG-IP solutions.

As part of its vision, F5 is committed to driving open source innovation through its collaboration with Red Hat. Red Hat OpenShift AI provides a modular, open platform for building and deploying AI applications across hybrid environments, while F5's API Gateway and AI security capabilities are designed to integrate more seamlessly, without locking customers into a single cloud or toolset. With this collaboration, F5 is helping organisations take an open, flexible approach to AI infrastructure using Red Hat OpenShift AI.

'As AI becomes core to how businesses operate and compete, organisations need platforms that offer flexibility without compromising security,' said Joe Fernandes, Vice President and General Manager, AI Business Unit, Red Hat. 'We believe the future of AI is open source, and Red Hat OpenShift AI, when used in combination with F5's robust security and observability, gives organisations the necessary tools to build and scale AI applications with greater confidence, anywhere they choose to run them.'

The collaboration will be featured at this week's Red Hat Summit 2025 (May 19–22 in Boston), where F5 and its partners will highlight real-world AI use cases, including secure model serving and RAG workloads, built on Red Hat OpenShift AI.

Supporting Resources

About F5

F5, Inc.
(NASDAQ: FFIV) is the global leader that delivers and secures every app. Backed by three decades of expertise, F5 has built the industry's premier platform, the F5 Application Delivery and Security Platform (ADSP), to deliver and secure every app and every API anywhere: on-premises, in the cloud, at the edge, and across hybrid, multicloud environments. F5 is committed to innovating and partnering with the world's largest and most advanced organizations to deliver fast, available, and secure digital experiences. Together, we help each other thrive and bring a better digital world to life.
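The RAG use case highlighted above combines a language model with private datasets: relevant documents are retrieved first, then injected into the model's prompt. A minimal, framework-free sketch of that pattern is below; the word-overlap retriever and all names are illustrative stand-ins (a real deployment would use a vector store and a served model endpoint).

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Retrieval here is a toy word-overlap scorer, not a production ranker.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the model."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

# Illustrative private dataset.
docs = [
    "Flight TK1 departs Istanbul at 09:00.",
    "Miles&Smiles members earn points on every flight.",
    "The cargo terminal is open 24 hours.",
]
query = "When does flight TK1 depart?"
prompt = build_prompt(query, retrieve(query, docs))
```

The security concerns the article lists (prompt injection, data leakage) arise precisely at the `build_prompt` step, where untrusted retrieved text is concatenated into the model input, which is why the collaboration pairs RAG with API-level inspection.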

Red Hat launches Advanced Developer Suite with focus on AI

Techday NZ

21 May 2025


Red Hat has announced the launch of Red Hat Advanced Developer Suite, a new addition to its OpenShift platform aimed at improving developer productivity and application security while supporting the integration of Red Hat AI technologies.

Red Hat Advanced Developer Suite is designed to address two key priorities in modern software engineering: increasing developer productivity and integrating artificial intelligence (AI) into applications. A recent Gartner survey cited in Red Hat's announcement found that integrating AI and boosting developer productivity are among the top three strategic goals for software engineering departments in 2024, both registering at 48%.

The suite provides tools that enable platform engineering and development teams to collaborate on creating "golden paths": software templates that transparently deliver infrastructure, application services, toolchains, and policy best practices. These templates aim to help developers deliver applications with greater speed and security, and now include software templates from Red Hat AI.

The offering combines several components, notably Red Hat Developer Hub. This internal developer portal, based on Backstage, the open source framework from the Cloud Native Computing Foundation, now features an upgraded AI-centric user experience, including pre-configured software templates for common AI use cases, deployable on Red Hat OpenShift AI.

Other elements of the suite include Red Hat Trusted Profile Analyzer, which manages software bills of materials (SBOMs), vulnerability exploitability exchange (VEX) documents, and common vulnerabilities and exposures (CVEs). This tool is intended to provide risk intelligence to developers and DevSecOps teams. Red Hat Trusted Artifact Signer is another component, offering production-ready software artifact signing and verification through the Sigstore project.
This capability now extends to AI models packaged in OCI format, aiming to ensure that only trusted models are deployed in production environments.

Integration with a range of existing Red Hat tools and third-party offerings allows organisations to incorporate the suite into current workflows, including Red Hat OpenShift, Red Hat OpenShift Pipelines, Red Hat OpenShift GitOps, and other continuous integration and delivery solutions. Tools such as the migration toolkit for applications, Podman Desktop, and Red Hat IDE Plugins can further enhance developer velocity and help streamline the development and migration of software. Red Hat OpenShift Dev Spaces gives developers access to a cloud development environment with the necessary tooling, templates, and security requirements embedded into the process, supporting efficient onboarding for developers and contractors.

Security is a central focus, with Red Hat positioning the Advanced Developer Suite as a way to combine speed and efficiency with robust supply chain protection. Integrating Trusted Artifact Signer and Trusted Profile Analyzer is designed to help organisations detect potential security vulnerabilities early and maintain oversight throughout the software development lifecycle. These tools are intended to create an audit trail for software development activities, which can inform decisions on risk and provide actionable intelligence about the security posture of the software supply chain.

Developers seeking to incorporate AI, particularly large language models (LLMs), into their applications face a range of challenges, according to Red Hat. The suite's integrations with Red Hat AI Inference Server, Red Hat Enterprise Linux AI (RHEL AI), and Red Hat OpenShift AI aim to address these, enabling the development of AI-enabled and cloud-native applications with an enhanced security focus.
Red Hat Developer Hub within the suite includes AI-focused software templates designed for scenarios such as chatbots, audio-to-text, object detection, code generation, and retrieval-augmented generation. These templates are intended to provide a supported, pre-architected path for developers, without requiring a deep understanding of the underlying AI technology. An upcoming AI landing page will further assist developers in getting started.

Mike Barrett, Vice President of Hybrid Platforms at Red Hat, commented on the launch: "We are excited about the release of Red Hat Advanced Developer Suite as it brings together several solutions in a manner that will allow our customers faster use of the technologies. It pivots the focus towards developer productivity by way of platform engineering technologies for your largest platform investments. In addition to that focus, we have also been able to turn those technologies and developer experiences towards integrating with Red Hat AI solutions. Speed, security and AI innovation should never be mutually exclusive. We feel that by tightly integrating Red Hat Advanced Developer Suite with the rest of the Red Hat portfolio and open source ecosystems, we will ensure speed, security and AI innovation are always top of mind."
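Since Red Hat Developer Hub is based on Backstage, the "golden path" templates described above take the shape of Backstage software templates. The sketch below shows the general form such a template takes; the template name, skeleton path, and parameters are illustrative assumptions, not taken from Red Hat's shipped templates.

```yaml
# Hypothetical sketch of a Backstage software template, the mechanism
# behind "golden paths". All metadata and values are illustrative.
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: chatbot-golden-path            # assumption
  title: AI Chatbot Service
  description: Scaffold a chatbot service wired to a model endpoint.
spec:
  type: service
  parameters:
    - title: Service details
      required: [name]
      properties:
        name:
          type: string
          description: Unique name for the new component.
  steps:
    - id: fetch
      name: Fetch skeleton
      action: fetch:template
      input:
        url: ./skeleton                 # assumption: bundled skeleton dir
        values:
          name: ${{ parameters.name }}
```

A template like this is what lets a developer pick "AI Chatbot Service" from the portal and receive a repository pre-wired with the toolchain and policy choices the platform team has approved.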
