Latest news with #RHELAI
Yahoo
5 days ago
- Business
- Yahoo
Analyst Recommends This Top Dividend Growth Stock for 'Ability to Sleep at Night'
David Bahnsen, CIO of The Bahnsen Group, recently spoke on CNBC about the importance of dividend growth stocks, arguing that the "ability to sleep at night" these stocks give investors comes from their strong track record. He believes dividend growth "immunizes" investors from volatility:

"I think you get some of that ability to sleep at night if you're not so reliant on expensive things getting more expensive. That's really the key: you already have the things you're talking about, top-down macro uncertainty, tariff policy, geopolitics, things like that. But when you combine that with high PEs that you just need to get higher in order to see your investments do well, I think that becomes problematic. Dividend growth immunizes investors from some of that."

Bahnsen then explained why he loves International Business Machines (NYSE:IBM) as a dividend growth play:

"Well, I love talking about IBM, and that's one of the names that's done really well this year. There's other names that haven't done as well, which I like even more because they're cheaper. IBM's up 20%, but here's the thing: it's trading at 17 or 18 times what 2025 free cash flow will be. You're talking about that up against other names trading at 40, 50, 60 times what their free cash flow may be. IBM has grown the dividend, Kelly, every single year since 1994. Think about how much has gone on in the world over those 30-plus years. IBM's grown the dividend every year in that period. Impressive. So we just think it's a great name that's tethered to both old tech and new tech."

IBM is indeed making a comeback. As of the end of Q4, IBM's AI products and services had surpassed $5 billion in total bookings, with $2 billion added just since the previous quarter. Last year, IBM updated its Granite family of AI models for enterprise use, making them about 90% more cost-efficient than large models.
Red Hat is also key to IBM's open-source generative AI strategy. Management highlighted that the RHEL AI and OpenShift AI platforms are gaining traction, along with IBM's watsonx AI solutions. The company expects its software business to grow by at least 10% in 2025, up from 8.3% growth in 2024. While we acknowledge the potential of IBM as an investment, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns with more limited downside risk. Disclosure: None. This article was originally published at Insider Monkey.


Techday NZ
21-05-2025
- Business
- Techday NZ
Red Hat launches enterprise AI inference server for hybrid cloud
Red Hat has introduced Red Hat AI Inference Server, an enterprise-grade offering aimed at enabling generative artificial intelligence (AI) inference across hybrid cloud environments.

The server builds on the vLLM community project, originally started at the University of California, Berkeley. Through Red Hat's integration of Neural Magic technologies, the solution aims to deliver higher speed, improved efficiency across a range of AI accelerators, and reduced operational costs. The platform is designed to allow organisations to run generative AI models on any AI accelerator within any cloud infrastructure. It can be deployed as a standalone containerised offering or as part of Red Hat Enterprise Linux AI (RHEL AI) and Red Hat OpenShift AI. Red Hat says this approach is intended to empower enterprises to deploy and scale generative AI in production with increased confidence.

Joe Fernandes, Vice President and General Manager of Red Hat's AI Business Unit, commented on the launch: "Inference is where the real promise of gen AI is delivered, where user interactions are met with fast, accurate responses delivered by a given model, but it must be delivered in an effective and cost-efficient way. Red Hat AI Inference Server is intended to meet the demand for high-performing, responsive inference at scale while keeping resource demands low, providing a common inference layer that supports any model, running on any accelerator in any environment."

The inference phase of AI is the process by which pre-trained models generate outputs; if not managed appropriately, it can become a significant drag on performance and cost efficiency. The increasing complexity and scale of generative AI models have highlighted the need for robust inference solutions capable of handling production deployments across diverse infrastructures.
The Red Hat AI Inference Server builds on the technology foundation established by the vLLM project. vLLM is known for high-throughput AI inference, the ability to handle long input contexts, acceleration across multiple GPUs, and continuous batching that enhances deployment versatility. vLLM also supports a broad range of publicly available models, including DeepSeek, Google's Gemma, Llama, Llama Nemotron, Mistral, and Phi, among others. Its integration with leading models and enterprise-grade reasoning capabilities positions it as a candidate to become a standard for AI inference.

The packaged enterprise offering delivers a supported and hardened distribution of vLLM, along with several additional tools. These include intelligent large language model (LLM) compression utilities that reduce AI model sizes while preserving or enhancing accuracy, and an optimised model repository hosted under Red Hat AI on Hugging Face. The repository provides instant access to validated and optimised AI models tailored for inference, designed to improve efficiency by two to four times without compromising the accuracy of results. Red Hat also provides enterprise support, drawing on its expertise in bringing community-developed technologies into production. For expanded deployment options, the Red Hat AI Inference Server can run on non-Red Hat Linux and Kubernetes platforms in line with the company's third-party support policy.

The company's stated vision is a universal inference platform that can accommodate any model, run on any accelerator, and be deployed in any cloud environment. Red Hat sees the success of generative AI as depending on the adoption of such standardised inference solutions to ensure consistent user experiences without increasing costs.

Ramine Roane, Corporate Vice President of AI Product Management at AMD, said: "In collaboration with Red Hat, AMD delivers out-of-the-box solutions to drive efficient generative AI in the enterprise.
Red Hat AI Inference Server enabled on AMD Instinct GPUs equips organizations with enterprise-grade, community-driven AI inference capabilities backed by fully validated hardware accelerators."

Jeremy Foster, Senior Vice President and General Manager at Cisco, commented on the joint opportunities provided by the offering: "AI workloads need speed, consistency, and flexibility, which is exactly what the Red Hat AI Inference Server is designed to deliver. This innovation offers Cisco and Red Hat opportunities to continue to collaborate on new ways to make AI deployments more accessible, efficient and scalable, helping organizations prepare for what's next."

Intel's Bill Pearson, Vice President of Data Center & AI Software Solutions and Ecosystem, said: "Intel is excited to collaborate with Red Hat to enable Red Hat AI Inference Server on Intel Gaudi accelerators. This integration will provide our customers with an optimized solution to streamline and scale AI inference, delivering advanced performance and efficiency for a wide range of enterprise AI applications."

John Fanelli, Vice President of Enterprise Software at NVIDIA, added: "High-performance inference enables models and AI agents not just to answer, but to reason and adapt in real time. With open, full-stack NVIDIA accelerated computing and Red Hat AI Inference Server, developers can run efficient reasoning at scale across hybrid clouds, and deploy with confidence using Red Hat Inference Server with the new NVIDIA Enterprise AI validated design."

Red Hat has stated its intent to further build upon the vLLM community and to drive development of distributed inference technologies such as llm-d, aiming to establish vLLM as an open standard for inference in hybrid cloud environments.
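The continuous batching the article credits to vLLM can be illustrated with a toy scheduler. This is a conceptual sketch in plain Python, not vLLM's actual implementation; the request names and token counts are invented for illustration. The key idea: new requests join the running batch as soon as a slot frees up, instead of waiting for the entire batch to drain as in static batching.

```python
from collections import deque

def continuous_batching(requests, max_batch=4):
    """Toy scheduler illustrating continuous batching.

    Each request is (name, tokens), where `tokens` is the number of
    decode steps it needs. New requests join the active batch the moment
    a slot frees up, instead of waiting for the whole batch to finish
    (static batching). Returns (total_steps, finish_order).
    """
    waiting = deque(requests)
    active, finished, steps = [], [], 0
    while waiting or active:
        # Fill any free batch slots from the wait queue (the "continuous" part).
        while waiting and len(active) < max_batch:
            name, tokens = waiting.popleft()
            active.append([name, tokens])
        steps += 1  # one decode step advances every active sequence
        for req in active:
            req[1] -= 1
        # Completed sequences leave the batch immediately, freeing slots.
        finished.extend(name for name, remaining in active if remaining == 0)
        active = [req for req in active if req[1] > 0]
    return steps, finished
```

With a sample workload of requests needing 2, 5, 1, 3, and 2 decode steps and a batch size of 2, this scheduler finishes in 7 steps, whereas static batches of two (each waiting for its longest member) would take max(2,5) + max(1,3) + 2 = 10 steps. That gap is the throughput advantage the article attributes to vLLM-style inference.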


Forbes
01-04-2025
- Business
- Forbes
IBM's Enterprise AI Strategy: Trust, Scale, And Results
[Image: watsonx, IBM's generative AI platform, displayed on a smartphone in Brussels, Belgium, 10 August 2023. Photo illustration by Jonathan Raa/NurPhoto via Getty Images]

IBM has rapidly established itself as a serious enterprise AI contender. It combines a full-stack platform strategy, proprietary models, deep integration with Red Hat hybrid cloud infrastructure, and global consulting scale. It's executing a multi-pronged approach that is already delivering operational leverage and financial upside.

Its approach is paying off. In its most recent earnings, IBM disclosed that it had grown its book of AI-related business to $5 billion in less than two years, with approximately 80% of that stemming from consulting engagements and the remainder from software subscriptions.

IBM detailed its AI strategy at its recent investor day: a pragmatic, enterprise-first approach meant to deliver trusted, efficient, and domain-relevant AI solutions. IBM's AI strategy brings together infrastructure software from Red Hat, foundation models from IBM Research, customer enablement capabilities from IBM Consulting, and integration with a broad ecosystem of partners.

Unlike some competitors focused on developing massive general-purpose models, IBM's bet is on smaller, specialized models, deployed across hybrid cloud environments and tightly integrated with its consulting services and data platforms. The goal is to help businesses operationalize AI in a way that's scalable, secure, and aligned with real-world enterprise needs. This approach is particularly well suited to companies in regulated industries, such as financial services, healthcare, and government, where data security, governance, and compliance concerns are paramount.

At the core of IBM's AI stack is watsonx, an end-to-end platform designed to support the entire AI lifecycle.
Watsonx allows businesses to build and train models using both IBM's proprietary tools and open-source models, while also enabling them to fine-tune those models using their proprietary data. One of the most critical components of this platform is Granite, IBM's family of smaller, purpose-built foundation models tailored for enterprise use cases like code generation, document processing, and virtual agents. These cost-efficient, interpretable models are built to perform well in sensitive, highly regulated environments. IBM has even open-sourced several Granite models to support transparency and community-led development.

IBM's AI technology is further strengthened by its integration with Red Hat's hybrid cloud tools. OpenShift AI and RHEL AI provide the infrastructure to build, deploy, and manage AI applications across on-premises, private, and public cloud environments. This hybrid model offers flexibility for enterprises that need control over their data while still wanting the agility of cloud-native services.

Global system integrators (GSIs) are integral to helping IT organizations navigate complex new technologies, especially enterprise AI. Enterprises often struggle to understand the new technology while also attempting to extract value quickly. GSIs thrive in this market, promising quick time-to-value for AI transformation projects.

A defining strength of IBM's approach is the synergy between its AI stack and its global consulting business. IBM Consulting, with its "hybrid by design" approach, is central in driving client adoption of watsonx and Granite. This helps enterprises bring AI into mission-critical workflows across HR, procurement, customer service, and supply chain operations. IBM Consulting competes directly against companies like NTT DATA, Deloitte, Cognizant, and Capgemini, each of which has AI platforms in place and AI-specific engagement models that offer a compelling choice for enterprises.
Partnerships play a critical role in IBM's AI strategy. The company has built a rich ecosystem of collaborators that includes hyperscalers, chipmakers, open-source communities, and enterprise software vendors. Rather than trying to build and control every component internally, IBM focuses on integrating and orchestrating AI capabilities across a broad range of technologies. This strategy enables IBM to deliver value through its innovations and the strength of its partner network.

An example of this is IBM's integration of watsonx with platforms like SAP, Salesforce, and ServiceNow. Operating within familiar business applications allows customers to leverage IBM's AI without disrupting existing workflows. Collaboration extends to the systems integrators and hardware vendors that form the backbone of many enterprise deployments. IBM is working alongside companies like Dell, Lenovo, and Nokia to deliver AI-ready infrastructure, and has formed go-to-market alliances with integrators and resellers to accelerate customer adoption.

Financially, IBM's AI bets are translating into real momentum. In its latest earnings release, the company reported that its book of AI business has grown to over $5 billion, and its software division posted double-digit growth in 2024, its strongest in years, mainly fueled by demand for AI and hybrid cloud solutions. Free cash flow climbed to $12.7 billion, and IBM reports that for every dollar clients spend on watsonx, they invest five to six dollars more across IBM's broader software and consulting portfolio. This multiplier effect highlights the strength of IBM's integrated offerings. Most AI-related revenue still comes from consulting, reflecting the power of IBM's services-led go-to-market model. However, the company's strategy of combining Red Hat infrastructure, watsonx software, and consulting expertise is clearly gaining traction.
The tight integration of its software, infrastructure, and services sets IBM apart in the enterprise AI space. Red Hat's OpenShift and RHEL AI form the infrastructure foundation of IBM's AI strategy, powering the deployment of watsonx across diverse cloud and edge environments. IBM Consulting brings the human element, delivering AI solutions tailored to industry-specific challenges in sectors such as banking, healthcare, manufacturing, and government. Together, these arms of IBM provide the technological muscle and domain expertise needed to bring AI from concept to production at enterprise scale.

IBM's end-to-end approach, spanning model development, deployment, governance, and business transformation, is a strategy that's clearly working. It's also a strategy that's difficult for competitors to match. As bookings grow, platform adoption accelerates, and ecosystem partnerships deepen, IBM is reshaping its identity around AI, hybrid cloud, and consulting. The company's ability to commercialize AI through a tightly connected stack of products, platforms, and people makes it one of the most interesting and credible enterprise AI players today.

Disclosure: Steve McDowell is an industry analyst, and NAND Research is an industry analyst firm, that engages in, or has engaged in, research, analysis and advisory services with many technology companies; the author has provided paid services to many of the companies named in this article in the past and may again in the future, including IBM. Mr. McDowell does not hold any equity positions in any company mentioned.


Mid East Info
20-02-2025
- Business
- Mid East Info
AI Freedom of Choice with Red Hat
Red Hat collaborates with NVIDIA, Lenovo, Microsoft, AWS, Dell, AMD and Intel

Red Hat continues to build upon its long-standing collaborations with major IT players to help customers implement and extend AI innovation across hybrid cloud environments. Red Hat Enterprise Linux AI (RHEL AI) and Red Hat OpenShift AI now offer a supported, optimised experience with a wide range of GPU-enabled hardware and software offerings from NVIDIA, Lenovo, Dell, Microsoft, AWS, Intel and others.

Artificial intelligence (AI) model training requires optimized hardware and powerful computation capabilities, and AI platforms must support a broad choice of accelerated compute architectures and GPUs. Customers can get more from RHEL AI and Red Hat OpenShift AI by extending them with other integrated services and products announced in the last eight months:

- RHEL AI on Lenovo ThinkSystem SR675 V3 servers
- RHEL AI on Dell PowerEdge
- RHEL AI support for AMD Instinct Accelerators
- RHEL AI on Microsoft Azure
- RHEL AI and Red Hat OpenShift AI on AWS
- Red Hat OpenShift AI with Intel Gaudi AI and Intel Xeon and Core processors
- Red Hat OpenShift with NVIDIA AI Enterprise

The Cloud is Hybrid. So is AI.

For 30 years, open source has been a driving force behind innovation. Red Hat has played a key role in this evolution, first with enterprise-grade Linux (RHEL) in the 2000s and later with Red Hat OpenShift for containers and Kubernetes. Today, Red Hat continues this journey with AI in the hybrid cloud.

AI models often need to run as close to an organization's data as possible to reduce latency and improve efficiency. This requires models to be supported wherever they are needed, from the datacenter to public clouds to the edge. AI platforms must be able to stretch across all of these footprints, seamlessly and without integration challenges. AI is the ultimate hybrid workload.

