Nvidia reveals RTX 5050 GPU: Affordable next-gen graphics card for budget gamers

Mint · 4 days ago

Nvidia has officially revealed the GeForce RTX 5050 GPU for laptops and desktops. This new entry-level card is designed for budget-conscious gamers and creators who want the latest features without spending a fortune. The desktop variant of the RTX 5050 is expected to be available in July 2025, with a starting price of $249. Laptop variants are already available from Asus, MSI and other manufacturers.
Ray tracing and DLSS 4: The RTX 5050 supports full ray tracing, DLSS 4 with multi-frame generation and Nvidia Reflex. This makes the card suitable for both single-player AAA games and fast-paced esports titles.
Gaming: The GPU delivers more than 150 fps with DLSS 4 in modern titles like Cyberpunk 2077, Doom: The Dark Ages, Apex Legends, Counter-Strike 2 and more. This is a significant improvement over the RTX 3050.
AI performance: The card delivers up to 421 TOPS of AI performance, roughly six times that of the RTX 3050.
Display support: The card can drive up to a 4K display at 480Hz or an 8K display at 165Hz with DSC, and supports up to four displays.
Efficiency: The laptop version of the RTX 5050 comes with GDDR7 memory, which is up to twice as efficient as GDDR6. This helps reduce the size and thickness of gaming laptops.
Buy the RTX 5050 if you are looking for an affordable option for a new gaming PC, or if you are a casual gamer for whom 1080p gaming is enough. For anyone upgrading from an RTX 20 or 30 series card, the RTX 5050 can offer a significant performance boost.
The desktop variant of the RTX 5050 GPU will be available in mid-July, with prices starting from $249 (approx. ₹21,300). The card will be sold through partners like Asus, Gigabyte and more. Gaming laptops equipped with the RTX 5050 are available in India from Asus, MSI and others.


Related Articles

OpenAI turns to Google's AI chips to power its products: The Information

Time of India · 13 hours ago

OpenAI has recently begun renting Google's artificial intelligence chips to power ChatGPT and other products, The Information reported on Friday, citing a person involved in the arrangement. The move, which marks the first time OpenAI has used non-Nvidia chips in a meaningful way, shows the Sam Altman-led company's shift away from relying on backer Microsoft's data centres, potentially boosting Google's tensor processing units (TPUs) as a cheaper alternative to Nvidia's graphics processing units (GPUs), the report said.

As one of the largest purchasers of Nvidia's GPUs, OpenAI uses AI chips to train models and for inference computing, a process in which an AI model uses its trained knowledge to make predictions or decisions based on new information. OpenAI hopes the TPUs, which it rents through Google Cloud, will help lower the cost of inference, according to the report. However, Google, an OpenAI competitor in the AI race, is not renting its most powerful TPUs to its rival, The Information said, citing a Google Cloud employee. Neither OpenAI nor Google immediately responded to Reuters' requests for comment.

OpenAI planned to add Google Cloud service to meet its growing needs for computing capacity, Reuters had exclusively reported earlier this month, marking a surprising collaboration between two prominent competitors in the AI sector. For Google, the deal comes as it expands external availability of its in-house TPUs, which were historically reserved for internal use. That has helped Google win customers including Big Tech player Apple as well as startups like Anthropic and Safe Superintelligence, two OpenAI competitors launched by former OpenAI leaders.

OpenAI turns to Google's AI chips to power its products

Time of India · 13 hours ago

OpenAI has recently begun renting Google's artificial intelligence chips to power ChatGPT and its other products, a source close to the matter told Reuters on Friday. The ChatGPT maker is one of the largest purchasers of Nvidia's graphics processing units (GPUs), using the AI chips to train models and for inference computing, a process in which an AI model uses its trained knowledge to make predictions or decisions based on new information.

OpenAI planned to add Google Cloud service to meet its growing needs for computing capacity, Reuters had exclusively reported earlier this month, marking a surprising collaboration between two prominent competitors in the AI sector. For Google, the deal comes as it expands external availability of its in-house tensor processing units (TPUs), which were historically reserved for internal use. That has helped Google win customers including Big Tech player Apple as well as startups like Anthropic and Safe Superintelligence, two OpenAI competitors launched by former OpenAI leaders.

The move to rent Google's TPUs signals the first time OpenAI has used non-Nvidia chips in a meaningful way, and shows the Sam Altman-led company's shift away from relying on backer Microsoft's data centers. It could potentially boost TPUs as a cheaper alternative to Nvidia's GPUs, according to The Information, which reported the development earlier. OpenAI hopes the TPUs, which it rents through Google Cloud, will help lower the cost of inference, according to the report.

However, Google, an OpenAI competitor in the AI race, is not renting its most powerful TPUs to its rival, The Information said, citing a Google Cloud employee. Google declined to comment, while OpenAI did not immediately respond to Reuters' request for comment. Google's addition of OpenAI to its customer list shows how the tech giant has capitalized on its in-house AI technology, from hardware to software, to accelerate the growth of its cloud business.

Microsoft's AI Chip 'Braga' Delayed to 2026, Expected to Trail Nvidia's Blackwell: Report

Hans India · 14 hours ago

Microsoft's ambitious plans to mass-produce its next-generation AI chip, code-named Braga, have reportedly hit a significant delay, with production now expected in 2026 instead of this year. The development, reported by The Information on Friday, has been attributed to unexpected design revisions, staffing issues, and high employee turnover within the project team.

Initially slated to power Microsoft's data centers by the end of 2025, the Braga chip is the successor to the Maia AI chip, which was introduced in November 2023. According to the report, which cites three individuals directly involved in the project, the Braga chip is not only delayed but also expected to significantly underperform Nvidia's Blackwell chip, which launched in late 2024 and currently leads the market in AI chip performance.

The delay marks a setback in Microsoft's broader strategy to reduce its dependency on Nvidia's GPUs, currently the dominant force in AI hardware, and establish itself as a serious contender in the custom chip space. Microsoft has not issued an official comment in response to the report, as noted by Reuters.

The push for custom chips has become a defining trend among major cloud providers. Like its rivals Amazon and Alphabet (Google), Microsoft has invested heavily in in-house silicon to support the exponential growth in demand for AI computing. These custom processors are crucial not only for boosting performance but also for managing rising operational costs in AI workloads.

Despite introducing the Maia chip in late 2023, Microsoft has struggled to scale production in line with competitors. Google's Tensor Processing Units (TPUs), for example, have been pivotal in powering many of its AI services; the search giant launched its seventh-generation TPU in April 2025, with notable performance upgrades designed to accelerate large-scale AI applications. Meanwhile, Amazon continues to make strides with its Trainium chip line. In December 2024, the company unveiled Trainium3, its next-gen AI processor, scheduled for release later this year and promising improved training speeds and energy efficiency.

Microsoft's delay could give both Amazon and Google further time to cement their positions in the AI chip arena. The setback may also compel Microsoft to lean more on third-party chipmakers like Nvidia and to consider interim solutions to meet its data center demands. In a related development, OpenAI, the AI research lab heavily backed by Microsoft, was recently reported to be testing Google's AI chips to power some of its products. This hints at a broader industry reality: even AI leaders may be forced to look beyond their preferred partnerships when custom solutions lag in readiness. As the race for AI supremacy intensifies, delays like Braga's could have far-reaching consequences, not just for Microsoft but for the entire ecosystem of AI infrastructure.
