Navitas stock soars 33% to 52-week high — what's driving the surge in this semiconductor star?

Time of India · 5 days ago
Navitas Semiconductor's stock soared to a 52-week high of $9.18, giving the company a market cap of $1.75 billion, fueled by a partnership with NVIDIA and anticipation ahead of upcoming earnings. Although Deutsche Bank downgraded the stock, the bank raised its price target, and a new board member was added. The stock jumped 28.5% merely on the announcement of the earnings date, hinting at positive expectations.
Recent business news that boosted confidence

Navitas Semiconductor's stock price hit $9.18, its highest in 52 weeks and a big milestone for the company, which now has a market cap of $1.75 billion. The stock rose 15.67% last week, has jumped 90.73% in six months, and is up 68.49% over the past year, showing strong investor interest. Liquidity is strong, with a current ratio of 5.61, meaning the company can easily cover its short-term bills. However, experts warn that the stock may be overvalued at its current price, and its beta of 3.01 makes it far more volatile than the overall market, as per the Investing report.

Shareholders approved all proposals at Navitas' 2025 annual meeting. Three directors were re-elected: Gene Sheridan, Ranbir Singh, and Cristiano Amoruso. KPMG LLP was retained as the company's auditor for 2025, as per the reports.

Navitas partnered with Powerchip Semiconductor to make 200mm GaN-on-silicon chips, which should boost performance in many tech products. It is also working with NVIDIA on 800V high-voltage direct current technology for AI data centers, a very big deal for the company. Even though Deutsche Bank downgraded Navitas from Buy to Hold, it still raised the stock's price target because of the NVIDIA partnership. Navitas also added Cristiano Amoruso to the board; he brings useful experience as the company moves into AI, data centers, and EVs, as per the report by Investing.

The real trigger behind today's stock jump

Strangely, Navitas stock jumped 28.5% in early-morning trading simply because it announced the date of its next earnings report: August 4. No results were shared, just the date, but the stock still surged, as per The Motley Fool report. There may be speculation on Wall Street that earnings will beat expectations. Most analysts expect a loss of $0.05 per share, but some may believe the company will do better, as per the reports.

What analysts and experts are saying

Seaport Global upgraded Texas Instruments, a bigger rival in power chips, saying the inventory cycle is improving, which could be good news for Navitas too. Still, Navitas has lost money in four of the past five years and is expected to keep losing money for four more. The Motley Fool's Stock Advisor notes that Navitas did not make its current top-10 stock picks.

Upcoming earnings call info

Navitas will report Q2 results on August 4, 2025, after markets close. The earnings call will be at 2:00 PM Pacific / 5:00 PM Eastern, and investors can listen live online. A replay will be posted on the company's Investor Relations site, as per GlobeNewswire.

About Navitas

Navitas is a power semiconductor company founded in 2014, focused on its GaNFast™ and GeneSiC™ technologies for faster charging and energy savings. It works in key areas such as AI data centers, electric vehicles, and mobile devices, as per the reports. The company holds over 300 patents, is the first to offer a 20-year GaNFast warranty, and is the first semiconductor firm to be CarbonNeutral® certified, as stated by GlobeNewswire.

FAQs

Why did Navitas stock jump 33%?
Strong investor excitement over its NVIDIA partnership, new technology developments, and upcoming earnings.

Is Navitas a good buy?
The company has high growth potential but is still losing money, so experts are divided.
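For readers unfamiliar with the two risk metrics cited above, a quick sketch of what a current ratio of 5.61 and a beta of 3.01 actually mean. The balance-sheet figures below are illustrative placeholders, not Navitas' actual numbers (the article reports only the ratio itself):

```python
def current_ratio(current_assets: float, current_liabilities: float) -> float:
    """How many times short-term assets cover short-term liabilities."""
    return current_assets / current_liabilities

def expected_move(market_move_pct: float, beta: float) -> float:
    """Beta scales a stock's expected move relative to the overall market."""
    return market_move_pct * beta

# Illustrative figures chosen to reproduce the reported ratio of 5.61:
ratio = current_ratio(561.0, 100.0)
print(f"Current ratio: {ratio:.2f}")  # a value well above 1 means bills are easily covered

# With beta = 3.01, a 1% swing in the market implies roughly a 3% swing in the stock:
print(f"Expected move on a 1% market day: {expected_move(1.0, 3.01):.2f}%")
```

A beta of 3.01 is why the article flags the stock as very volatile: day-to-day swings tend to be about three times those of the broader market, in both directions.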

Related Articles

China PM warns against a global AI 'monopoly'

Time of India · 2 hours ago

China will spearhead the creation of an international organisation to jointly develop AI, the country's premier said, seeking to ensure that the world-changing technology doesn't become the province of just a few nations or companies. Artificial intelligence harbours risks, from widespread job losses to economic upheaval, that require nations to work together to address, Premier Li Qiang told the World Artificial Intelligence Conference in Shanghai on Saturday. That means more international exchanges, Beijing's No. 2 official said during China's most important annual technology summit.

Li didn't name any countries in his short address to kick off the event. But Chinese executives and officials have taken aim at Washington's efforts to curtail the Asian country's tech sector, including by slapping restrictions on the export of Nvidia chips crucial to AI development. On Saturday, Li acknowledged that a shortage of semiconductors was a major bottleneck, but reaffirmed President Xi Jinping's call to establish policies to propel Beijing's ambitions. The government will now help create a body, loosely translated as the World AI Cooperation Organization, through which countries can share insights and talent.

"Currently, key resources and capabilities are concentrated in a few countries and a few enterprises. If we engage in technological monopoly, controls and restrictions, AI will become an exclusive game for a small number of countries and enterprises," Li told hundreds of delegates huddled at the conference venue on the banks of Shanghai's iconic Huangpu river. China and the US are locked in a race to develop a technology with the potential to turbocharge economies and, over the long run, tip the balance of geopolitical power.
This week, US President Donald Trump signed executive orders to loosen regulations and expand energy supplies for data centers, a call to arms to ensure companies like OpenAI and Google help safeguard America's lead in the post-ChatGPT era. At the same time, the breakout success of DeepSeek has inspired Chinese tech leaders and startups to accelerate research and roll out AI products. The weekend conference in Shanghai, gathering star founders, Beijing officials and deep-pocketed financiers by the thousands, is designed to catalyze that movement. The event, which has featured Elon Musk and Jack Ma in years past, was launched in 2018. This year's attendance may hit a record because it is taking place at a critical juncture in the global race to lead GenAI development. It has already drawn some notable figures: Nobel Prize laureate Geoffrey Hinton and former Google chief Eric Schmidt were among the heavyweights who met Shanghai party boss Chen Jining on Thursday, before they were due to speak at the event.

Meet Lumo, the new AI chatbot that protects user privacy
Indian Express · 7 hours ago

Proton, the company behind the encrypted email service Proton Mail, has unveiled an AI chatbot focused on user privacy. Named Lumo, the chatbot can generate code, write emails, summarise documents, and much more. Proton positions it as an alternative to ChatGPT, Gemini, Copilot, and others, preserving user privacy by storing data locally on users' devices.

Lumo is powered by several open-source large language models that run on Proton's servers in Europe, including Mistral's Nemo, Mistral Small 3, Nvidia's OpenHands 32B, and the Allen Institute for AI's OLMO 2 32B model. Lumo can route requests through different models depending on which is better suited to a query.

The company claims the chatbot protects information with 'zero-access' encryption: the user holds an encryption key that grants them exclusive access to their data. This key blocks third parties, and even Proton itself, from accessing user content, meaning the company cannot share any personal information. Proton has reportedly used Transport Layer Security (TLS) encryption for data transmission and 'asymmetrically' encrypts prompts, allowing only the Lumo GPU servers to decrypt them.

As for features, Ghost mode ensures that active chat sessions are not saved, not even on local devices. With the Web search feature, Lumo can look up recent information on the internet to supplement its current knowledge; search is disabled by default to ensure privacy, and once enabled, Lumo uses privacy-friendly search engines to answer queries. The chatbot can also understand and analyse uploaded files without keeping a record of them. Lastly, integration with Proton Drive makes it simple to add end-to-end encrypted files from your Proton Drive to your Lumo chats.
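The core of the 'zero-access' model described above is that the service stores only ciphertext, while the key is derived from a secret that never leaves the user. The toy sketch below illustrates that access model only; it is emphatically not Proton's actual scheme (which uses asymmetric encryption and TLS), and the XOR cipher here is for illustration, not real security:

```python
import hashlib
import secrets

def derive_key(user_secret: str, salt: bytes, length: int) -> bytes:
    # Stretch the user's secret into a keystream of the required length.
    return hashlib.pbkdf2_hmac("sha256", user_secret.encode(), salt, 100_000, dklen=length)

def encrypt(plaintext: bytes, user_secret: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)
    key = derive_key(user_secret, salt, len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))  # toy XOR cipher
    return salt, ciphertext  # the server stores only these, never the secret

def decrypt(salt: bytes, ciphertext: bytes, user_secret: str) -> bytes:
    key = derive_key(user_secret, salt, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, key))

salt, ct = encrypt(b"summarise this document", "correct horse battery staple")
# Only the holder of the original secret recovers the prompt:
assert decrypt(salt, ct, "correct horse battery staple") == b"summarise this document"
# Any other party (including the server) gets garbage:
assert decrypt(salt, ct, "wrong secret") != b"summarise this document"
```

The property being demonstrated is the one the article attributes to Lumo: whoever stores the ciphertext cannot read it, because decryption requires a secret held only by the user.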
The chatbot comes in both a free and a premium version. Those without a Lumo or Proton account can ask 25 queries per week and cannot access chat histories. Users with a free account can ask up to 100 questions per week. The Lumo Plus plan is priced at $12.99 a month and comes with unlimited chats, an extended encrypted chat history, and more.

The new chips designed to solve AI's energy problem
Mint · 7 hours ago

"I can't wrap my head around it," says Andrew Wee, who has been a Silicon Valley data-center and hardware guy for 30 years. The "it" that has him so befuddled, irate even, is the projected power demands of future AI supercomputers, the ones that are supposed to power humanity's great leap forward. Wee held senior roles at Apple and Meta, and is now head of hardware for cloud provider Cloudflare. He believes the current growth in energy required for AI, which the World Economic Forum estimates will be 50% a year through 2030, is unsustainable. "We need to find technical solutions, policy solutions and other solutions that solve this collectively," he says.

To that end, Wee's team at Cloudflare is testing a radical new kind of microchip from Positron, a startup founded in 2023 that has just announced a fresh round of $51.6 million in investment. These chips have the potential to be much more energy efficient than ones from industry leader Nvidia at the all-important task of inference, which is the process by which AI responses are generated from user prompts. While Nvidia chips will continue to be used to train AI for the foreseeable future, more efficient inference could collectively save companies tens of billions of dollars, and a commensurate amount of energy.

There are at least a dozen chip startups battling to sell cloud-computing providers the custom-built inference chips of the future. Then there are the well-funded, multiyear efforts by Google, Amazon and Microsoft to build inference-focused chips to power their own internal AI tools, and to sell to others through their cloud services. The intensity of these efforts, and the scale of the cumulative investment in them, show just how desperate every tech giant, along with many startups, is to provide AI to consumers and businesses without paying the "Nvidia tax." That's Nvidia's approximately 60% gross margin, the price of buying the company's hardware.
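A back-of-the-envelope calculation makes the "Nvidia tax" concrete. Gross margin is defined as (price − cost) / price, so a 60% margin implies a selling price of 2.5 times the production cost (the normalised $1 cost below is illustrative, not an actual Nvidia figure):

```python
def price_from_margin(cost: float, gross_margin: float) -> float:
    """Selling price needed to achieve a given gross margin on a given cost.

    gross_margin = (price - cost) / price  =>  price = cost / (1 - gross_margin)
    """
    return cost / (1.0 - gross_margin)

cost = 1.0  # normalised production cost
price = price_from_margin(cost, 0.60)
print(f"At a 60% gross margin, $1 of production cost sells for ${price:.2f}")
```

That 2.5x multiple is the incentive behind the custom-silicon efforts the article describes: any buyer who can switch to a competitor's chip avoids paying most of that markup.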
Nvidia is very aware of the growing importance of inference and of concerns about AI's appetite for energy, says Dion Harris, a senior director at Nvidia who sells the company's biggest customers on the promise of its latest AI hardware. Nvidia's latest Blackwell systems are between 25 and 30 times as efficient at inference, per watt of energy pumped into them, as the previous generation, he adds.

To accomplish their goals, makers of novel AI chips are using a strategy that has worked time and again: they are redesigning their chips from the ground up, expressly for the new class of tasks that is suddenly so important in computing. In the past, that task was graphics, and that's how Nvidia built its fortune. Only later did it become apparent that graphics chips could be repurposed for AI, and arguably it's never been a perfect fit.

Jonathan Ross is chief executive of chip startup Groq, and previously headed Google's AI chip development program. He says he founded Groq (no relation to Elon Musk's xAI chatbot) because he believed there was a fundamentally different way of designing chips, solely to run today's AI models. Groq claims its chips can deliver AI much faster than Nvidia's best chips, for between one-third and one-sixth as much power, thanks to a design that embeds memory in the chip rather than keeping it separate. While how Groq's chips perform depends on any number of factors, the company's claim that it can deliver inference at a lower cost than is possible with Nvidia's systems is credible, says Jordan Nanos, an analyst at SemiAnalysis who spent a decade working for Hewlett Packard Enterprise.

Positron is taking a different approach to delivering inference more quickly. The company, which has already delivered chips to customers including Cloudflare, has created a simplified chip with a narrower range of abilities, in order to perform those tasks more quickly.
The company's latest funding round came from Valor Equity Partners, Atreides Management and DFJ Growth, and brings the total investment in the company to $75 million. Positron's next-generation system will compete with Nvidia's next-generation system, known as Vera Rubin. Based on Nvidia's road map, Positron's chips will have two to three times better performance per dollar, and three to six times better performance per unit of electricity pumped into them, says Positron CEO Mitesh Agrawal.

Competitors' claims about beating Nvidia at inference often don't reflect all of the things customers take into account when choosing hardware, says Harris. Flexibility matters, and what companies do with their AI chips can change as new models and use cases become popular. Nvidia's customers "are not necessarily persuaded by the more niche applications of inference," he adds.

Cloudflare's initial tests of Positron's chips were encouraging enough to convince Wee to put them into the company's data centers for more long-term tests, which are continuing. It's something that only one other chip startup's hardware has warranted, he says. "If they do deliver the advertised metrics, we will open the spigot and allow them to deploy in much larger numbers globally," he adds. By commoditizing AI hardware and allowing Nvidia's customers to switch to more-efficient systems, the forces of competition might bend the curve of future AI power demand, says Wee. "There is so much FOMO right now, but eventually, I think reason will catch up with reality," he says.

One truism of the history of computing is that whenever hardware engineers figure out how to do something faster or more efficiently, coders and consumers figure out how to use all of the new performance gains, and then some. Mark Lohmeyer is vice president of AI and computing infrastructure for Google Cloud, where he provides both Google's own custom AI chips, and Nvidia's, to Google and its cloud customers.
He says that consumer and business adoption of new, more demanding AI models means that no matter how much more efficiently his team can deliver AI, there is no end in sight to growth in demand for it. Like nearly all other big AI providers, Google is making efforts to find radical new ways to produce energy to feed that AI, including both nuclear power and fusion.

The bottom line: while new chips might help individual companies deliver AI more efficiently, the industry as a whole remains on track to consume ever more energy. As a recent report from Anthropic notes, that means energy production, not data centers and chips, could be the real bottleneck for future development of AI.

Write to Christopher Mims at
