'We will not play those games.'

The Verge · 19 May 2025

In a 22-minute video, Gamers Nexus talks about 'Nvidia's last several months of pressure to talk about DLSS more frequently in reviews, plus [Multi Frame Generation] 4X pressure from the company' and how 'Nvidia has repeatedly made comments to GN that interviews, technical discussion, and access to engineers unrelated to MFG 4X and DLSS are made possible by talking about MFG 4X and DLSS.'


Related Articles

This Defense Stock Could Be the Next Palantir. Should You Buy It Now?

Yahoo · 20 minutes ago

The host of the popular CNBC show 'Mad Money,' Jim Cramer, has turned cheerleader for a defense stock that has slipped under the radar of many. Touting it as the 'Palantir of Hardware,' Cramer believes that following stellar earnings, this company has the potential to be a massive wealth creator for investors, just like the Alex Karp-led company. What is this high-potential company? AeroVironment (AVAV).

Founded in 1971, AeroVironment initially focused on lightweight human-powered and solar-powered aircraft. The Virginia-based company has since evolved to focus on unmanned aerial vehicles (UAVs) and advanced defense technologies. With a market cap of nearly $7.7 billion, AVAV stock has rallied 85.4% on a year-to-date basis. The near term has been spectacular, with shares up more than 120% over the past three months and more than 45% in just the past five days. Is Jim Cramer right? And if so, how much farther can AeroVironment fly?

A key driver of the recent bullishness around the stock has been its blowout numbers for fiscal Q4. AeroVironment reported a beat on both revenue and earnings. Q4 revenue came in at $275.1 million, up 40% year over year and a new record for the company. Earnings nearly quadrupled over the same period to $1.61 per share, up from just $0.43 per share in the prior year and well ahead of the consensus estimate of $1.44 per share. Order backlog, a key indicator of both demand for the company's products and services and of revenue visibility, stood at $726.6 million as of April 30, 2025, a significant yearly uptick of 81.6% from the prior-year period.

Although the company reported a net cash outflow from operating activities of $1.3 million in fiscal 2025, AeroVironment's liquidity position remained solid, with a cash balance of $40.9 million and no short-term debt on its books. In fact, its long-term debt of $30 million was also less than the cash balance, an impressive feat for a company in such a capital-intensive industry. For fiscal 2026, the company said it expects revenue to range between $1.9 billion and $2 billion and earnings to be between $2.80 and $3.00 per share.

Drones are front and center in modern warfare. As defense spending rises across the world, drones are going to play an even larger role, and this is where Cramer's 'Palantir of Hardware' stance kicks in: AeroVironment's expertise in UAVs makes it a compelling contender to serve this rapidly growing and critical market. Evidence of this is AeroVironment's Switchblade 600 loitering munition being one of just three platforms selected for the initial tranche of the U.S. Department of Defense's Replicator program. Replicator Tranche 1 represents the program's inaugural phase, targeting the rapid deployment (within 18 to 24 months) of mass-produced, expendable autonomous platforms such as drones, loitering munitions, and unmanned maritime assets. The objective is clear: strengthen America's defensive capabilities through speed and scale to deter adversarial threats more effectively.
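As a quick arithmetic check, the growth figures quoted above can be recomputed from the raw numbers reported in the article. A minimal Python sketch (the figures are from the article; the rounding conventions are mine):

```python
# Sanity-check the growth figures quoted in the article.

eps_now, eps_prior = 1.61, 0.43   # fiscal Q4 EPS, this year vs. last
backlog_now = 726.6e6             # order backlog as of April 30, 2025

# EPS multiple: the article says earnings 'nearly quadrupled'.
print(f"EPS growth: {eps_now / eps_prior:.2f}x")                      # -> 3.74x

# Prior-year backlog implied by the reported 81.6% year-over-year increase.
print(f"Implied prior backlog: ${backlog_now / 1.816 / 1e6:.1f}M")    # -> ~$400.1M

# Q4 revenue of $275.1M, up 40% YoY, implies prior-year Q4 revenue of:
print(f"Implied prior Q4 revenue: ${275.1 / 1.40:.1f}M")              # -> ~$196.5M
```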
Then, another critical piece of AeroVironment's value proposition is its Autonomy Retrofit Kit, powered by the AVACORE framework. This system equips legacy Puma airframes and future models with plug-and-play computer vision capabilities, enabling them to search, identify, and strike targets independently of radio commands. The advanced 600L variant exemplifies this leap in autonomy, featuring automatic armored-vehicle recognition, selectable approach vectors, and onboard target classification systems that drastically shorten the engagement timeline.

Complementing these technical advancements is the firm's innovation arm, MacCready Works, which is integrating swarm-enabling code with Nvidia (NVDA) Jetson-class GPUs. This collaboration lays the groundwork for munitions capable of operating cooperatively in real time, allowing for dynamic in-mission retasking and coordinated strikes.

Notably, AeroVironment is also no longer positioning itself as merely a drone manufacturer. With its acquisition of BlueHalo and the expansion of its Utah facility, the company is evolving into a more broadly diversified defense-technology player. BlueHalo's emphasis on software-defined systems is expected to accelerate margin expansion and free cash flow, supporting AeroVironment's ambitions for long-term double-digit growth. Crucially, AeroVironment's current portfolio is primed for the growing wave of AI-driven robotics. From its lightweight Raven reconnaissance drones to the Switchblade line of loitering weapons, its systems are naturally aligned for next-gen autonomous enhancements.

Looking ahead, a new slate of products signals where AeroVironment is headed. The P550 eVTOL, for example, boasts a five-hour electric flight capability, supports payloads up to 15 pounds, and can deploy precision-guided munitions, potentially aligning with the Army's Long-Range Reconnaissance platform needs. Meanwhile, the JUMP 20-X modifies the firm's Group 3 VTOL for shipboard applications, featuring autonomous launch and recovery even in sea state 5 conditions, a bar few competitors can clear. Then there's Red Dragon, a strike drone designed for contested environments where GPS access is compromised. Its onboard vision-based targeting system makes it highly relevant for the kind of autonomous lethality envisioned by the Pentagon's Replicator strategy.

Taken together, these developments reinforce AeroVironment's transformation from a niche player into a leading-edge defense innovator deeply aligned with the future of autonomous warfare. Thus, analysts have deemed AVAV stock a 'Strong Buy.' Both the mean and the high target prices have already been surpassed, a testament to the stock's recent sharp rally. Of the seven analysts covering the stock, six have a 'Strong Buy' rating and one has a 'Moderate Buy' rating.

On the date of publication, Pathikrit Bose did not have (either directly or indirectly) positions in any of the securities mentioned in this article. All information and data in this article is solely for informational purposes. This article was originally published on

Future Forecasting A Massive Intelligence Explosion On The Path From AI To AGI

Forbes · 25 minutes ago

How an intelligence explosion might lift us from conventional AI to the vaunted AGI (artificial general intelligence).

In today's column, I continue my special series covering the anticipated pathways that will get us from conventional AI to the revered, hoped-for AGI (artificial general intelligence). The focus here is an analytically speculative deep dive into the detailed aspects of a so-called intelligence explosion during the journey to AGI. I've previously outlined that there are seven major paths for advancing AI to reach AGI (see the link here); one of those paths consists of the improbable moonshot path, whereby there is a hypothesized breakthrough such as an intelligence explosion that suddenly and somewhat miraculously spurs AGI to arise. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). For those readers who have been following along on my special series about AGI pathways, please note that I provide similar background aspects at the start of this piece as I did previously, setting the stage for new readers.

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI, or whether AGI may only be achievable decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

AI Experts' Consensus On AGI Date

Right now, efforts to forecast when AGI is going to be attained consist principally of two paths. First, there are highly vocal AI luminaries making individualized, brazen predictions. Their headiness makes outsized media headlines. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. A somewhat quieter path is the advent of periodic surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe that we will reach AGI by the year 2040.

Should you be swayed by the AI luminaries or more so by the AI experts and their scientific consensus? Historically, the use of scientific consensus as a method of understanding scientific postures has been relatively popular and construed as the standard way of doing things. If you rely on an individual scientist, they might have their own quirky view of the matter. The beauty of consensus is that a majority or more of those in a given realm are putting their collective weight behind whatever position is being espoused.
The old adage is that two heads are better than one. In the case of scientific consensus, it might be dozens, hundreds, or thousands of heads that are better than one. For this discussion on the various pathways to AGI, I am going to proceed with the year 2040 as the consensus anticipated target date. Besides the scientific consensus of AI experts, another newer and more expansive approach to gauging when AGI will be achieved is known as AGI convergence-of-evidence or AGI consilience, which I discuss at the link here.

Seven Major Pathways

As mentioned, in a previous posting I identified seven major pathways by which AI is going to advance to become AGI (see the link here). You can apply those seven possible pathways to whatever AGI timeline you want to come up with.

Futures Forecasting

Let's undertake a handy divide-and-conquer approach to identify what must presumably happen to get from current AI to AGI. We are living in 2025 and somehow are supposed to arrive at AGI by the year 2040. That's essentially 15 years of elapsed time. The idea is to map out the next fifteen years and speculate what will happen with AI during that journey.

This can be done in a forward-looking mode and also a backward-looking mode. The forward-looking mode entails thinking about the progress of AI on a year-by-year basis, starting now and culminating in arriving at AGI in 2040. The backward-looking mode involves starting with 2040 as the deadline for AGI and then working back from that achievement on a year-by-year basis to arrive at the year 2025 (matching AI presently). This combination of forward and backward envisioning is a typical hallmark of futurecasting.

Is this kind of forecast of the future ironclad? Nope. If anyone could precisely lay out the next fifteen years of what will happen in AI, they would probably be as clairvoyant as Warren Buffett when it comes to predicting the stock market. Such a person could easily be awarded a Nobel Prize and ought to be one of the richest people ever. All in all, this strawman that I show here is primarily meant to get the juices flowing on how we can be future forecasting the state of AI. It is a conjecture. It is speculative. But at least it has a reasonable basis and is not entirely arbitrary or totally artificial.

I went ahead and used the fifteen years of reaching AGI in 2040 as an illustrative example. It could be that 2050 is the date for AGI instead, and thus this journey will play out over 25 years. The timeline and mapping would then have 25 years to deal with rather than fifteen. If 2030 is going to be the AGI arrival year, the pathway would need to be markedly compressed.

Intelligence Explosion On The Way To AGI

The moonshot path entails a sudden and generally unexpected radical breakthrough that swiftly transforms conventional AI into AGI. All kinds of wild speculation exist about what such a breakthrough might consist of; see my discussion at the link here. One of the most famous postulated breakthroughs would be the advent of an intelligence explosion. The idea is that once an intelligence explosion occurs, assuming that such a phenomenon ever happens, AI will in rapid-fire progression proceed to accelerate into becoming AGI. This type of path is in stark contrast to a linear pathway. In a linear pathway, the progression of AI toward AGI is relatively equal each year and consists of a gradual incremental climb from conventional AI to AGI.
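To make the linear-versus-explosive contrast concrete, here is a minimal, purely illustrative Python sketch (my own toy model, not anything from the column): capability grows by a fixed yearly increment on the linear path, while on the explosion path it compounds once a self-improvement feedback loop ignites around 2038. The threshold value, ignition year, and multipliers are arbitrary assumptions chosen only to mirror the strawman timeline.

```python
# Toy model: linear AI progress vs. an 'intelligence explosion' late in the
# 2025-2040 window. Units are arbitrary; 100 marks an assumed AGI threshold.

AGI_THRESHOLD = 100.0

def linear_path(start=10.0, years=range(2025, 2041)):
    # Equal increments each year, tuned to land exactly on AGI in 2040.
    step = (AGI_THRESHOLD - start) / (len(years) - 1)
    return {year: start + step * i for i, year in enumerate(years)}

def explosion_path(start=10.0, years=range(2025, 2041), ignition=2038):
    # Slow pre-ignition crawl, then compounding self-improvement (assumed 3x/year).
    capability, path = start, {}
    for year in years:
        path[year] = capability
        capability *= 1.05 if year < ignition else 3.0
    return path

lin, exp = linear_path(), explosion_path()
for year in (2025, 2030, 2035, 2038, 2039, 2040):
    print(f"{year}: linear={lin[year]:6.1f}  explosion={exp[year]:6.1f}")
# The explosion path idles below the linear one for years, then crosses the
# threshold between 2039 and 2040, matching the strawman roadmap below.
```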
I laid out the details of the linear path in a prior posting; see the link here.

When would the intelligence explosion occur? Since we are assuming a timeline of fifteen years and the prediction is that AGI will be attained in 2040, the logical place for an intelligence explosion to occur is right toward the 2040 date, perhaps happening in 2039 or 2038. This makes logical sense, since if the intelligence explosion happened sooner, we would apparently reach AGI sooner. For example, suppose the intelligence explosion occurs in 2032. If indeed the intelligence explosion garners us AGI, we would declare 2032 or 2033 as the AGI date rather than 2040. Let's use that as our postulated timeline in this context.

Defining An Intelligence Explosion

You might be curious what an intelligence explosion would consist of and why it would necessarily seem to achieve AGI. The best way to conceive of an intelligence explosion is to first reflect on chain reactions such as what occurs in an atomic bomb or nuclear reactor. We all nowadays know that atomic particles can be forced or driven into wildly bouncing off each other, rapidly progressing until a massive explosion or burst of energy results. This is generally taught in school as a fundamental physics principle, and many blockbuster movies have dramatically showcased this activity (such as Christopher Nolan's famous Oppenheimer film).

A theory in the AI community is that intelligence can do likewise. It goes like this. You bring together a whole bunch of intelligence and get that intelligence to feed off the collection in hand. Almost like catching fire, at some point the intelligence will mix with and essentially fuel the creation of additional intelligence. Intelligence gets amassed in rapid succession. Boom, an intelligence chain reaction occurs, which is coined an intelligence explosion.

The AI community tends to attribute the initially formulated idea of an AI intelligence explosion to a research paper published in 1965 by Irving John Good entitled 'Speculations Concerning The First Ultraintelligent Machine' (Advances in Computers, Volume 6), in which Good made his now-famous prediction about the ultraintelligent machine.

Controversies About Intelligence Explosions

Let's consider some of the noteworthy controversies about intelligence explosions. First, we have no credible evidence that an intelligence explosion per se is an actual phenomenon. To clarify, yes, it is perhaps readily apparent that if you have some collected intelligence and combine it with other collected intelligence, the odds are that you will have more intelligence collected than you had to start with. There is a potential synergy of intelligence fueling more intelligence. But the conception that intelligence will run free with other intelligence in some computing environment and spark a boatload of intelligence, well, this is an interesting theory, and we have yet to see it happen on any meaningful scale. I'm not saying it can't happen. Never say never.

Second, the pace of an intelligence explosion is also a matter of great debate. The prevailing viewpoint is that once intelligence begins feeding off other intelligence, a rapid chain reaction will arise. Intelligence suddenly and with immense fury overflows into massive torrents of additional intelligence. One belief is that this will occur in the blink of an eye. Humans won't be able to see it happen and instead will merely be after-the-fact witnesses to the amazing result. Not everyone goes along with that instantaneous intelligence explosion conjecture.
Some say it might take minutes, hours, days, weeks, or maybe months. Others say it could take years, decades, or centuries. Nobody knows.

Starting And Stopping An Intelligence Explosion

There are additional controversies in this worrisome basket. How can we start an intelligence explosion? In other words, assume that humans want to have an intelligence explosion arise. The method of getting this to occur is unknown. Something must somehow spark the intelligence to mix with the other intelligence. What algorithm gets this to happen? One viewpoint is that humans won't find a way to make it happen, and instead it will just naturally occur. Imagine that we have tossed tons of intelligence into some kind of computing system. To our surprise, out of the blue, the intelligence starts mixing with the other intelligence. Exciting.

This brings us to another perhaps obvious question, namely, how will we stop an intelligence explosion? Maybe we can't stop it, and the intelligence will grow endlessly. Is that a good outcome or a bad outcome? Perhaps we can stop it, but we can't reignite it. Oops, if we stop the intelligence explosion too soon, we might have shot ourselves in the foot, since we didn't get as much new intelligence as we could have garnered.

A popular saga that gets a lot of media play is that an intelligence explosion will run amok. Things happen this way. A bunch of AI developers are sitting around toying with conventional AI when suddenly an intelligence explosion is spurred (the AI developers didn't make it happen; they were bystanders). The AI rapidly becomes AGI. Great. But the intelligence explosion keeps going, and we don't know how to stop it. Next thing we know, ASI has been reached. The qualm is that ASI is going to then decide it doesn't need humans around, or that the ASI might as well enslave us. You see, we accidentally slipped past AGI and inadvertently landed at ASI. The existential risk of ASI arises, ASI clobbers us, and we are caught completely flat-footed.

Timeline To AGI With Intelligence Explosion

Now that I've laid out the crux of what an intelligence explosion is, let's assume that we get lucky and have a relatively safe intelligence explosion that transforms conventional AI into AGI. We will set aside the slipping and sliding into ASI. Fortunately, just like in Goldilocks, the porridge won't be too hot or too cold. The intelligence explosion will take us straight to the right amount of intelligence that suffices for AGI. Period, end of story.

Here then is a strawman futures-forecast roadmap from 2025 to 2040 that encompasses an intelligence explosion that gets us to AGI:

  • Years 2025-2038 (before the intelligence explosion)
  • Years 2038-2039 (intelligence explosion)
  • Years 2039-2040 (AGI is attained)

Contemplating The Timeline

I'd ask you to contemplate the strawman timeline and consider where you will be and what you will be doing if an intelligence explosion happens in 2038 or 2039. You must admit, it would be quite a magical occurrence, hopefully with a societally upbeat result and not something gloomy. The Dalai Lama made this famous remark: 'It is important to direct our intelligence with good intentions. Without intelligence, we cannot accomplish very much. Without good intentions, the way we exercise our intelligence may have destructive results.'

You have a potential role in guiding where we go if the above timeline plays out. Will AGI be imbued with good intentions? Will we be able to work hand-in-hand with AGI and accomplish good intentions? It's up to you. Please consider doing whatever you can to leverage a treasured intelligence explosion to benefit humankind.

Microsoft says its new health AI beat doctors in accurate diagnoses by a mile

Yahoo · 25 minutes ago

  • Microsoft said its medical AI diagnosed cases four times as accurately as human doctors.
  • The AI system also solved cases "more cost-effectively" than its human counterparts, Microsoft said.
  • The study comes as AI's growing role in healthcare raises questions about its place in medicine.

Microsoft said its medical AI system diagnosed cases more accurately than human doctors by a wide margin. In a blog post published on Monday, the tech giant said its AI system, the Microsoft AI Diagnostic Orchestrator, diagnosed cases four times as accurately as a group of experienced physicians in a test. Microsoft's study comes as AI tools rapidly make their way into hospitals and clinics, raising questions about how much of medicine can or should be automated and what role doctors will play as diagnostic AI systems get more capable.

The experiment involved 304 case studies sourced from the New England Journal of Medicine. Both the AI and the physicians had to solve these cases step by step, just as they would in a real clinic: ordering tests, asking questions, and narrowing down possibilities. The AI system was paired with large language models from tech companies including OpenAI, Meta, Anthropic, and Google. When coupled with OpenAI's o3, the AI diagnostic system correctly solved 85.5% of the cases, Microsoft said. By contrast, 21 practicing physicians from the US and UK, each with five to 20 years of experience, averaged 20% accuracy across the completed cases, the company added. In the study, the doctors did not have access to resources they might typically tap for diagnostics, including coworkers, books, and AI.

The AI system also solved cases "more cost-effectively" than its human counterparts, Microsoft said. "Our findings also suggest that AI [can] reduce unnecessary healthcare costs. US health spending is nearing 20% of US GDP, with up to 25% of that estimated to be wasted," it added.

"We're taking a big step towards medical superintelligence," said Mustafa Suleyman, the CEO of Microsoft's AI division, in a post on X. He added that the cases used in the study are "some of the toughest and most diagnostically complex" a physician can face. Suleyman previously led AI efforts at Google. Microsoft did not respond to a request for comment from Business Insider.

Microsoft said in the blog post that AI "represents a complement to doctors and other health professionals." "While this technology is advancing rapidly, their clinical roles are much broader than simply making a diagnosis. They need to navigate ambiguity and build trust with patients and their families in a way that AI isn't set up to do," Microsoft said. "Clinical roles will, we believe, evolve with AI," it added.

Tech leaders like Microsoft cofounder Bill Gates have said that AI could help solve the long-standing shortage of doctors. "AI will come in and provide medical IQ, and there won't be a shortage," he said on an episode of the "People by WTF" podcast published in April. But doctors have told BI that AI can't and shouldn't replace clinicians just yet. AI can't replicate physicians' presence, empathy, and nuanced judgment in uncertain or complex conditions, said Dr. Shravan Verma, the CEO of a Singapore-based health tech startup. Chatbots and AI tools can handle the first mile of care, but they must escalate to qualified professionals when needed, he told BI last month.

Do you have a story to share about AI in healthcare? Contact this reporter at cmlee@ Read the original article on Business Insider
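The stepwise format described above (ask questions, order tests, then commit to a diagnosis, with each action accruing cost) can be illustrated with a small sketch. This is a hypothetical toy loop written only to clarify the evaluation setup; the case format, cost figure, and trivial rule-based agent are my own illustrative assumptions, not Microsoft's actual benchmark code or API:

```python
# Toy sketch of a sequential-diagnosis evaluation loop, loosely modeled on
# the step-by-step format described in the article. All names, costs, and
# the simple rule-based agent are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Case:
    presentation: str
    test_results: dict[str, str]   # test name -> result
    true_diagnosis: str

@dataclass
class Episode:
    cost: float = 0.0
    findings: list[str] = field(default_factory=list)

TEST_COST = 150.0                  # assumed flat cost per ordered test

def run_episode(case: Case, agent) -> tuple[bool, float]:
    """Let the agent order tests until it commits to a diagnosis."""
    ep = Episode()
    while True:
        action, arg = agent(case.presentation, ep.findings)
        if action == "order_test":
            ep.cost += TEST_COST
            ep.findings.append(f"{arg}: {case.test_results.get(arg, 'normal')}")
        elif action == "diagnose":
            return arg == case.true_diagnosis, ep.cost

def toy_agent(presentation, findings):
    # Deliberately simple policy: order one test, then decide from its result.
    if not findings:
        return "order_test", "troponin"
    if "elevated" in findings[0]:
        return "diagnose", "myocardial infarction"
    return "diagnose", "unknown"

case = Case(
    presentation="55-year-old with acute chest pain",
    test_results={"troponin": "elevated"},
    true_diagnosis="myocardial infarction",
)
correct, cost = run_episode(case, toy_agent)
print(f"correct={correct}, cost=${cost:.0f}")   # -> correct=True, cost=$150
```

Averaging `correct` and `cost` over many such cases yields exactly the two headline metrics the study reports: diagnostic accuracy and cost-effectiveness.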
