
Latest news with #AIExperts

Future Forecasting A Massive Intelligence Explosion On The Path From AI To AGI

Forbes

a day ago

  • Science
  • Forbes

Future Forecasting A Massive Intelligence Explosion On The Path From AI To AGI

How an intelligence explosion might lift us from conventional AI to the vaunted AGI (artificial general intelligence).

In today's column, I continue my special series covering the anticipated pathways that will get us from conventional AI to the hoped-for AGI (artificial general intelligence). The focus here is an analytically speculative deep dive into the detailed aspects of a so-called intelligence explosion during the journey to AGI. I've previously outlined that there are seven major paths for advancing AI to reach AGI (see the link here); one of those paths is the improbable moonshot path, whereby a hypothesized breakthrough such as an intelligence explosion suddenly and somewhat miraculously spurs AGI to arise. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage of the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). For readers who have been following along on my special series about AGI pathways, please note that I provide similar background at the start of this piece as I did previously, setting the stage for new readers.

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. A great deal of research is going on to further advance AI. The general goal is to reach artificial general intelligence (AGI) or perhaps even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI considered on par with human intellect, seemingly able to match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways; the idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI. In fact, it is unknown whether we will ever reach AGI; it might be achievable in decades, or perhaps centuries from now. The AGI attainment dates floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

AI Experts Consensus On AGI Date

Right now, efforts to forecast when AGI will be attained consist principally of two paths. First, there are highly vocal AI luminaries making individualized, brazen predictions. Their headiness makes outsized media headlines. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. A somewhat quieter path is the advent of periodic surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe we will reach AGI by the year 2040.

Should you be swayed by the AI luminaries, or more so by the AI experts and their scientific consensus? Historically, the use of scientific consensus as a method of understanding scientific postures has been relatively popular and construed as the standard way of doing things. If you rely on an individual scientist, they might have their own quirky view of the matter. The beauty of consensus is that a majority or more of those in a given realm put their collective weight behind whatever position is being espoused. The old adage is that two heads are better than one; in the case of scientific consensus, it might be dozens, hundreds, or thousands of heads that are better than one. For this discussion on the various pathways to AGI, I am going to proceed with the year 2040 as the consensus anticipated target date.
Besides the scientific consensus of AI experts, another newer and more expansive approach to gauging when AGI will be achieved is known as AGI convergence-of-evidence or AGI consilience, which I discuss at the link here.

Seven Major Pathways

As mentioned, in a previous posting I identified seven major pathways by which AI is going to advance to become AGI (see the link here). Here's my list of all seven major pathways getting us from contemporary AI to the treasured AGI: You can apply those seven possible pathways to whatever AGI timeline you want to come up with.

Futures Forecasting

Let's undertake a handy divide-and-conquer approach to identify what must presumably happen to get from current AI to AGI. We are living in 2025 and somehow are supposed to arrive at AGI by the year 2040. That's essentially fifteen years of elapsed time. The idea is to map out the next fifteen years and speculate about what will happen with AI during that journey. This can be done in a forward-looking mode and also a backward-looking mode. The forward-looking mode entails thinking about the progress of AI on a year-by-year basis, starting now and culminating in arriving at AGI in 2040. The backward-looking mode involves starting with 2040 as the deadline for AGI and then working back from that achievement on a year-by-year basis to arrive at the year 2025 (matching AI presently). This combination of forward and backward envisioning is a typical hallmark of futurecasting.

Is this kind of forecast of the future ironclad? Nope. If anyone could precisely lay out the next fifteen years of what will happen in AI, they would be as clairvoyant as Warren Buffett when it comes to predicting the stock market. Such a person could easily be awarded a Nobel Prize and ought to be one of the richest people ever. All in all, the strawman that I show here is primarily meant to get the juices flowing on how we can forecast the future state of AI. It is a conjecture. It is speculative.
But at least it has a reasonable basis and is not entirely arbitrary or totally artificial. I went ahead and used the fifteen years of reaching AGI in 2040 as an illustrative example. It could be that 2050 is the date for AGI instead, and thus this journey would play out over 25 years. The timeline and mapping would then have 25 years to deal with rather than fifteen. If 2030 is going to be the AGI arrival year, the pathway would need to be markedly compressed.

Intelligence Explosion On The Way To AGI

The moonshot path entails a sudden and generally unexpected radical breakthrough that swiftly transforms conventional AI into AGI. All kinds of wild speculation exist about what such a breakthrough might consist of; see my discussion at the link here. One of the most famous postulated breakthroughs would be the advent of an intelligence explosion. The idea is that once an intelligence explosion occurs, assuming such a phenomenon ever happens, AI will in rapid-fire progression accelerate into becoming AGI. This type of path stands in stark contrast to a linear pathway, in which the progression of AI toward AGI is relatively equal each year and consists of a gradual incremental climb from conventional AI to AGI. I laid out the details of the linear path in a prior posting; see the link here.

When would the intelligence explosion occur? Since we are assuming a timeline of fifteen years and the prediction is that AGI will be attained in 2040, the logical place for an intelligence explosion to occur is right toward the 2040 date, perhaps in 2039 or 2038. This makes logical sense, since if the intelligence explosion happened sooner, we would apparently reach AGI sooner. For example, suppose the intelligence explosion occurs in 2032. If the intelligence explosion indeed garners us AGI, we would declare 2032 or 2033 as the AGI date rather than 2040.
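As a concrete aside, the year-by-year forward and backward mapping described earlier can be scaffolded in a few lines. This is my own minimal sketch, not something from the column; the start year and AGI target year are the illustrative assumptions already discussed (2025 and 2040), and the compressed variant shows how much shorter the map becomes if AGI were to arrive in 2030:

```python
# Minimal scaffold for the forward/backward futurecasting exercise.
# Start year and AGI target year are illustrative assumptions.

def years_to_map(start_year: int, agi_year: int) -> list[int]:
    """All years to attach speculative milestones to, start through AGI."""
    return list(range(start_year, agi_year + 1))

def forward_and_backward(start_year: int = 2025, agi_year: int = 2040):
    forward = years_to_map(start_year, agi_year)   # 2025 -> 2040
    backward = forward[::-1]                       # 2040 -> 2025
    return forward, backward

forward, backward = forward_and_backward()
# Compressed schedule if AGI were to arrive in 2030 instead of 2040:
compressed, _ = forward_and_backward(agi_year=2030)
```

Each year in the forward list would get a speculated milestone, and the backward list is walked to check that the milestones plausibly chain back from the 2040 deadline to the present.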
Let's use this as our postulated timeline in this context:

Defining An Intelligence Explosion

You might be curious what an intelligence explosion would consist of and why it would necessarily seem to achieve AGI. The best way to conceive of an intelligence explosion is to first reflect on chain reactions such as those that occur in an atomic bomb or nuclear reactor. We all nowadays know that atomic particles can be forced or driven into wildly bouncing off each other, rapidly progressing until a massive explosion or burst of energy results. This is generally taught in school as a fundamental physics principle, and many blockbuster movies have dramatically showcased this activity (such as Christopher Nolan's famous Oppenheimer film).

A theory in the AI community is that intelligence can do likewise. It goes like this. You bring together a whole bunch of intelligence and get that intelligence to feed off the collection in hand. Almost like catching fire, at some point the intelligence will mix with and essentially fuel the creation of additional intelligence. Intelligence gets amassed in rapid succession. Boom, an intelligence chain reaction occurs, which is coined an intelligence explosion.

The AI community tends to attribute the initial formulation of the idea of an AI intelligence explosion to a research paper published in 1965 by Irving John Good entitled 'Speculations Concerning The First Ultraintelligent Machine' (Advances in Computers, Volume 6). Good made this prediction in his article:

Controversies About Intelligence Explosions

Let's consider some of the noteworthy controversies about intelligence explosions. First, we have no credible evidence that an intelligence explosion per se is an actual phenomenon. To clarify, yes, it is perhaps readily apparent that if you have some collected intelligence and combine it with other collected intelligence, the odds are that you will end up with more intelligence than you started with.
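As a purely illustrative aside (my own toy model, not something from the column or from Good's paper), the chain-reaction intuition can be simulated in a few lines. The rate constant and feedback exponents below are arbitrary assumptions; the point is only that when each gain in capability feeds back superlinearly on current capability, growth blows up in finite time, whereas linear feedback yields only a steady exponential climb akin to the gradual linear pathway:

```python
# Toy model of "intelligence feeding off intelligence".
# k (growth rate) and the feedback exponents are arbitrary illustrative choices.

def simulate(feedback_exponent: float, k: float = 0.1, years: int = 15,
             steps_per_year: int = 1000) -> list[float]:
    """Euler-integrate dI/dt = k * I**feedback_exponent starting from I = 1.
    Returns end-of-year capability levels; records infinity and stops early
    if capability diverges (the 'explosion')."""
    capability = 1.0
    dt = 1.0 / steps_per_year
    yearly = []
    for step in range(years * steps_per_year):
        capability += k * (capability ** feedback_exponent) * dt
        if capability > 1e12:            # treat as a runaway explosion
            yearly.append(float("inf"))
            break
        if (step + 1) % steps_per_year == 0:
            yearly.append(capability)
    return yearly

gradual = simulate(1.0)   # linear feedback: steady exponential climb
runaway = simulate(2.0)   # superlinear feedback: finite-time blow-up
```

With these made-up numbers, the linear-feedback run stays modest over all fifteen years, while the superlinear run diverges around year ten, which is one way to see why the debated pace of an explosion hinges entirely on how strongly intelligence is assumed to feed back on itself.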
There is a potential synergy of intelligence fueling more intelligence. But the conception that intelligence will run free with other intelligence in some computing environment and spark a boatload of intelligence, well, this is an interesting theory, and we have yet to see it happen on any meaningful scale. I'm not saying it can't happen. Never say never.

Second, the pace of an intelligence explosion is also a matter of great debate. The prevailing viewpoint is that once intelligence begins feeding off other intelligence, a rapid chain reaction will arise. Intelligence suddenly and with immense fury overflows into massive torrents of additional intelligence. One belief is that this will occur in the blink of an eye; humans won't be able to see it happen and will merely be after-the-fact witnesses to the amazing result. Not everyone goes along with that instantaneous-explosion conjecture. Some say it might take minutes, hours, days, weeks, or maybe months. Others say it could take years, decades, or centuries. Nobody knows.

Starting And Stopping An Intelligence Explosion

There are additional controversies in this worrisome basket. How can we start an intelligence explosion? In other words, assume that humans want an intelligence explosion to arise. The method of getting this to occur is unknown. Something must somehow spark the intelligence to mix with the other intelligence. What algorithm gets this to happen? One viewpoint is that humans won't find a way to make it happen; instead, it will just naturally occur. Imagine that we have tossed tons of intelligence into some kind of computing system and, to our surprise, out of the blue, the intelligence starts mixing with the other intelligence. Exciting.

This brings us to another perhaps obvious question, namely, how will we stop an intelligence explosion? Maybe we can't stop it, and the intelligence will grow endlessly. Is that a good outcome or a bad outcome?
Perhaps we can stop it but can't reignite it. Oops, if we stop the intelligence explosion too soon, we might have shot ourselves in the foot, since we didn't garner as much new intelligence as we could have.

A popular saga that gets a lot of media play is that an intelligence explosion will run amok. Things happen this way. A bunch of AI developers are sitting around toying with conventional AI when suddenly an intelligence explosion is spurred (the AI developers didn't make it happen; they were bystanders). The AI rapidly becomes AGI. Great. But the intelligence explosion keeps going, and we don't know how to stop it. The next thing we know, ASI has been reached. The qualm is that ASI will then decide it doesn't need humans around, or that it might as well enslave us. You see, we accidentally slipped past AGI and inadvertently landed at ASI. The existential risk of ASI arises, ASI clobbers us, and we are caught completely flat-footed.

Timeline To AGI With Intelligence Explosion

Now that I've laid out the crux of what an intelligence explosion is, let's assume that we get lucky and have a relatively safe intelligence explosion that transforms conventional AI into AGI. We will set aside the slipping and sliding into ASI. Fortunately, just as in Goldilocks, the porridge won't be too hot or too cold; the intelligence explosion will take us straight to the right amount of intelligence that suffices for AGI. Period, end of story. Here, then, is a strawman futures-forecast roadmap from 2025 to 2040 that encompasses an intelligence explosion getting us to AGI:

  • Years 2025-2038 (Before the intelligence explosion):
  • Years 2038-2039 (Intelligence explosion):
  • Years 2039-2040 (AGI is attained):

Contemplating The Timeline

I'd ask you to contemplate the strawman timeline and consider where you will be and what you will be doing if an intelligence explosion happens in 2038 or 2039.
You must admit, it would be quite a magical occurrence, hopefully with a societally upbeat result and not something gloomy. The Dalai Lama made this famous remark: 'It is important to direct our intelligence with good intentions. Without intelligence, we cannot accomplish very much. Without good intentions, the way we exercise our intelligence may have destructive results.' You have a potential role in guiding where we go if the above timeline plays out. Will AGI be imbued with good intentions? Will we be able to work hand-in-hand with AGI and accomplish good intentions? It's up to you. Please consider doing whatever you can to leverage a treasured intelligence explosion to benefit humankind.

Uber Expands AI Solutions Platform to 30 Countries With New Training Tools

Yahoo

23-06-2025

  • Business
  • Yahoo

Uber Expands AI Solutions Platform to 30 Countries With New Training Tools

Uber Technologies, Inc. (NYSE:UBER) is one of the 10 AI Stocks in the Spotlight. On June 20, the company announced that it has expanded its AI data services business, Uber AI Solutions, enabling its technology platform to support AI labs and enterprises around the world. Uber AI Solutions is now available in 30 countries, offering a platform that connects enterprises to global talent, including experts in coding, finance, law, science, and linguistics. The company has a new data foundry that provides massive datasets to train large AI models. Uber AI Solutions will also provide the tools and data to help train smart AI agents, helping them navigate real-world business processes.

The company is also making its internal platforms available to enterprise clients. These platforms include AI-powered smart onboarding, quality checks, smart task decomposition and routing, and others that ensure accuracy and efficiency. Together, these capabilities and offerings position Uber AI Solutions to become the global interface between humans and machines, redefining AI collaboration. Uber is also planning to develop an AI-powered interface, enabling clients to describe their data needs in plain language; the platform would then handle tasks such as setup, assignment, workflow optimization, and quality management for scalable AI training.

'We're bringing together Uber's platform, people, and AI systems to help other organizations build smarter AI more quickly. With today's updates, we're scaling our platform globally to meet the growing demand for reliable, real-world AI data.' -Megha Yethadka, GM and Head of Uber AI Solutions.

Uber Technologies, Inc. (NYSE:UBER) is engaged in developing and operating proprietary technology applications. While we acknowledge the potential of UBER as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk.
Disclosure: None.
