logo
Future Forecasting The Yearly Path That Will Advance AI To Reach AGI By 2040

Future Forecasting The Yearly Path That Will Advance AI To Reach AGI By 2040

Forbes05-06-2025
Future forecasting the yearly path of advancing todays to AGI by 2040.
In today's column, I am continuing my special series on the likely pathways that will get us from conventional AI to the avidly sought attainment of AGI (artificial general intelligence). AGI would be a type of AI that is fully on par with human intellect in all respects. I've previously outlined seven major paths that seem to be the most probable routes of advancing AI to reach AGI (see the link here).
Here, I undertake an analytically speculative deep dive into one of those paths, namely I explore the year-by-year aspects of the considered most-expected route, the linear path. Other upcoming postings will cover each of the other remaining paths. The linear path consists of AI being advanced incrementally, one step at a time until we arrive at AGI.
Let's talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
First, some fundamentals are required to set the stage for this weighty discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown as to whether we will reach AGI, or that maybe AGI will be achievable in decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
Right now, efforts to forecast when AGI is going to be attained consist principally of two paths.
First, there are highly vocal AI luminaires making individualized brazen predictions. Their headiness makes outsized media headlines. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. A somewhat quieter path is the advent of periodic surveys or polls of AI experts. This wisdom of the crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe that we will reach AGI by the year 2040.
Should you be swayed by the AI luminaries or more so by the AI experts and their scientific consensus?
Historically, the use of scientific consensus as a method of understanding scientific postures has been relatively popular and construed as the standard way of doing things. If you rely on an individual scientist, they might have their own quirky view of the matter. The beauty of consensus is that a majority or more of those in a given realm are putting their collective weight behind whatever position is being espoused.
The old adage is that two heads are better than one. In the case of scientific consensus, it might be dozens, hundreds, or thousands of heads that are better than one. For this discussion on the various pathways to AGI, I am going to proceed with the year 2040 as the consensus anticipated target date.
Besides the scientific consensus of AI experts, another newer and more expansive approach to gauging when AGI will be achieved is known as AGI convergence-of-evidence or AGI consilience, which I discuss at the link here.
As mentioned, in a previous posting I identified seven major pathways that AI is going to advance to become AGI (see the link here). The most often presumed path is the incremental progression trail. The AI industry tends to refer to this as the linear path. It is essentially slow and steady. Each of the other remaining major routes involves various twists and turns.
Here's my list of all seven major pathways getting us from contemporary AI to the treasured AGI:
You can apply those seven possible pathways to whatever AGI timeline that you want to come up with.
Let's undertake a handy divide-and-conquer approach to identify what must presumably happen on a year-by-year basis to get from current AI to AGI.
Here's how that goes.
We are living in 2025 and somehow are supposed to arrive at AGI by the year 2040. That's essentially 15 years of elapsed time. In the particular case of the linear path, the key assumption is that AI is advancing in a stepwise fashion each year. There aren't any sudden breakthroughs or miracles that perchance arise. It is steady work and requires earnestly keeping our nose to the grind and getting the job done in those fifteen years ahead.
The idea is to map out the next fifteen years and speculate what will happen with AI in each respective year.
This can be done in a forward-looking mode and also a backward-looking mode. The forward-looking entails thinking about the progress of AI on a year-by-year basis, starting now and culminating in arriving at AGI in 2040. The backward-looking mode involves starting with 2040 as the deadline for AGI and then working back from that achievement on a year-by-year basis to arrive at the year 2025 (matching AI presently). This combination of forward and backward envisioning is a typical hallmark of futurecasting.
Is this kind of a forecast of the future ironclad?
Nope.
If anyone could precisely lay out the next fifteen years of what will happen in AI, they probably would be as clairvoyant as Warren Buffett when it comes to predicting the stock market. Such a person could easily be awarded a Nobel Prize and ought to be one of the richest people ever.
All in all, this strawman that I show here is primarily meant to get the juices flowing on how we can be future forecasting the state of AI. It is a conjecture. It is speculative. But at least it has a reasonable basis and is not entirely arbitrary or totally artificial.
I went ahead and used the fifteen years of reaching AGI in 2040 as an illustrative example. It could be that 2050 is the date for AGI instead, and thus this journey will play out over 25 years. The timeline and mapping would then have 25 years to deal with rather than fifteen. If 2030 is going to be the AGI arrival year, the pathway would need to be markedly compressed.
I opted to identify AI technological advancements for each of the years and added some brief thoughts on the societal implications too. Here's why. AI ethics and AI law are bound to become increasingly vital and will to some degree foster AI advances and in other ways possibly dampen some AI advances, see my in-depth coverage of such tensions at the link here.
Here then is a strawman futures forecast year-by-year roadmap from 2025 to 2040 of a linear path getting us to AGI:
Year 2025: AI multi-modal models finally become robust and fully integrated into LLMs. Significant improvements in AI real-time reasoning, sensorimotor integration, and grounded language understanding occur. The use of AI in professional domains such as law, medicine, and the like rachet up. Regulatory frameworks remain sporadic and generally unadopted.
Year 2026: Agentic AI starts to blossom and become practical and widespread. AI systems with memory and planning capabilities achieve competence in open-ended tasks in simulation environments. Public interest in governing AI increases.
Year 2027: The use of AI large-scale world models spurs substantially improved AI capabilities. AI can now computationally improve from fewer examples via advancements in AI meta-learning. Some of these advances allow AI to be employed in white-collar jobs that have a mild displacement economically, but only to a minor degree.
Year 2028: AI agents have gained wide acceptance and are capable of executing multi-step tasks semi-autonomously in digital and physical domains, including robotics. AI becomes a key element as taught in schools and as used in education, co-teaching jointly with human teachers.
Year 2029: AI is advanced sufficiently to have a generalized understanding of physical causality and real-world constraints through embodied learning. Concerns about AI as a job displacer reach heightened attention.
Year 2030: Self-improving AI systems begin modifying their own code under controlled conditions, improving efficiency without human input. This is an important underpinning. Some claim that AGI is now just a year or two away, but this is premature, and ten more years will first take place.
Year 2031: Hybrid AI consisting of integrated cognitive architectures unifying symbolic reasoning, neural networks, and probabilistic models has become the new accepted approach to AI. Infighting among AI developers as to whether hybrid AI was the way to go has now evaporated. AI-based tutors fully surpass human teachers in personalization and subject mastery, putting human teachers at great job risk.
Year 2032: AI agents achieve human-level performance across most cognitive benchmarks, including abstraction, theory of mind (ToM), and cross-domain learning. This immensely exceeds prior versions of AI that did well on those metrics but not nearly to this degree. Industries begin to radically restructure and rethink their businesses with an AI-first mindset.
Year 2033: AI scalability alignment protocols improve in terms of human-AI values alignment. This opens the door to faster adoption of AI due to a belief that AI safety is getting stronger. Trust in AI grows. But so is societal dependence on AI.
Year 2034: AI interaction appears to be indistinguishable from human-to-human interaction, even as tested by those who are versed in tricking AI into revealing itself. The role of non-human intelligence and how AI stretches our understanding of philosophy, religion, and human psychology has become a high priority.
Year 2035: AI systems exhibit bona fide signs of self-reflection, not just routinized mimicry or parroting. Advances occur in having AI computationally learn from failure across domains and optimizing for long-term utility functions. Debates over some form of UBI (universal basic income) lead to various trials of the approach to aid human labor displacements due to AI.
Year 2036: AI advancement has led to fluid generalization across a wide swath of domains. Heated arguments take place about whether AGI is emerging, some say it is, and others insist that a scaling wall is about to be hit and that this is the best that AI will be. Nations begin to covet their AI and set up barriers to prevent other nations from stealing or copying the early AGI systems.
Year 2037: Advances in AI showcase human-like situational adaptability and innovation. New inventions and scientific discoveries are being led by AI. Questions arise about whether this pre-AGI has sufficient moral reasoning and human goal alignment.
Year 2038: AI systems now embody persistent identities, seemingly able to reflect on experiences across time. Experts believe we are on the cusp of AI reaching cognitive coherence akin to humans. Worldwide discourse on the legal personhood and rights of AI intensifies.
Year 2039: Some of the last barriers to acceptance of AI as nearing AGI are overcome when AI demonstrates creativity, emotional nuance, and abstract reasoning in diverse contexts. This was one of the last straws on the camel's back. Existential risks and utopian visions fully dominate public apprehensions.
Year 2040: General agreement occurs that AGI has now been attained, though it is still early days of AGI and some are not yet convinced that AGI is truly achieved. Society enters a transitional phase: post-scarcity economics, redefinition of human purpose, and consideration of co-evolution with AGI.
Mull over the strawman timeline and consider where you will be and what you will be doing during each of those fifteen years.
One viewpoint is that we are all along for the ride and there isn't much that anyone can individually do. I don't agree with that sentiment. Any of us can make a difference in how AI plays out and what the trajectory and impact of reaching AGI is going to be.
As per the famous words of Abraham Lincoln: 'The most reliable way to predict the future is to create it.'
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Good foundations: The ROI of AI in the architectural design phase
Good foundations: The ROI of AI in the architectural design phase

Fast Company

time20 minutes ago

  • Fast Company

Good foundations: The ROI of AI in the architectural design phase

Construction isn't getting cheaper, and demand for space isn't getting smaller. Architecture 2030 projects the world will add 2.6 trillion square feet of new floor space between 2020 and 2060. Not only do we need more buildings, but we need ones that are efficient, cost-effective, and support occupant well-being. The construction industry is woefully inefficient. A 2015 KPMG report found only 25% of construction projects stayed within 10% of their original deadlines. Research has found that design change is one of the top reasons for cost overrun. How can we improve the architectural design process to eliminate inefficiencies, identify cost savings, and create better projects? The answer is AI. It's not enough to aim for better buildings alone. We need a better building process. Breakthroughs in AI offer powerful solutions—but before AI can revolutionize the process, it must first confront architectural design, the foundation of every successful project. This phase is characterized by creativity and compromise, where architects strike a balance between aesthetics and function, stakeholder expectations and regulations, and innovation and cost. Understanding the challenges in this stage is key to appreciating how AI can elevate the design process. THE OBSTACLES OF THE ARCHITECTURAL DESIGN PHASE 1. Predesign The predesign phase is meticulous. Architects must create a design that fits project goals, stays within budget, and follows zoning laws and building codes. Projects involving specialized industries such as medicine or chemical storage add further complications. AI can expedite this process by analyzing a specific site's zoning laws and environmental regulations and indicating potential site-building challenges, such as slope gradient, flood risk, building height restrictions, etc. AI can also research these concurrently, saving more time. 2. Schematic Design Next is schematic design, where the project's layout and design begin to take shape. With AI, architects can create numerous iterations of a design in record time, allowing stakeholders to provide feedback and ask for alterations without slowing the process. They can also simulate and test aspects such as energy efficiency and spatial layout. AI-powered cost estimation solutions, which draw from real-time material inventory databases, allow architects to integrate financial considerations earlier in the design process. 3. Developmental Design This is the stage where architects dive into the design's technical details. AI streamlines material specification by enabling rapid creation of multiple design versions with varying material choices and associated cost estimates. Teams can customize estimation tools to align with sustainability targets and material preferences. Equally importantly, AI facilitates quick integration of stakeholder feedback, which speeds design iteration and finalization. Despite its benefits, like anything new, AI has its drawbacks and challenges. AI's major advantage is its ability to use past data to generate insights and predictions for future projects. A firm could feed an architectural model five years of work and generate a new design based on that data; however, AI's accuracy depends on the data it's given. If the project data is full of errors or incomplete, those flaws will be reflected in the results. Unlike humans, who can question information, AI takes what it is given as fact. This data objectivity can create bias. While AI models themselves aren't biased, the human-generated data they're trained on often is. For example, an early version of an image-generating AI was criticized online because, when prompted to generate an image of a 'CEO' or 'director,' its output primarily featured white men. Why? Stock photography sites in the model's training data prominently labeled photos of white men as CEOs. AI models are not magical solutions. They're tools, and like any tool, people must know how to use them correctly. KEY CONSIDERATIONS BEFORE BUILDING WITH AI Architecture firms must be strategic to incorporate AI into their design process. Simply picking a tool and telling employees to use it is a quick way to waste money. Here are a few tips: • Define your objectives. What areas do you want to improve? Where are your bottlenecks? This will help you find the right tool for your needs. • Research integrations. AI is useless if it can't integrate with your most important systems. • Include employees. Ownership in the process ensures a better fit for your business's needs and higher adoption rates. • Start small. Select one project as a test run; when it's over, talk with your team to refine the process for the next one. ARCHITECTURAL BUILDING DESIGN WITH AI ASSISTANCE In the architectural design process, time is money, and so is predictability. AI presents a strategic advantage in an industry known for razor-thin margins and chronic overruns. By embedding AI into the process, architects can achieve faster approvals, reduce design-related rework, optimize material use, and gain early cost clarity—all while preserving design integrity and meeting increasingly complex code and sustainability requirements. The future of development lies in building better, and it starts with a better process, assisted by AI.

Beware! Research shows Gmail's AI email summaries can be hacked
Beware! Research shows Gmail's AI email summaries can be hacked

Android Authority

time29 minutes ago

  • Android Authority

Beware! Research shows Gmail's AI email summaries can be hacked

Edgar Cervantes / Android Authority TL;DR A researcher recently demonstrated a Gemini flaw that could be exploited to inject malicious instructions while using Gmail's email summary feature. These instructions were hidden in plain text under the body of the email. Google responded to the research, stating that it had updated its models to identify such prompt engineering measures and block phishing links. Big tech companies have been billing AI as the ubiquitous tool that frees us from mundane activities, and that includes reading long emails thoroughly. But little do we hear about the possibility of AI unknowingly leading us into traps that may be used to steal our sensitive data. That's precisely what recent research highlighted when it discussed the possibility of hackers using Gemini as means for phishing. Recently, a cybersecurity researcher demonstrated a vulnerability targeting Google Workspace users where Gemini can be manipulated to display malicious instructions. The vulnerability was submitted to 0din, which is the Mozilla Foundation's bug bounty program for AI applications, and talks more specifically about the ease of misguiding Gmail's email summarization feature for Google Workspace subscribers. The submission demonstrates how deceptive prompts can be inserted into an email's body in plain HTML format or as text hidden with an invisible font color. Gemini interprets these prompts as commands and can display them in the email summary without any caution. Since the message is hidden in the body of the original email, it goes unnoticed by the receiver, who is likely to believe it to be a warning generated by Gemini. Researcher blurrylogic pointed out that this can be exploited to display messages that may compel the recipient to share sensitive information without proper verification, which could lead to their credentials being stolen using social engineering. Shortly after the findings were published on 0din, Google shared details about steps it had taken to make Gemini more resilient against such tactics. Addressing reports about Gemini's vulnerability, Google said it continually updates its repository of malicious prompts or instructions that can manipulate the chatbot's output. The underlying machine learning models are constantly trained to ensure they don't respond to malicious instructions. Google Google also listed other steps it takes to counter different forms of phishing attempts. It noted that Gemini identifies suspicious or rogue links disguised as useful ones in the email body and redacts them from the email summaries. To further strengthen its security measures, Gemini also requests confirmation for actions such as deleting specific tasks. Despite Google's prompt measures, we should be warned that online threat perpetrators usually think one step ahead. Therefore, we advise against blindly trusting any messages in Gemini that prompt actions such as clicking a link, making a call, or emailing a specific person. Got a tip? Talk to us! Email our staff at Email our staff at news@ . You can stay anonymous or get credit for the info, it's your choice.

Boulder startup Ridley aims to revolutionize home sales with AI
Boulder startup Ridley aims to revolutionize home sales with AI

Axios

time41 minutes ago

  • Axios

Boulder startup Ridley aims to revolutionize home sales with AI

A Boulder entrepreneur who sold his own house without a real estate agent — and went viral doing it — is launching a startup Tuesday to help others do the same. Why it matters: Real estate commissions, typically 5%-6%, remain stubbornly high, even after last year's landmark antitrust settlement was supposed to shake up how agents get paid. Driving the news: Mike Chambers is debuting his AI-fueled agent-free platform Ridley in Colorado. He made national headlines earlier this year when he successfully sought to prove he could sell his house without an agent after taking to social media with the handle @realtorshateme to chronicle the DIY process. He says most sellers don't need an agent — just the right tools. Ridley aims to be that toolkit. What he's saying: "The No. 1 mission of this company is to empower consumers to take control of this process on their own and save tens of thousands of dollars in unnecessary fees in the process," Chambers told Axios Denver. How it works: Ridley's desktop-only platform breaks the home-selling process into stages with checklists, AI guidance and human support. Tools include: Pricing guidance using AI that factors in upgrades, defects, features and market data. A property page builder for direct offers and showings. MLS access via partner brokerages, plus syndication to Zillow, Redfin and A document center with smart pre-filled forms and highlighted explanations. A vendor scheduler via Thumbtack for photographers, inspectors and more. By the numbers: It's $999 for the base service, with add-ons available for MLS access and legal support. Between the lines: Despite his cheeky Instagram handle, Chambers insists he's not "anti-agent" — just anti-system. He's also not naive. He "100%" expects industry backlash. What's next: Chambers plans to expand Ridley to other states. He's also building out an agent mode for professionals who want to use the same tools or offer à la carte services to sellers who still want a hand.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store