
How Close Is The World To AGI?

Forbes

02-07-2025



Tarun Eldho Alias is a technology leader with over a decade of experience in software development and is the founder and CTO of Neem Inc. During a recent discussion with a data scientist who studies the limitations of current AI architectures and the breakthroughs needed for future general intelligence, a key question sparked a critical conversation: With the rise of AI tools like ChatGPT, how close are we to artificial general intelligence? While the capabilities of LLMs are impressive and their commercial value is beyond doubt, most people can agree that they aren't truly intelligent. They lack a deeper understanding of the basic concepts we take for granted, and they don't have the agency that humans have. Many experts claim we may be closer to AGI than most people realize, but are LLMs going to get us there?

The Expectations Of AGI

Before we can debate this, let's look at the traits people most commonly expect from an AGI:

• Generalization: The ability to learn and transfer knowledge from one domain to another and to learn a variety of tasks or functions using the same model or architecture.

• Reasoning: The ability to solve new problems without being specifically instructed or trained on similar problems.

• Awareness: The contextual understanding of a situation or task, and the ability to maintain continuity of that understanding in both the short and long term.

• Self-Supervised Learning: The ability to learn both knowledge and contexts without explicit labels, only through interaction with the environment and occasional prompting.

• Continuous Learning: The ability to learn continuously through interaction with the environment, without separate training and inference phases.

Current LLMs display some of these traits, but only in limited forms. They achieve generalization and some reasoning through training on massive datasets, a process that often takes weeks or months.
However, these models build statistical correlations rather than causal, hierarchical world models. Incorporating new information is also difficult: Updating an LLM with fresh data carries a high risk of erasing the knowledge learned from past data. And they occasionally suffer from poor contextual awareness, particularly during long or complex conversations.

In contrast, the human brain operates very differently. We interact with the world through our bodies, generating data via sensory input and motor actions. Learning happens continuously as we compare our predictions with sensory feedback and adjust our internal models accordingly. This lifelong, embodied learning process creates a rich and evolving world model: a contextual foundation that shapes all perception and decision-making. Knowledge is built hierarchically, with simple patterns becoming the building blocks for increasingly complex concepts. Over time, we form new associations from past experiences, enabling us to solve problems we've never encountered before.

From LLMs To AGI: Bridging The Gaps

To transition from LLMs to AGI, we need to overcome several major limitations and introduce fundamentally new capabilities that most current AI systems lack. These include the following.

Training a machine learning model is essentially an optimization problem: It involves adjusting the weights between nodes in a neural network to minimize prediction errors across massive datasets. This is typically done using an algorithm called gradient descent, which requires significant computational power, especially for large language models trained on nearly the entire internet. These models often contain billions of parameters, making training both resource-intensive and time-consuming. To move toward more efficient and adaptable systems, we need new learning algorithms, potentially inspired by biological processes like Hebbian learning, in which connections strengthen based on experience.
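A minimal sketch of such a Hebbian update, on a toy NumPy layer, can make the contrast with gradient descent concrete. The layer sizes, learning rate and decay term below are illustrative assumptions, not a prescription:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer: 4 input neurons feeding 3 output neurons.
W = rng.normal(scale=0.1, size=(3, 4))

def hebbian_step(W, x, eta=0.01, decay=0.001):
    """One local Hebbian update: connections between co-active
    pre- and post-synaptic neurons strengthen ("fire together,
    wire together"), with a small decay term for stability."""
    y = np.tanh(W @ x)          # post-synaptic activity
    W += eta * np.outer(y, x)   # Hebbian term: dW_ij proportional to y_i * x_j
    W -= decay * W              # decay prevents runaway weight growth
    return W, y

# Repeatedly present one input pattern: weights from the active
# inputs (positions 0 and 2) strengthen; the rest slowly decay.
x = np.array([1.0, 0.0, 1.0, 0.0])
W0 = W.copy()
for _ in range(100):
    W, y = hebbian_step(W, x)
```

Unlike gradient descent, each update uses only quantities local to a connection (its pre- and post-synaptic activity), with no global loss or backward pass, and every forward pass is also a learning step.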
Such an approach could support continuous training, enabling models to learn and make predictions at the same time rather than in separate phases.

In current neural networks, knowledge is stored in a highly distributed and overlapping way. This makes it difficult to isolate specific pathways or selectively adjust weights without disrupting previously learned information, a challenge known as catastrophic forgetting. To overcome this, we need algorithms and architectures that promote self-organizing structures and hierarchical representations. These would allow knowledge to be compartmentalized more effectively, enabling models to learn continuously without overwriting what they already know. Hierarchies also encourage the reuse of existing knowledge, which can significantly accelerate the learning of new concepts.

Our brains constantly construct and maintain internal models of the world that are tailored to the context of each situation. These models help us store relevant knowledge and use it to make accurate predictions. For example, when we move through our environment, we rely on a combination of visual input, somatosensory feedback and motor activity to build a three-dimensional mental map of the space around us. This internal model is what allows us to walk to the bed in the dark without bumping into anything, or to reach for an object we've dropped without needing to see it.

Planning is what allows us to apply our knowledge in pursuit of favorable outcomes. At the core of this ability is an intrinsic reward model: a mechanism that enables a system to evaluate and compare multiple possible outcomes and choose the most beneficial one. To do this effectively, the model must map sequences of events onto a timeline and simulate the sensory feedback that different actions would produce. By estimating the reward associated with each sequence, the model can identify and select the most promising course of action.
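This simulate-and-score loop can be sketched in a few lines. Everything below is a toy stand-in: the one-dimensional world, the reward function, the action set and the planning horizon are all invented for illustration:

```python
import itertools

# Toy internal world model: an agent on a line, moving toward a goal.
GOAL = 5

def reward(pos):
    """Intrinsic reward model: the closer to the goal, the better."""
    return -abs(GOAL - pos)

def simulate(pos, actions):
    """Roll a candidate action sequence through the internal world
    model, returning the predicted sequence of future states."""
    states = []
    for a in actions:
        pos += a
        states.append(pos)
    return states

def score(states):
    """Map predicted states onto a timeline and sum discounted
    rewards, so earlier progress outweighs later progress."""
    return sum((0.9 ** t) * reward(s) for t, s in enumerate(states))

def plan(pos, horizon=3):
    """Enumerate short action sequences, simulate each one, and
    return the sequence whose predicted outcome scores highest."""
    candidates = itertools.product([-1, 0, 1], repeat=horizon)
    return max(candidates, key=lambda seq: score(simulate(pos, seq)))

# Replanning loop: execute only the first action of the best plan,
# observe the resulting state, then plan again from there.
pos = 0
trajectory = [pos]
for _ in range(6):
    best = plan(pos)
    pos += best[0]          # act on the first planned action only
    trajectory.append(pos)
```

Executing only the first action and then replanning from the new state is the same receding-horizon pattern used in model-predictive control, and it is what lets the plan absorb fresh sensory input at every step.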
As new sensory input becomes available, the plan is continuously updated to adapt to changing circumstances.

Closing Thoughts

Given that no current model possesses all of the key attributes required for AGI, and that some of the necessary algorithms haven't even been developed yet, it's safe to say that we're still quite a way off from achieving true AGI. At the same time, our understanding of how the human brain accomplishes these capabilities remains incomplete. Neuroscience continues to make progress, but studying the brain in action is limited by the tools currently available, making breakthroughs slow and complex.

That said, progress is steady. As research advances, AGI has the potential to unlock far more capable forms of machine intelligence, powering autonomous agents, self-driving vehicles, robotics and systems that can help tackle some of humanity's most difficult problems.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
