SpaceX Starship rocket explodes in setback to Musk's Mars mission
The explosion occurred around 11 p.m. local time while Starship was on a test stand at SpaceX's Starbase facility near Brownsville, Texas, being prepared for its tenth test flight, the company said in a post on Musk's social-media platform X.
The company attributed the blast to a "major anomaly" and said all personnel were safe.
"Preliminary data suggests that a nitrogen COPV in the payload bay failed below its proof pressure," Musk said in a post on X, referring to a nitrogen gas storage unit known as a composite overwrapped pressure vessel. "If further investigation confirms that this is what happened, it is the first time ever for this design," he continued.
SpaceX didn't immediately respond to a request for further comment.

Related Articles

Yahoo
18 minutes ago
Satellite backed by Google, Bezos and Musk to track methane is lost in space
An $88mn satellite backed by Google, Jeff Bezos and Elon Musk's SpaceX has been lost in space, in a blow to global efforts to detect the …


The Hill
36 minutes ago
New brain implants poised to help people with disabilities
People who have lost the ability to move or speak may have the option to do so again, thanks to the development of surgically implanted devices that link the brain to a computer. Several firms are now set to take the brain-computer interface, or BCI, from the experimental stage to commercial use, more than 20 years after researchers first demonstrated that a person could move a computer cursor with their thoughts.

Implanted BCIs work by detecting and decoding signals coming from areas of the brain that control movement or speech. These signals indicate when a person is trying to move a limb or speak a word. Tech billionaire Elon Musk's Neuralink is one of the most talked-about players in the BCI field, but the first product to reach the market could come from a competitor such as Blackrock Neurotech. The first BCI users are likely to be people living with paralysis from a spinal injury or amyotrophic lateral sclerosis. Early products will allow them to control a computer cursor or generate artificial speech.
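The detect-and-decode step described above can be sketched in a few lines. This is a toy illustration only, not any company's actual pipeline: the channel count, decoder weights and firing rates below are all invented, and real systems fit the decoder in a calibration session and add filtering and smoothing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 96-electrode array and a linear decoder that
# maps firing rates to a 2-D cursor velocity (vx, vy).
n_channels = 96
W = rng.normal(size=(2, n_channels))  # decoder weights (made up; normally calibrated)

# One time-bin of recorded activity: spikes per second on each channel.
firing_rates = rng.poisson(lam=10, size=n_channels)

# Decode: cursor velocity is a weighted sum of channel activity.
velocity = W @ firing_rates
print("decoded cursor velocity (vx, vy):", velocity)
```

The point of the sketch is the shape of the problem: many noisy neural channels in, a low-dimensional movement command out, recomputed every few tens of milliseconds.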


Forbes
an hour ago
How Close Is The World To AGI?
Tarun Eldho Alias is a technology leader with over a decade of experience in software development and the founder and CTO of Neem Inc.

During a recent discussion with a data scientist who studies the limitations of current AI architectures and the breakthroughs needed for future general intelligence, a key question sparked a critical conversation: with the rise of AI tools like ChatGPT, how close are we to artificial general intelligence? While the capabilities of LLMs are impressive and their commercial value is beyond doubt, most people can agree that they aren't really intelligent. They lack a deeper understanding of the basic concepts we take for granted, and they don't have the agency that humans have. While many experts claim we may be closer to AGI than most people realize, are LLMs going to get us there?

The Expectations Of AGI

Before we can debate this, let's look at the traits most commonly expected of an AGI:

• Generalization: The ability to transfer knowledge from one domain to another and to learn a variety of tasks or functions using the same model or architecture.

• Reasoning: The ability to solve new problems without being specifically instructed or trained on similar problems.

• Awareness: The contextual understanding of a situation or task, and the ability to maintain continuity of that understanding in both the short and long term.

• Self-Supervised Learning: The ability to learn both knowledge and context without explicit labels, purely through interaction with the environment and occasional prompting.

• Continuous Learning: The ability to learn continuously through interaction with the environment, without separate training and inference phases.

Current LLMs display some of these traits, but only in limited forms. They achieve generalization and some reasoning through training on massive datasets, a process that often takes weeks or months.
However, these models build statistical correlations rather than developing causal, hierarchical world models. Incorporating new information is difficult: updating an LLM with fresh data carries a high risk of forgetting knowledge learned from past data. These models also suffer at times from poor contextual awareness, particularly during long or complex conversations.

In contrast, the human brain operates very differently. We interact with the world through our bodies, generating data via sensory input and motor actions. Learning happens continuously as we compare our predictions with sensory feedback and adjust our internal models accordingly. This lifelong, embodied learning process creates a rich and evolving world model, a contextual foundation that shapes all perception and decision-making. Knowledge is built hierarchically: simple patterns become the building blocks for increasingly complex concepts. Over time, we form new associations from past experiences, enabling us to solve problems we've never encountered before.

From LLMs To AGI: Bridging The Gaps

To transition from LLMs to AGI, we need to overcome several major limitations and introduce fundamentally new capabilities that most current AI systems lack.

Training a machine learning model is essentially an optimization problem: it involves adjusting the weights between nodes in a neural network to minimize prediction errors across massive datasets. This is typically done with an algorithm called gradient descent, which requires significant computational power, especially for large language models trained on nearly the entire internet. These models often contain billions of parameters, making training both resource-intensive and time-consuming. To move toward more efficient and adaptable systems, we need new learning algorithms, potentially inspired by biological processes like Hebbian learning, in which connections strengthen with experience.
Such an approach could support continuous training, enabling models to learn and make predictions at the same time rather than in separate phases.

In current neural networks, knowledge is stored in a highly distributed and overlapping way. This makes it difficult to isolate specific pathways or selectively adjust weights without disrupting previously learned information, a challenge known as catastrophic forgetting. To overcome this, we need algorithms and architectures that promote self-organizing structures and hierarchical representations. These would allow knowledge to be compartmentalized more effectively, enabling models to learn continuously without overwriting what they already know. Hierarchies also encourage the reuse of existing knowledge, which can significantly accelerate the learning of new concepts.

Our brains constantly construct and maintain internal models of the world tailored to the context of each situation. These models help us store relevant knowledge and use it to make accurate predictions. For example, when we move through our environment, we rely on a combination of visual input, somatosensory feedback and motor activity to build a three-dimensional mental map of the space around us. This internal model is what allows us to walk to the bed in the dark without bumping into anything, or to reach for an object we've dropped without needing to see it.

Planning is what allows us to apply our knowledge in pursuit of favorable outcomes. At the core of this ability is an intrinsic reward model, a mechanism that enables a system to evaluate and compare multiple possible outcomes and choose the most beneficial one. To do this effectively, the model must map sequences of events onto a timeline and simulate the sensory feedback that different actions would produce. By estimating the reward associated with each sequence, the model can identify and select the most promising course of action.
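The planning loop described above, simulate candidate action sequences, score each predicted outcome with a reward model, pick the best, can be sketched on a toy grid world. Everything here is invented for illustration (the goal position, the move set, the distance-based reward); real planners would use learned world and reward models and would not enumerate sequences exhaustively.

```python
import itertools

GOAL = (2, 2)  # hypothetical target state
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def simulate(state, actions):
    """World model: predict the state reached after a sequence of moves."""
    x, y = state
    for a in actions:
        dx, dy = MOVES[a]
        x, y = x + dx, y + dy
    return (x, y)

def reward(state):
    """Intrinsic reward model: closer to the goal is better (0 is best)."""
    return -(abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1]))

def plan(state, horizon=4):
    """Compare every action sequence up to `horizon` steps and keep the
    one whose simulated outcome scores highest under the reward model."""
    best = max(itertools.product(MOVES, repeat=horizon),
               key=lambda seq: reward(simulate(state, seq)))
    return list(best)

best_plan = plan((0, 0))
print(best_plan, "->", simulate((0, 0), best_plan))
```

The article's final point, replanning as new sensory input arrives, corresponds to calling `plan` again from the updated state at every step instead of executing the whole sequence blindly.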
As new sensory input becomes available, the plan is continuously updated to adapt to changing circumstances.

Closing Thoughts

Given that no current model possesses all of the key attributes required for AGI, and that some of the necessary algorithms haven't even been developed yet, it's safe to say we're still quite a way off from achieving true AGI. At the same time, our understanding of how the human brain accomplishes these capabilities remains incomplete. Neuroscience continues to make progress, but studying the brain in action is limited by the tools currently available, making breakthroughs slow and complex.

That said, progress is steady. As research advances, AGI has the potential to unlock far more capable forms of machine intelligence, powering autonomous agents, self-driving vehicles, robotics and systems that can help tackle some of humanity's most difficult problems.