Shubhanshu Shukla's Axiom Mission Postponed: How Does Liquid Oxygen Leak Affect Take-Off? Explained


News18 | Last Updated: 11-06-2025
Liquid oxygen is oxygen that has been cooled to the point where it becomes a pale blue liquid; it is used extensively in rocket propulsion systems and spacecraft life-support systems.
India may have to wait a little longer to see its astronaut Shubhanshu Shukla in space, as the Axiom-4 mission to the International Space Station, carrying three more astronauts, has been put off for the time being, with engineers seeking more time to repair a leak in SpaceX's Falcon-9 rocket.
In a post on X, SpaceX announced that it was "standing down" from the Falcon-9 launch of the Axiom-4 mission to allow repairs of the liquid oxygen leak identified during post-static fire booster inspections. "Once complete—and pending Range availability—we will share a new launch date," SpaceX said.
Earlier, addressing a pre-launch press conference, SpaceX Vice-President William Gerstenmaier said engineers had fixed some snags in the Falcon-9 rocket that were discovered during the static fire test and had gone unnoticed during the post-flight refurbishment of boosters.
Gerstenmaier said engineers had discovered a liquid oxygen leak that had previously been seen on the booster during entry on its last mission and had not been fully repaired during refurbishment.
WHAT IS LIQUID OXYGEN?
Liquid oxygen (LOX) is oxygen that has been cooled to the point where it becomes a pale blue liquid. It is used extensively in rocket propulsion systems and spacecraft life-support systems. LOX is essential for providing breathable air for astronauts, and it also serves as the oxidiser in the rocket's propellant system at launch.
During pre-launch preparations, a liquid oxygen leak was detected in the system that supplies the spacecraft with oxygen for both life support and propulsion. The leak itself wasn't catastrophic, but it posed significant safety risks for the crew and the mission. Even a small leak of LOX in spacecraft systems can be dangerous because of the extremely cold temperatures of liquid oxygen (-183°C or -297°F) and the potential for it to evaporate into gaseous oxygen, creating risks for fire or explosions in the pressurised environment.
WHY DID THE LEAK DELAY THE MISSION?
Safety Concerns: The primary reason for delaying the mission was the safety of the crew. A leak in the oxygen system, especially if it couldn't be sealed or fully contained, could lead to catastrophic failure during launch or while in orbit. Ensuring that all systems are completely operational and leak-free is critical for crewed missions, and any issue with life support or propulsion systems could jeopardise the mission.
Technical Complexities: Spacecraft systems, especially those used for crewed missions, are highly sensitive and complex. Even a seemingly minor issue, like a leak in the oxygen system, can trigger a more thorough investigation, technical fixes, and re-testing. Given the tight tolerance and precision required for these systems, any malfunction necessitates a delay to ensure that the spacecraft is fully operational and safe for launch.
Crew Oxygen Supply: The spacecraft needs a stable supply of LOX to create breathable air for astronauts during launch, orbital operations, and re-entry. If there's a leak in the LOX system, it could compromise the crew's oxygen supply, especially during the early phases of flight when the spacecraft is still in the lower atmosphere and the crew is exposed to high g-forces and other launch stresses.
Air Quality and Safety: In a crewed mission, any disruption to the life support system (like a leaking oxygen tank) is a serious concern. Spacecraft need to maintain precise levels of oxygen and pressure to ensure the crew's safety, and a leak could risk causing an unsafe environment. This would prevent the mission from launching until the leak was fixed or isolated.
Launch Pad Procedures: In space missions, there are strict procedures for handling any anomaly during pre-launch, including detecting leaks in critical systems. A LOX leak detected prior to take-off would trigger an automatic halt to the launch sequence because safety is the top priority. This means that even if the leak were small, engineers would need to address it to ensure that the spacecraft could handle the extreme conditions of take-off.
Fuelling and Pre-Flight Readiness: The fuelling process for LOX can take hours, and a leak could disrupt the timing of fuel loading. If the leak were detected during fuelling, the launch would need to be scrubbed and rescheduled after repairs, further delaying the mission.
THE AXIOM MISSION
The Axiom-4 (Ax-4) mission is one of a series of private spaceflights organised by Axiom Space, a commercial space company, to send private astronauts to the International Space Station (ISS). Ax-4 is part of Axiom's goal of commercialising space travel and giving private citizens and researchers the opportunity to experience life in orbit and conduct research aboard the ISS.
The Axiom-4 mission will "realize the return" to human spaceflight for India, Poland, and Hungary, marking each nation's first government-sponsored flight in more than 40 years. While Axiom-4 is only the second human spaceflight mission in each of these countries' history, it will be the first time all three nations execute a mission aboard the ISS. The four crew members will be led by Axiom Space's Director of Human Spaceflight, Peggy Whitson, while Shukla will serve as the mission's pilot.
This historic mission highlights how Axiom Space is redefining the pathway to low-Earth orbit and elevating national space programs globally. The space crew will carry out around 60 experiments during their 14-day stay in space, ranging from life science to technology demonstrations to diabetes research.


Related Articles

‘I feel useless': ChatGPT-5 is so smart, it has spooked Sam Altman, the man who started the AI boom

Economic Times | 7 minutes ago


OpenAI is on the verge of releasing GPT-5, the most powerful model it has ever built. But its CEO, Sam Altman, isn't celebrating just yet. Instead, he's sounding the alarm. In a revealing podcast appearance on This Past Weekend with Theo Von, Altman admitted that testing the model left him shaken. 'It feels very fast,' he said. 'There are moments in the history of science, where you have a group of scientists look at their creation and just say, you know: "What have we done?"' His words weren't about performance metrics. They were about consequences.

Altman compared the development of GPT-5 to the Manhattan Project — the World War II effort that led to the first atomic bomb. The message was clear: speed and capability are growing faster than our ability to think through what they actually mean. He continued, 'Maybe it's great, maybe it's bad—but what have we done?' This wasn't just about AI as a tool. Altman was questioning whether humanity is moving so fast that it can no longer understand — or control — what it builds. 'It feels like there are no adults in the room,' he added, suggesting that regulation is far behind the pace of innovation.

Exact specs for GPT-5 are still under wraps, but reports suggest significant leaps over GPT-4: better multi-step reasoning, longer memory, and sharper multimodal capabilities. Altman himself didn't hold back about the previous version, saying, 'GPT-4 is the dumbest model any of you will ever have to use again, by a lot.' For many users, GPT-4 was already advanced. If GPT-5 lives up to the internal hype, it could change how people work, create, and think. In another recent conversation, Altman described a moment where GPT-5 answered a complex question he couldn't solve himself. 'I felt like useless relative to the AI,' he admitted. 'It was really hard, but the AI just did it like that.'

OpenAI's long-term goal has always been Artificial General Intelligence (AGI). That's AI capable of understanding and reasoning across almost any task — human-like intelligence. Altman once downplayed its arrival, suggesting it would 'whoosh by with surprisingly little societal impact.' Now, he's sounding far less sure. If GPT-5 is a real step toward AGI, the absence of a global framework to govern it could be dangerous. AGI remains loosely defined. Some firms treat it as a technical milestone. Others see it as a $100 billion opportunity, as Microsoft's partnership contract with OpenAI implies. Either way, the next model may blur the line between AI that helps and AI that acts.

OpenAI isn't just facing ethical dilemmas. It's also under financial pressure. Investors are pushing for the firm to transition into a for-profit entity by the end of the year. Microsoft, which has invested $13.5 billion in OpenAI, reportedly wants more control. There are whispers that OpenAI could declare AGI early in order to exit its agreement with Microsoft — a move that would shift the power balance in the AI sector dramatically.

Microsoft insiders have reportedly described their wait-and-watch approach as the 'nuclear option.' In response, OpenAI is said to be prepared to go to court, accusing Microsoft of anti-competitive behaviour. One rumoured trigger could be the release of an AI coding agent so capable it surpasses a human programmer — something GPT-5 might be edging towards. Altman, meanwhile, has tried to lower expectations about rollout glitches. Posting on X, he said, 'We have a ton of stuff to launch over the next couple of months — new models, products, features, and more. Please bear with us through some probable hiccups and capacity crunches.'

While researchers and CEOs debate long-term AI impacts, one threat is already here: fraud. Haywood Talcove, CEO of the Government Group at LexisNexis Risk Solutions, works with over 9,000 public agencies. He says the AI fraud crisis is not approaching — it's already happening. 'Every week, AI-generated fraud is siphoning millions from public benefit systems, disaster relief funds, and unemployment programmes,' he warned. 'Criminal networks are using deepfakes, synthetic identities, and large language models to outpace outdated fraud defences — and they're winning.' During the pandemic, fraudsters exploited weaknesses to steal hundreds of billions in unemployment benefits. That trend has only accelerated. Today's tools are more advanced and automated, capable of filing tens of thousands of fake claims in a day. Talcove believes the AI arms race between criminals and institutions is widening. 'We may soon recognise a similar principle for AI that I call "Altman's Law": every 180 days, AI capabilities double.' His call to action is blunt. 'Right now, criminals are using it better than we are. Until that changes, our most vulnerable systems and the people who depend on them will remain exposed.'

Not everyone is convinced by Altman's remarks. Some see them as clever marketing. But his past record and unfiltered tone suggest genuine concern. GPT-5 might be OpenAI's most ambitious release yet. It could also be a signpost for the world to stop, look around, and ask itself what kind of intelligence it really wants to build — and how much control it's willing to give up.


SpaceX delivers 4 astronauts to ISS just 15 hrs after launch

Hans India | 3 hours ago


Washington: SpaceX delivered a fresh crew to the International Space Station on Saturday, making the trip in a quick 15 hours. The four US, Russian and Japanese astronauts pulled up in their SpaceX capsule after launching from NASA's Kennedy Space Centre. They will spend at least six months at the orbiting lab, swapping places with colleagues who have been up there since March. SpaceX will bring those four back as early as Wednesday.

Moving in are NASA's Zena Cardman and Mike Fincke, Japan's Kimiya Yui and Russia's Oleg Platonov — each of whom had originally been assigned to other missions. 'Hello, space station!' Fincke radioed as soon as the capsule docked high above the South Pacific.

Cardman and another astronaut were pulled from a SpaceX flight last year to make room for NASA's two stuck astronauts, Boeing Starliner test pilots Butch Wilmore and Suni Williams, whose space station stay stretched from one week to more than nine months. Fincke and Yui had been training for the next Starliner mission, but with Starliner grounded by thruster and other problems until 2026, the two switched to SpaceX. Platonov was bumped from the Soyuz launch lineup a couple of years ago because of an undisclosed illness.

Their arrival temporarily puts the space station population at 11. The astronauts greeting them had cold drinks and hot food waiting. While their taxi flight was speedy by US standards, the Russians hold the record for the fastest trip to the space station — a lightning-fast three hours.
