
How to watch a SpaceX Crew Dragon splash down with 4 ISS crew members
Traveling inside a SpaceX Crew Dragon, private astronauts Peggy Whitson (U.S.), Shubhanshu Shukla (India), Sławosz Uznański-Wiśniewski (Poland), and Tibor Kapu (Hungary) undocked from the ISS at 7:05 a.m. ET on Monday. After a journey of just over 22 hours, they're expected to splash down off the coast of Southern California in the early hours of Tuesday, July 15.
Axiom Space will livestream the homecoming of the Crew Dragon capsule, including its high-speed descent, parachute deployment, and splashdown.
The crew are in for an exhilarating ride through Earth's atmosphere before the spacecraft's parachutes deploy to dramatically reduce its speed prior to splashdown.
Returning home in 2020 on the Crew Dragon's first-ever crewed descent from orbit, NASA astronaut Bob Behnken described the unique experience.
'As we descended through the atmosphere, the thrusters were firing almost continuously,' Behnken recounted. 'It doesn't sound like a machine, it sounds like an animal coming through the atmosphere with all the puffs that are happening from the thrusters and the atmosphere.'
The mission — the fourth private ISS visit organized by Texas-based Axiom Space — involved the most research and science-related activities to date, with the four crew members working on around 60 scientific studies and activities supplied by more than 30 countries.
It's hoped that the results of their efforts will enhance global knowledge across human research, Earth observation, and the life, biological, and material sciences.
How to watch
The Crew Dragon and its four occupants are expected to splash down off the coast of California at about 5:30 a.m. ET (2:30 a.m. PT) on Tuesday, July 15.
Axiom Space will livestream the final moments of the homecoming. You can watch the webcast via Axiom Space's YouTube channel.
Besides footage from an array of cameras, you'll also get to hear the live audio communications between the Ax-4 crew and Mission Control.

Related Articles


The Hill (22 minutes ago)
Private spaceflight ends with a Pacific splashdown for astronauts from India, Poland and Hungary
CAPE CANAVERAL, Fla. (AP) — A private spaceflight featuring the first astronauts in more than 40 years from India, Poland and Hungary came to a close Tuesday with a Pacific splashdown. Their SpaceX capsule undocked from the International Space Station on Monday and parachuted into the ocean off the Southern California coast less than 24 hours later.

The crew of four launched nearly three weeks ago on a flight chartered by the Houston company Axiom Space. Axiom's Peggy Whitson, the most experienced U.S. astronaut, served as commander. Joining her were India's Shubhanshu Shukla, Poland's Slawosz Uznanski-Wisniewski and Hungary's Tibor Kapu, whose countries paid more than $65 million apiece for the mission.

'Thanks for the great ride and safe trip,' Whitson radioed moments after splashdown. Her record now stands at 695 days in space over five missions, longer than any other American or woman.

The visiting astronauts conducted dozens of experiments in orbit while celebrating their heritage. The last time India, Poland and Hungary put anyone in space was during the late 1970s and 1980s, launching with the Soviets. They waved and smiled as they emerged from the capsule, one by one, into the early morning darkness.

It was Axiom's fourth mission to the orbiting outpost since 2022, part of NASA's ongoing effort to open up space to more businesses and people. The company is one of several developing their own space stations to replace the current one. NASA plans to abandon the outpost in 2030, after more than 30 years of operation.

___

The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute's Department of Science Education and the Robert Wood Johnson Foundation. The AP is solely responsible for all content.


TechCrunch (34 minutes ago)
Research leaders urge tech industry to monitor AI's ‘thoughts'
AI researchers from OpenAI, Google DeepMind, Anthropic, and a broad coalition of companies and nonprofit groups are calling for deeper investigation into techniques for monitoring the so-called thoughts of AI reasoning models in a position paper published Tuesday.

A key feature of AI reasoning models, such as OpenAI's o3 and DeepSeek's R1, is their chains of thought, or CoTs — an externalized process in which AI models work through problems, similar to how humans use a scratch pad to work through a difficult math question. These models are a core technology for powering AI agents, and the paper's authors argue that CoT monitoring could be a core method to keep AI agents under control as they become more widespread and capable.

'CoT monitoring presents a valuable addition to safety measures for frontier AI, offering a rare glimpse into how AI agents make decisions,' the researchers said in the position paper. 'Yet, there is no guarantee that the current degree of visibility will persist. We encourage the research community and frontier AI developers to make the best use of CoT monitorability and study how it can be preserved.'

The position paper asks leading AI model developers to study what makes CoTs 'monitorable' — in other words, what factors can increase or decrease transparency into how AI models really arrive at answers. The authors say that CoT monitoring may be a key method for understanding AI reasoning models, but note that it could be fragile, and they caution against any interventions that could reduce its transparency or reliability. They also call on AI model developers to track CoT monitorability and study how the method could one day be implemented as a safety measure.
Notable signatories of the paper include OpenAI chief research officer Mark Chen, Safe Superintelligence CEO Ilya Sutskever, Nobel laureate Geoffrey Hinton, Google DeepMind cofounder Shane Legg, xAI safety adviser Dan Hendrycks, and Thinking Machines co-founder John Schulman. Other signatories come from organizations including the UK AI Security Institute, METR, Apollo Research, and UC Berkeley.

The paper marks a moment of unity among many of the AI industry's leaders in an attempt to boost research around AI safety. It comes at a time when tech companies are caught in fierce competition, which has led Meta to poach top researchers from OpenAI, Google DeepMind, and Anthropic with million-dollar offers. Some of the most highly sought-after researchers are those building AI agents and AI reasoning models.

'We're at this critical time where we have this new chain-of-thought thing. It seems pretty useful, but it could go away in a few years if people don't really concentrate on it,' said Bowen Baker, an OpenAI researcher who worked on the paper, in an interview with TechCrunch. 'Publishing a position paper like this, to me, is a mechanism to get more research and attention on this topic before that happens.'

OpenAI publicly released a preview of its first AI reasoning model, o1, in September 2024.
In the months since, the tech industry has been quick to release competitors that exhibit similar capabilities, with some models from Google DeepMind, xAI, and Anthropic showing even more advanced performance on benchmarks. However, relatively little is understood about how AI reasoning models work. While AI labs have excelled at improving the performance of AI in the last year, that hasn't necessarily translated into a better understanding of how models arrive at their answers.

Anthropic has been one of the industry's leaders in figuring out how AI models really work, a field called interpretability. Earlier this year, CEO Dario Amodei announced a commitment to crack open the black box of AI models by 2027 and to invest more in interpretability, and he called on OpenAI and Google DeepMind to research the topic as well.

Early research from Anthropic has indicated that CoTs may not be a fully reliable indication of how these models arrive at answers. At the same time, OpenAI researchers have said that CoT monitoring could one day be a reliable way to track alignment and safety in AI models.

The goal of position papers like this is to attract more attention to nascent areas of research, such as CoT monitoring. Companies like OpenAI, Google DeepMind, and Anthropic are already researching these topics, but it's possible that this paper will encourage more funding and research into the space.


Gizmodo (40 minutes ago)
There's a Strange New Hole in Yellowstone National Park
Last April, geologists conducting routine maintenance at temperature logging stations in Yellowstone National Park's Norris Geyser Basin found something unexpected: a previously undocumented thermal pool of blue water.

The newly identified pool, found in the Porcelain Basin subbasin, is about 13 feet (4 meters) wide, its idyllic blue water is around 109 degrees Fahrenheit (43 degrees Celsius), and the water's surface sits about one foot (30 centimeters) below the rim of the pool, according to a United States Geological Survey statement. The geologists found light-gray, mud-covered rocks, some up to one foot (30 cm) wide, surrounding the pool.

How did this feature form? According to the geologists, the clues paint a relatively clear picture: the pool likely resulted from a hydrothermal explosion, in which liquid water flashes to steam and sudden underground pressure changes produce a steam blast. Hydrothermal explosions are not uncommon at Norris Geyser Basin, which has experienced similar events before. Well-documented ones include the 1989 explosion of Porkchop Geyser, and more recently, a new monitoring station installed in 2023 detected an explosion in the Porcelain Terrace area on April 15, 2024.

Satellite imagery shows that the new pool did not exist before December 19, 2024. By January 6, 2025, a small cavity had begun to take shape, and by February 13 the water pool had fully formed. However, the recently installed monitoring station, which detects hydrothermal activity via infrasound (extremely low-frequency sound waves), did not register any strong or distinct explosions during that time. It did, though, detect a number of weak acoustic signals from the direction of the pool, including on December 25, 2024, January 15, 2025, and February 11, 2025, but without the seismic signal that would normally accompany a strong explosion.

As such, the pool likely formed through a series of smaller explosions that chucked out rocks and silica mud, rather than in a single big event. Silica-rich water then filled the resulting hole. The activity probably started on December 25, 2024, and continued into January and early February of this year.

Norris Geyser Basin is the oldest and most active thermal area in Yellowstone and hosts the tallest geyser on Earth, a sporadically explosive hot-water spring. Yellowstone itself has over 10,000 thermal features, such as geysers, hot springs, steam vents, and mudpots, which attract tourists and scientists from all over the globe. The thermal activity is driven by an underground magma reservoir (part of the giant Yellowstone volcano complex), which heats groundwater and triggers a series of chemical and physical reactions. Ultimately, the finding shows that even the most studied landscapes can still surprise us.