
Space, Tech, And AI: What Astronaut Tim Peake Can Teach Us About The Future Of Humanity
From space-based solar power to AI-guided decision-making, astronaut Tim Peake shares powerful insights into the technologies shaping our world and beyond.
When you've spent six months orbiting Earth aboard the International Space Station, your perspective on the planet and its problems is likely to change forever. Few people understand this more intimately than Tim Peake, the British astronaut, test pilot, and ambassador for STEM (Science, Technology, Engineering and Mathematics) education, who joined me for a fascinating conversation about space, AI, and the future of life on Earth.
What struck me most in our conversation was how clearly Tim connects the dots between space exploration and the challenges we face on Earth, drawing on his remarkable experience and expertise. Whether it's the climate crisis, the energy transition, or the role of AI in decision-making, space is not some distant frontier. It is deeply entangled with our present and our future.
Peake vividly describes the emotional and intellectual impact of seeing our planet from above.
'It gives you a fresh appreciation of how isolated and remote the planet is,' he told me. 'A lot of people say fragile. I caution against using that word because I think the Earth's pretty robust. But in terms of being remote and isolated, it makes you realize that this small rock is perfectly designed to support the life that has evolved on it.'
And while the view from orbit can feel peaceful and serene, it's also a powerful reminder of just how interconnected and dynamic our ecosystems really are. From wildfires in one region to dust storms in another, the visible signs of global interdependence are unmistakable from space.
Peake explained, 'You see wildfires and the smoke spreading across continents. You see sandstorms in the Sahara drifting across Northern Europe. That's because the atmosphere is so thin, so tiny, and you see that very clearly from space.'
Beyond the view, Peake is just as excited about what space can do for us back on Earth. Advances in manufacturing, communications, and energy are all being accelerated by what's happening in orbit.
One of the most compelling developments he pointed to is space-based manufacturing. In the absence of gravity, new kinds of structures can be created with unprecedented purity and precision.
'For example, we can grow very large protein crystals in space that you can't grow on Earth,' he said. 'That can help pharmaceutical companies create better drugs with fewer side effects and lower dosages. Or if you're trying to print out a human heart, doing that on Earth needs some sort of scaffolding. In space, gravity is not distorting the cellular structure.'
He also believes that space-based solar power is not just science fiction. It could soon become a meaningful contributor to our global energy mix.
"If we can make two-kilometer square solar arrays that beam energy back to Earth using microwaves, we can reduce the pressure on our grid and use space to help solve the energy crisis,' Peake explained.
The falling cost of getting into orbit is a key enabler. As heavy-lift launch costs continue to drop, opportunities that once sounded fantastical, like factories in space or orbital data centers, suddenly look commercially viable.
Naturally, we also discussed artificial intelligence. Peake believes that AI has a crucial role to play in helping humanity manage the deluge of data coming from satellites, sensors, and scientific instruments.
'AI can analyze vast amounts of data and make good assumptions from it,' he said. 'If a government is introducing a carbon emission policy in a city, AI can help measure the impact, evaluate the policy, and improve it based on outcomes.'
But Peake also emphasized the continued need for human oversight. When it comes to critical decisions, especially in high-stakes environments like space missions or healthcare, humans must remain in the loop.
'If you're screening for breast cancer, for example, AI can assist doctors. But you still want the diagnosis coming from a person,' he said. 'As humans, we like that reassurance. We want someone to put their intelligence on top of the AI's assessment.'
In other words, AI is not a replacement for human decision-making but a powerful augmentor, especially in environments where timely action matters.
Throughout our conversation, one theme kept coming up: the importance of inspiring the next generation, especially around STEM. For Peake, this is not a side mission; it's central to why he does what he does.
'I try to encourage kids to get involved in STEM, even if they don't see themselves taking it to higher education,' he said. 'The more you know about science and tech today, the more doors it opens for your future.'

One initiative doing an outstanding job of sparking that curiosity is the Future Lab at the Goodwood Festival of Speed, where Peake serves as an ambassador. Curated by Lucy Johnston, the Future Lab showcases cutting-edge innovations from across the globe, from robotic rescue dogs and deep-sea exploration tools to mind-blowing space tech like the James Webb Space Telescope. 'It's hands-on, inspiring, and brilliantly curated,' Peake said. 'You see people of all ages walking around in awe, and that's exactly the kind of experience that can ignite a lifelong passion for science and technology.'
Having taken my own son to Future Lab, I can say with certainty that it works. There's something magical about seeing kids light up as they touch, feel, and interact with the technology that's shaping tomorrow.
Another eye-opener in our chat was just how much space already affects daily life. 'On average, everyone touches about 42 satellites a day,' Peake said. Whether it's making an online purchase, using navigation, or checking the weather, you're using space infrastructure.
And that footprint is only growing. Companies are already working on putting data centers in orbit to reduce energy consumption and cooling requirements on Earth. Communications, navigation, Earth observation, and climate monitoring are all becoming more dependent on space-based assets.
But with growth comes risk. Peake is also an ambassador for The Astra Carta, an initiative supported by King Charles aimed at ensuring space is used sustainably. Space debris, orbital traffic, and light pollution are becoming serious issues.
'We need rules of the road for space,' he said. 'If we want to keep using it safely, we need to manage how we operate up there.'
As we wrapped up our conversation, I asked Tim the big one: Does he believe there's intelligent life out there?
"I absolutely do," he said without hesitation. "Statistically, the odds are too strong. When you're in space, and you see 200 billion stars in our galaxy alone, and then remember there are hundreds of billions of galaxies, it's hard to believe we're alone."
He also believes that space exploration will help answer some of the biggest questions humanity has ever asked about life, existence, and our place in the universe. But even if we don't find extraterrestrials any time soon, the journey itself has value.
Space inspires. It informs. And, increasingly, it enables.
That, I think, is what makes Peake's perspective so valuable. He's lived at the intersection of science, technology, and wonder. And he reminds us that the frontier of space is not just about what lies out there but about what it can help us achieve here on Earth.
Related Articles


USA Today
Spoilers! Why 'M3GAN 2.0' is actually a 'redemption story'
Spoiler alert! We're discussing major details about the ending of 'M3GAN 2.0' (in theaters now), so beware if you haven't seen it yet.

'You wouldn't give your child cocaine. Why would you give them a smartphone?' That's the sardonic hypothetical posed by roboticist Gemma (Allison Williams) at the start of 'M3GAN 2.0,' a high-octane sequel to the 2023 hit horror comedy. When the new movie picks up, Gemma is tirelessly advocating for government oversight of artificial intelligence, after creating a bratty, pussy-bowed animatronic named M3GAN that killed four people and a dog in the original film.

'Honestly, Gemma has a point,' jokes Williams, the mother of a 3-year-old, Arlo, with actor Alexander Dreymon. 'Any time my son looks at my screen, I'm like, "This does feel like the way people react to cocaine. This is not going to be easy to remove from his presence."'

The first movie was an allegory about parenting and how technology is compromising the emotional human bonds that we share with one another. But in the action-packed follow-up, writer/director Gerard Johnstone wanted to explore the real-life ramifications of having M3GAN-like technology unleashed on the world. 'With the way AI was changing, and the conversation around AI was evolving, it opened up a door narratively to where we could go in the sequel,' Johnstone says.

How does 'M3GAN 2.0' end?

'M3GAN 2.0' introduces a new villain in Amelia (Ivanna Sakhno), a weapons-grade automaton built by the U.S. military using M3GAN's stolen programming. But when Amelia goes rogue on a lethal mission for AI to rule the world, Gemma comes to realize that M3GAN is the only one who can stop her. Gemma reluctantly agrees to rebuild her impudent robot in a new body, and the sequel ends with an explosive showdown between Amelia and M3GAN, who nearly dies in a noble attempt to save Gemma and her niece, Cady (Violet McGraw).

'If Amelia walked out of that intact, that's a very different world we're all living in. M3GAN literally saves the world,' Williams says. 'When the first movie ends, you're like, "Oh, she's a bad seed and I'm glad she's gone." But by the end of this movie, you have completely different feelings about her. There's a feeling of relief when you realize she's still here, which is indicative of how much ground gets covered in this movie.'

M3GAN's willingness to sacrifice herself shows real growth from the deadpanning android that audiences fell in love with two years ago. But Johnstone has always felt 'a strong empathy' toward M3GAN and never wanted to make her an outright villain. Even in the first film, 'everything she does is a result of her programming,' Johnstone says. 'As soon as she does something that Gemma disagrees with, Gemma tries to turn her off, erase her, reprogram her, and effectively kill her. So from that point of view, M3GAN does feel rightly short-changed.' M3GAN's desire to prove herself, and take the moral high ground, is 'what this movie was really about,' Johnstone adds. 'I love redemption stories.'

Does 'M3GAN 2.0' set up a third movie?

For Williams, part of the appeal of a sequel was getting to play with how M3GAN exists in the world after her doll exterior was destroyed in the first movie. M3GAN is offscreen for much of this film, with only her voice inhabiting everything from a sports car to a cutesy smart home assistant. 'She's just iterating constantly, which tore through a persona that we've come to know and love,' Williams says. 'It's an extremely cool exercise in a movie like this, where we get to end the movie with a much deeper understanding of who this character is. We've now interacted with her in so many different forms, and yet we still feel the consistency of who she "is." That's really the fun of it.' In a way, 'she's like this digital poltergeist that's haunting them from another dimension,' Johnstone adds. 'It was a way to remind people she's more than a doll in a dress – she's an entity.'

In the final scene of 'M3GAN 2.0,' we see the character living inside Gemma's computer, in a nostalgic nod to the Microsoft Word paper clip helper. (As millennials, 'our relationship with Clippy was very codependent and very complicated,' Williams quips.) But if there is a third 'M3GAN' movie, it's unlikely that you'll see her trapped in that virtual realm forever. 'M3GAN always needs to maintain a physical form,' Johnstone says. 'One aspect of AI philosophy that we address in this film is this idea of embodiment: if AI is ever going to achieve true consciousness, it has to have a physical form so it can feel anchored. So that's certainly M3GAN's point of view at the beginning of the movie: she feels that if she stays in this formless form for too long, she's going to fragment. M3GAN always has to be in a physical body that she recognizes – it's another reason why she won't change her face, even if it draws attention to herself. It's like, "This is who I am and I'm not changing."'
Yahoo
Nozzle blows off rocket booster during test for NASA's Artemis program (video)
When you buy through links on our articles, Future and its syndication partners may earn a commission.

An upgraded version of one of the solid rocket boosters being used for NASA's Space Launch System (SLS) experienced an anomaly during a test on June 26. The Demonstration Motor-1 (DM-1) static test took place at Northrop Grumman's facility in Promontory, Utah, simulating a launch-duration burn lasting about two minutes. It was the first demonstration of Grumman's Booster Obsolescence and Life Extension (BOLE) upgrade, an enhanced five-segment motor designed with greater lifting power for later versions of SLS.

Shortly after the spokesperson on Grumman's recording marks T+100 seconds into the test, an outburst of flames can be seen erupting from the top of the engine nozzle. A few seconds later, as another spokesperson announces, 'activate aft deluge,' an even larger burst comes from the rocket's exhaust, blowing nearby debris into the flames and around the test site. 'Whoa,' one of the test operators said as the burn continued, before audibly gasping. Beyond that in-the-moment reaction, though, the anomaly was not acknowledged during the remainder of the test, which seemed to conclude as planned.

'While the motor appeared to perform well through the most harsh environments of the test, we observed an anomaly near the end of the two-plus minute burn. As a new design, and the largest segmented solid rocket booster ever built, this test provides us with valuable data to iterate our design for future developments,' Jim Kalberer, Grumman's vice president of propulsion systems, said in a statement.

SLS, the NASA rocket supporting the agency's Artemis program, was designed on the foundation of legacy systems used during the space shuttle era. SLS's core stage fuel tank is an augmented version of the one used to launch space shuttles, and the same RS-25 engines responsible for launching the space shuttles are flying to space again on SLS missions. The segments from the shuttle's solid rocket boosters are flying again, too. Northrop Grumman supported Artemis 1, and will support Artemis 2 and Artemis 3 with shuttle-era hardware, before transitioning to newer hardware for Artemis 4 through Artemis 8. The company's BOLE boosters aren't slated to be introduced until Artemis 9, on the SLS Block 2.

The upgraded BOLE boosters include improved, newly fabricated parts replacing those no longer in production, carbon fiber composite casings, and updated propellant efficiencies that increase the booster's performance by more than 10 percent compared to the solid rocket boosters used on earlier SLS launches. Thursday's DM-1 BOLE test included more than 700 points of data collection throughout the booster, which produced over 4 million pounds of thrust, according to Northrop Grumman.

Whether the BOLE design will ever fly, however, is far from certain. NASA's proposed budget for 2026 calls for the cancellation of the SLS rocket after Artemis 3.


Fast Company
These two game-changing breakthroughs advance us toward artificial general intelligence
The biggest technology game changers don't always grab the biggest headlines. Two emerging AI developments may not go viral on TikTok or YouTube, but they represent an inflection point that could radically accelerate the development of artificial general intelligence (AGI): AI that can function and learn like us.

Coming to our senses: WildFusion

As humans, we rely on all sorts of stimuli to navigate the world, including our senses: sight, sound, touch, taste, and smell. Until now, AI devices have been reliant on a single sense: visual impressions. New research from Duke University goes beyond visual perception alone. Called WildFusion, it combines vision with touch and vibration. The four-legged robot used by the research team includes microphones and tactile sensors in addition to the standard cameras commonly found in state-of-the-art robots. The WildFusion robot can use sound to assess the quality of a surface (dry leaves, wet sand) as well as pressure and resistance to calibrate its balance and stability. All of this data is gathered and combined, or fused, into a single data representation that improves over time with experience. The research team plans to enhance the robot's capabilities by enabling it to gauge things like heat and humidity. As the types of data used to interact with the environment become richer and more integrated, AI moves inexorably closer to true AGI.

Learning to learn

The second underreported AI game changer comes from researchers at the universities of Surrey and Hamburg. While still in the early stages of development, this breakthrough allows robots that interact socially with humans (social robots) to train themselves with minimal human intervention. It achieves this by replicating what humans would visually focus on in complex social situations.
For example, as humans we learn over time to look at a person's face when talking to them, or to look at what they are pointing to rather than at their feet or off into space. But robots won't do that without being specifically trained. Until now, refining such behavior in robots relied primarily on constant human monitoring and supervision. This new approach uses robotic simulations to track, monitor, and, importantly, improve the quality of robot interactions with minimal human involvement. Robots learn social skills without constant human oversight. This marks an important step forward in the overall advancement of social robotics and could prove to be a huge AGI accelerator. Self-teaching AI could lead to advancements at an exponential rate, a prospect some of us view as thrilling and others as chilling.

AI signal over noise

Amazing as they may be to watch, dancing humanoid robots and mechanical dogs can be characterized as narrow AI: AI designed only for a specific task or purpose. The feats of these purpose-built tools are impressive. But these two new developments advance how AI experiences the world and how it learns from those experiences. They will dramatically change how technology exists (and coexists with us) in the world. Taken together, these breakthroughs, and the work of other researchers and entrepreneurs along similar paths, are resetting the trajectory and the timetable for achieving AGI. This could mark the tipping point that turns the slow march toward AGI into an all-out run.
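The fusion idea behind WildFusion described above can be illustrated with a toy sketch. This is an assumed design for demonstration only, not the Duke team's actual system: each sensor modality (camera, microphone, tactile) is projected into a shared feature space, and the per-modality embeddings are combined into a single representation.

```python
import numpy as np

# Toy multimodal sensor fusion in the spirit of WildFusion (illustrative
# assumption, not the published implementation). Each modality produces a
# raw feature vector of a different size; a projection maps each into a
# shared 8-dimensional space, and the embeddings are averaged into one
# fused representation.

rng = np.random.default_rng(0)

def embed(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Project a raw sensor reading into the shared embedding space."""
    return np.tanh(weights @ features)

# Simulated raw readings from three modalities.
camera = rng.random(32)      # visual features
microphone = rng.random(16)  # audio features (e.g., footfall on dry leaves)
tactile = rng.random(8)      # pressure / vibration features

# Random projections stand in for learned per-modality encoders.
w_cam = rng.random((8, 32))
w_mic = rng.random((8, 16))
w_tac = rng.random((8, 8))

# Fuse: average the per-modality embeddings into a single representation.
fused = np.mean(
    [embed(camera, w_cam), embed(microphone, w_mic), embed(tactile, w_tac)],
    axis=0,
)
print(fused.shape)  # one shared 8-dim vector regardless of input modality sizes
```

The design point is that downstream components (balance control, terrain assessment) consume one fixed-size representation, so adding a new sensor, such as the heat and humidity sensing the team plans, only requires a new encoder, not a rework of everything downstream.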