Hello, neighbor! See the Andromeda galaxy like never before in stunning new image from NASA's Chandra telescope (video)
The galaxy next door to the Milky Way, Andromeda, has never looked as stunning as it does in a new image from NASA's Chandra X-ray Observatory.
The image of the galaxy, also known as Messier 31 (M31), was created with data from a range of other space telescopes and ground-based instruments: the European Space Agency's (ESA) XMM-Newton mission; NASA's retired GALEX and Spitzer space telescopes; the Infrared Astronomical Satellite (IRAS), COBE, Planck, and Herschel; and radio observations from the Westerbork Synthesis Radio Telescope.
All of these instruments observed Andromeda in different wavelengths of light across the electromagnetic spectrum, and astronomers combined their data to create the intricate final image. The result is also a fitting tribute to astronomer Vera C. Rubin, whose observations of Andromeda provided some of the first compelling evidence for dark matter.
As the closest large galaxy to the Milky Way, at around 2.5 million light-years away, Andromeda has been vital in allowing astronomers to study aspects of galaxies that can't be observed from inside our own. For example, we can never see the Milky Way's spiral arms from the outside, but we can see Andromeda's clearly.
Every wavelength of light that was brought together to create this incredible new image of Andromeda tells astronomers something different and unique about the galaxy next door.
For example, the X-ray data provided by Chandra has revealed the high-energy radiation released from around Andromeda's central supermassive black hole, known as M31*.
M31* is considerably larger than the supermassive black hole at the heart of the Milky Way, known as Sagittarius A* (Sgr A*). While our home galaxy's supermassive black hole has a mass of about 4.3 million suns, M31* dwarfs it at around 100 million solar masses, more than 20 times heavier. M31* is also notable for its occasional flares, one of which was observed in X-rays back in 2013, while Sgr A* is a much "quieter" black hole.
Andromeda was chosen as a tribute to Rubin because this neighboring galaxy played a crucial role in her discovery of a missing component of the universe, one that we now call dark matter.
In the 1960s, Rubin and her collaborators precisely measured the rotation of Andromeda. They found that stars and gas in the galaxy's outer regions orbit just as fast as material much closer to the center, a so-called flat rotation curve, indicating that the galaxy is surrounded by a vast halo of an unknown, invisible form of matter.
The mass of this unseen matter supplies the gravitational pull that keeps Andromeda from flying apart at its measured rotation speed; the gravity of its visible matter alone wouldn't be enough to hold the galaxy together.

Since then, astronomers have found that all large galaxies appear to be surrounded by similar halos of what is now known as dark matter. This has led to the realization that ordinary matter, the stuff of stars, planets, moons, our bodies and next door's cat, accounts for just 15% of the matter in the cosmos, with dark matter accounting for the other 85%. The finding has also prompted the search for particles beyond the Standard Model of particle physics that could make up dark matter.
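To see why the rotation measurements point to missing mass, here is a minimal back-of-the-envelope sketch in Python. The orbital speed and radius below are illustrative assumed values, not figures quoted in the article: for a flat rotation curve, Newtonian dynamics gives the mass enclosed within radius r as roughly v²r/G, and typical numbers for Andromeda's outer disk imply several times more mass than its visible stars and gas supply.

```python
# Back-of-the-envelope version of the rotation-curve argument.
# For a roughly flat rotation curve, the mass enclosed within radius r
# follows from Newtonian dynamics: M(r) ~ v**2 * r / G.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # one kiloparsec in meters

v = 230e3          # assumed orbital speed in Andromeda's outer disk, m/s
r = 30 * KPC       # assumed radius, about 100,000 light-years, in meters

enclosed_mass = v**2 * r / G   # kg
print(f"Mass enclosed within 30 kpc: {enclosed_mass / M_SUN:.1e} solar masses")
# Prints roughly 3.7e+11 solar masses, several times the mass usually
# attributed to Andromeda's visible stars and gas; that gap is what the
# dark matter halo is invoked to explain.
```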
There's no doubt that Rubin's work marked a watershed moment in astronomy, and one of the most important breakthroughs in modern science, fundamentally changing our conception of the universe.
Related Stories:
— How did Andromeda's dwarf galaxies form? Hubble Telescope finds more questions than answers
— The Milky Way may not collide with neighboring galaxy Andromeda after all: 'From near-certainty to a coin flip'
— Gorgeous deep space photo captures the Andromeda Galaxy surrounded by glowing gas
June 2025 has brought a wave of recognition for Rubin's immense impact on astronomy and her lasting legacy. In addition to this tribute image, the Vera C. Rubin Observatory released its first images of the cosmos as it gears up to conduct a 10-year observing program of the southern sky called the Legacy Survey of Space and Time (LSST).
Additionally, in recognition of Rubin's monumental contributions to our understanding of the universe, the United States Mint recently released a quarter featuring Rubin as part of its American Women Quarters Program. She is the first astronomer to be honored in the series.

Merck & Co., Inc. (NYSE:MRK) is one of the 12 stocks that will make you rich in 10 years. On June 23, the company shared encouraging results from its Phase 3 HYPERION trial. The study tested a drug called WINREVAIR (sotatercept-csrk) against a placebo, both combined with standard treatment, in adults recently diagnosed with pulmonary arterial hypertension (PAH)—a serious lung condition. A lab technician in a biopharmaceutical laboratory, surrounded by technology and equipment necessary for advanced research. The trial involved around 320 people from different countries. Most participants were already on two background treatments, unlike earlier trials where most were on three. WINREVAIR showed it could significantly slow down the worsening of PAH compared to the placebo. However, the company didn't release the exact numbers yet, only saying the results were both statistically and medically meaningful. Merck & Co., Inc. (NYSE:MRK) is a global biopharmaceutical company. It discovers, develops, manufactures, and markets prescription medicines, vaccines, biologic therapies, and animal health products. Its key products include Keytruda (cancer), Gardasil (HPV vaccine), Januvia (diabetes), and Bridion (anesthesia reversal). While we acknowledge the potential of MRK as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock. READ NEXT: and . Disclosure: None. Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data