New technique promises clearer, more frequent views of black holes
A powerful new technique is poised to revolutionize how astronomers observe black holes, by producing sharp, multicolored images that could reveal their dynamic evolution in real time.
By compensating for Earth's turbulent atmosphere, the technique — called frequency phase transfer (FPT) — enables scientists using the global Event Horizon Telescope (EHT) array to see finer details and fainter features of cosmic objects (like black holes) than ever before. The method also expands the EHT's limited observing window, increasing how often observations can be made and potentially allowing scientists to create time-lapse "movies" of black hole activity.
An international team of researchers has put this new technique to the test using three of the 12 telescopes belonging to the EHT array, including the IRAM 30-meter telescope atop Pico Veleta in Spain and the James Clerk Maxwell Telescope and Submillimeter Array observatories in Hawai'i, according to a statement from the Center for Astrophysics at Harvard & Smithsonian (CfA).
The challenge of observing the cosmos with ground-based telescopes begins with Earth's atmosphere, which distorts radio waves coming from space, according to Sara Issaoun, lead author of the new study and an astronomer with the CfA. These distortions are especially problematic at higher frequencies like the 230 gigahertz (GHz) band — also known as the millimeter band, which the EHT currently uses — where signals are rapidly scrambled by atmospheric turbulence and water vapor. As a result, data can be collected only over short time spans, limiting sensitivity and making it harder to detect faint signals.
The FPT technique works by taking advantage of the fact that atmospheric variations affect different frequencies in similar ways, creating a measurable correlation. By observing at a lower frequency, specifically 86 GHz, which experiences slower atmospheric fluctuations, scientists can use that data to correct for the faster, more disruptive variations at 230 GHz. This allows for much longer averaging periods at the higher frequency, significantly boosting signal clarity and sensitivity. This leap in performance could enable the EHT to detect dimmer black holes and finer details than ever before, Issaoun told Space.com.
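The core idea can be sketched numerically. In this minimal, hypothetical simulation (not the EHT's actual calibration pipeline), the non-dispersive atmospheric delay produces a phase that scales linearly with observing frequency, so the slowly varying phase measured at 86 GHz, multiplied by the frequency ratio, predicts and removes the faster-varying phase at 230 GHz — which is what lets the corrected signal be averaged coherently for much longer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Observing frequencies in Hz; the non-dispersive tropospheric delay
# produces a phase proportional to frequency, so the 230 GHz phase is
# roughly the 86 GHz phase scaled by their ratio (~2.67).
nu_low, nu_high = 86e9, 230e9
r = nu_high / nu_low

n = 1000  # time samples
# Simulated slowly wandering atmospheric phase at 86 GHz (radians)
phase_low = np.cumsum(rng.normal(0.0, 0.05, n))
# The same delay seen at 230 GHz is r times larger in phase
phase_high = r * phase_low

# Raw 230 GHz fringe, scrambled by the atmosphere
raw = np.exp(1j * phase_high)
# Frequency phase transfer: scale up the 86 GHz solution and subtract it
corrected = raw * np.exp(-1j * r * phase_low)

# Coherent averaging of the corrected signal preserves the fringe
# amplitude; the uncorrected signal partially decoheres toward zero.
print(abs(raw.mean()), abs(corrected.mean()))
```

Here the correction is exact by construction; in practice the two bands' phases are only strongly correlated, not identical, but the same scaling still recovers most of the lost coherence time.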
The EHT is a global network of radio telescopes that uses a technique called Very Long Baseline Interferometry (VLBI) to digitally combine observations from around the world. Currently, the EHT is only operational for about 10 days each April, when weather conditions align across the widespread telescopes. With FPT, astronomers could greatly extend that window, opening up opportunities to observe black holes more regularly and flexibly, even under less-than-ideal weather conditions.
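The heart of VLBI is correlating recordings from widely separated stations. The toy example below (an illustration of the principle, not the EHT's correlator software) shows how the same noise-like wavefront, arriving at a second antenna a few samples later because of the extra geometric path, can be aligned by finding the lag that maximizes the cross-correlation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two stations record the same noise-like wavefront; station B sees it
# 7 samples later due to the extra geometric path to that antenna.
n, true_lag = 4096, 7
wavefront = rng.normal(size=n + true_lag)

station_a = wavefront[:n]                      # reference station
station_b = wavefront[true_lag:true_lag + n]   # delayed recording

# Cross-correlate over candidate lags and pick the peak
lags = np.arange(0, 21)
corr = np.array([np.dot(station_a[k:], station_b[:n - k]) for k in lags])
est = lags[np.argmax(corr)]
print(est)  # recovers the 7-sample geometric delay
```

A real correlator does this (in a far more sophisticated form) for every pair of telescopes, turning the whole Earth into a single virtual dish whose resolution is set by the longest baseline.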
That increased cadence is key to a major goal for the EHT: turning still images of black holes into movies that show how they change over time. Because most black holes evolve slowly, repeated observations are essential to track how matter swirls around them, how jets of material are launched, and how magnetic fields shift. By observing more frequently throughout the year, the EHT would be able to watch black holes change over time — potentially capturing phenomena in real time for the first time, Issaoun noted.
To make this possible, telescopes in the EHT array are being upgraded to support simultaneous observations at multiple frequencies. This includes adding receivers for the 86 GHz band. However, not every telescope in the array needs to be outfitted with the new receiver for FPT to be effective. Even partial implementation can enhance the performance of the full network, since all telescopes work in tandem to build a complete picture of a cosmic target. While the required hardware upgrades are relatively minor, each telescope has unique technical constraints, posing challenges to implementation, according to Issaoun.
In addition to boosting performance, this technique also adds a new layer of complexity to the images themselves. With multiple frequency bands, researchers can overlay data in different colors to reveal more detailed structures around a black hole. These multiband images will help disentangle features like swirling gas and magnetic fields, painting a more dynamic, multidimensional portrait of black hole environments.
Ultimately, the FPT technique could enable the EHT to not only see black holes more clearly but also more often, unlocking a new era of black hole science.
The team's initial findings were published on March 26 in The Astronomical Journal. The researchers continue to develop the full potential of the EHT network and to explore even higher-frequency capabilities, such as observing at 345 GHz, which could further complement multiband observations.
