Tree of Knowledge: Fairmont State receives Division of Forestry grant for planting project

Yahoo | 12-06-2025
Jun. 11—FAIRMONT — Don't call it "landscaping."
Fairmont State University last week received a grant from the West Virginia Division of Forestry, which it will use for a tree-planting project on its Locust Avenue campus — but the motivation, participants say, has roots going way deeper.
"The trees themselves are a great project, " architecture major Ryan Williams said.
They include the American Holly, Eastern Rosebud, Common Witch Hazel trees, all of which are native to north-central West Virginia, and all planted near the dormitories that line the Campus Drive East entrances, which are currently devoid of greenery.
"But even more than that, I hope it will inspire more students to get involved, " continued Williams, who is on the school's Creative Sustainability Council and helped secure the Community EquiTree grant through the forestry division.
"The more people are engaged with their community and campus, " the student said of the project expected to be completed at month's end, "the more we can better serve."
And the best way to engage and serve, Williams continued, is by celebrating nature, which is perpetually blooming, growing and evolving. Such celebration, the student said, happens whether we realize it or not.
For many, the academic family tree goes back 41 years to 1984, when the celebrated biologist and naturalist Edward O. Wilson wrote and published a slim book he titled "Biophilia" — or, literally, "love of life."
"Life," in the case of Fairmont State, being what happens on the other side of the classroom wall.
Humans, inherently — and unconsciously, even — just "feel" better in the presence of greenery and trees in places they may not normally be found, the naturalist surmised.
Especially, he said, in cities and other locales where concrete, neon and gray buildings prevail.
College campuses, too.
At the height of the pandemic five years ago, with its student population sequestered under quarantine, Fairmont State gave a nod to Wilson.
The university created Falcon Park, a walking trail which takes in 7.9 wooded acres at the top of campus and is a haven for native birds, turtles and butterflies.
Falcon Park has since been recognized by the National Wildlife Federation.
Meanwhile, the most recent tree project, said Devin Carpenter, an assistant construction manager at Fairmont State, will be enjoyed by future generations — aesthetically and environmentally.
As the trees grow, he said, their root systems will stabilize hillsides, preventing soil erosion. Their canopies will shade asphalt during the hottest days of summer. And their fall foliage will add to the beauty of campus, Carpenter said.
"And they offer back to nature a piece of what urban development once removed, " he said.

Related Articles

For Algorithms, Memory Is a Far More Powerful Resource Than Time

WIRED | 13-07-2025

Jul 13, 2025 7:00 AM

One computer scientist's 'stunning' proof is the first progress in 50 years on one of the most famous questions in computer science.

Illustration: Irene Pérez/Quanta Magazine

The original version of this story appeared in Quanta Magazine.

One July afternoon in 2024, Ryan Williams set out to prove himself wrong. Two months had passed since he'd hit upon a startling discovery about the relationship between time and memory in computing. It was a rough sketch of a mathematical proof that memory was more powerful than computer scientists believed: A small amount would be as helpful as a lot of time in all conceivable computations. That sounded so improbable that he assumed something had to be wrong, and he promptly set the proof aside to focus on less crazy ideas. Now, he'd finally carved out time to find the error.

That's not what happened. After hours of poring over his argument, Williams couldn't find a single flaw. 'I just thought I was losing my mind,' said Williams, a theoretical computer scientist at the Massachusetts Institute of Technology. For the first time, he began to entertain the possibility that maybe, just maybe, memory really was as powerful as his work suggested.

Over the months that followed, he fleshed out the details, scrutinized every step, and solicited feedback from a handful of other researchers. In February, he finally posted his proof online, to widespread acclaim. 'It's amazing. It's beautiful,' said Avi Wigderson, a theoretical computer scientist at the Institute for Advanced Study in Princeton, New Jersey. As soon as he heard the news, Wigderson sent Williams a congratulatory email. Its subject line: 'You blew my mind.'

Time and memory (also called space) are the two most fundamental resources in computation: Every algorithm takes some time to run and requires some space to store data while it's running. Until now, the only known algorithms for accomplishing certain tasks required an amount of space roughly proportional to their run time, and researchers had long assumed there's no way to do better. Williams' proof established a mathematical procedure for transforming any algorithm—no matter what it does—into a form that uses much less space.

Ryan Williams stunned his colleagues with a milestone proof about the relationship between time and space in computation. Photograph: Katherine Taylor for Quanta Magazine

What's more, this result—a statement about what you can compute given a certain amount of space—also implies a second result, about what you cannot compute in a certain amount of time. This second result isn't surprising in itself: Researchers expected it to be true, but they had no idea how to prove it. Williams' solution, based on his sweeping first result, feels almost cartoonishly excessive, akin to proving a suspected murderer guilty by establishing an ironclad alibi for everyone else on the planet. It could also offer a new way to attack one of the oldest open problems in computer science.

'It's a pretty stunning result, and a massive advance,' said Paul Beame, a computer scientist at the University of Washington. 'It's a little bit less of a surprise that it's Ryan doing this.'

Space to Roam

Williams, 46, has an open, expressive face and a hint of gray in his hair. His office, which looks out over the colorful spires of MIT's Stata Center, is another illustration of the creative use of space.
A pair of yoga mats have transformed a window ledge into a makeshift reading nook, and the desk is tucked into an oddly shaped corner, freeing up room for a couch facing a large whiteboard brimming with mathematical scribblings.

MIT is a long way from Williams' childhood home in rural Alabama, where there was no shortage of space. He grew up on a 50-acre farm and first saw a computer at age 7, when his mother drove him across the county for a special academic enrichment class. He recalled being transfixed by a simple program for generating a digital fireworks display. 'It was taking a random color and sending it in a random direction from the middle of the monitor,' Williams said. 'You couldn't have predicted what picture you're going to get.'

The world of computers seemed a wild and wonderful playground, full of infinite possibilities. Young Williams was hooked. 'I was writing programs to myself, on paper, to be run on a future computer,' he said. 'My parents didn't really know what to do with me.'

Williams' office, like his new result, makes creative use of space. Photograph: Katherine Taylor for Quanta Magazine

As he grew older, Williams advanced from imaginary computers to real ones. For his last two years of high school, he transferred to the Alabama School of Math and Science, a prestigious public boarding school, where he first encountered the theoretical side of computer science. 'I realized that there was a wider world of things out there, and that there was a way to think mathematically about computers,' he said. 'That's what I wanted to do.'

Williams was especially drawn to a branch of theoretical computer science called computational complexity theory. It deals with the resources (such as time and space) that are needed to solve computational problems such as sorting lists or factoring numbers. Most problems can be solved by many different algorithms, each with its own demands on time and space. Complexity theorists sort problems into categories, called complexity classes, based on the resource demands of the best algorithms for solving them—that is, the algorithms that run fastest or use the least amount of space.

But how do you make the study of computational resources mathematically rigorous? You won't get far if you try to analyze time and space by comparing minutes to megabytes. To make any progress, you need to start with the right definitions.

Getting Resourceful

Those definitions emerged from the work of Juris Hartmanis, a pioneering computer scientist who had experience making do with limited resources. He was born in 1928 into a prominent Latvian family, but his childhood was disrupted by the outbreak of World War II. Occupying Soviet forces arrested and executed his father, and after the war Hartmanis finished high school in a refugee camp. He went on to university, where he excelled even though he couldn't afford textbooks. 'I compensated by taking very detailed notes in lectures,' he recalled in a 2009 interview. 'There is a certain advantage to [having] to improvise and overcome difficulties.'

Hartmanis immigrated to the United States in 1949, and worked a series of odd jobs—building agricultural machinery, manufacturing steel and even serving as a butler—while studying mathematics at the University of Kansas City. He went on to a spectacularly successful career in the young field of theoretical computer science.
In 1960, while working at the General Electric research laboratory in Schenectady, New York, Hartmanis met Richard Stearns, a graduate student doing a summer internship. In a pair of groundbreaking papers they established precise mathematical definitions for time and space. These definitions gave researchers the language they needed to compare the two resources and sort problems into complexity classes.

In the 1960s, Juris Hartmanis established the definitions that computer scientists use to analyze space and time. Courtesy of the Division of Rare and Manuscript Collections, Cornell University Library

One of the most important classes goes by the humble name 'P.' Roughly speaking, it encompasses all problems that can be solved in a reasonable amount of time. An analogous complexity class for space is dubbed 'PSPACE.' The relationship between these two classes is one of the central questions of complexity theory.

Every problem in P is also in PSPACE, because fast algorithms just don't have enough time to fill up much space in a computer's memory. If the reverse statement were also true, the two classes would be equivalent: Space and time would have comparable computational power. But complexity theorists suspect that PSPACE is a much larger class, containing many problems that aren't in P. In other words, they believe that space is a far more powerful computational resource than time. This belief stems from the fact that algorithms can use the same small chunk of memory over and over, while time isn't as forgiving—once it passes, you can't get it back. 'The intuition is just so simple,' Williams said. 'You can reuse space, but you can't reuse time.'

But intuition isn't good enough for complexity theorists: They want rigorous proof. To prove that PSPACE is larger than P, researchers would have to show that for some problems in PSPACE, fast algorithms are categorically impossible. Where would they even start?

A Space Odyssey

As it happened, they started at Cornell University, where Hartmanis moved in 1965 to head the newly established computer science department. Under his leadership it quickly became a center of research in complexity theory, and in the early 1970s, a pair of researchers there, John Hopcroft and Wolfgang Paul, set out to establish a precise link between time and space.

Hopcroft and Paul knew that to resolve the P versus PSPACE problem, they'd have to prove that you can't do certain computations in a limited amount of time. But it's hard to prove a negative. Instead, they decided to flip the problem on its head and explore what you can do with limited space. They hoped to prove that algorithms given a certain space budget can solve all the same problems as algorithms with a slightly larger time budget. That would indicate that space is at least slightly more powerful than time—a small but necessary step toward showing that PSPACE is larger than P.

With that goal in mind, they turned to a method that complexity theorists call simulation, which involves transforming existing algorithms into new ones that solve the same problems, but with different amounts of space and time. To understand the basic idea, imagine you're given a fast algorithm for alphabetizing your bookshelf, but it requires you to lay out your books in dozens of small piles. You might prefer an approach that takes up less space in your apartment, even if it takes longer.
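As an aside before the simulation idea continues: the relationship between the two classes described above is usually written in a standard shorthand. This is textbook notation summarizing the passage, not something stated in the article.

```latex
% Standard notation for the passage above (a summary, not a quotation from the article):
% every problem solvable in polynomial time is also solvable in polynomial space,
% and the strict containment (space strictly more powerful) is the open conjecture.
\[
\mathsf{P} \subseteq \mathsf{PSPACE}, \qquad
\text{conjectured but unproven: } \mathsf{P} \subsetneq \mathsf{PSPACE}.
\]
```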
A simulation is a mathematical procedure you could use to get a more suitable algorithm: Feed it the original, and it'll give you a new algorithm that saves space at the expense of time.

Algorithm designers have long studied these space-time trade-offs for specific tasks like sorting. But to establish a general relationship between time and space, Hopcroft and Paul needed something more comprehensive: a space-saving simulation procedure that works for every algorithm, no matter what problem it solves. They expected this generality to come at a cost. A universal simulation can't exploit the details of any specific problem, so it probably won't save as much space as a specialized simulation. But when Hopcroft and Paul started their work, there were no known universal methods for saving space at all. Even saving a small amount would be progress.

The breakthrough came in 1975, after Hopcroft and Paul teamed up with a young researcher named Leslie Valiant. The trio devised a universal simulation procedure that always saves a bit of space. No matter what algorithm you give it, you'll get an equivalent one whose space budget is slightly smaller than the original algorithm's time budget. 'Anything you can do in so much time, you can also do in slightly less space,' Valiant said. It was the first major step in the quest to connect space and time.

In 1975, Leslie Valiant helped prove that space is a slightly more powerful computational resource than time. Photograph: Katherine Taylor for Quanta Magazine

But then progress stalled, and complexity theorists began to suspect that they'd hit a fundamental barrier. The problem was precisely the universal character of Hopcroft, Paul and Valiant's simulation. While many problems can be solved with much less space than time, some intuitively seemed like they'd need nearly as much space as time. If so, more space-efficient universal simulations would be impossible. Paul and two other researchers soon proved that they are indeed impossible, provided you make one seemingly obvious assumption: Different chunks of data can't occupy the same space in memory at the same time. 'Everybody took it for granted that you cannot do better,' Wigderson said.

Paul's result suggested that resolving the P versus PSPACE problem would require abandoning simulation altogether in favor of a different approach, but nobody had any good ideas. That was where the problem stood for 50 years—until Williams finally broke the logjam. First, he had to get through college.

Complexity Classes

In 1996, the time came for Williams to apply to colleges. He knew that pursuing complexity theory would take him far from home, but his parents made it clear that the West Coast and Canada were out of the question. Among his remaining options, Cornell stood out to him for its prominent place in the history of his favorite discipline. 'This guy Hartmanis defined the time and space complexity classes,' he recalled thinking. 'That was important for me.'

Williams was admitted to Cornell with generous financial aid and arrived in the fall of 1997, planning to do whatever it took to become a complexity theorist himself. His single-mindedness stuck out to his fellow students. 'He was just super-duper into complexity theory,' said Scott Aaronson, a computer scientist at the University of Texas, Austin, who overlapped with Williams at Cornell.

Williams grew interested in the relationship between space and time as an undergraduate but never found an opportunity to work on it until last year.
Photograph: Katherine Taylor for Quanta Magazine

But by sophomore year, Williams was struggling to keep up in courses that emphasized mathematical rigor over intuition. After he got a middling grade in a class on the theory of computing, the teacher suggested he consider other careers. But Williams wouldn't give up his dream. He doubled down and enrolled in a graduate theory course, hoping that a stellar grade in the harder class would look impressive on his grad school applications. The professor teaching that graduate course was Hartmanis, by then an elder statesman in the field.

Williams began attending Hartmanis' office hours every week, where he was almost always the only student who showed up. His tenacity paid off: he earned an A in the course, and Hartmanis agreed to advise him on an independent research project the following semester. Their weekly meetings continued throughout Williams' time in college. Hartmanis encouraged him to cultivate an individual approach to complexity research and gently steered him away from dead ends. 'I was deeply influenced by him then,' Williams said. 'I guess I still am now.'

But despite earning a coveted graduate research fellowship from the National Science Foundation, Williams was rejected by every doctoral program he applied to. On hearing the news, Hartmanis phoned a colleague, then turned around and congratulated Williams on getting accepted into a yearlong master's program at Cornell. A year later Williams again applied to various doctoral programs, and with that extra research experience under his belt, he found success.

Williams continued working in complexity theory in grad school and the years that followed. In 2010, four years after receiving his doctorate, he proved a milestone result—a small step, but the largest in decades, toward solving the most famous question in theoretical computer science, about the nature of hard problems. That result cemented Williams' reputation, and he went on to write dozens of other papers on different topics in complexity theory.

P versus PSPACE wasn't one of them. Williams had been fascinated by the problem since he first encountered it in college. He'd even supplemented his computer science curriculum with courses in logic and philosophy, seeking inspiration from other perspectives on time and space, to no avail. 'It's always been in the back of my mind,' Williams said. 'I just couldn't think of anything interesting enough to say about it.' Last year, he finally found the opportunity he'd been waiting for.

Squishy Pebbles

The story of Williams' new result started with progress on a different question about memory in computation: What problems can be solved with extremely limited space? In 2010, the pioneering complexity theorist Stephen Cook and his collaborators invented a task, called the tree evaluation problem, that they proved would be impossible for any algorithm with a space budget below a specific threshold. But there was a loophole. The proof relied on the same commonsense assumption that Paul and his colleagues had made decades earlier: Algorithms can't store new data in space that's already full.

For over a decade, complexity theorists tried to close that loophole. Then, in 2023, Cook's son James and a researcher named Ian Mertz blew it wide open, devising an algorithm that solved the tree evaluation problem using much less space than anyone thought possible.
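The article names the tree evaluation problem without defining it. Under its standard formulation (an assumption on my part, not spelled out above), the task looks like the minimal Python sketch below: a complete binary tree whose leaves carry small integer values and whose internal nodes carry lookup tables, with the goal of computing the root's value. This is the obvious recursive solution, which keeps one partial value per level of the tree; it is emphatically not the space-efficient Cook-Mertz algorithm.

```python
# A toy statement of the tree evaluation task (standard formulation assumed).
# The obvious recursion below uses working memory proportional to the tree's
# height; Cook and Mertz showed the answer can be computed in far less space.

from dataclasses import dataclass
from typing import Union


@dataclass
class Leaf:
    value: int                      # a value in {0, ..., k-1}


@dataclass
class Node:
    table: list[list[int]]          # k x k lookup table combining the two children
    left: "Tree"
    right: "Tree"


Tree = Union[Leaf, Node]


def evaluate(t: Tree) -> int:
    """Evaluate the tree bottom-up; recursion depth equals the tree height."""
    if isinstance(t, Leaf):
        return t.value
    return t.table[evaluate(t.left)][evaluate(t.right)]


# Tiny example with k = 2: every internal node combines its children with XOR.
xor = [[0, 1], [1, 0]]
example = Node(xor, Leaf(1), Node(xor, Leaf(1), Leaf(0)))
print(evaluate(example))            # 1 XOR (1 XOR 0) = 0
```

The surprise in the Cook-Mertz result, as the article goes on to describe, is that the partial values this recursion stores separately can be packed into far less working memory than the straightforward approach uses.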
The elder Cook's proof had assumed that bits of data were like pebbles that have to occupy separate places in an algorithm's memory. But it turns out that's not the only way to store data. 'We can actually think about these pebbles as things that we can squish a little bit on top of each other,' Beame said.

James Cook (left) and Ian Mertz recently devised a new algorithm that solved a specific problem using much less space than anyone thought possible. Photographs: Colin Morris; Michal Koucký

Cook and Mertz's algorithm roused the curiosity of many researchers, but it wasn't clear that it had any applications beyond the tree evaluation problem. 'Nobody saw how central it is to time versus space itself,' Wigderson said.

Ryan Williams was the exception. In spring 2024, a group of students gave a presentation about the Cook and Mertz paper as their final project in a class he was teaching. Their enthusiasm inspired him to take a closer look. As soon as he did, an idea jumped out at him. Cook and Mertz's method, he realized, was really a general-purpose tool for reducing space usage. Why not use it to power a new universal simulation linking time and space—like the one designed by Hopcroft, Paul and Valiant, but better?

That classic result was a way to transform any algorithm with a given time budget into a new algorithm with a slightly smaller space budget. Williams saw that a simulation based on squishy pebbles would make the new algorithm's space usage much smaller—roughly equal to the square root of the original algorithm's time budget. That new space-efficient algorithm would also be much slower, so the simulation was not likely to have practical applications. But from a theoretical point of view, it was nothing short of revolutionary.

For 50 years, researchers had assumed it was impossible to improve Hopcroft, Paul and Valiant's universal simulation. Williams' idea—if it worked—wouldn't just beat their record—it would demolish it. 'I thought about it, and I was like, 'Well, that just simply can't be true,'' Williams said.

He set it aside and didn't come back to it until that fateful day in July, when he tried to find the flaw in the argument and failed. After he realized that there was no flaw, he spent months writing and rewriting the proof to make it as clear as possible. At the end of February, Williams finally put the finished paper online. Cook and Mertz were as surprised as everyone else. 'I had to go take a long walk before doing anything else,' Mertz said.

Valiant got a sneak preview of Williams' improvement on his decades-old result during his morning commute. For years, he's taught at Harvard University, just down the road from Williams' office at MIT. They'd met before, but they didn't know they lived in the same neighborhood until they bumped into each other on the bus on a snowy February day, a few weeks before the result was public. Williams described his proof to the startled Valiant and promised to send along his paper. 'I was very, very impressed,' Valiant said. 'If you get any mathematical result which is the best thing in 50 years, you must be doing something right.'

PSPACE: The Final Frontier

With his new simulation, Williams had proved a positive result about the computational power of space: Algorithms that use relatively little space can solve all problems that require a somewhat larger amount of time.
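In symbols, the two universal simulations in this story are usually stated as follows. The article gives them only qualitatively ('slightly less space', 'roughly the square root'), so treat the precise forms below as a hedged summary of the published results rather than a quotation.

```latex
% t denotes the original algorithm's time budget.
% Hopcroft-Paul-Valiant (1975): any time-t algorithm can be simulated in slightly less space.
\[
\mathsf{TIME}[t] \subseteq \mathsf{SPACE}\!\left[\frac{t}{\log t}\right]
\]
% Williams (2025): any time-t algorithm can be simulated in space roughly the square root of t.
\[
\mathsf{TIME}[t] \subseteq \mathsf{SPACE}\!\left[\sqrt{t \log t}\,\right]
\]
% The "flip" described in the next paragraph comes from reading the new bound backwards:
% a problem that genuinely needs about s space cannot be solved in time much below
% roughly s^2, since any faster algorithm could be squeezed into less space than s.
```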
Then, using just a few lines of math, he flipped that around and proved a negative result about the computational power of time: At least a few problems can't be solved unless you use more time than space. That second, narrower result is in line with what researchers expected. The weird part is how Williams got there, by first proving a result that applies to all algorithms, no matter what problems they solve. 'I still have a hard time believing it,' Williams said. 'It just seems too good to be true.'

Williams used Cook and Mertz's technique to establish a stronger link between space and time—the first progress on that problem in 50 years. Photograph: Katherine Taylor for Quanta Magazine

Phrased in qualitative terms, Williams' second result may sound like the long-sought solution to the P versus PSPACE problem. The difference is a matter of scale. P and PSPACE are very broad complexity classes, while Williams' results work at a finer level. He established a quantitative gap between the power of space and the power of time, and to prove that PSPACE is larger than P, researchers will have to make that gap much, much wider.

That's a daunting challenge, akin to prying apart a sidewalk crack with a crowbar until it's as wide as the Grand Canyon. But it might be possible to get there by using a modified version of Williams' simulation procedure that repeats the key step many times, saving a bit of space each time. It's like a way to repeatedly ratchet up the length of your crowbar—make it big enough, and you can pry open anything. That repeated improvement doesn't work with the current version of the algorithm, but researchers don't know whether that's a fundamental limitation. 'It could be an ultimate bottleneck, or it could be a 50-year bottleneck,' Valiant said. 'Or it could be something which maybe someone can solve next week.'

If the problem is solved next week, Williams will be kicking himself. Before he wrote the paper, he spent months trying and failing to extend his result. But even if such an extension is not possible, Williams is confident that more space exploration is bound to lead somewhere interesting—perhaps progress on an entirely different problem. 'I can never prove precisely the things that I want to prove,' he said. 'But often, the thing I prove is way better than what I wanted.'

Editor's note: Scott Aaronson is a member of Quanta Magazine's advisory board.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Satellite Images Show How Much Lake Mead Has Shrunk in 25 Years

Newsweek | 06-07-2025

Satellite images illustrate how water levels in Lake Mead have fallen in the past 25 years.

Why It Matters

Lake Mead supplies vital water to millions in Nevada, Arizona, California, and parts of Mexico. However, declining water levels could jeopardize city water systems, farm irrigation, and hydroelectric power output. Lake Mead is the largest reservoir in the U.S., capable of storing approximately 29 million acre-feet of water. It is closely followed by Lake Powell, with a capacity of around 25 million acre-feet.

Recent forecasts lowered runoff into Lake Powell to just 55 percent of average, down from an earlier estimate of 67 percent, due to an underwhelming winter snowpack. Lake Mead, which receives flows from Lake Powell, hit critically low levels during the summer of 2022, following years of drought.

What To Know

Satellite imagery captured by Landsat 7, a joint mission of the United States Geological Survey (USGS) and NASA, shows a stark contrast in Lake Mead's water levels between July 1999 and May 2024, with the reservoir appearing noticeably smaller in the latter image.

Landsat 7 image of the Las Vegas area and Lake Mead, acquired on July 4, 1999. USGS

Landsat 7 image of the Las Vegas area and Lake Mead, acquired on May 28, 2024. USGS

Images provided to Newsweek by Maxar Technologies further highlight Lake Mead's current levels compared to historical ones, with the "bathtub ring" effect on display.

Recent imagery provided by Maxar Technologies shows Lake Mead in 2025, with the "bathtub ring" visible along the coastline. Satellite image ©2025 Maxar Technologies

Colin Williams, a mineral resources program coordinator with the USGS, told Newsweek previously that this ring around the lake is the result of calcium carbonate found in the Colorado River water. As the lake's water level fluctuates, these minerals cling to the surrounding rocks. When Lake Mead's water level drops, calcium carbonate is left behind on the rocks, forming the visible bathtub ring.

"Lake Mead is full at 1,229 feet above sea level, so the bathtub ring disappears when the lake level is near that point," Williams said at the time.

At the time of writing, Lake Mead's water level was 1,054.85 feet mean sea level, 174.15 feet below its full pool of 1,229 feet, according to Lakes Online, an online resource for lake and reservoir information.

What People Are Saying

Sharon Megdal, director of the University of Arizona Water Resources Research Center, told Newsweek: "This time period has been dominated by lower annual natural river flows than those of the prior century. In addition, deliveries to water users in the Lower Basin, coupled with 1944 Treaty deliveries to Mexico, have exceeded inflows, causing significant decline in the amount of water in storage since 2000."

"Despite many interventions to prop up water levels in the reservoir," Megdal added, "Mother Nature's flows have not been generous."
What Happens Next

A recent study suggested that boosting wastewater recycling to 40 percent in the Colorado River Basin could conserve about 900,000 acre-feet of water annually—enough to supply nearly two million households. Meanwhile, the states that depend on the Colorado River have been in talks to establish new water-sharing agreements by 2026.
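As a quick back-of-the-envelope check on the figures reported above (the acre-foot-to-gallon conversion is a standard constant, not something stated in the article):

```latex
% How far below full pool the reported reading sits:
\[
1{,}229 \ \text{ft} - 1{,}054.85 \ \text{ft} = 174.15 \ \text{ft below full pool.}
\]
% Spreading the projected recycling savings over "nearly two million households":
\[
\frac{900{,}000 \ \text{acre-ft/yr}}{2{,}000{,}000 \ \text{households}}
\approx 0.45 \ \text{acre-ft/yr}
\approx 0.45 \times 325{,}851 \ \text{gal}
\approx 147{,}000 \ \text{gal per household per year.}
\]
```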

AI helps couple get pregnant after 18 years of trying — by finding infertile man's ‘hidden sperm’

New York Post | 04-07-2025

Could high-tech goggles be the key to spotting secret swimmers? A couple is expecting their first child after nearly 20 years of trying, thanks to artificial intelligence that found 'hidden' sperm in a man who was once thought to be shooting blanks.

The futuristic treatment, currently only available in New York, could offer fresh hope to thousands of men who've long been told they'd never father a biological child.

Studies show that male infertility affects about 10% to 15% of US males who are trying to conceive. Christoph Burgstedt

Most healthy semen is teeming with millions of sperm — but up to 15% of infertile men have azoospermia, where there's virtually nothing to be found.

'A semen sample can appear totally normal, but when you look under the microscope, you discover just a sea of cellular debris, with no sperm visible,' Dr. Zev Williams, director of the Columbia University Fertility Center, said in a press release.

Until now, men with azoospermia had few ways to overcome their missing swimmers.

'The options have typically been either to use donor sperm or to try undergoing a painful surgery where a portion of the testes is actually removed and they look in the testes to try to find sperm,' Williams said.

Frustrated by the lack of effective solutions, Williams and his team turned their gaze to the galaxies. Borrowing AI technology used by astrophysicists to detect faraway stars and planets, the researchers spent five years developing the STAR system — short for Sperm Tracking and Recovery — to search for life of a different kind.

And it works: When tested on a sample that embryologists had spent two days combing through with no success, STAR found 44 sperm in just one hour.

Williams and his team were inspired by the high-tech methods that astrophysicists are using in space. Columbia Doctors

'We're using the same technologies that are used to search for life in the universe to help create new life right here on earth,' Williams said.

In March 2025, a woman using the pseudonym Rosie became the first to get pregnant using the STAR method — after nearly 19 years of trying to conceive with her husband, who has azoospermia.

'There really was nothing else out there,' Rosie, 38, said in an interview with Time magazine. 'Especially because I am running quite a few years ahead of where we should be [for fertility]. I'm not that old, but in fertility years — egg-wise — I was reaching my end.'

For her husband, the process was surprisingly simple; all he had to do was provide a sperm sample. Researchers then scanned the sample with high-powered imaging, capturing more than 8 million images in under an hour. Using AI, they were able to detect three viable sperm cells. Once spotted, the tiny swimmers were swiftly extracted by a robot, avoiding damage from traditional methods like centrifugation, which spins the sample and can ruin the cells.

The STAR method eliminates the need for other painful, and sometimes risky, procedures to extract sperm. nito

'Imagine searching for a single needle hidden among a thousand haystacks scattered across ten football fields — and finding it in under two hours,' Williams told The Bump. 'That's the level of precision and speed delivered by the STAR system.'

After extraction, the sperm can be used right away for in vitro fertilization (IVF) or frozen for future attempts. In Rosie's case, doctors had her eggs successfully fertilized within two hours of collecting her husband's sample.
A few days later, the embryos were transferred to her uterus. Now five months pregnant, Rosie says it still feels surreal.

'I still wake up in the morning and can't believe if this is true or not,' she told Time. The baby is due in December.

A growing number of people are starting their families with assisted reproductive technology. weyo

The STAR system is currently exclusive to Columbia University Fertility Center, where Williams said several other patients are already in the 'banking stage.' He told CNN the full process to find, isolate and freeze sperm costs just under $3,000. The average cost of IVF in the US ranges from $12,400 to $15,000 but can exceed $30,000 when including medications and genetic testing, according to GoodRx.

While the new tech offers hope, some experts are skeptical.

'At face value, this looks promising, but as with any new technology in medicine, especially in reproductive care, we need to follow the data and study it further,' said Robert Brannigan, president-elect of the American Society for Reproductive Medicine, in an interview with the Washington Post.

The development of the STAR system comes amid a global rise in male infertility. One study found that sperm counts in Western men plunged 52.4% between 1973 and 2011. Scientists are still working to pinpoint the cause, but suspect environmental exposures and lifestyle factors like obesity, poor diet and physical inactivity are playing a role.

As infertility climbs, more couples are turning to assisted reproduction like IVF and the STAR system for a shot at parenthood.

'With our method, many men who were told they had no chance at a biological child now have that chance,' Williams said.
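To make the pipeline described above a little more concrete, here is a deliberately toy Python sketch of the general pattern of scanning a large stream of image frames and logging rare flagged candidates for later recovery. It is not the STAR system and uses no real model; the threshold test standing in for the AI detector, the frame sizes, and every name in it are illustrative assumptions only.

```python
# Illustrative sketch only: a toy frame-scanning loop in the spirit of the
# pipeline described above (image many frames, flag rare candidates, record
# where they were seen). The "detector" here is a simple brightness check
# standing in for a trained model, and the frames are random noise.

import numpy as np

rng = np.random.default_rng(0)


def detect_candidates(frame: np.ndarray, threshold: float = 0.999) -> list[tuple[int, int]]:
    """Return (row, col) positions whose intensity exceeds the threshold.

    A real system would run a trained model here; this stand-in just
    thresholds pixel intensity to illustrate the flag-and-record step.
    """
    rows, cols = np.nonzero(frame > threshold)
    return list(zip(rows.tolist(), cols.tolist()))


def scan(num_frames: int = 1000, shape: tuple[int, int] = (64, 64)) -> list[dict]:
    """Scan a stream of frames and log every flagged candidate."""
    hits = []
    for frame_index in range(num_frames):
        frame = rng.random(shape)          # stand-in for one microscope image
        for row, col in detect_candidates(frame):
            # Record where and when the candidate appeared so it could be
            # revisited for targeted recovery later.
            hits.append({"frame": frame_index, "row": row, "col": col})
    return hits


if __name__ == "__main__":
    found = scan()
    print(f"flagged {len(found)} candidate locations across 1000 frames")
```

In the real system described in the article, the detector is a trained model running over more than 8 million microscope images, and the recorded locations drive a robotic recovery step rather than a printout.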
