
A New Law of Nature Attempts to Explain the Complexity of the Universe
The original version of this story appeared in Quanta Magazine.
In 1950 the Italian physicist Enrico Fermi was discussing the possibility of intelligent alien life with his colleagues. If alien civilizations exist, he said, some should surely have had enough time to expand throughout the cosmos. So where are they?
Many answers to Fermi's 'paradox' have been proposed: Maybe alien civilizations burn out or destroy themselves before they can become interstellar wanderers. But perhaps the simplest answer is that such civilizations don't appear in the first place: Intelligent life is extremely unlikely, and we pose the question only because we are the supremely rare exception.
A new proposal by an interdisciplinary team of researchers challenges that bleak conclusion. They have proposed nothing less than a new law of nature, according to which the complexity of entities in the universe increases over time with an inexorability comparable to the second law of thermodynamics—the law that dictates an inevitable rise in entropy, a measure of disorder. If they're right, complex and intelligent life should be widespread.
In this new view, biological evolution appears not as a unique process that gave rise to a qualitatively distinct form of matter—living organisms. Instead, evolution is a special (and perhaps inevitable) case of a more general principle that governs the universe. According to this principle, entities are selected because they are richer in a kind of information that enables them to perform some kind of function.
This hypothesis, formulated by the mineralogist Robert Hazen and the astrobiologist Michael Wong of the Carnegie Institution in Washington, DC, along with a team of others, has provoked intense debate. Some researchers have welcomed the idea as part of a grand narrative about fundamental laws of nature. They argue that the basic laws of physics are not 'complete' in the sense of supplying all we need to comprehend natural phenomena; rather, evolution—biological or otherwise—introduces functions and novelties that could not even in principle be predicted from physics alone. 'I'm so glad they've done what they've done,' said Stuart Kauffman, an emeritus complexity theorist at the University of Pennsylvania. 'They've made these questions legitimate.'
Michael Wong, an astrobiologist at the Carnegie Institution in Washington, DC. Photograph: Katherine Cain/Carnegie Science
Others argue that extending evolutionary ideas about function to non-living systems is an overreach. The quantity that measures information in this new approach is not only relative—it changes depending on context—but also, in most cases, impossible to calculate. For this and other reasons, critics have charged that the new theory cannot be tested, and therefore is of little use.
The work taps into an expanding debate about how biological evolution fits within the normal framework of science. The theory of Darwinian evolution by natural selection helps us to understand how living things have changed in the past. But unlike most scientific theories, it can't predict much about what is to come. Might embedding it within a meta-law of increasing complexity let us glimpse what the future holds?

Making Meaning
The story begins in 2003, when the biologist Jack Szostak published a short article in Nature proposing the concept of functional information. Szostak—who six years later would get a Nobel Prize for unrelated work—wanted to quantify the amount of information or complexity that biological molecules like proteins or DNA strands embody. Classical information theory, developed by the telecommunications researcher Claude Shannon in the 1940s and later elaborated by the Russian mathematician Andrey Kolmogorov, offers one answer. Per Kolmogorov, the complexity of a string of symbols (such as binary 1s and 0s) depends on how concisely one can specify that sequence uniquely.
For example, consider DNA, which is a chain of four different building blocks called nucleotides. A strand composed only of one nucleotide, repeating again and again, has much less complexity—and, by extension, encodes less information—than one composed of all four nucleotides in which the sequence seems random (as is more typical in the genome).
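Kolmogorov complexity is uncomputable in general, but the size of a losslessly compressed copy of a string gives a rough upper bound on it. Here is a minimal sketch of the DNA example above, using zlib purely as an illustrative proxy for an ideal compressor (the sketch is mine, not part of the work described here):

```python
import random
import zlib

def compressed_size(seq: str) -> int:
    """Bytes needed to store the zlib-compressed sequence: a crude
    upper bound on its Kolmogorov complexity."""
    return len(zlib.compress(seq.encode()))

n = 10_000
repetitive = "A" * n                                            # one nucleotide, repeated
random_seq = "".join(random.choice("ACGT") for _ in range(n))   # seemingly random sequence

print(compressed_size(repetitive))  # tiny: the string has a very short description
print(compressed_size(random_seq))  # large: no description much shorter than the string itself
```

The repetitive strand compresses to a few dozen bytes, while the random one stays close to its raw size, mirroring the difference in complexity described above.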
Jack Szostak proposed a way to quantify information in biological systems. Photograph: HHMI
But Szostak pointed out that Kolmogorov's measure of complexity neglects an issue crucial to biology: how biological molecules function.
In biology, sometimes many different molecules can do the same job. Consider RNA molecules, some of which have biochemical functions that can easily be defined and measured. (Like DNA, RNA is made up of sequences of nucleotides.) In particular, short strands of RNA called aptamers can bind tightly to other molecules.
Let's say you want to find an RNA aptamer that binds to a particular target molecule. Can lots of aptamers do it, or just one? If only a single aptamer can do the job, then it's unique, just as a long, seemingly random sequence of letters is unique. Szostak said that this aptamer would have a lot of what he called 'functional information.'

Illustration: Irene Pérez for Quanta Magazine
If many different aptamers can perform the same task, the functional information is much smaller. So we can calculate the functional information of a molecule by asking how many other molecules of the same size can do the same task just as well.
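In Szostak's formulation, this count becomes a logarithm: the functional information of a function is -log2 of the fraction of all possible sequences that achieve it at or above a chosen threshold. A minimal sketch, with an invented scoring rule (matching a made-up motif, GGAUCC) standing in for a real binding assay:

```python
import math
from itertools import product

def functional_information(score, length: int, threshold: float,
                           alphabet: str = "ACGU") -> float:
    """Enumerate every sequence of the given length and return -log2 of
    the fraction whose activity meets the threshold."""
    total = functional = 0
    for letters in product(alphabet, repeat=length):
        total += 1
        if score("".join(letters)) >= threshold:
            functional += 1
    if functional == 0:
        return float("inf")  # no sequence performs the function at all
    return -math.log2(functional / total)

# Toy "assay": activity is the fraction of positions matching an ideal motif.
ideal = "GGAUCC"
score = lambda s: sum(a == b for a, b in zip(s, ideal)) / len(ideal)

print(functional_information(score, length=6, threshold=1.0))  # one binder in 4^6: 12 bits
print(functional_information(score, length=6, threshold=0.5))  # many binders: ~2.6 bits
```

The stricter the functional demand, the smaller the fraction of sequences that meet it, and the higher the functional information.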
Szostak went on to show that in a case like this, functional information can be measured experimentally. He made a bunch of RNA aptamers and used chemical methods to identify and isolate the ones that would bind to a chosen target molecule. He then mutated the winners a little to seek even better binders and repeated the process. The better an aptamer gets at binding, the less likely it is that another RNA molecule chosen at random will do just as well: The functional information of the winners in each round should rise. Szostak found that the functional information of the best-performing aptamers got ever closer to the maximum value predicted theoretically.

Selected for Function
Hazen came across Szostak's idea while thinking about the origin of life—an issue that drew him in as a mineralogist, because chemical reactions taking place on minerals have long been suspected to have played a key role in getting life started. 'I concluded that talking about life versus nonlife is a false dichotomy,' Hazen said. 'I felt there had to be some kind of continuum—there has to be something that's driving this process from simpler to more complex systems.' Functional information, he thought, promised a way to get at the 'increasing complexity of all kinds of evolving systems.'
In 2007 Hazen collaborated with Szostak to write a computer simulation involving algorithms that evolve via mutations. Their function, in this case, was not to bind to a target molecule, but to carry out computations. Again they found that the functional information increased spontaneously over time as the system evolved.
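A toy version of such a simulation, my own construction rather than Hazen and Szostak's actual code: bit-string 'genomes' are selected for agreement with a fixed target, and the functional information implied by the best score (computed exactly from the binomial distribution of random genomes) rises generation by generation:

```python
import math
import random
from math import comb

L = 64       # genome length in bits
POP = 200    # population size
TARGET = [random.randint(0, 1) for _ in range(L)]

def fitness(genome):
    """Number of bits agreeing with the target: the genome's 'function'."""
    return sum(g == t for g, t in zip(genome, TARGET))

def functional_information(best_score):
    """-log2 of the fraction of random genomes scoring at least best_score."""
    tail = sum(comb(L, i) for i in range(best_score, L + 1)) / 2 ** L
    return -math.log2(tail)

population = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    best = fitness(population[0])
    print(generation, best, round(functional_information(best), 2))
    winners = population[: POP // 10]  # selection: keep the top 10 percent
    # Reproduction with mutation: copy random winners, flipping ~1% of bits.
    population = [[bit ^ (random.random() < 0.01) for bit in random.choice(winners)]
                  for _ in range(POP)]
```

Run it and the third column climbs as selection sharpens the population, loosely echoing the spontaneous increase Hazen and Szostak observed.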
There the idea languished for years. Hazen could not see how to take it any further until Wong accepted a fellowship at the Carnegie Institution in 2021. Wong had a background in planetary atmospheres, but he and Hazen discovered they were thinking about the same questions. 'From the very first moment that we sat down and talked about ideas, it was unbelievable,' Hazen said.
Robert Hazen, a mineralogist at the Carnegie Institution in Washington, DC. Photograph: Courtesy of Robert Hazen
'I had got disillusioned with the state of the art of looking for life on other worlds,' Wong said. 'I thought it was too narrowly constrained to life as we know it here on Earth, but life elsewhere may take a completely different evolutionary trajectory. So how do we abstract far enough away from life on Earth that we'd be able to notice life elsewhere even if it had different chemical specifics, but not so far that we'd be including all kinds of self-organizing structures like hurricanes?'
The pair soon realized that they needed expertise from a whole other set of disciplines. 'We needed people who came at this problem from very different points of view, so that we all had checks and balances on each other's prejudices,' Hazen said. 'This is not a mineralogical problem; it's not a physics problem, or a philosophical problem. It's all of those things.'
They suspected that functional information was the key to understanding how complex systems like living organisms arise through evolutionary processes happening over time. 'We all assumed the second law of thermodynamics supplies the arrow of time,' Hazen said. 'But it seems like there's a much more idiosyncratic pathway that the universe takes. We think it's because of selection for function—a very orderly process that leads to ordered states. That's not part of the second law, although it's not inconsistent with it either.'
Looked at this way, the concept of functional information allowed the team to think about the development of complex systems that don't seem related to life at all.
At first glance, it doesn't seem a promising idea. In biology, function makes sense. But what does 'function' mean for a rock?
All it really implies, Hazen said, is that some selective process favors one entity over lots of other potential combinations. A huge number of different minerals can form from silicon, oxygen, aluminum, calcium, and so on. But only a few are found in any given environment. The most stable minerals turn out to be the most common. But sometimes less stable minerals persist because there isn't enough energy available to convert them to more stable phases.
'Information itself might be a vital parameter of the cosmos, similar to mass, charge, and energy.'
This might seem trivial, like saying that some objects exist while other ones don't, even if they could in theory. But Hazen and Wong have shown that, even for minerals, functional information has increased over the course of Earth's history. Minerals evolve toward greater complexity (though not in the Darwinian sense). Hazen and colleagues speculate that complex forms of carbon such as graphene might form in the hydrocarbon-rich environment of Saturn's moon Titan—another example of an increase in functional information that doesn't involve life.
It's the same with chemical elements. The first moments after the Big Bang were filled with undifferentiated energy. As things cooled, quarks formed and then condensed into protons and neutrons. These gathered into the nuclei of hydrogen, helium, and lithium atoms. Only once stars formed and nuclear fusion happened within them did more complex elements like carbon and oxygen form. And only when some stars had exhausted their fusion fuel did their collapse and explosion in supernovas create heavier elements such as heavy metals. Steadily, the elements increased in nuclear complexity.
Wong said their work implies three main conclusions.
First, biology is just one example of evolution. 'There is a more universal description that drives the evolution of complex systems.'

Illustration: Irene Pérez for Quanta Magazine
Second, he said, there might be 'an arrow in time that describes this increasing complexity,' similar to the way the second law of thermodynamics, which describes the increase in entropy, is thought to create a preferred direction of time.
Finally, Wong said, 'information itself might be a vital parameter of the cosmos, similar to mass, charge and energy.'
In the work Hazen and Szostak conducted on evolution using artificial-life algorithms, the increase in functional information was not always gradual. Sometimes it would happen in sudden jumps. That echoes what is seen in biological evolution. Biologists have long recognized transitions where the complexity of organisms increases abruptly. One such transition was the appearance of organisms with cellular nuclei (around 2.7 billion to 1.8 billion years ago). Then there was the transition to multicellular organisms (around 2 billion to 1.6 billion years ago), the abrupt diversification of body forms in the Cambrian explosion (540 million years ago), and the appearance of central nervous systems (around 600 million to 520 million years ago). The arrival of humans was arguably another major and rapid evolutionary transition.
Evolutionary biologists have tended to view each of these transitions as a contingent event. But within the functional-information framework, it seems possible that such jumps in evolutionary processes (whether biological or not) are inevitable.
In these jumps, Wong pictures the evolving objects as accessing an entirely new landscape of possibilities and ways to become organized, as if penetrating to the 'next floor up.' Crucially, what matters—the criteria for selection, on which continued evolution depends—also changes, plotting a wholly novel course. On the next floor up, possibilities await that could not have been guessed before you reached it.
For example, during the origin of life it might initially have mattered that proto-biological molecules would persist for a long time—that they'd be stable. But once such molecules became organized into groups that could catalyze one another's formation—what Kauffman has called autocatalytic cycles—the molecules themselves could be short-lived, so long as the cycles persisted. Now it was dynamical, not thermodynamic, stability that mattered. Ricard Solé of the Santa Fe Institute thinks such jumps might be equivalent to phase transitions in physics, such as the freezing of water or the magnetization of iron: They are collective processes with universal features, and they mean that everything changes, everywhere, all at once. In other words, in this view there's a kind of physics of evolution—and it's a kind of physics we know about already.

The Biosphere Creates Its Own Possibilities
The tricky thing about functional information is that, unlike a measure such as size or mass, it is contextual: It depends on what we want the object to do, and what environment it is in. For instance, the functional information for an RNA aptamer binding to a particular molecule will generally be quite different from the information for binding to a different molecule.
Yet finding new uses for existing components is precisely what evolution does. Feathers, for example, did not evolve for flight; they likely arose for insulation or display and were later co-opted for it. This repurposing reflects how biological evolution is jerry-rigged, making use of what's available.
Kauffman argues that biological evolution is thus constantly creating not just new types of organisms but new possibilities for organisms, ones that not only did not exist at an earlier stage of evolution but could not possibly have existed. From the soup of single-celled organisms that constituted life on Earth 3 billion years ago, no elephant could have suddenly emerged—this required a whole host of preceding, contingent but specific innovations.
However, there is no theoretical limit to the number of uses an object has. This means that the appearance of new functions in evolution can't be predicted—and yet some new functions can dictate the very rules of how the system evolves subsequently. 'The biosphere is creating its own possibilities,' Kauffman said. 'Not only do we not know what will happen, we don't even know what can happen.' Photosynthesis was such a profound development; so were eukaryotes, nervous systems and language. As the microbiologist Carl Woese and the physicist Nigel Goldenfeld put it in 2011, 'We need an additional set of rules describing the evolution of the original rules. But this upper level of rules itself needs to evolve. Thus, we end up with an infinite hierarchy.'
The physicist Paul Davies of Arizona State University agrees that biological evolution 'generates its own extended possibility space which cannot be reliably predicted or captured via any deterministic process from prior states. So life evolves partly into the unknown.'
'An increase in complexity provides the future potential to find new strategies unavailable to simpler organisms.'
Mathematically, a 'phase space' is a way of describing all possible configurations of a physical system, whether it's as comparatively simple as an idealized pendulum or as complicated as all the atoms comprising the Earth. Davies and his co-workers have recently suggested that evolution in an expanding accessible phase space might be formally equivalent to the 'incompleteness theorems' devised by the mathematician Kurt Gödel. Gödel showed that any system of axioms in mathematics permits the formulation of statements that can't be shown to be true or false. We can only decide such statements by adding new axioms.
Davies and colleagues say that, as with Gödel's theorem, the key factor that makes biological evolution open-ended and prevents us from being able to express it in a self-contained and all-encompassing phase space is that it is self-referential: The appearance of new actors in the space feeds back on those already there to create new possibilities for action. This isn't the case for physical systems, which, even if they have, say, millions of stars in a galaxy, are not self-referential.
'An increase in complexity provides the future potential to find new strategies unavailable to simpler organisms,' said Marcus Heisler, a plant developmental biologist at the University of Sydney and co-author of the incompleteness paper. This connection between biological evolution and the issue of noncomputability, Davies said, 'goes right to the heart of what makes life so magical.'
Is biology special, then, among evolutionary processes in having an open-endedness generated by self-reference? Hazen thinks that in fact once complex cognition is added to the mix—once the components of the system can reason, choose, and run experiments 'in their heads'—the potential for macro-micro feedback and open-ended growth is even greater. 'Technological applications take us way beyond Darwinism,' he said. A watch gets made faster if the watchmaker is not blind.

Back to the Bench
If Hazen and colleagues are right that evolution involving any kind of selection inevitably increases functional information—in effect, complexity—does this mean that life itself, and perhaps consciousness and higher intelligence, is inevitable in the universe? That would run counter to what some biologists have thought. The eminent evolutionary biologist Ernst Mayr believed that the search for extraterrestrial intelligence was doomed because the appearance of humanlike intelligence is 'utterly improbable.' After all, he said, if intelligence at a level that leads to cultures and civilizations were so adaptively useful in Darwinian evolution, how come it only arose once across the entire tree of life?
Mayr's evolutionary point possibly vanishes in the jump to humanlike complexity and intelligence, whereupon the whole playing field is utterly transformed. Humans attained planetary dominance so rapidly (for better or worse) that the question of when it will happen again becomes moot.

Illustration: Irene Pérez for Quanta Magazine
But what about the chances of such a jump happening in the first place? If the new 'law of increasing functional information' is right, it looks as though life, once it exists, is bound to get more complex by leaps and bounds. It doesn't have to rely on some highly improbable chance event.
What's more, such an increase in complexity seems to imply the appearance of new causal laws in nature that, while not incompatible with the fundamental laws of physics governing the smallest component parts, effectively take over from them in determining what happens next. Arguably we see this already in biology: Galileo's (apocryphal) experiment of dropping two masses from the Leaning Tower of Pisa no longer has predictive power when the masses are not cannonballs but living birds.
Together with the chemist Lee Cronin of the University of Glasgow, Sara Walker of Arizona State University has devised an alternative set of ideas to describe how complexity arises, called assembly theory. In place of functional information, assembly theory relies on a number called the assembly index, which measures the minimum number of steps required to make an object from its constituent ingredients.
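Applied to strings rather than molecules (a toy adaptation of mine; assembly theory proper is defined over chemical bonds), the assembly index is the fewest joining steps needed to build a target when any previously built piece can be reused. A brute-force breadth-first search makes the idea concrete for short strings:

```python
from collections import deque

def assembly_index(target: str) -> int:
    """Minimum number of concatenation steps to assemble `target`,
    starting from single characters and reusing any built piece."""
    start = frozenset(target)          # individual characters come for free
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        have, steps = queue.popleft()
        if target in have:
            return steps               # BFS guarantees this count is minimal
        for a in have:
            for b in have:
                joined = a + b
                # Only intermediates that occur in the target can ever help.
                if joined in target and joined not in have:
                    nxt = have | {joined}
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, steps + 1))
    return -1  # not reachable (cannot happen for a nonempty target)

print(assembly_index("BANANA"))  # 4: reusing "AN" beats five one-letter joins
print(assembly_index("ABCDEF"))  # 5: no repeated parts, so every join is new
```

Objects with reusable substructure need fewer steps than their size suggests; in assembly theory, finding high-index objects in abundance is read as a sign of selection at work.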
'Laws for living systems must be somewhat different than what we have in physics now,' Walker said, 'but that does not mean that there are no laws.' But she doubts that the putative law of functional information can be rigorously tested in the lab. 'I am not sure how one could say [the theory] is right or wrong, since there is no way to test it objectively,' she said. 'What would the experiment look for? How would it be controlled? I would love to see an example, but I remain skeptical until some metrology is done in this area.'
Hazen acknowledges that, for most physical objects, it is impossible to calculate functional information even in principle. Even for a single living cell, he admits, there's no way of quantifying it. But he argues that this is not a sticking point, because we can still understand it conceptually and get an approximate quantitative sense of it. Similarly, we can't calculate the exact dynamics of the asteroid belt because the gravitational problem is too complicated—but we can still describe it approximately enough to navigate spacecraft through it.
Wong sees a potential application of their ideas in astrobiology. One of the curious aspects of living organisms on Earth is that they tend to make a far smaller subset of organic molecules than they could make given the basic ingredients. That's because natural selection has picked out some favored compounds. There's much more glucose in living cells, for example, than you'd expect if molecules were simply being made either randomly or according to their thermodynamic stability. So one potential signature of lifelike entities on other worlds might be similar signs of selection outside what chemical thermodynamics or kinetics alone would generate. (Assembly theory similarly predicts complexity-based biosignatures.)
There might be other ways of putting the ideas to the test. Wong said there is more work still to be done on mineral evolution, and they hope to look at nucleosynthesis and computational 'artificial life.' Hazen also sees possible applications in oncology, soil science and language evolution. For example, the evolutionary biologist Frédéric Thomas of the University of Montpellier in France and colleagues have argued that the selective principles governing the way cancer cells change over time in tumors are not like those of Darwinian evolution, in which the selection criterion is fitness, but more closely resemble the idea of selection for function from Hazen and colleagues.
Hazen's team has been fielding queries from researchers ranging from economists to neuroscientists, who are keen to see if the approach can help. 'People are approaching us because they are desperate to find a model to explain their system,' Hazen said.
But whether or not functional information turns out to be the right tool for thinking about these questions, many researchers seem to be converging on similar questions about complexity, information, evolution (both biological and cosmic), function and purpose, and the directionality of time. It's hard not to suspect that something big is afoot. There are echoes of the early days of thermodynamics, which began with humble questions about how machines work and ended up speaking to the arrow of time, the peculiarities of living matter, and the fate of the universe.
Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.