
Latest news with #QuantaMagazine

The Hunt for a Fundamental Theory of Quantum Gravity

WIRED

4 days ago

  • Science
  • WIRED

The Hunt for a Fundamental Theory of Quantum Gravity

Jul 20, 2025 7:00 AM Black hole and Big Bang singularities break our best theory of gravity. A trilogy of theorems hints that physicists must go to the ends of space and time to find a fix. ILLUSTRATION: MARK BELAN FOR QUANTA MAGAZINE The original version of this story appeared in Quanta Magazine . Two blind spots torture physicists: the birth of the universe and the center of a black hole. The former may feel like a moment in time and the latter a point in space, but in both cases the normally interwoven threads of space and time seem to stop short. These mysterious points are known as singularities. Singularities are predictions of Albert Einstein's general theory of relativity. According to this theory, clumps of matter or energy curve the space-time fabric toward themselves, and this curvature induces the force of gravity. Pack enough stuff into a small enough spot, and Einstein's equations seem to predict that space-time will curve infinitely steeply there, such that gravity grows infinitely strong. Most physicists don't believe, however, that Einstein's theory says much about what really happens at these points. Rather, singularities are widely seen as 'mathematical artifacts,' as Hong Liu, a physicist at the Massachusetts Institute of Technology, put it, not objects that 'occur in any physical universe.' They are where general relativity malfunctions. The singularities are expected to vanish in a more fundamental theory of gravity that Einstein's space-time picture merely approximates—a theory of quantum gravity. But as physicists take steps toward that truer and more complete theory by merging general relativity and quantum physics, singularities are proving hard to erase. The British mathematical physicist Roger Penrose won the Nobel Prize in Physics for proving in the 1960s that singularities would inevitably occur in an empty universe made up entirely of space-time. More recent research has extended this insight into more realistic circumstances. One paper established that a universe with quantum particles would also feature singularities, although it only considered the case where the particles don't bend the space-time fabric at all. Then, earlier this year, a physicist proved that these blemishes exist even in theoretical universes where quantum particles do slightly nudge space-time itself—that is, universes quite a bit like our own. This trilogy of proofs challenges physicists to confront the possibility that singularities may be more than mere mathematical mirages. They hint that our universe may in fact contain points where space-time frays so much that it becomes unrecognizable. No object can pass, and clocks tick to a halt. The singularity theorems invite researchers to grapple with the nature of these points and pursue a more fundamental theory that can clarify what might continue if time truly stops. Space-Time's Fatal Flaws Karl Schwarzschild first discovered an arrangement of space-time with a singularity in 1916, just months after Einstein published general relativity. The bizarre features of the 'Schwarzschild solution' took years for physicists to understand. Space-time assumes a shape analogous to a whirlpool with walls that swirl more and more steeply as you go farther in; at the bottom, the curvature of space-time is infinite. The vortex is inescapable; it has a spherical boundary that traps anything falling inside, even light rays. It took decades for physicists to accept that these inconceivable objects, eventually dubbed black holes, might actually exist. 
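For readers who want the formula behind that picture, the Schwarzschild geometry and its central singularity can be written down explicitly (a standard textbook statement, not something spelled out in the article). In Schwarzschild coordinates the line element is

\[
ds^2 \;=\; -\Big(1-\tfrac{2GM}{c^2 r}\Big)\,c^2\,dt^2 \;+\; \Big(1-\tfrac{2GM}{c^2 r}\Big)^{-1} dr^2 \;+\; r^2\, d\Omega^2 ,
\]

and the curvature invariant \( R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} = 48\,G^2 M^2 / (c^4 r^6) \) blows up as \( r \to 0 \). The surface at \( r = 2GM/c^2 \), by contrast, turns out to be only a coordinate artifact: the event horizon that traps light, not a true singularity.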
The British mathematical physicist Roger Penrose proved that given two simple assumptions, space-time must end at points called singularities. COURTESY OF PUBLIC DOMAIN J. Robert Oppenheimer and Hartland Snyder calculated in 1939 that if a perfectly spherical star gravitationally collapses to a point, its matter will become so dense that it will stretch space-time into a singularity. But real stars bubble and churn, especially while imploding, so physicists wondered whether their nonspherical shapes would stop them from forming singularities. Penrose eliminated the need for geometric perfection in 1965. His landmark proof relied on two assumptions. First, you need a 'trapped surface' inside of which light can never escape. If you cover this surface in light bulbs and switch them on, their light rays will fall inward faster than they can travel outward. Crucially, this shell of light will shrink regardless of whether it started out as a perfect sphere, a dimpled golf ball, or something more misshapen. Second, space-time should always curve in such a way that light rays bend toward each other but never diverge. In short, gravity should be attractive, which is the case so long as energy is never negative. With these two stipulations, Penrose proved the mortality of at least one of the trapped light rays. Its otherwise eternal journey through space and time must terminate in a singularity, a point where the space-time fabric ceases to exist, where there is no future for the light ray to travel into. This was a new definition of a singularity, distinct from the infinite curvature of the Schwarzschild solution. Its generality enabled Penrose to prove in three scant pages of math that, under his two assumptions, singularities will inevitably form. Caption: A hand-drawn figure in Penrose's 1965 paper proving the singularity theorem shows the collapse of space-time to form a singularity. The paper has been called 'the most important paper in general relativity' since Einstein's. ILLUSTRATION: Roger Penrose, Physical Review Letters, American Physical Society 'Penrose's paper was probably the most important paper in general relativity ever written, other than Einstein's original paper,' said Geoff Penington, a physicist at the University of California, Berkeley. Stephen Hawking soon extended Penrose's argument to the early universe, proving that a cosmos described by general relativity must have sprung from a singular point during the Big Bang. This cosmological singularity resembles a black hole in that, if you imagine rewinding the history of the universe, light rays will run into a wall at the beginning of time. Over the years, physicists have accumulated heaps of evidence that black holes exist, and that the universe began with an event that looks very much like a Big Bang. But do these phenomena truly represent space-time singularities? Many physicists find the actual existence of such points unthinkable. When you try to calculate the fate of a particle approaching the singularity, general relativity glitches and gives impossible, infinite answers. 'The singularity means a lack of predictability,' Liu said. 'Your theory just breaks down.' But the particle in the real world must have a fate of some sort. So a more universal theory that can predict that fate—very likely a quantum theory—must take over. General relativity is a classical theory, meaning that space-time takes on one, and only one, shape at every moment. 
In contrast, matter is quantum mechanical, meaning it can have multiple possible states at once—a feature known as superposition. Since space-time reacts to the matter in it, theorists expect that any matter particles in a superposition of occupying two different locations should force space-time into a superposition of two distortions. That is, space-time and gravity should also follow quantum rules. But physicists haven't yet worked out what those rules are. Into the Onion Theorists approach their quest for a quantum theory of gravity the way they might peel an onion: layer by layer. Each layer represents a theory of a universe that imperfectly approximates the real one. The deeper you go, the more of the interplay between quantum matter and space-time you can capture. The German physicist-soldier Karl Schwarzschild calculated the shape that space-time takes around a massive point. Years later, physicists realized that this geometry contains a singularity. COURTESY OF PUBLIC DOMAIN Penrose worked in the outermost layer of the onion. He used the general theory of relativity and ignored quantumness entirely. In effect, he proved that the space-time fabric has singularities when it is completely devoid of any quantum matter. Physicists aspire to someday reach the onion's core. In it, they'll find a theory describing both space-time and matter in all their quantum glory. This theory would have no blind spots—all calculations should yield meaningful results. But what about the middle layers? Could physicists resolve Penrose's singularities by moving to something a little more quantum, and therefore a little more realistic? 'It was the obvious speculation, that somehow quantum effects should fix the singularity,' Penington said. They first tried to do so in the late 2000s. The assumption that had confined Penrose's proof to the outermost layer was that energy is never negative. That's true in everyday, classical situations, but not in quantum mechanics. Energy goes negative, at least momentarily, in quantum phenomena such as the Casimir effect, where (experiments show) two metal plates attract each other in a vacuum. And negative energies play a role in the way black holes are thought to radiate particles, eventually 'evaporating' entirely. All the deeper, quantum layers of the onion would feature this exotic energetic behavior. The physicist who peeled the top layer was Aron Wall, then based at the University of Maryland and now at the University of Cambridge. To cut into the quantum realm and abandon Penrose's energy assumption, Wall latched on to a theoretical discovery made in the 1970s by Jacob Bekenstein. Bekenstein knew that for any given region of space, the contents of the region grow more mixed up as time goes on. In other words, entropy, a measure of this mixing, tends to increase, a rule known as the second law of thermodynamics. While considering a region that contains a black hole, the physicist realized that the entropy comes from two sources. There's the standard source—the number of ways that quantum particles in the space around the black hole could be arranged. But the black hole has entropy too, and the amount depends on the black hole's surface area. So the total entropy of the region is a sum: the surface area of the black hole plus the entropy of nearby quantum stuff. This observation became known as the 'generalized' second law. Wall 'made it his mission to understand the generalized second law,' said Raphael Bousso, a physicist at Berkeley. 
'He was thinking about it in much clearer and much better ways than everybody else on the planet.' Reaching the quantum layers of the onion would mean accommodating negative energy and the presence of quantum particles. To do so, Wall reasoned that he could take any surface area in general relativity and add to it the entropy of those particles, as the generalized second law suggested. Penrose's proof of his singularity theorem had involved the trapped surface. So Wall upgraded it to a 'quantum trapped surface.' And when he reworked Penrose's singularity theorem in this way, it held. Singularities form even in the presence of quantum particles. Wall published his findings in 2010. In 2010, Aron Wall, now at the University of Cambridge, revamped Penrose's proof to show that singularities exist in a world where space-time has no quantum properties but is filled with particles that do. PHOTOGRAPH: NICOLE WALL 'Aron's paper was a seminal breakthrough in combining quantum mechanics and gravity in a more precise way,' Penington said. Having peeled back the classical outer layer of the onion, where energy is always positive, Wall reached a lightly quantum layer—a context physicists call semiclassical. In a semiclassical world, space-time guides the journeys of quantum particles, but it cannot react to their presence. A semiclassical black hole will radiate particles, for instance, since that's a consequence of how particles experience a space-time warped into a black hole shape. But the space-time—the black hole itself—will never actually shrink in size even as the radiation leaks energy into the void for all eternity. That's almost, but not exactly, what happens in the real universe. You could watch a black hole radiate particles for a century without seeing it shrink a single nanometer. But if you could watch for longer—many trillions upon trillions of years—you would see the black hole waste away to nothing. The next onion layer beckoned. Dialing Up the Quantumness Bousso recently revisited Wall's proof and found that he could cut a little deeper. What about the world where black holes shrink as they radiate? In this scenario, the space-time fabric can react to quantum particles. Using more refined mathematical machinery developed by Wall and others since 2010, Bousso found that, despite the intensified quantumness of his scenario, singularities continue to exist. He posted his paper, which has not yet been peer-reviewed, in January. The world of Bousso's new theorem still departs from our universe in notable ways. For mathematical convenience, he assumed that there's an unlimited variety of particles—an unrealistic assumption that makes some physicists wonder whether this third layer matches reality (with its 17 or so known particles) any better than the second layer does. 'We don't have an infinite number of quantum fields,' said Edgar Shaghoulian, a physicist at the University of California, Santa Cruz. Still, for some experts, Bousso's work delivers a satisfying denouement to the Penrose and Wall singularity story, despite its unrealistic abundance of particles. It establishes that singularities can't be avoided, even in space-times with mild reactions to quantum matter. 'Just by adding small quantum corrections, you can't prevent the singularity,' Penington said. Wall and Bousso's work 'answers that pretty definitively.' The Real Singularity But Bousso's theorem still doesn't guarantee that singularities must form in our universe. 
Some physicists hold out hope that the dead ends do somehow go away. What seems like a singularity could actually connect to somewhere else. In the case of a black hole, perhaps those light rays end up in another universe. And a lack of a Big Bang singularity might imply that our universe began with a 'Big Bounce.' The idea is that a previous universe, as it collapsed under the pull of gravity, somehow dodged the formation of a singularity and instead bounced into a period of expansion. Physicists who are developing bounce theories often work in the second layer of the onion, using semiclassical physics that exploits negative-energy quantum effects to get around the singularity required by the Penrose and Hawking theorems. In light of the newer theorems, they will now need to swallow the uncomfortable truth that their theories violate the generalized second law as well. One physicist pursuing bounces, Surjeet Rajendran of Johns Hopkins University, says he is undaunted. He points out that not even the generalized second law is gospel truth. Rejecting it would make singularities avoidable and continuations of space-time possible. Singularity skeptics can also appeal to the theory at the core of the onion, where space-time behaves in truly quantum ways, such as taking on superpositions. There, nothing can be taken for granted. It becomes hard to define the concept of area, for instance, so it's not clear what form the second law should take, and therefore the new theorems won't hold. Bousso and like-minded physicists, however, suspect that a highly quantum arena with no notion of area is tantamount to a dead-end for a light ray, and therefore that something Penrose would recognize as a singularity should persist in the core theory and in our universe. The beginning of the cosmos and the hearts of black holes would truly mark edges of the map where clocks can't tick and space stops. 'Inside of black holes, I am positive there is some notion of singularity,' said Netta Engelhardt, a physicist at MIT who has worked with Wall. In that case, the still-unknown fundamental theory of quantum gravity would not kill singularities but demystify them. This truer theory would allow physicists to ask questions and calculate meaningful answers, but the language of those questions and answers would change dramatically. Space-time quantities like position, curvature and duration might be useless for describing a singularity. There, where time ends, other quantities or concepts might have to take their place. 'If you had to make me guess,' Penington said, 'whatever quantum state describes the singularity itself does not have a notion of time.' Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

For Algorithms, Memory Is a Far More Powerful Resource Than Time

WIRED

13-07-2025

  • Science
  • WIRED

For Algorithms, Memory Is a Far More Powerful Resource Than Time

Jul 13, 2025 7:00 AM One computer scientist's 'stunning' proof is the first progress in 50 years on one of the most famous questions in computer science. Illustration: Irene Pérez/Quanta Magazine The original version of this story appeared in Quanta Magazine . One July afternoon in 2024, Ryan Williams set out to prove himself wrong. Two months had passed since he'd hit upon a startling discovery about the relationship between time and memory in computing. It was a rough sketch of a mathematical proof that memory was more powerful than computer scientists believed: A small amount would be as helpful as a lot of time in all conceivable computations. That sounded so improbable that he assumed something had to be wrong, and he promptly set the proof aside to focus on less crazy ideas. Now, he'd finally carved out time to find the error. That's not what happened. After hours of poring over his argument, Williams couldn't find a single flaw. 'I just thought I was losing my mind,' said Williams, a theoretical computer scientist at the Massachusetts Institute of Technology. For the first time, he began to entertain the possibility that maybe, just maybe, memory really was as powerful as his work suggested. Over the months that followed, he fleshed out the details, scrutinized every step, and solicited feedback from a handful of other researchers. In February, he finally posted his proof online, to widespread acclaim. 'It's amazing. It's beautiful,' said Avi Wigderson, a theoretical computer scientist at the Institute for Advanced Study in Princeton, New Jersey. As soon as he heard the news, Wigderson sent Williams a congratulatory email. Its subject line: 'You blew my mind.' Time and memory (also called space) are the two most fundamental resources in computation: Every algorithm takes some time to run and requires some space to store data while it's running. Until now, the only known algorithms for accomplishing certain tasks required an amount of space roughly proportional to their run time, and researchers had long assumed there's no way to do better. Williams' proof established a mathematical procedure for transforming any algorithm—no matter what it does—into a form that uses much less space. Ryan Williams stunned his colleagues with a milestone proof about the relationship between time and space in computation. Photograph: Katherine Taylor for Quanta Magazine What's more, this result—a statement about what you can compute given a certain amount of space—also implies a second result, about what you cannot compute in a certain amount of time. This second result isn't surprising in itself: Researchers expected it to be true, but they had no idea how to prove it. Williams' solution, based on his sweeping first result, feels almost cartoonishly excessive, akin to proving a suspected murderer guilty by establishing an ironclad alibi for everyone else on the planet. It could also offer a new way to attack one of the oldest open problems in computer science. 'It's a pretty stunning result, and a massive advance,' said Paul Beame, a computer scientist at the University of Washington. 'It's a little bit less of a surprise that it's Ryan doing this.' Space to Roam Williams, 46, has an open, expressive face and a hint of gray in his hair. His office, which looks out over the colorful spires of MIT's Stata Center, is another illustration of the creative use of space. 
A pair of yoga mats have transformed a window ledge into a makeshift reading nook, and the desk is tucked into an oddly shaped corner, freeing up room for a couch facing a large whiteboard brimming with mathematical scribblings. MIT is a long way from Williams' childhood home in rural Alabama, where there was no shortage of space. He grew up on a 50-acre farm and first saw a computer at age 7, when his mother drove him across the county for a special academic enrichment class. He recalled being transfixed by a simple program for generating a digital fireworks display. 'It was taking a random color and sending it in a random direction from the middle of the monitor,' Williams said. 'You couldn't have predicted what picture you're going to get.' The world of computers seemed a wild and wonderful playground, full of infinite possibilities. Young Williams was hooked. 'I was writing programs to myself, on paper, to be run on a future computer,' he said. 'My parents didn't really know what to do with me.' Williams' office, like his new result, makes creative use of space. Photograph: Katherine Taylor for Quanta Magazine As he grew older, Williams advanced from imaginary computers to real ones. For his last two years of high school, he transferred to the Alabama School of Math and Science, a prestigious public boarding school, where he first encountered the theoretical side of computer science. 'I realized that there was a wider world of things out there, and that there was a way to think mathematically about computers,' he said. 'That's what I wanted to do.' Williams was especially drawn to a branch of theoretical computer science called computational complexity theory. It deals with the resources (such as time and space) that are needed to solve computational problems such as sorting lists or factoring numbers. Most problems can be solved by many different algorithms, each with its own demands on time and space. Complexity theorists sort problems into categories, called complexity classes, based on the resource demands of the best algorithms for solving them—that is, the algorithms that run fastest or use the least amount of space. But how do you make the study of computational resources mathematically rigorous? You won't get far if you try to analyze time and space by comparing minutes to megabytes. To make any progress, you need to start with the right definitions. Getting Resourceful Those definitions emerged from the work of Juris Hartmanis, a pioneering computer scientist who had experience making do with limited resources. He was born in 1928 into a prominent Latvian family, but his childhood was disrupted by the outbreak of World War II. Occupying Soviet forces arrested and executed his father, and after the war Hartmanis finished high school in a refugee camp. He went on to university, where he excelled even though he couldn't afford textbooks. 'I compensated by taking very detailed notes in lectures,' he recalled in a 2009 interview. 'There is a certain advantage to [having] to improvise and overcome difficulties.' Hartmanis immigrated to the United States in 1949, and worked a series of odd jobs—building agricultural machinery, manufacturing steel and even serving as a butler—while studying mathematics at the University of Kansas City. He went on to a spectacularly successful career in the young field of theoretical computer science. 
In 1960, while working at the General Electric research laboratory in Schenectady, New York, Hartmanis met Richard Stearns, a graduate student doing a summer internship. In a pair of groundbreaking papers they established precise mathematical definitions for time and space. These definitions gave researchers the language they needed to compare the two resources and sort problems into complexity classes. In the 1960s, Juris Hartmanis established the definitions that computer scientists use to analyze space and time. Courtesy of the Division of Rare and Manuscript Collections, Cornell University Library One of the most important classes goes by the humble name 'P.' Roughly speaking, it encompasses all problems that can be solved in a reasonable amount of time. An analogous complexity class for space is dubbed 'PSPACE.' The relationship between these two classes is one of the central questions of complexity theory. Every problem in P is also in PSPACE, because fast algorithms just don't have enough time to fill up much space in a computer's memory. If the reverse statement were also true, the two classes would be equivalent: Space and time would have comparable computational power. But complexity theorists suspect that PSPACE is a much larger class, containing many problems that aren't in P. In other words, they believe that space is a far more powerful computational resource than time. This belief stems from the fact that algorithms can use the same small chunk of memory over and over, while time isn't as forgiving—once it passes, you can't get it back. 'The intuition is just so simple,' Williams said. 'You can reuse space, but you can't reuse time.' But intuition isn't good enough for complexity theorists: They want rigorous proof. To prove that PSPACE is larger than P, researchers would have to show that for some problems in PSPACE, fast algorithms are categorically impossible. Where would they even start? A Space Odyssey As it happened, they started at Cornell University, where Hartmanis moved in 1965 to head the newly established computer science department. Under his leadership it quickly became a center of research in complexity theory, and in the early 1970s, a pair of researchers there, John Hopcroft and Wolfgang Paul, set out to establish a precise link between time and space. 'I thought about it, and I was like, 'Well, that just simply can't be true.'' Hopcroft and Paul knew that to resolve the P versus PSPACE problem, they'd have to prove that you can't do certain computations in a limited amount of time. But it's hard to prove a negative. Instead, they decided to flip the problem on its head and explore what you can do with limited space. They hoped to prove that algorithms given a certain space budget can solve all the same problems as algorithms with a slightly larger time budget. That would indicate that space is at least slightly more powerful than time—a small but necessary step toward showing that PSPACE is larger than P. With that goal in mind, they turned to a method that complexity theorists call simulation, which involves transforming existing algorithms into new ones that solve the same problems, but with different amounts of space and time. To understand the basic idea, imagine you're given a fast algorithm for alphabetizing your bookshelf, but it requires you to lay out your books in dozens of small piles. You might prefer an approach that takes up less space in your apartment, even if it takes longer. 
A simulation is a mathematical procedure you could use to get a more suitable algorithm: Feed it the original, and it'll give you a new algorithm that saves space at the expense of time. Algorithm designers have long studied these space-time trade-offs for specific tasks like sorting. But to establish a general relationship between time and space, Hopcroft and Paul needed something more comprehensive: a space-saving simulation procedure that works for every algorithm, no matter what problem it solves. They expected this generality to come at a cost. A universal simulation can't exploit the details of any specific problem, so it probably won't save as much space as a specialized simulation. But when Hopcroft and Paul started their work, there were no known universal methods for saving space at all. Even saving a small amount would be progress. The breakthrough came in 1975, after Hopcroft and Paul teamed up with a young researcher named Leslie Valiant. The trio devised a universal simulation procedure that always saves a bit of space. No matter what algorithm you give it, you'll get an equivalent one whose space budget is slightly smaller than the original algorithm's time budget. 'Anything you can do in so much time, you can also do in slightly less space,' Valiant said. It was the first major step in the quest to connect space and time. In 1975, Leslie Valiant helped prove that space is a slightly more powerful computational resource than time. Photograph: Katherine Taylor for Quanta Magazine But then progress stalled, and complexity theorists began to suspect that they'd hit a fundamental barrier. The problem was precisely the universal character of Hopcroft, Paul and Valiant's simulation. While many problems can be solved with much less space than time, some intuitively seemed like they'd need nearly as much space as time. If so, more space-efficient universal simulations would be impossible. Paul and two other researchers soon proved that they are indeed impossible, provided you make one seemingly obvious assumption: Different chunks of data can't occupy the same space in memory at the same time. 'Everybody took it for granted that you cannot do better,' Wigderson said. Paul's result suggested that resolving the P versus PSPACE problem would require abandoning simulation altogether in favor of a different approach, but nobody had any good ideas. That was where the problem stood for 50 years—until Williams finally broke the logjam. First, he had to get through college. Complexity Classes In 1996, the time came for Williams to apply to colleges. He knew that pursuing complexity theory would take him far from home, but his parents made it clear that the West Coast and Canada were out of the question. Among his remaining options, Cornell stood out to him for its prominent place in the history of his favorite discipline. 'This guy Hartmanis defined the time and space complexity classes,' he recalled thinking. 'That was important for me.' Williams was admitted to Cornell with generous financial aid and arrived in the fall of 1997, planning to do whatever it took to become a complexity theorist himself. His single-mindedness stuck out to his fellow students. 'He was just super-duper into complexity theory,' said Scott Aaronson, a computer scientist at the University of Texas, Austin, who overlapped with Williams at Cornell. Williams grew interested in the relationship between space and time as an undergraduate but never found an opportunity to work on it until last year. 
Photograph: Katherine Taylor for Quanta Magazine But by sophomore year, Williams was struggling to keep up in courses that emphasized mathematical rigor over intuition. After he got a middling grade in a class on the theory of computing, the teacher suggested he consider other careers. But Williams wouldn't give up his dream. He doubled down and enrolled in a graduate theory course, hoping that a stellar grade in the harder class would look impressive on his grad school applications. The professor teaching that graduate course was Hartmanis, by then an elder statesman in the field. Williams began attending Hartmanis' office hours every week, where he was almost always the only student who showed up. His tenacity paid off: he earned an A in the course, and Hartmanis agreed to advise him on an independent research project the following semester. Their weekly meetings continued throughout Williams' time in college. Hartmanis encouraged him to cultivate an individual approach to complexity research and gently steered him away from dead ends. 'I was deeply influenced by him then,' Williams said. 'I guess I still am now.' 'If you get any mathematical result which is the best thing in 50 years, you must be doing something right.' But despite earning a coveted graduate research fellowship from the National Science Foundation, Williams was rejected by every doctoral program he applied to. On hearing the news, Hartmanis phoned a colleague, then turned around and congratulated Williams on getting accepted into a yearlong master's program at Cornell. A year later Williams again applied to various doctoral programs, and with that extra research experience under his belt, he found success. Williams continued working in complexity theory in grad school and the years that followed. In 2010, four years after receiving his doctorate, he proved a milestone result—a small step, but the largest in decades, toward solving the most famous question in theoretical computer science, about the nature of hard problems. That result cemented Williams' reputation, and he went on to write dozens of other papers on different topics in complexity theory. P versus PSPACE wasn't one of them. Williams had been fascinated by the problem since he first encountered it in college. He'd even supplemented his computer science curriculum with courses in logic and philosophy, seeking inspiration from other perspectives on time and space, to no avail. 'It's always been in the back of my mind,' Williams said. 'I just couldn't think of anything interesting enough to say about it.' Last year, he finally found the opportunity he'd been waiting for. Squishy Pebbles The story of Williams' new result started with progress on a different question about memory in computation: What problems can be solved with extremely limited space? In 2010, the pioneering complexity theorist Stephen Cook and his collaborators invented a task, called the tree evaluation problem, that they proved would be impossible for any algorithm with a space budget below a specific threshold. But there was a loophole. The proof relied on the same commonsense assumption that Paul and his colleagues had made decades earlier: Algorithms can't store new data in space that's already full. For over a decade, complexity theorists tried to close that loophole. Then, in 2023, Cook's son James and a researcher named Ian Mertz blew it wide open, devising an algorithm that solved the tree evaluation problem using much less space than anyone thought possible. 
The elder Cook's proof had assumed that bits of data were like pebbles that have to occupy separate places in an algorithm's memory. But it turns out that's not the only way to store data. 'We can actually think about these pebbles as things that we can squish a little bit on top of each other,' Beame said. James Cook (left) and Ian Mertz recently devised a new algorithm that solved a specific problem using much less space than anyone thought possible. Photographs: Colin Morris; Michal Koucký Cook and Mertz's algorithm roused the curiosity of many researchers, but it wasn't clear that it had any applications beyond the tree evaluation problem. 'Nobody saw how central it is to time versus space itself,' Wigderson said. Ryan Williams was the exception. In spring 2024, a group of students gave a presentation about the Cook and Mertz paper as their final project in a class he was teaching. Their enthusiasm inspired him to take a closer look. As soon as he did, an idea jumped out at him. Cook and Mertz's method, he realized, was really a general-purpose tool for reducing space usage. Why not use it to power a new universal simulation linking time and space—like the one designed by Hopcroft, Paul and Valiant, but better? That classic result was a way to transform any algorithm with a given time budget into a new algorithm with a slightly smaller space budget. Williams saw that a simulation based on squishy pebbles would make the new algorithm's space usage much smaller—roughly equal to the square root of the original algorithm's time budget. That new space-efficient algorithm would also be much slower, so the simulation was not likely to have practical applications. But from a theoretical point of view, it was nothing short of revolutionary. For 50 years, researchers had assumed it was impossible to improve Hopcroft, Paul and Valiant's universal simulation. Williams' idea—if it worked—wouldn't just beat their record—it would demolish it. 'I thought about it, and I was like, 'Well, that just simply can't be true,'' Williams said. He set it aside and didn't come back to it until that fateful day in July, when he tried to find the flaw in the argument and failed. After he realized that there was no flaw, he spent months writing and rewriting the proof to make it as clear as possible. At the end of February, Williams finally put the finished paper online. Cook and Mertz were as surprised as everyone else. 'I had to go take a long walk before doing anything else,' Mertz said. Valiant got a sneak preview of Williams' improvement on his decades-old result during his morning commute. For years, he's taught at Harvard University, just down the road from Williams' office at MIT. They'd met before, but they didn't know they lived in the same neighborhood until they bumped into each other on the bus on a snowy February day, a few weeks before the result was public. Williams described his proof to the startled Valiant and promised to send along his paper. 'I was very, very impressed,' Valiant said. 'If you get any mathematical result which is the best thing in 50 years, you must be doing something right.' PSPACE: The Final Frontier With his new simulation, Williams had proved a positive result about the computational power of space: Algorithms that use relatively little space can solve all problems that require a somewhat larger amount of time. 
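Stated in the standard notation of complexity theory (a gloss on the results described above, not notation the article itself uses), with TIME[t] for the problems solvable in time t and SPACE[s] for those solvable in space s on a multitape Turing machine, the old and new universal simulations read roughly as:

\[
\mathrm{TIME}[t] \;\subseteq\; \mathrm{SPACE}\!\left[\frac{t}{\log t}\right] \ \ \text{(Hopcroft–Paul–Valiant, 1975)},
\qquad
\mathrm{TIME}[t] \;\subseteq\; \mathrm{SPACE}\!\left[\sqrt{t \log t}\,\right] \ \ \text{(Williams, 2025)}.
\]

Read side by side, the two bounds make precise the sense in which the new space budget is roughly the square root of the time budget, rather than just slightly smaller than it.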
Then, using just a few lines of math, he flipped that around and proved a negative result about the computational power of time: At least a few problems can't be solved unless you use more time than space. That second, narrower result is in line with what researchers expected. The weird part is how Williams got there, by first proving a result that applies to all algorithms, no matter what problems they solve. 'I still have a hard time believing it,' Williams said. 'It just seems too good to be true.' Williams used Cook and Mertz's technique to establish a stronger link between space and time—the first progress on that problem in 50 years. Photograph: Katherine Taylor for Quanta Magazine Phrased in qualitative terms, Williams' second result may sound like the long-sought solution to the P versus PSPACE problem. The difference is a matter of scale. P and PSPACE are very broad complexity classes, while Williams' results work at a finer level. He established a quantitative gap between the power of space and the power of time, and to prove that PSPACE is larger than P, researchers will have to make that gap much, much wider. That's a daunting challenge, akin to prying apart a sidewalk crack with a crowbar until it's as wide as the Grand Canyon. But it might be possible to get there by using a modified version of Williams' simulation procedure that repeats the key step many times, saving a bit of space each time. It's like a way to repeatedly ratchet up the length of your crowbar—make it big enough, and you can pry open anything. That repeated improvement doesn't work with the current version of the algorithm, but researchers don't know whether that's a fundamental limitation. 'It could be an ultimate bottleneck, or it could be a 50-year bottleneck,' Valiant said. 'Or it could be something which maybe someone can solve next week.' If the problem is solved next week, Williams will be kicking himself. Before he wrote the paper, he spent months trying and failing to extend his result. But even if such an extension is not possible, Williams is confident that more space exploration is bound to lead somewhere interesting—perhaps progress on an entirely different problem. 'I can never prove precisely the things that I want to prove,' he said. 'But often, the thing I prove is way better than what I wanted.' Editor's note: Scott Aaronson is a member of Quanta Magazine's advisory board. Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Mathematicians Say There's a Number So Big, It's Literally the Edge of Human Knowledge

Yahoo

11-07-2025

  • Yahoo

Mathematicians Say There's a Number So Big, It's Literally the Edge of Human Knowledge

"Hearst Magazines and Yahoo may earn commission or revenue on some items through these links." Here's what you'll learn when you read this story: The Busy Beaver number, or BB(n), represents a mathematical problem that tries to calculate the longest possible run-time of a Turing machine recording 1s and 0s on an infinitely long tape for various sets of instructions called states. While the first three BB(n)s equal 1, 6, and 21, respectively, it takes 107 steps to get to BB(4), and BB(5) produces a staggering 17 trillion possible Turing machines with a 47,176,870 steps being the answer. Now, as mathematicians attempt to wrap their minds around answer BB(6), they're beginning to realize that even expressing the unfathomably large number, likely bigger than the number of atoms in the universe, is itself a problem. For most people, the term 'busy beaver' brings to mind a tireless worker or (for the biologists among us) an absolutely vital ecosystem engineer. However, for mathematicians, 'busy beaver' takes on a similar-yet-unique meaning. True to the moniker's original intent, the idea represents a lot of work, but it's work pointed at a question. As computer scientists Christopher Moore puts it in a video for Quanta Magazine, 'What's the longest, most-complicated thing [a computer] can do, and then stop?' In other words, what's the longest function a computer can run that does not just run forever, stuck in an infinite loop? The solution to this question is called the Busy Beaver number, or BB(n), where the n represents a number of instructions called 'states' that a set of computers—specifically, a type of simple computer called a Turing machine—has to follow. Each state produces a certain number of programs, and each program gets its own Turing machine, so things get complicated fast. BB(1), which has just 1 state, necessitates the use of 25 Turing machines. For decades, many mathematicians believed that solving the Busy Beaver number to four states was the upper limit, but a group of experts managed to confirmed the BB(5) solution in 2024 (on Discord, of all places). Now, participants in that same Busy Beaver Challenge are learning fascinating truths about the next frontier—BB(6)—and how it just might represent the very edge of mathematics, according to a new report by New Scientist. First, a brief explanation. The aforementioned BB(1), which is the simplest version of the BB(n) problem, uses just one set of rules and produces only two outcomes—infinitely moving across the tape, or stopping at the first number. Because 1 is the most amount of steps that any of the 25 Turing machines of BB(1) will complete before finishing its program (known as halting), the answer to BB(1) is 1. As the number of states increases, so do the steps and the number of Turing machines needed to run the programs, meaning that each subsequent BB(n) is exponentially more taxing to solve. BB(2) and BB(3) are 6 and 21 respectively, but BB(4) is 107 and takes seven billion different Turing machines to solve. Granted, many of these machines continue on indefinitely and can be discarded, but many do not. The Busy Beaver number was first formulated by the Hungarian mathematician Tiber Radó in 1962, and 12 years passed before computer scientist Allen Brady determined that BB(4) runs for 107 steps before halting. For decades, this seemed like the absolute limit of what was discernible, but then mathematicians solved BB(5) in 2024 after sifting through 17 trillion (with a t) possible Turing machines. The answer? 
An astounding 47,176,870 steps. Quanta Magazine has an excellent explainer about how this was achieved.

But finally solving BB(5) raised the next obvious question: What about BB(6)? Adding just one more rule makes the problem far more than exponentially harder, as BB(6) is estimated to require checking 60 quadrillion Turing machines. 'The Busy Beaver problem gives you a very concrete scale for pondering the frontier of mathematical knowledge,' computer scientist Tristan Stérin, who helped start the Busy Beaver Challenge in 2022, told New Scientist.

In a new post, the anonymous user 'mxdys', who was instrumental in finally confirming BB(5), wrote that the answer to BB(6) is so unfathomably large that the number itself needs its own explanation: it is too big to express with ordinary exponentiation. Instead, it calls for tetration (written with the tower height raised to the left of the base, as in ʸx, in contrast to exponentiation's xʸ), in which exponentiation is itself iterated, creating a tower of exponents. As Scott Aaronson, an American computer scientist who helped define BB(5), notes on his blog, that means ¹⁵10 can be thought of as '10 to the 10 to the 10 and so on 15 times.' As mxdys notes, BB(6) is at least 2 tetrated to the 2 tetrated to the 2 tetrated to the 9. One mathematician speaking with New Scientist said that the number of all atoms in the universe would likely look 'puny' by comparison.

While these large numbers boggle the mind, they also tell mathematicians about the limitations of the standard foundation of modern mathematics, known as Zermelo–Fraenkel set theory (ZFC), as well as about slippery mathematical concepts like the Collatz conjecture. It's unlikely that mathematicians will ever solve BB(6), but if the Busy Beaver Challenge is any evidence, that fact likely won't stop them from trying.
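To make the step counting at the heart of BB(n) concrete, here is a minimal sketch of a two-symbol Turing machine simulator (an illustration of my own, not code from the Busy Beaver Challenge). The machine encoded below is the well-known two-state champion, which halts after exactly 6 steps, matching BB(2) = 6:

```python
from collections import defaultdict

def run(machine, max_steps=10**6):
    """Run a 2-symbol Turing machine and count steps until it halts.

    `machine` maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is "L" or "R" and next_state None means halt.
    Returns the number of steps taken, or None if max_steps is exceeded
    (the machine may never halt).
    """
    tape = defaultdict(int)      # infinite tape, initially all 0s
    state, pos = "A", 0
    for step in range(1, max_steps + 1):
        write, move, nxt = machine[(state, tape[pos])]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        if nxt is None:          # the halting transition counts as a step
            return step
        state = nxt
    return None

# The 2-state, 2-symbol "champion": no halting 2-state machine runs longer.
bb2_champion = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", None),
}
print(run(bb2_champion))  # prints 6, i.e. BB(2)
```

As for the tower notation needed even to write down BB(6)'s lower bound, tetration iterates exponentiation the way exponentiation iterates multiplication, and it explodes almost immediately (standard arithmetic, not figures from the article):

\[
{}^{n}a \;=\; \underbrace{a^{a^{\cdot^{\cdot^{a}}}}}_{n \text{ copies of } a},
\qquad
{}^{2}2 = 4, \quad {}^{3}2 = 2^{2^2} = 16, \quad {}^{4}2 = 2^{16} = 65{,}536, \quad {}^{5}2 = 2^{65{,}536} \approx 2 \times 10^{19{,}728}.
\]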

Science Quiz: Gotta Go Fast

Scientific American

11-07-2025

  • Science
  • Scientific American

Science Quiz: Gotta Go Fast

Allison Parshall is an associate editor at Scientific American covering mind and brain. She writes the magazine's Contributors column and weekly online Science Quizzes. As a multimedia journalist, she contributes to Scientific American 's podcast Science Quickly. Parshall's work has also appeared in Quanta Magazine and Inverse. She graduated from New York University's Arthur L. Carter Journalism Institute with a master's degree in science, health and environmental reporting. She has a bachelor's degree in psychology from Georgetown University.

How the Binding of Two Brain Molecules Creates Memories That Last a Lifetime

WIRED

06-07-2025

  • Health
  • WIRED

How the Binding of Two Brain Molecules Creates Memories That Last a Lifetime

Jul 6, 2025 2:00 AM An interaction between two proteins points to a molecular basis for memory. But how do memories last when the molecules that form them turn over within days, weeks, or months? Illustration: Carlos Arrojo for Quanta Magazine The original version of this story appeared in Quanta Magazine . When Todd Sacktor was about to turn 3, his 4-year-old sister died of leukemia. 'An empty bedroom next to mine. A swing set with two seats instead of one,' he said, recalling the lingering traces of her presence in the house. 'There was this missing person—never spoken of—for which I had only one memory.' That memory, faint but enduring, was set in the downstairs den of their home. A young Sacktor asked his sister to read him a book, and she brushed him off: 'Go ask your mother.' Sacktor glumly trudged up the stairs to the kitchen. It's remarkable that, more than 60 years later, Sacktor remembers this fleeting childhood moment at all. The astonishing nature of memory is that every recollection is a physical trace, imprinted into brain tissue by the molecular machinery of neurons. How the essence of a lived moment is encoded and later retrieved remains one of the central unanswered questions in neuroscience. Sacktor became a neuroscientist in pursuit of an answer. At the State University of New York Downstate in Brooklyn, he studies the molecules involved in maintaining the neuronal connections underlying memory. The question that has always held his attention was first articulated in 1984 by the famed biologist Francis Crick: How can memories persist for years, even decades, when the body's molecules degrade and are replaced in a matter of days, weeks or, at most, months? In 2024, working alongside a team that included his longtime collaborator André Fenton, a neuroscientist at New York University, Sacktor offered a potential explanation in a paper published in Science Advances . The researchers discovered that a persistent bond between two proteins is associated with the strengthening of synapses, which are the connections between neurons. Synaptic strengthening is thought to be fundamental to memory formation. As these proteins degrade, new ones take their place in a connected molecular swap that maintains the bond's integrity and, therefore, the memory. In 1984, Francis Crick described a biological conundrum: Memories last years, while most molecules degrade in days or weeks. 'How then is memory stored in the brain so that its trace is relatively immune to molecular turnover?' he wrote in Nature. Photograph: National Library of Medicine/Science Source The researchers present 'a very convincing case' that 'the interaction between these two molecules is needed for memory storage,' said Karl Peter Giese, a neurobiologist at King's College London who was not involved with the work. The findings offer a compelling response to Crick's dilemma, reconciling the discordant timescales to explain how ephemeral molecules maintain memories that last a lifetime. Molecular Memory Early in his career, Sacktor made a discovery that would shape the rest of his life. After studying under the molecular memory pioneer James Schwartz at Columbia University, he opened his own lab at SUNY Downstate to search for a molecule that might help explain how long-term memories persist. The molecule he was looking for would be in the brain's synapses. 
In 1949, the psychologist Donald Hebb proposed that repeatedly activating neurons strengthens the connections between them, or, as the neurobiologist Carla Shatz later put it: 'Cells that fire together, wire together.' In the decades since, many studies have suggested that the stronger the connection between neurons that hold memories, the better the memories persist. In the early 1990s, in a dish in his lab, Sacktor stimulated a slice of a rat's hippocampus—a small region of the brain linked to memories of events and places, such as the interaction Sacktor had with his sister in the den—to activate neural pathways in a way that mimicked memory encoding and storage. Then he searched for any molecular changes that had taken place. Every time he repeated the experiment, he saw elevated levels of a certain protein within the synapses. 'By the fourth time, I was like, this is it,' he said. It was protein kinase M zeta, or PKMζ for short. As the rats' hippocampal tissue was stimulated, synaptic connections strengthened and levels of PKMζ increased. By the time he published his findings in 1993, he was convinced that PKMζ was crucial for memory. Todd Sacktor has devoted his career to pursuing the molecular nature of memory. Photograph: SUNY Downstate Health Sciences University Over the next two decades, he would go on to build a body of work showing that PKMζ's presence helps maintain memories long after their initial formation. When Sacktor blocked the molecule's activity an hour after a memory was formed, he saw that synaptic strengthening was reversed. This discovery suggested that PKMζ was 'necessary and sufficient' to preserve a memory over time, he wrote in Nature Neuroscience in 2002. In contrast, hundreds of other localized molecules impacted synaptic strengthening only if disrupted within a few minutes of a memory's formation. It appeared to be a singular molecular key to long-term memory. To test his hypothesis in live animals, he teamed up with Fenton, who worked at SUNY Downstate at the time and had experience training lab animals and running behavioral experiments. In 2006, the duo published their first paper showing that blocking PKMζ could erase rats' memories a day or a month after they had formed. This suggested that the persistent activity of PKMζ is required to maintain a memory. The paper was a bombshell. Sacktor and Fenton's star protein PKMζ gained widespread attention, and labs around the world found that blocking it could erase various types of memories, including those related to fear and taste. PKMζ seemed like a sweeping explanation for how memories form and are maintained at the molecular level. But then their hypothesis lost momentum. Other researchers genetically engineered mice to lack PKMζ, and in 2013, two independent studies showed that these mice could still form memories. This cast doubt on the protein's role and brought much of the ongoing research to a halt. Sacktor and Fenton were undeterred. 'We knew we had to figure it out,' Sacktor said. In 2016, they published a rebuttal, demonstrating that in the absence of PKMζ, mice recruit a backup mechanism, involving another molecule, to strengthen synapses. The existence of a compensatory molecule wasn't a surprise. 'The biological system is not such that you lose one molecule and everything goes. That's very rare,' Giese said. But identifying this compensatory molecule prompted a new question: How did it know where to go to replace PKMζ? It would take Sacktor and Fenton nearly another decade to find out. 
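Hebb's 'fire together, wire together' idea, quoted earlier, is often summarized in textbooks as a simple update rule in which a synapse strengthens in proportion to the joint activity of the two neurons it connects (a formal gloss for orientation, not an equation from the article or from Sacktor's work):

\[
\Delta w_{ij} \;=\; \eta \, x_i \, x_j ,
\]

where \( w_{ij} \) is the strength of the synapse between neurons \( i \) and \( j \), \( x_i \) and \( x_j \) are their activity levels, and \( \eta \) is a small learning rate. The molecular work described next asks what physically keeps such a strengthened connection elevated for decades.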
The Maintenance Bond A classic test of a molecule's importance is to block it and see what breaks. Determined to pin down PKMζ's role once and for all, Sacktor and Fenton set out to design a way to disrupt it more precisely than ever before. They developed a new molecule to inhibit the activity of PKMζ. It 'worked beautifully,' Sacktor said. But it wasn't clear how. One day in 2020, Matteo Bernabo, a graduate student from a collaborating lab at McGill University, was presenting findings related to the PKMζ inhibitor when a clue emerged from the audience. 'I suggested that it worked by blocking the PKMζ's interaction with KIBRA,' recalled Wayne Sossin, a neuroscientist at McGill. KIBRA is a scaffolding protein. Like an anchor, it holds other proteins in place inside a synapse. In the brain, it is abundant in regions associated with learning and memory. 'It's not a protein that a lot of people work on,' Sossin said, but there is considerable 'independent evidence that KIBRA has something to do with memory'—and even that it is associated with PKMζ. Most research has focused on KIBRA's role in cancer. 'In the nervous system,' he said, 'there are only three or four of us [studying it].' Sacktor and Fenton joined them. André Fenton and his team found that an interaction between two proteins is key to keeping memory intact over time. Photograph: Lisa Robinson To find out if KIBRA and PKMζ work together in response to synaptic activity, the researchers used a technique that makes interacting proteins glow. When they applied electrical pulses to hippocampal slices, glowing dots of evidence appeared: Following bursts of synaptic activity that produced long-term synaptic strengthening, a multitude of KIBRA-PKMζ complexes formed, and they were persistent. Then the team tested the bond during real memory formation by giving mice a drug to disrupt the formation of these complexes. They saw that the mice's synaptic strength and task memory were lost—and that once the drug wore off, the erased memory did not return, but the mice could acquire and remember new memories once again. But are the KIBRA-PKMζ complexes needed to maintain memory over the long term? To find out, the researchers disrupted the complex four weeks after a memory was formed. Doing so did indeed wipe out the memory. This suggested that the interaction between KIBRA and PKMζ is crucial not only for forming memories, but also for keeping them intact over time. Illustration: Carlos Arrojo for Quanta Magazine 'It's the persistent association between two proteins that maintains the memory, rather than a protein that lasts by itself for the lifetime of the memory,' said Panayiotis Tsokas, a neuroscientist working with Sacktor and lead author on the new Science Advances paper. The KIBRA and PKMζ proteins stabilize each other by forming a bond. That way, when a protein degrades and needs to be replaced, the other remains in place. The bond itself and its location at the specific synapses that were activated during learning are preserved, allowing a new partner to slot itself in, perpetuating the alliance over time. Individually, PKMζ and KIBRA don't last a lifetime—but by binding to each other, they help ensure your memories might. The discovery addresses the conundrum first identified by Crick, namely how memories persist despite the relatively short lifetimes of all biological molecules. 'There had to be a very, very interesting answer, an elegant answer, for how this could come about,' Fenton said. 
'And that elegant answer is the KIBRA-PKMζ interacting story.' This work also answers a question that researchers had put on the shelf. Sacktor's earlier study showed that increasing levels of PKMζ strengthened synapses and memories. But how did the molecule know where to go within the neuron? 'We figured, well, one day, maybe we'll understand that,' Sacktor said. Now, the researchers think that KIBRA acts as a synaptic tag that guides PKMζ. If true, this would help explain how only the specific synapses involved in a particular physical memory trace are strengthened, when a neuron may have thousands of synapses that connect it to various other cells. 'These experiments very nicely show that KIBRA is necessary for maintaining the activity of PKMζ at the synapse,' said David Glanzman, a neurobiologist at the University of California, Los Angeles, who was not involved in the study. However, he cautioned that this doesn't necessarily translate to maintaining memory because synaptic strengthening is not the only model for how memory works. Glanzman's own past research on sea slugs at first appeared to show that disrupting a molecule analogous to PKMζ erases memory. 'Originally, I said it was erased,' Glanzman said, 'but later experiments showed we could bring the memory back.' These findings prompted him to reconsider whether memory is truly stored as changes in the strength of synaptic connections. Glanzman, who has worked for 40 years under the synaptic model, is a recent proponent of an alternative view called the molecular-encoding model, which posits that molecules inside a neuron store memories. While he has no doubt that synaptic strengthening follows memory formation, and that PKMζ plays a major role in this process, he remains unsure if the molecule also stores the memory itself. Still, Glanzman emphasized that this study addresses some of the challenges of the synaptic model, such as molecular turnover and synapse targeting, by 'providing evidence that KIBRA and PKMζ form a complex that is synapse-specific and persists longer than either individual molecule.' Although Sacktor and Fenton believe that this protein pair is fundamental to memory, they know that there may be other factors yet to be discovered that help memories persist. Just as PKMζ led them to KIBRA, the complex might lead them further still. Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
