Latest news with #complexity


Forbes
11-07-2025
- Business
- Forbes
Three Reasons Why Smart Executives Make Bad Decisions — And How Systems Thinking Fixes Them
It's remarkable how smart, experienced executives systematically make decisions that backfire. They apply industrial-age logic to a hyperconnected systems-age world. They break down complex problems, optimize individual components, and expect predictable outcomes. They believe that perfect information leads to perfect decisions. There is no shortage of examples illustrating these failures. Boeing's engineers solved a specific aerodynamics problem with elegant precision—yet 346 people died in crashes soon afterwards. Amazon executed a routine system update flawlessly—yet accidentally paralyzed Netflix, Slack, and thousands of their own services for seven hours. And Uber optimized urban transportation with ruthless efficiency—yet experienced such strong community backlash that it was barred from some cities. These aren't isolated failures or execution problems. They're the inevitable result of applying analytical thinking to systemic challenges. When everything connects to everything else, traditional decision-making approaches don't just miss the mark—they create the very chaos leaders are desperately trying to prevent. Here are three reasons why smart executives keep making bad decisions, and how systems thinking can break this costly cycle.

Reason 1: The Myth of the 'Right Answer'
Walk into almost any boardroom and you'll hear the seductive language of certainty: "best practices," "proven solutions," "data-driven decisions." Executives pay hefty fees for consulting firms that peddle definitive answers to complex challenges, offering an illusion of control. This pursuit of perfect solutions seduces executives into believing that more data and better analysis will yield the right answer. Yet most complex problems cannot be solved by simply analyzing more data. Most business problems involve numerous considerations, many of which cannot be seen or measured. Consider Boeing's 737 MAX crisis. When engineers discovered the plane's tendency to pitch upward during extreme maneuvering, they treated this as a pure engineering problem and developed the Maneuvering Characteristics Augmentation System (MCAS). The solution was elegant and precise. It was also catastrophically flawed. Only when two planes crashed within months of each other, killing 346 people and grounding the entire 737 MAX fleet for 20 months, did senior executives recognize that the issue wasn't just bad engineering. Weak backup systems, inadequate training protocols, cultural pressure to compete with Airbus, and overconfidence in technological fixes all interacted in ways that Boeing's optimization-focused approach couldn't anticipate.

Build adaptive capacity instead. In interconnected systems, there are no perfect solutions—only better and worse interventions. Every action creates ripple effects that cannot be fully predicted or controlled. The executives winning in today's environment have abandoned the search for perfect answers and instead developed the capacity to sense, experiment, and adapt. Microsoft's transformation under Satya Nadella exemplifies this shift. When Nadella became CEO in 2014, he didn't look for the perfect strategic answer. Instead, he made a series of interconnected bets—cloud computing, artificial intelligence, strategic partnerships with former competitors like Apple. None of these moves had guaranteed returns, but together they positioned Microsoft to thrive in an uncertain future.
The result? A 12-fold increase in market capitalization during his tenure as CEO.

Reason 2: The Obsession with Control
The second myth plaguing modern business is the assumption that complex systems can be controlled through tighter management processes. This industrial-age thinking treats organizations like machines—predictable, controllable, and responsive to top-down commands. Reality is far messier. Tightly integrated systems mean that small changes can produce massive, unintended consequences. This is true of technical systems such as computers, and even more so of systems that involve people, which are far less predictable. The 2021 Amazon Web Services outage provides a stark illustration. At 7:30 AM on December 7, Amazon executed a routine system update flawlessly, yet triggered a cascading failure that paralyzed Netflix, Slack, and thousands of unrelated services for seven hours. What seemed like a simple technical adjustment in one part of AWS's system rippled through interconnected networks in ways no amount of planning could have anticipated.

Embrace strategic flexibility instead. When leaders respond to complexity by adding more tightly monitored and controlling processes, they actually make systems more brittle and fragile. The harder they squeeze, the more likely the system will break under pressure. Instead of controlling outcomes, effective leaders create conditions for better solutions to emerge. They give people authority, tools, and resources to adapt to their situations. These executives foster coordination, not close monitoring. They empower people to make their own decisions and give them the time and resources to do so. This managerial slack ensures that unexpected events do not completely unravel the business. Consider Danish energy firm Ørsted (formerly DONG Energy), which decided to transform from a coal-intensive utility into an offshore wind leader. Executives set a bold ambition. In 2008, the company generated 85% of its energy from coal. In 2009, they committed to flip this ratio, generating 85% from renewable sources by 2040. Rather than creating a precise and rigid 30-year plan, they built flexibility and learning into the system, allowing leaders to respond to changes in the environment. For example, they made a massive upfront investment in 500 wind turbines—more than were operating offshore globally—to build supply chain capabilities. They sold oil and gas assets to create financial slack and brought in institutional investors for long-term financing, gaining further strategic flexibility. The result: a 350% increase in valuation and an 86% reduction in carbon emissions by 2023—hitting their target 17 years ahead of schedule.

Reason 3: Short-Term Thinking
Perhaps the most dangerous myth is the relentless pursuit of quarterly results and operational efficiency. This doctrine, enshrined in business schools and reinforced by capital markets, creates a vicious cycle where leaders sacrifice innovation, resilience, and competitive positioning for the illusion of predictable returns. My research consistently demonstrates this dynamic. In one study with Caroline Flammer, we found that firms adopting long-term incentive plans invested more heavily in R&D and stakeholder engagement, financially outperforming their peers after two years across multiple metrics. Another study with Natalia Ortiz de-Mandojana shows that firms with a long-term orientation experienced higher revenues and better survival rates.

Adopt dual time horizons instead.
Short-term thinking matters because it lets executives respond to immediate threats and opportunities. Long-term thinking matters because it places those pressing events within the firm's broader strategy. In interconnected systems, businesses face a constant barrage of information and must decide when signals are vital and when they are simply a distraction from their long-term ambitions. Sustainable competitive advantage comes from understanding and investing in relationships that create value over time. Consider how Patagonia has built a $1 billion outdoor apparel business by explicitly rejecting short-term optimization. The company's "Don't Buy This Jacket" campaign and commitment to environmental activism seem antithetical to growth, yet they've created fierce customer loyalty and premium pricing power that traditional marketing could never achieve. The companies thriving today are those that can maintain dual time horizons. They keep one eye on the short term, so they know which short-term events require their attention and which ones to ignore. The other eye is focused on the long term, so they are not derailed by short-term distractions. Maintaining a dual time horizon paradoxically builds stronger organizations that deliver consistently higher short-term and long-term returns. Such an approach requires fundamentally different mental models, metrics, and governance structures than those designed for industrial-era business.

Embracing Systems Thinking
Uber's meteoric rise from a $5 million startup to a $3.5 billion juggernaut in just four years appeared to validate Silicon Valley's favorite playbook: identify a problem, build a solution, scale fast, and let the market sort out the details. To executives watching from the sidelines, Uber's success looked like a masterclass in disruptive innovation. But beneath the headlines of exponential growth lay a different story—one that reveals why our most trusted approaches to business decision-making are dangerously obsolete in today's hyperconnected world. The ride-hailing giant that promised to reduce traffic actually increased congestion by 50% in San Francisco between 2010 and 2016. The platform designed to create opportunity for drivers instead trapped them in a gig economy with no safety net. The innovation meant to complement urban mobility ended up cannibalizing public transportation, particularly harming low-income communities by removing their access to affordable transit options. After years of regulatory battles, driver protests, and public relations disasters, the company began adapting to local conditions, integrating with public transit, and addressing worker concerns. These changes may have slowed expansion in the short term, but they've created a more sustainable, profitable business model that contributes to more resilient systems—for both the company and its communities. The executives who will succeed in this systems age are not those who apply industrial-age logic, seeking simple cause-and-effect relationships. Instead, they seek to understand the deeper patterns that shape business outcomes. They've shifted from the traditional plan-do-check-act approach to one that is agile and adaptive. This requires governance structures that distribute decisions throughout the organization, build flexibility to adapt, and foster experimentation for long-term gains without losing sight of short-term realities.
Systems thinking isn't a management fad—these ideas have been around for decades. What's new is the urgent need to respond to a fundamentally different business environment that's more interconnected than ever before. Systems thinking isn't merely about altruism—it's essential for survival. This is the first in a series exploring how systems thinking can transform business decision-making. In future articles, I'll examine practical frameworks for developing systems intelligence and real-world applications across industries.


Gulf Business
09-06-2025
- Business
- Gulf Business
Beyond the risk register: Why future-ready leadership demands strategic discomfort
The year is 2029, and vertical farming has become a symbol of national resilience in the Middle East. Governments have poured billions into hydroponic megafarms. Food security indices have climbed and export deals have rolled in. The region has also been hailed globally as a pioneer of agricultural innovation – a place where technology has triumphed over land scarcity and climate stress. And then it all collapses. A fungal microbe, exploiting the genetic uniformity of hydroponic crops, mutates in a single facility and sweeps through the region's interconnected systems. Within six weeks, 40 per cent of regional crop output is lost. Emergency imports are then scrambled at record costs. What seemed like a shining example of resilience is exposed as dangerously brittle. The risk was known. The signals were there – just not heard, or not heeded. This isn't really a story about agriculture – it spotlights leadership under complexity. From pandemic blindspots to supply chain fragilities and climate volatility to AI backlash, organisations across every sector continue to be surprised by visible, often documented disruptions that were ultimately sidelined.

Known, but ignored
Why does this keep happening? Not because the risks are invisible but because they're inconvenient, ambiguous, or don't fit the dominant narrative. In environments that reward momentum and performance, there is often little appetite for the slow work of horizon scanning or scenario stress-testing – especially when things appear to be going well. Risks that are uncomfortable or unfamiliar are easily dismissed as fringe. And when success stories dominate, dissenting signals – especially weak ones – struggle to break through. The vertical farming collapse followed this exact pattern. Early warnings were buried in obscure journals, dismissed as edge-case thinking. There was no lack of intelligence. But attention was highly selective.

The illusion of the list
Many organisations believe that because a risk appears on a register, it is being managed. But listing a risk and engaging with it are two very different things. Take the World Economic Forum's Global Risks Report. Each year, it publishes a heat map identifying the most severe and likely risks facing the world over the next decade. Climate volatility. Biodiversity loss. Emerging infectious diseases. Cybercrime. Water crises. Year after year, these threats are mapped, flagged, and even colour-coded – often with 'blobs' so large they're impossible to miss. And yet the most common organisational response is to file these risks under 'context', rather than integrate them into core planning. They are acknowledged, but rarely rehearsed. The problem isn't the heat map. The problem is what happens after. The mere appearance of a threat on a list can create a false sense of preparedness – a box ticked, a risk 'covered'. Risk registers often serve as a checklist – useful for reporting, but misleading when it comes to real readiness. Rarely do leadership teams ask: What would we actually do if this happened tomorrow? And most registers fail to consider how risks interact. A CEO scandal, shifting consumer ethics, a tech system failure, and policy fragmentation – individually manageable, perhaps. Together? Catastrophic. Strategic foresight starts where the risk register ends – not with what's on the list but with how those risks might collide.
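To make "what could go wrong together" concrete, one simple way to move beyond a flat register is to model risks as a graph with knock-on probabilities and simulate cascades. The sketch below is illustrative only: the risk names, standalone probabilities, and knock-on links are invented assumptions, not figures from the article.

```python
import random

# Hypothetical risk register: each risk has a standalone annual trigger probability,
# plus conditional "knock-on" probabilities applied when an upstream risk has fired.
RISKS = {
    "ceo_scandal":          {"p": 0.05},
    "consumer_backlash":    {"p": 0.04},
    "tech_failure":         {"p": 0.10},
    "policy_fragmentation": {"p": 0.08},
}
KNOCK_ONS = {  # upstream risk -> {downstream risk: extra probability if upstream fires}
    "ceo_scandal":       {"consumer_backlash": 0.50},
    "tech_failure":      {"consumer_backlash": 0.30, "policy_fragmentation": 0.20},
    "consumer_backlash": {"policy_fragmentation": 0.25},
}

def simulate_year(rng: random.Random) -> set[str]:
    """One simulated year: fire standalone risks, then propagate knock-on effects."""
    fired = {r for r, spec in RISKS.items() if rng.random() < spec["p"]}
    frontier = list(fired)
    while frontier:                      # cascade until no new risks trigger
        upstream = frontier.pop()
        for downstream, p in KNOCK_ONS.get(upstream, {}).items():
            if downstream not in fired and rng.random() < p:
                fired.add(downstream)
                frontier.append(downstream)
    return fired

rng = random.Random(42)
runs = 50_000
triple_or_more = sum(len(simulate_year(rng)) >= 3 for _ in range(runs))
print(f"Simulated years with 3+ risks hitting together: {triple_or_more / runs:.2%}")
```

Even with modest standalone probabilities, the knock-on links make compound crises noticeably more common than independent odds would suggest, which is exactly the gap a static risk register hides.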
From risk registers to risk realism
So, what does it take to build a future-ready organisation in a time of converging disruption? We propose three shifts:
- Expand peripheral vision: Build structured capacity to detect early signals from the margins – in scientific literature, startup ecosystems, citizen movements, and niche media. Weak signals are often the earliest indicators of system shifts. Unless you design for it, they won't rise through the usual filters.
- Institutionalise strategic discomfort: Challenge internal optimism regularly. Build in moments to stress-test assumptions and rehearse disruption. Reward people who challenge prevailing wisdom, not just those who confirm it.
- Map risk interdependencies: Move beyond lists. Use systems thinking to explore how risks could combine. Model chain reactions and secondary effects. Ask not just 'What could go wrong?', but 'What could go wrong together?'

Future-readiness is a cultural trait
Foresight isn't about crystal balls or radical prediction. It's about readiness for uncertainty – and a willingness to engage the uncomfortable. The most resilient organisations aren't those that see the future clearly but those that build the muscles to adapt to futures they can't fully see. That begins with humility, curiosity, and the courage to ask: What might we be missing? This demands a cultural shift. One that values critical inquiry over certainty. Signals over noise. And reflection over reaction. In the aftermath of every high-profile shock – from pandemics to tech crashes – leaders demand tighter regulation, faster protocols, and better reporting. But those alone won't build adaptive capacity. Because in every one of these cases, there were warnings. The failure was not one of ignorance – but of attention. Foresight failed because it asked the system to be uncomfortable – and the system declined.

Three questions every board should be asking
- Which of our success stories might be blinding us to emerging fragilities?
- What signals are we currently incentivised to ignore?
- If three of our 'low-impact' risks hit at once – what would break first?

If your strategy doesn't create space for doubt, it's not a strategy – it's a narrative. If your risk register doesn't provoke discomfort, it's incomplete. And if your future looks smooth and linear, it's probably fiction. Doris Viljoen is a director at the Institute for Futures Research.


WIRED
08-06-2025
- Science
- WIRED
A New Law of Nature Attempts to Explain the Complexity of the Universe
Jun 8, 2025 7:00 AM A novel suggestion that complexity increases over time, not just in living organisms but in the nonliving world, promises to rewrite notions of time and evolution. Illustration: Irene Pérez for Quanta Magazine The original version of this story appeared in Quanta Magazine. In 1950 the Italian physicist Enrico Fermi was discussing the possibility of intelligent alien life with his colleagues. If alien civilizations exist, he said, some should surely have had enough time to expand throughout the cosmos. So where are they? Many answers to Fermi's 'paradox' have been proposed: Maybe alien civilizations burn out or destroy themselves before they can become interstellar wanderers. But perhaps the simplest answer is that such civilizations don't appear in the first place: Intelligent life is extremely unlikely, and we pose the question only because we are the supremely rare exception. A new proposal by an interdisciplinary team of researchers challenges that bleak conclusion. They have proposed nothing less than a new law of nature, according to which the complexity of entities in the universe increases over time with an inexorability comparable to the second law of thermodynamics—the law that dictates an inevitable rise in entropy, a measure of disorder. If they're right, complex and intelligent life should be widespread. In this new view, biological evolution appears not as a unique process that gave rise to a qualitatively distinct form of matter—living organisms. Instead, evolution is a special (and perhaps inevitable) case of a more general principle that governs the universe. According to this principle, entities are selected because they are richer in a kind of information that enables them to perform some kind of function. This hypothesis, formulated by the mineralogist Robert Hazen and the astrobiologist Michael Wong of the Carnegie Institution in Washington, DC, along with a team of others, has provoked intense debate. Some researchers have welcomed the idea as part of a grand narrative about fundamental laws of nature. They argue that the basic laws of physics are not 'complete' in the sense of supplying all we need to comprehend natural phenomena; rather, evolution—biological or otherwise—introduces functions and novelties that could not even in principle be predicted from physics alone. 'I'm so glad they've done what they've done,' said Stuart Kauffman, an emeritus complexity theorist at the University of Pennsylvania. 'They've made these questions legitimate.' Michael Wong, an astrobiologist at the Carnegie Institution in Washington, DC. Photograph: Katherine Cain/Carnegie Science Others argue that extending evolutionary ideas about function to non-living systems is an overreach. The quantitative value that measures information in this new approach is not only relative—it changes depending on context—it's impossible to calculate. For this and other reasons, critics have charged that the new theory cannot be tested, and therefore is of little use. The work taps into an expanding debate about how biological evolution fits within the normal framework of science. The theory of Darwinian evolution by natural selection helps us to understand how living things have changed in the past. But unlike most scientific theories, it can't predict much about what is to come. Might embedding it within a meta-law of increasing complexity let us glimpse what the future holds? 
Making Meaning
The story begins in 2003, when the biologist Jack Szostak published a short article in Nature proposing the concept of functional information. Szostak—who six years later would get a Nobel Prize for unrelated work—wanted to quantify the amount of information or complexity that biological molecules like proteins or DNA strands embody. Classical information theory, developed by the telecommunications researcher Claude Shannon in the 1940s and later elaborated by the Russian mathematician Andrey Kolmogorov, offers one answer. Per Kolmogorov, the complexity of a string of symbols (such as binary 1s and 0s) depends on how concisely one can specify that sequence uniquely. For example, consider DNA, which is a chain of four different building blocks called nucleotides. A strand composed only of one nucleotide, repeating again and again, has much less complexity—and, by extension, encodes less information—than one composed of all four nucleotides in which the sequence seems random (as is more typical in the genome). Jack Szostak proposed a way to quantify information in biological systems. Photograph: HHMI But Szostak pointed out that Kolmogorov's measure of complexity neglects an issue crucial to biology: how biological molecules function. In biology, sometimes many different molecules can do the same job. Consider RNA molecules, some of which have biochemical functions that can easily be defined and measured. (Like DNA, RNA is made up of sequences of nucleotides.) In particular, short strands of RNA called aptamers securely bind to other molecules. Let's say you want to find an RNA aptamer that binds to a particular target molecule. Can lots of aptamers do it, or just one? If only a single aptamer can do the job, then it's unique, just as a long, seemingly random sequence of letters is unique. Szostak said that this aptamer would have a lot of what he called 'functional information.' Illustration: Irene Pérez for Quanta Magazine If many different aptamers can perform the same task, the functional information is much smaller. So we can calculate the functional information of a molecule by asking how many other molecules of the same size can do the same task just as well. Szostak went on to show that in a case like this, functional information can be measured experimentally. He made a bunch of RNA aptamers and used chemical methods to identify and isolate the ones that would bind to a chosen target molecule. He then mutated the winners a little to seek even better binders and repeated the process. The better an aptamer gets at binding, the less likely it is that another RNA molecule chosen at random will do just as well: The functional information of the winners in each round should rise. Szostak found that the functional information of the best-performing aptamers got ever closer to the maximum value predicted theoretically.

Selected for Function
Hazen came across Szostak's idea while thinking about the origin of life—an issue that drew him in as a mineralogist, because chemical reactions taking place on minerals have long been suspected to have played a key role in getting life started. 'I concluded that talking about life versus nonlife is a false dichotomy,' Hazen said. 'I felt there had to be some kind of continuum—there has to be something that's driving this process from simpler to more complex systems.' Functional information, he thought, promised a way to get at the 'increasing complexity of all kinds of evolving systems.'
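A rough way to see Szostak's measure in action: if F is the fraction of all sequences of a given length that perform a function at least as well as some threshold, the functional information is -log2(F) bits. The sketch below is a toy illustration, not Szostak's experimental protocol; the "function" (counting a fixed motif in a short RNA-like string) is an invented stand-in for aptamer binding.

```python
import math
from itertools import product

ALPHABET = "ACGU"   # RNA nucleotides

def toy_function(seq: str, motif: str = "GC") -> int:
    """Invented stand-in for 'binding strength': how often a short motif appears."""
    return sum(seq[i:i + len(motif)] == motif
               for i in range(len(seq) - len(motif) + 1))

def functional_information(length: int, threshold: int) -> float:
    """-log2(fraction of sequences whose function meets the threshold).
    Exhaustive enumeration, so only feasible for short sequences."""
    total = 0
    meets = 0
    for letters in product(ALPHABET, repeat=length):
        total += 1
        if toy_function("".join(letters)) >= threshold:
            meets += 1
    if meets == 0:
        return float("inf")      # no sequence of this length achieves the function
    return -math.log2(meets / total)

for t in (1, 2, 3):
    print(f"threshold {t}: {functional_information(8, t):.2f} bits")
```

As the threshold rises, fewer random sequences qualify and the bit count climbs, mirroring the article's point that better binders carry more functional information.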
In 2007 Hazen collaborated with Szostak to write a computer simulation involving algorithms that evolve via mutations. Their function, in this case, was not to bind to a target molecule, but to carry out computations. Again they found that the functional information increased spontaneously over time as the system evolved. There the idea languished for years. Hazen could not see how to take it any further until Wong accepted a fellowship at the Carnegie Institution in 2021. Wong had a background in planetary atmospheres, but he and Hazen discovered they were thinking about the same questions. 'From the very first moment that we sat down and talked about ideas, it was unbelievable,' Hazen said. Robert Hazen, a mineralogist at the Carnegie Institution in Washington, DC. Photograph: Courtesy of Robert Hazen 'I had got disillusioned with the state of the art of looking for life on other worlds,' Wong said. 'I thought it was too narrowly constrained to life as we know it here on Earth, but life elsewhere may take a completely different evolutionary trajectory. So how do we abstract far enough away from life on Earth that we'd be able to notice life elsewhere even if it had different chemical specifics, but not so far that we'd be including all kinds of self-organizing structures like hurricanes?' The pair soon realized that they needed expertise from a whole other set of disciplines. 'We needed people who came at this problem from very different points of view, so that we all had checks and balances on each other's prejudices,' Hazen said. 'This is not a mineralogical problem; it's not a physics problem, or a philosophical problem. It's all of those things.' They suspected that functional information was the key to understanding how complex systems like living organisms arise through evolutionary processes happening over time. 'We all assumed the second law of thermodynamics supplies the arrow of time,' Hazen said. 'But it seems like there's a much more idiosyncratic pathway that the universe takes. We think it's because of selection for function—a very orderly process that leads to ordered states. That's not part of the second law, although it's not inconsistent with it either.' Looked at this way, the concept of functional information allowed the team to think about the development of complex systems that don't seem related to life at all. At first glance, it doesn't seem a promising idea. In biology, function makes sense. But what does 'function' mean for a rock? All it really implies, Hazen said, is that some selective process favors one entity over lots of other potential combinations. A huge number of different minerals can form from silicon, oxygen, aluminum, calcium, and so on. But only a few are found in any given environment. The most stable minerals turn out to be the most common. But sometimes less stable minerals persist because there isn't enough energy available to convert them to more stable phases. This might seem trivial, like saying that some objects exist while other ones don't, even if they could in theory. But Hazen and Wong have shown that, even for minerals, functional information has increased over the course of Earth's history. Minerals evolve toward greater complexity (though not in the Darwinian sense).
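In the same spirit as the 2007 Hazen-Szostak simulation mentioned above, though far cruder, a minimal mutate-and-select loop shows a population's best "function" ratcheting upward. Everything here (the motif-counting fitness, population size, mutation scheme) is an invented toy, not their actual code.

```python
import random

ALPHABET = "ACGU"
rng = random.Random(0)

def score(seq: str, motif: str = "GGA") -> int:
    """Invented fitness: number of occurrences of a short motif (a stand-in for binding)."""
    return sum(seq[i:i + len(motif)] == motif
               for i in range(len(seq) - len(motif) + 1))

# Mutate-and-select loop: each generation keeps the best of 50 single-point mutants.
LENGTH, GENERATIONS, MUTANTS = 30, 40, 50
best = "".join(rng.choices(ALPHABET, k=LENGTH))
for gen in range(1, GENERATIONS + 1):
    candidates = [best]
    for _ in range(MUTANTS):
        seq = list(best)
        seq[rng.randrange(LENGTH)] = rng.choice(ALPHABET)  # one random point mutation
        candidates.append("".join(seq))
    best = max(candidates, key=score)
    if gen % 10 == 0:
        print(f"generation {gen:3d}: best score = {score(best)}")
```

Each time the best score rises, a smaller fraction of random sequences could match it, so the functional information of the evolving population increases even though the loop only ever applies mutation and selection.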
Hazen and colleagues speculate that complex forms of carbon such as graphene might form in the hydrocarbon-rich environment of Saturn's moon Titan—another example of an increase in functional information that doesn't involve life. It's the same with chemical elements. The first moments after the Big Bang were filled with undifferentiated energy. As things cooled, quarks formed and then condensed into protons and neutrons. These gathered into the nuclei of hydrogen, helium, and lithium atoms. Only once stars formed and nuclear fusion happened within them did more complex elements like carbon and oxygen form. And only when some stars had exhausted their fusion fuel did their collapse and explosion in supernovas create heavier elements such as heavy metals. Steadily, the elements increased in nuclear complexity. Wong said their work implies three main conclusions. First, biology is just one example of evolution. 'There is a more universal description that drives the evolution of complex systems.' Illustration: Irene Pérez for Quanta Magazine Second, he said, there might be 'an arrow in time that describes this increasing complexity,' similar to the way the second law of thermodynamics, which describes the increase in entropy, is thought to create a preferred direction of time. Finally, Wong said, 'information itself might be a vital parameter of the cosmos, similar to mass, charge and energy.' In the work Hazen and Szostak conducted on evolution using artificial-life algorithms, the increase in functional information was not always gradual. Sometimes it would happen in sudden jumps. That echoes what is seen in biological evolution. Biologists have long recognized transitions where the complexity of organisms increases abruptly. One such transition was the appearance of organisms with cellular nuclei (around 1.8 billion to 2.7 billion years ago). Then there was the transition to multicellular organisms (around 2 billion to 1.6 billion years ago), the abrupt diversification of body forms in the Cambrian explosion (540 million years ago), and the appearance of central nervous systems (around 600 million to 520 million years ago). The arrival of humans was arguably another major and rapid evolutionary transition. Evolutionary biologists have tended to view each of these transitions as a contingent event. But within the functional-information framework, it seems possible that such jumps in evolutionary processes (whether biological or not) are inevitable. In these jumps, Wong pictures the evolving objects as accessing an entirely new landscape of possibilities and ways to become organized, as if penetrating to the 'next floor up.' Crucially, what matters—the criteria for selection, on which continued evolution depends—also changes, plotting a wholly novel course. On the next floor up, possibilities await that could not have been guessed before you reached it. For example, during the origin of life it might initially have mattered that proto-biological molecules would persist for a long time—that they'd be stable. But once such molecules became organized into groups that could catalyze one another's formation—what Kauffman has called autocatalytic cycles—the molecules themselves could be short-lived, so long as the cycles persisted. Now it was dynamical, not thermodynamic, stability that mattered. 
Ricard Solé of the Santa Fe Institute thinks such jumps might be equivalent to phase transitions in physics, such as the freezing of water or the magnetization of iron: They are collective processes with universal features, and they mean that everything changes, everywhere, all at once. In other words, in this view there's a kind of physics of evolution—and it's a kind of physics we know about already.

The Biosphere Creates Its Own Possibilities
The tricky thing about functional information is that, unlike a measure such as size or mass, it is contextual: It depends on what we want the object to do, and what environment it is in. For instance, the functional information for an RNA aptamer binding to a particular molecule will generally be quite different from the information for binding to a different molecule. Yet finding new uses for existing components is precisely what evolution does. Feathers did not evolve for flight, for example. This repurposing reflects how biological evolution is jerry-rigged, making use of what's available. Kauffman argues that biological evolution is thus constantly creating not just new types of organisms but new possibilities for organisms, ones that not only did not exist at an earlier stage of evolution but could not possibly have existed. From the soup of single-celled organisms that constituted life on Earth 3 billion years ago, no elephant could have suddenly emerged—this required a whole host of preceding, contingent but specific innovations. However, there is no theoretical limit to the number of uses an object has. This means that the appearance of new functions in evolution can't be predicted—and yet some new functions can dictate the very rules of how the system evolves subsequently. 'The biosphere is creating its own possibilities,' Kauffman said. 'Not only do we not know what will happen, we don't even know what can happen.' Photosynthesis was such a profound development; so were eukaryotes, nervous systems and language. As the microbiologist Carl Woese and the physicist Nigel Goldenfeld put it in 2011, 'We need an additional set of rules describing the evolution of the original rules. But this upper level of rules itself needs to evolve. Thus, we end up with an infinite hierarchy.' The physicist Paul Davies of Arizona State University agrees that biological evolution 'generates its own extended possibility space which cannot be reliably predicted or captured via any deterministic process from prior states. So life evolves partly into the unknown.' Mathematically, a 'phase space' is a way of describing all possible configurations of a physical system, whether it's as comparatively simple as an idealized pendulum or as complicated as all the atoms comprising the Earth. Davies and his co-workers have recently suggested that evolution in an expanding accessible phase space might be formally equivalent to the 'incompleteness theorems' devised by the mathematician Kurt Gödel. Gödel showed that any system of axioms in mathematics permits the formulation of statements that can't be shown to be true or false. We can only decide such statements by adding new axioms.
Davies and colleagues say that, as with Gödel's theorem, the key factor that makes biological evolution open-ended and prevents us from being able to express it in a self-contained and all-encompassing phase space is that it is self-referential: The appearance of new actors in the space feeds back on those already there to create new possibilities for action. This isn't the case for physical systems, which, even if they have, say, millions of stars in a galaxy, are not self-referential. 'An increase in complexity provides the future potential to find new strategies unavailable to simpler organisms,' said Marcus Heisler, a plant developmental biologist at the University of Sydney and co-author of the incompleteness paper. This connection between biological evolution and the issue of noncomputability, Davies said, 'goes right to the heart of what makes life so magical.' Is biology special, then, among evolutionary processes in having an open-endedness generated by self-reference? Hazen thinks that in fact once complex cognition is added to the mix—once the components of the system can reason, choose, and run experiments 'in their heads'—the potential for macro-micro feedback and open-ended growth is even greater. 'Technological applications take us way beyond Darwinism,' he said. A watch gets made faster if the watchmaker is not blind.

Back to the Bench
If Hazen and colleagues are right that evolution involving any kind of selection inevitably increases functional information—in effect, complexity—does this mean that life itself, and perhaps consciousness and higher intelligence, is inevitable in the universe? That would run counter to what some biologists have thought. The eminent evolutionary biologist Ernst Mayr believed that the search for extraterrestrial intelligence was doomed because the appearance of humanlike intelligence is 'utterly improbable.' After all, he said, if intelligence at a level that leads to cultures and civilizations were so adaptively useful in Darwinian evolution, how come it only arose once across the entire tree of life? Mayr's evolutionary point possibly vanishes in the jump to humanlike complexity and intelligence, whereupon the whole playing field is utterly transformed. Humans attained planetary dominance so rapidly (for better or worse) that the question of when it will happen again becomes moot. Illustration: Irene Pérez for Quanta Magazine But what about the chances of such a jump happening in the first place? If the new 'law of increasing functional information' is right, it looks as though life, once it exists, is bound to get more complex by leaps and bounds. It doesn't have to rely on some highly improbable chance event. What's more, such an increase in complexity seems to imply the appearance of new causal laws in nature that, while not incompatible with the fundamental laws of physics governing the smallest component parts, effectively take over from them in determining what happens next. Arguably we see this already in biology: Galileo's (apocryphal) experiment of dropping two masses from the Leaning Tower of Pisa no longer has predictive power when the masses are not cannonballs but living birds. Together with the chemist Lee Cronin of the University of Glasgow, Sara Walker of Arizona State University has devised an alternative set of ideas to describe how complexity arises, called assembly theory.
In place of functional information, assembly theory relies on a number called the assembly index, which measures the minimum number of steps required to make an object from its constituent ingredients. 'Laws for living systems must be somewhat different than what we have in physics now,' Walker said, 'but that does not mean that there are no laws.' But she doubts that the putative law of functional information can be rigorously tested in the lab. 'I am not sure how one could say [the theory] is right or wrong, since there is no way to test it objectively,' she said. 'What would the experiment look for? How would it be controlled? I would love to see an example, but I remain skeptical until some metrology is done in this area.' Hazen acknowledges that, for most physical objects, it is impossible to calculate functional information even in principle. Even for a single living cell, he admits, there's no way of quantifying it. But he argues that this is not a sticking point, because we can still understand it conceptually and get an approximate quantitative sense of it. Similarly, we can't calculate the exact dynamics of the asteroid belt because the gravitational problem is too complicated—but we can still describe it approximately enough to navigate spacecraft through it. Wong sees a potential application of their ideas in astrobiology. One of the curious aspects of living organisms on Earth is that they tend to make a far smaller subset of organic molecules than they could make given the basic ingredients. That's because natural selection has picked out some favored compounds. There's much more glucose in living cells, for example, than you'd expect if molecules were simply being made either randomly or according to their thermodynamic stability. So one potential signature of lifelike entities on other worlds might be similar signs of selection outside what chemical thermodynamics or kinetics alone would generate. (Assembly theory similarly predicts complexity-based biosignatures.) There might be other ways of putting the ideas to the test. Wong said there is more work still to be done on mineral evolution, and they hope to look at nucleosynthesis and computational 'artificial life.' Hazen also sees possible applications in oncology, soil science and language evolution. For example, the evolutionary biologist Frédéric Thomas of the University of Montpellier in France and colleagues have argued that the selective principles governing the way cancer cells change over time in tumors are not like those of Darwinian evolution, in which the selection criterion is fitness, but more closely resemble the idea of selection for function from Hazen and colleagues. Hazen's team has been fielding queries from researchers ranging from economists to neuroscientists, who are keen to see if the approach can help. 'People are approaching us because they are desperate to find a model to explain their system,' Hazen said. But whether or not functional information turns out to be the right tool for thinking about these questions, many researchers seem to be converging on similar questions about complexity, information, evolution (both biological and cosmic), function and purpose, and the directionality of time. It's hard not to suspect that something big is afoot. There are echoes of the early days of thermodynamics, which began with humble questions about how machines work and ended up speaking to the arrow of time, the peculiarities of living matter, and the fate of the universe. 
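For intuition about the assembly index described above: assembly theory's published methods work on molecular graphs, but the idea can be sketched on character strings, where the only operation is joining two already-built pieces and anything built once can be reused for free. The brute-force search below is an invented illustration and is only practical for very short strings.

```python
from itertools import product

def assembly_index(target: str) -> int:
    """Minimum number of join operations needed to build `target` from its single
    characters, when any previously built piece may be reused for free.
    Brute-force branch-and-bound search; feasible only for short strings."""
    basics = frozenset(target)          # single characters are free starting objects
    substrings = {target[i:j] for i in range(len(target))
                  for j in range(i + 1, len(target) + 1)}
    best = [len(target) - 1]            # upper bound: join one character at a time

    def search(available: frozenset, steps: int) -> None:
        if target in available:
            best[0] = min(best[0], steps)
            return
        if steps >= best[0]:            # prune: cannot beat the current best
            return
        # try joining any two available pieces into a new, useful substring
        for a, b in product(available, repeat=2):
            joined = a + b
            if joined in substrings and joined not in available:
                search(available | {joined}, steps + 1)

    search(basics, 0)
    return best[0]

print(assembly_index("ABABAB"))  # 3: AB -> ABAB -> ABABAB, reusing earlier pieces
print(assembly_index("banana"))  # 4: na -> nana -> ba -> banana
```

The repetitive string needs only three joins because the "AB" and "ABAB" pieces get reused, while "banana" needs four; objects that can be built with very few joins are easy to make by copying, which is why unusually high values are read as a hint that selection has been at work.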
Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.


The Independent
04-06-2025
- Business
- The Independent
The hidden cost of tech complexity – and what you can do about it
Freshworks is a Business Reporter client Most tech solutions promise simplicity but deliver chaos, costing time, decisions and connection – it's time for change. As companies grow, they often move fast. New markets, new customers, new demands. But growth tends to bring a flood of quick tech purchases – each solving a specific problem, each adding another layer. Before long, the very tools meant to enable speed begin to slow everything down. It's a familiar trap: complexity creeps in quietly. A duplicate process here, a siloed system there, and suddenly teams are misaligned, data is fragmented and performance suffers. Complexity is the enemy of scale As a tech leader with experience across the sector, I've seen this pattern repeat across industries and continents. Businesses of all sizes end up fighting the same invisible force: fragmentation. Teams operate from conflicting versions of the truth. Manual handoffs and makeshift integrations clog up workflows. And tech investments stall before delivering value. And it's not just operational. Fragmented systems slow down operations and obscure visibility. When your support desk, product analytics, customer database and financial systems can't communicate effectively, you're essentially making decisions without real insight. Take customer retention. If your support platform can't surface relevant in-app behaviour or billing anomalies, your team can't intervene at critical moments. That's not just a missed support ticket – it's a lost customer. Worse, it may signal dozens more if warning signs aren't shared across departments. Good intentions, bad outcomes Ironically, fragmentation often stems from good intentions. Departments adopt specialised tools to solve local challenges. But without a coherent architecture or integration strategy, organisations end up with tech stacks that resemble patchwork quilts and intelligent automation falls flat. It's what Stanford researchers Bob Sutton and Huggy Rao, authors of The Friction Project, call 'addition bias' – the instinct to add features, tools or steps instead of removing them. In their study of global brands, this tendency increased friction and slowed performance. Simplifiers, they found, often faced resistance, while adders, those who added complexity, were rewarded – even when performance suffered. Too often, organisations are sold bloated platforms packed with unused features, marketed as 'added value' but delivering the opposite. Implementations drag on for months, results take years, and the very tools meant to empower teams end up complicating their work. Meanwhile, the real cost is paid by employees, who now spend their time navigating systems rather than solving problems. AI only works if it's connected Artificial intelligence has enormous potential to accelerate business. But that promise breaks down fast without integration. Disconnected systems can't fuel automation and half-built workflows create more work – not less. But when applied strategically, AI delivers real results. Finance teams can analyse costs and optimise spending in real time. Support teams can use AI-powered agents to handle routine support tasks. Engineering can automate troubleshooting. HR can screen candidates more efficiently. And the payoff is clear: 98 per cent of employees are already getting time back in their workday thanks to AI – reinvesting it in higher-value efforts such as boosting productivity (71 per cent), coaching others (67 per cent) and tackling more creative or complex challenges (66 per cent). 
When AI is properly integrated across functions, it doesn't just streamline operations. It empowers people.

Escape the cycle: a strategic path to uncomplicating systems
The good news? It's possible to break the cycle. Here's how forward-thinking organisations are simplifying by design:
- Inventory everything. Map every tool across departments. You can't fix what you can't see. Use workflow automation to identify data gaps, redundancies and ownership.
- Prioritise integration. Evaluate platforms for open APIs and native integrations. Tools that don't integrate easily should raise red flags.
- Unify your data. Create a single source of truth for customer information – whether via a centralised platform or a modern data unification layer (a minimal sketch of this idea appears after the article). Ensure every team works from shared insights.
- Designate integration leaders. Empower individuals or teams to connect departments, break silos and ensure systems integrate strategically, not reactively. Collaboration tools help align efforts.
- Think in platforms, not point solutions. Consolidate where it makes sense. Choose platforms that support multiple workflows – not only for current needs but also for future direction.

Simplicity as a competitive edge
Customer experiences are powered by the systems employees use every day. That's why tech leaders must focus on alignment, not just implementation. Sustainable speed doesn't come from scattered bursts of progress. It comes from unified momentum. In any context – business, productivity or daily operations – complexity breeds inefficiency, higher costs and slower decisions. Simplicity unlocks focus, clarity and results. For teams, unnecessary complexity causes stress and burnout. Simplicity fuels effectiveness. So the question leaders should be asking isn't whether they can afford to simplify. It's whether they can afford not to. At Freshworks, we believe simplicity isn't a sacrifice. It's a competitive edge. It's time to uncomplicate and get maximum value from your tech stack.
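As a minimal sketch of the "single source of truth" idea referenced in the list above: fold exports from each disconnected system into one view per customer so that cross-system warning signs surface together. The system names, fields, and thresholds are invented for illustration and are not tied to any particular vendor's product.

```python
from dataclasses import dataclass

# Hypothetical exports from three disconnected systems, keyed by customer email.
support_tickets = [{"email": "dana@example.com", "open_tickets": 3}]
billing = [{"email": "dana@example.com", "plan": "enterprise", "overdue": True}]
product_usage = [{"email": "dana@example.com", "logins_last_30d": 2}]

@dataclass
class CustomerView:
    email: str
    open_tickets: int = 0
    plan: str = "unknown"
    overdue: bool = False
    logins_last_30d: int = 0

def unify(*sources: list[dict]) -> dict[str, CustomerView]:
    """Fold records from every system into one view per customer."""
    views: dict[str, CustomerView] = {}
    for source in sources:
        for record in source:
            view = views.setdefault(record["email"], CustomerView(email=record["email"]))
            for key, value in record.items():
                if key != "email":
                    setattr(view, key, value)
    return views

for view in unify(support_tickets, billing, product_usage).values():
    # A retention signal no single system could raise on its own:
    if view.overdue and view.open_tickets > 0 and view.logins_last_30d < 5:
        print(f"At-risk customer: {view.email}")
```

In practice the same folding step would sit behind a shared platform or data layer fed by each tool's own APIs rather than hard-coded lists.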