
Latest news with #IndustrialRevolution

Jason Miller outlines Trump's Africa trade vision at Afreximbank meetings - Economy

Al-Ahram Weekly

13 hours ago



Speaking before policymakers, financiers, and industry leaders, Miller said Africa's rise hinged on strategic choices. In a conversation with Viswanathan Shankar, CEO of Gateway Partners, he analysed America's evolving trade posture and its implications for the continent.

Miller began by stating that Africa will surpass Europe as the world's third-largest economic bloc by 2050, and that Nigeria would rank among the top ten global economies. By 2100, sub-Saharan Africa will host four of the world's most populous nations, Miller predicted, positioning the continent as an economic superpower. "This is Africa's century," he declared, "but if these opportunities aren't seized strategically, Africa risks being taken advantage of again."

Miller contrasted US engagement with that of other global players. He criticised decades of exploitative practices in which outsiders "took, took, took, leaving broken promises." By contrast, America, he argued, aims for strategic partnerships anchored in private capital, with no debt traps, military occupations, or hollow rhetoric. The distinction lies in market-driven investments, which demand mutual accountability, unlike what Miller termed "debt diplomacy."

Miller then outlined non-negotiables for nations seeking a partnership with the US. First, Africa must demand tangible value over empty deals, avoiding unsustainable debt disguised as aid. Partnerships should prioritise foreign direct investment in future-proof infrastructure: roads, ports, data centres, and clean energy. He highlighted Africa's critical minerals and youthful workforce as key factors in dominating the AI supply chain, the impact of which he likened to that of the Industrial Revolution.

Second, accelerating business-climate reforms is essential. Enforcing contracts, stabilising currencies, and rooting out corruption are not just suggestions but "the price of admission" for attracting trillion-dollar US pension funds and private capital. While praising Nigeria's "gutsy" currency reforms, Miller urged broader, faster action continent-wide.

Third, Africa must choose allies wisely. Miller drew sharp contrasts between China's record of "unregulated fishing, environmental disasters, and crippling debt" and US contributions such as PEPFAR's HIV/AIDS support, security cooperation against groups like Boko Haram, and conflict mediation in hotspots like the DRC-Rwanda border. True friendship, he stressed, respects sovereignty and borders without exploitation.

Miller also decoded recent US moves. The African Growth and Opportunity Act (AGOA), set to expire in September 2025, faces an uncertain future, he explained. "Why renew one-way preferences," he noted, "if African nations impose tariffs on US goods or favour Chinese partners?" His solution: proactive renegotiation focused on reciprocity. Miller defended Trump's signature tariffs as multipurpose tools for protecting strategic industries such as auto manufacturing ("a US national security issue") while forcing fairer trade terms. He also pointed to the US Development Finance Corporation (DFC) as a catalyst for Africa, deploying profit-driven investments in projects such as the Lobito Corridor and the Mozambique LNG project. "This is revenue-generating capital, not debt," Miller emphasised, urging reforms to unlock giants like BlackRock and CalPERS.

Miller offered advice to African leaders on how to deal with the US. He stressed the importance of preparation, including identifying President Trump's priorities in advance by following his Truth Social platform. He advised African leaders to engage with specific asks and solutions and to shun "photo-ops," urging them to emulate Gulf states such as Saudi Arabia and the UAE, whose investment commitments and peace-building efforts earned early presidential visits. He also encouraged them to push CEOs and investors, not just bureaucrats, to amplify Africa's economic narrative globally.

In conclusion, Miller called for Africa's potential to be translated into provable partnerships: renegotiating AGOA terms for mutual benefit, fast-tracking business reforms to attract private capital, and demanding infrastructure-for-minerals deals to build AI capacity. He also urged African leaders to proactively engage the DFC on bankable projects and, above all, to champion stability, the bedrock of investment. In closing, Shankar revealed that Miller has been appointed Senior Adviser to Gateway Partners to "bring American capital to Africa's future industries."

The dawn of the posthuman age

AllAfrica

20 hours ago



'Can you picture what we'll be/ So limitless and free/ Desperately in need of some stranger's hand' — The Doors

In the 1990s and 2000s, a lot of science fiction focused on what Vernor Vinge called 'the Singularity' — an acceleration of technological progress so dramatic that it would leave human existence utterly transformed, in ways that would be impossible to predict in advance. Vinge believed that the Singularity would result from rapidly self-improving AI, while Ray Kurzweil associated it with personality upload. But both believed that something big was on the way.

In the late 2000s and 2010s, as productivity growth slowed down, these wild expectations got tempered a bit. Cory Doctorow and Charles Stross poked fun at the idea of the Singularity as 'the rapture of the nerds.' And some bloggers, like Brad DeLong and Cosma Shalizi, began to argue that the true Singularity was in the past, when the Industrial Revolution freed us from the constraints of daily hunger and scarcity. Here's Shalizi:

The Singularity has happened; we call it 'the industrial revolution' or 'the long nineteenth century.' It was over by the close of 1918…Exponential yet basically unpredictable growth of technology, rendering long-term extrapolation impossible (even when attempted by geniuses)? Check…Massive, profoundly dis-orienting transformation in the life of humanity, extending to our ecology, mentality and social organization? Check…Embrace of the fusion of humanity and machines? Check…Creation of vast, inhuman distributed systems of information-processing, communication and control, 'the coldest of all cold monsters'? Check; we call them 'the self-regulating market system' and 'modern bureaucracies' (public or private), and they treat men and women, even those whose minds and bodies instantiate them, like straw dogs…An implacable drive on the part of those networks to expand, to entrain more and more of the world within their own sphere? Check… Why, then, since the Singularity is so plainly, even intrusively, visible in our past, does science fiction persist in placing a pale mirage of it in our future? Perhaps: the owl of Minerva flies at dusk; and we are in the late afternoon, fitfully dreaming of the half-glimpsed events of the day, waiting for the stars to come out.

I agree that the Industrial Revolution represented an abrupt, unprecedented, and utterly transformational change in the nature of human life. Human life until the late 1800s had been defined by a constant desperate struggle against material poverty, with even the bounty of the agricultural age running up against Malthusian constraints. Suddenly, in just a few decades, humans in developed countries were fed, clothed, and housed, and had leisure time to discover who they really wanted to be. It was by far the most important thing that had ever happened to our species.

And it's important to note that this transformation wasn't just a result of technology giving humans more stuff. It depended crucially on reductions in human fertility. As Brad DeLong documents in his excellent book 'Slouching Towards Utopia', after a few decades, the Industrial Revolution prompted humans to start having fewer children, which prevented the bounty of industrial technology from eventually being dissipated by the old Malthusian constraints.

Since the productivity slowdown of the mid-2000s, it has become fashionable to say that the Singularity of the Industrial Revolution is over, and that humanity has reached a plateau in living standards. Although some people expect generative AI to re-accelerate growth, we haven't yet seen any sign of such a mega-boom in either the total factor productivity numbers or the labor productivity numbers (chart source: SF Fed).

Of course, it's still early days; AI may yet produce the vast material bounty that optimists expect. And yet even if it never does, I don't think that means humanity is in for an era of stagnation.
The Industrial Revolution was only transformative because it changed the experience of human life; a GDP line on a chart is only important because it's correlated with so many of the things that matter for human beings. And so if new technologies and social changes fundamentally alter what it means to be human, I think their impact could be as important as the Industrial Revolution itself — or at least, in the same general ballpark. In a post back in 2022 and another in 2023, I listed a bunch of ways that the internet has already changed the experience of human life from when I was a kid, despite only modest productivity gains. Looking forward, I can see even bigger changes already in the works. In key ways, it feels like we're entering a posthuman age.

When countries get richer, more urbanized, and more educated, their birth rates fall by a lot — this is known as the 'fertility transition.' Typically, this means that the total fertility rate goes from around 5 to 7 down to around 1.4 to 2. This is mostly a result of couples choosing to have fewer children. This transition is visible in the data for a bunch of large developing countries in Asia, Africa, Latin America, and the Middle East.

Two children per woman1 is around the level where population is stable in the long term — actually, it's about 2.1 for a rich country and 2.3 for a poor country, to take into account the fact that some kids don't survive until adulthood. But basically, going from 5-7 kids per woman to 2 means that your population goes from 'exploding' to 'stable.'

For some rich countries like Japan, fertility fell to an especially low level, around 1.3 or 1.4. This implied long-term population shrinkage — Japan's population began shrinking in the 2000s — and an increasing old-age dependency burden. But as long as this low level of fertility was confined to a few countries, it didn't feel like an emergency — a few rich nations like America, New Zealand, France, and Sweden still managed to have fertility rates that were at or near replacement. For everyone else, there was always immigration. That's where the dialogue on fertility stood in 2015.

But over the past decade, there has been a second fertility transition in rich countries, from low levels to very low levels. Even countries like the US, France, New Zealand, and Sweden have now switched to rates well below replacement, while countries like China, Taiwan, and South Korea are at levels that imply catastrophic population collapses over the next century. Meanwhile, the rate of fertility decline in poor countries has accelerated. The UN calls the drop 'unprecedented.'

The economist Jesus Fernandez-Villaverde believes that things are even worse than they appear; see the slides and YouTube video of a recent talk he gave called 'The Demographic Future of Humanity: Facts and Consequences.' Fernandez-Villaverde notes that the statistical agencies tasked with estimating current global fertility and making future projections have consistently revised their numbers down and down. This doesn't just mean people are having fewer kids; it means that because of past errors in estimating how many kids people had, there are now fewer people to have kids than we thought. Fernandez-Villaverde shows that this is true across nearly all developing countries. As a result of these mistakes, Fernandez-Villaverde thinks the world is already at replacement-level fertility.

Furthermore, population projections are based on assumptions that fertility will bounce sharply back from its current lows instead of continuing to fall. Those predictions look a little bit ridiculous when you show them on a graph (source: Jesus Fernandez-Villaverde). As a result, Fernandez-Villaverde thinks total global population is going to peak just 30 years from now.

This is a big problem. The first fertility transition was a good thing — it was the result of the world getting richer, it saved human living standards from hitting a Malthusian ceiling, and it seemed like, with wise policies, rich countries could keep their fertility near replacement rates. But this second fertility transition is going to be an economic catastrophe if it continues. The difference between a fertility rate of 1 and a rate of 2 might seem a lot smaller than the difference between 2 and 6, but because of the math of exponential curves, it's actually just as important a change. Going from 6 to 2 means your population goes from exploding to stable; going from 2 to 1 means your population goes from stable to vanishing.

This is going to cause a lot of economic problems, which I wrote about back in 2023. Shrinking populations are continuously aging populations, meaning that each young working person has to support more and more retirees every year. On top of that, population aging appears to slow down productivity growth through various mechanisms. Immigration can help a bit, but it can't really solve this problem, since A) when the whole world has low fertility there is no longer a source of young immigrants, and B) immigration is bad at improving dependency ratios because immigrants are already partway to retirement. And in the long run, shrinking populations could slow down productivity growth even more, by shrinking the number of researchers and inventors; this is the thesis of Charles Jones' 2022 paper 'The End of Economic Growth? Unintended Consequences of a Declining Population.'
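The exponential arithmetic behind 'stable versus vanishing' can be made concrete with a minimal sketch (my own illustration, not from the post): under the simplifying assumption of a constant total fertility rate, a population scales by roughly TFR divided by the replacement rate of about 2.1 each generation.

```python
# Sketch (illustrative assumption): population scales by ~(TFR / 2.1) per generation.

def population_index(tfr: float, generations: int, replacement: float = 2.1) -> float:
    """Population relative to today (today = 100) after n generations at a constant TFR."""
    return 100.0 * (tfr / replacement) ** generations

# A TFR of 1 roughly halves the population every generation,
# while a TFR of 6 nearly triples it; replacement-level fertility holds it flat.
for tfr in (6.0, 2.1, 1.0):
    print(tfr, round(population_index(tfr, generations=3)))
```

At a TFR of 1 the index falls to roughly a tenth of today's level after three generations, which is the sense in which going from 2 to 1 is as consequential as going from 6 to 2.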
Unless AI manages to fully replace human scientists and engineers, a shrinking population means that our supply of new ideas will inevitably dwindle.2 Between this effect and the well-documented productivity drag from aging, the idea that we'll be able to sustain economic growth through automation seems dubious.

What's going on? Unlike the first fertility transition, this second one appears driven by increasing childlessness — people never forming couples or having kids at all, instead of simply having fewer kids. And although it's not clear why that's happening, the obvious culprit is technology itself — mobile phones and social media. This is Alice Evans' hypothesis, and there's some evidence to suggest she's right. In China, 'new media' (i.e. social media) use was found to be correlated with low desire to have children. The same correlation has been found in Africa.

Of course, better research is needed, particularly natural experiments that look at the response to some exogenous factor that increases social media use. But the timing and the worldwide nature of the decline — basically, every region of the globe started seeing sharply lower fertility in the mid-to-late 2010s — makes it difficult to imagine any other cause. And the general mechanism — internet use substituting for offline family relationships — is obvious.

Economic stagnation isn't the only way the Second Fertility Transition will change our society. The measures we take to try to sustain our population will leave their mark as well. Last November, I looked at the history of pronatal policies and concluded that things like paying people to have more kids, making it easier to have kids, or encouraging cultural changes are unlikely to work. Unfortunately, that's likely to lead to more coercive solutions.

In my post, I predicted that countries would try to cut childless people off from old-age pensions and medical benefits:

In the past, when fertility rates were high, children served an economic purpose — they were farm labor, and they were also people's old-age pension. If parents lived past the point where they were physically able to work, their children were expected to support them. In order to make sure you had at least a few kids who survived long enough to support you, you had to have a large family. Denying old-age benefits to the childless would be an obvious way to try to reproduce this premodern pattern. This would, of course, result in horrific widespread old-age poverty for those who didn't comply…I predict that some authoritarian states — China, perhaps, or Russia, or North Korea — will eventually turn to ideas like this if no one ever finds a way to raise fertility voluntarily.

This idea actually comes from a 2005 paper by Boldrin et al., who find that if you model fertility decisions as an economic calculation, then Social Security and other old-age transfers are responsible for much of the fertility decline in rich nations:

In the Boldrin and Jones' framework parents procreate because the children care about their old parents' utility, and thus provide them with old age transfers…The effect of increases in government provided pensions on fertility…in the Boldrin and Jones model is sizeable and accounts for between 55 and 65% of the observed Europe-US fertility differences both across countries and across time and over 80% of the observed variation seen in a broad cross-section of countries.

Ending old-age benefits for the childless would be a pretty dystopian policy.
But in the long run, extreme population aging, coupled with slower productivity growth, will make it economically impossible for young people to support old people no matter what policies governments enact.3 And if desperate, last-ditch draconian measures fail, we will shrink and dwindle as a species.

The vitality and energy of young people will slowly vanish from the physical world, as the youth become tiny islands within a sea of the graying and old. Already I can feel this when I go to Japan; neighborhoods like Shibuya in Tokyo or Shinsaibashi in Osaka that felt bustling and alive with young people in the 2000s are now dominated by middle-aged and elderly people and tourists. And as population itself shrinks, the built environment will become more and more empty; whole towns will vanish from the map, as humanity huddles together in a dwindling number of graying megacities. Our impact on the planet's environment will finally be reduced — we will still send out legions of robots to cultivate food and mine minerals, but as our numbers decrease, our desire to cannibalize the planet will hit its limits.

But even as humanity shrinks in physical space, we will bind ourselves more tightly together in digital space. When I was a child, sometimes I felt bored; now I never do. Sometimes I felt lonely; now, if I ever do, it's not for lack of company. Social media has wiped away those experiences, by putting me in constant contact with the whole vast sea of humanity. I can watch people on YouTube or TikTok, talk to my friends in chat groups or video calls, and argue with strangers on X and Substack. I am constantly swimming in a sea of digitized human presences. We all are.

Humanity was never fully an individual organism. Our families and communities were always collectives, as were the hierarchies of companies and armies and even the imagined communities of nation-states. But the internet has made the collective far larger than it was. In many ways it's also more connected; one survey found that the average American spends 6 hours and 40 minutes a day, or more than a third of their waking life, online. About 30% of Americans say they're online almost constantly.

The results of this constant global connectedness are far too deep and complex to deal with in one blog post. But one important result is to replace some fraction of individual human effort with the preexisting effort of the collective. Instead of figuring out how to fix their own house, build their own furniture, or install their own appliances, a human in 2021 could watch YouTube videos. Instead of figuring out how to write a difficult piece of code, a programmer could ask the Stack Exchange forum. Instead of creating a new funny video from scratch, a social media influencer could use someone else's audio track. It simply became easier to stand on the shoulders of giants than to reinvent the wheel.

Whether this leads to an aggregate decrease in human creativity is an open question; some have made this argument, but I'm not sure whether it's right.4 But what's clear is that the more everyone relies on the collective for everything they do, the less individual effort matters. In the Industrial Age, we valorized individual heroics — the brilliant scientist, the iconoclastic writer, the contrarian entrepreneur, the bold activist leader. In an age when it's always easier to rely on the wisdom of crowds, those heroes matter less.

Compare the activists of the 2010s to the activists of the mid-20th century. The 20th century produced Black activist leaders like MLK, John Lewis, Malcolm X, Rosa Parks, Bobby Seale, and many others. But who were the equivalent heroes of the Black Lives Matter movement of the 2010s? There were none.5 The movement was an organic crowd, birthed by social media memes instead of by rousing speeches.
Each individual activist made tiny incremental contributions, and the movement rolled forward as a headless, collective mass.

Or consider science and technology in the age of the internet. China is now probably the world's leader in scientific research, but it's hard to name any big significant breakthrough that has come out of China in recent years; the innovations are important but overwhelmingly incremental. Even in the US, where incentives for breakthroughs are a little better, science has become notably less 'disruptive' in recent years. Some of this may be because humans have already picked the low-hanging fruit of science, and some might be because of the increasing 'burden of knowledge' for young researchers to get up to speed. But some might simply be because an age of seamless global information transmission makes it easier for researchers to get 'base hits' while leaving the cost of 'home runs' the same.

Even AI, the great breakthrough of the age, has been a massive collective effort more than the inspiration of a few geniuses. Even the people who have received the greatest honors for developing AI — Geoffrey Hinton, Yann LeCun, etc. — are not really regarded as the 'inventors' of the technology. Towering figures are still somewhat common in biology — Kariko and Weissman, Doudna and Charpentier, Feng Zhang, Allison and Honjo, David Liu — but in the age of the internet, research is becoming a more collective enterprise.

And all that was before generative AI. Large language models are trained on the collected writings of humankind; they are an expression of the aggregated wisdom of our species' collective past. When you ask a question of ChatGPT or DeepSeek, you're essentially consulting the spirits of the ancestors.6 As with the internet, it's unclear whether LLMs will make humanity more creative as a whole, or less. My bet is strongly on 'more'. But at the individual level, AI substitutes for our own creative efforts.

Kosmyna et al. (2025) recently did an experiment showing that people who use ChatGPT to help them write essays end up with weaker individual cognitive skills:

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools)…EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use…LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

This is unsurprising. Pulling a plow yourself will make you stronger than driving a tractor, and using a slide rule will make you better at mental arithmetic than using a hand calculator. As Tyler Cowen points out, Kosmyna et al.'s result doesn't mean that AI is reducing humanity's overall creative capabilities:

If you look only at the mental energy saved through LLM use, in the context of an artificially generated and controlled experiment, it will seem we are thinking less and becoming mentally lazy…But you also have to consider, in a real-world context, what we do with all that liberated time and mental energy…There are numerous ways people can and do use large language models to make themselves smarter. They can ask it to criticize their work…They can argue and debate with it, or they can use it to learn which books to read or which medieval church to visit.

This is true.
Using machine tools instead of manual ones may make our biceps weaker, but it makes us stronger and more productive as a species. Still, if most of human productivity consists of calling up LLMs, it means that collective effort — centuries of past individual creativity crystallized in the weights of the models — is being substituted for individual heroics. As with the internet, humanity as a whole grows more powerful by becoming more of a hive mind. The age of the great heroes — of the Albert Einsteins and the Martin Luther Kings, and perhaps even of the Elon Musks — may soon be over.

Thus, dimly and through the fog, we can begin to perceive the shape of the future that the posthuman age will take. As humanity becomes more tightly bound into a single digital collective, we find that we desire offline families less and less. As we gradually abandon reproduction, there are fewer and fewer of us, forcing us to cling even more tightly to the online collective — to spend more of our time online, to take solace in the ever-denser core of the final global village. The god-mind of that collective delivers us riches undreamt of by our ancestors, but we enjoy that bounty in solitude as we wirehead into the hive mind for a bit of company.

When I write it out that way, it sounds terrifying. And yet day by day, watching the latest TikTok trend, or making bad jokes on X, or asking ChatGPT to teach me about Mongol history, the slide into posthumanity feels pleasant and warm. Perhaps we are no stranger than our grandparents would have seemed to their own grandparents, who grew up on premodern farms. After all, aliens never call themselves 'aliens'…they call themselves 'us.'

1 I know 'children per woman' is a little sexist, but this is how they measure things.

2 This depends on the assumption that new ideas don't build on themselves exponentially quickly. So far, that has proven to be the case — in simple terms, it looks as though we pick the 'low-hanging fruit' of scientific discovery and technological invention, and future advances become more expensive in terms of time, money, and brain power.

3 At that point, countries will either collapse or decide to cut large numbers of old people off. If countries collapse, then parents will once again be dependent on their kids, and fertility will probably recover to replacement level. If countries decide to cut old people off of benefits, they'll probably start by cutting off the childless, since childless old people don't have kids who can riot and revolt in anger. Either way, the economic future for childless people 50 or 100 years from now doesn't look great.

4 There are actually lots of effects to think about here, and the topic deserves a much longer post all to itself. For one thing, there's the question of whether the boost to individual innovation created by the availability of collective knowledge outweighs the loss of the spillover benefit created when each individual innovator had to spend time 'reinventing the wheel' and doing it a little differently than everyone else. Also, there's the question of whether any decrease in individual creative output per hour is outweighed by an increase in the total number of hours spent in creative pursuits — when I was a kid, kids went to the mall and hung out instead of making short videos and memes.

5 There were important writers in the BLM movement — Ta-Nehisi Coates, Ibram Kendi, and Nikole Hannah-Jones, to name just three. Though it's worth noting that all were eventually hounded out of the public eye to varying degrees by collective social media backlash, unlike similar famous Black writers of the 60s and 70s.

6 Here are ChatGPT o3's thoughts on this post: The post's central insight is that qualitative transformations in how humans live can be as epoch-making as huge jumps in measured GDP.
You persuasively connect today's two most powerful under-the-radar shifts — sub-replacement fertility and pervasive digital connectedness — to that earlier shock of the Industrial Revolution. Where the essay really shines is in showing how these trends might reinforce each other: shrinking, aging societies could grow ever more dependent on 'hive-mind' tools like LLMs, which in turn further reduce the incentives (and maybe even the desire) to form families. That nested-feedback dynamic feels original and is a useful corrective to simple 'AI will save us' optimism.

A second strength is the sober treatment of demographic math. By foregrounding Fernandez-Villaverde's critique of the U.N. projections, you highlight the systemic risk that comes from repeatedly overestimating births. Framing the second fertility transition as the real discontinuity — shifting us from stable to vanishing populations — captures the scale of what's at stake better than the usual laments about aging. Linking this directly to growth theory (Jones) and to the fiscal unsustainability of pay-as-you-go pensions sets up the reader to grasp why even dramatic productivity gains may not offset the drag.

Where I'm less convinced is on the inevitability of coercive pronatal policy and the eclipse of individual creativity. The essay nods at alternative levers (housing costs, gender-equality reforms, immigration, delayed retirement) but then leaps to dystopian scenarios without fully exploring those options or the empirical record of mixed-strategy success stories (e.g., France, Québec, parts of Scandinavia). Likewise, while the collective nature of internet-era innovation is undeniable, history — from Gutenberg to industrial R&D labs — suggests that new platforms often shift rather than erase individual heroism (think AlphaFold or Covid mRNA vaccines). Recognizing that possibility would temper the gloom and leave space for agency — exactly what a post meant to provoke action, not resignation, might need.

The spirits of the ancestors have spoken! In accordance with the model's advice, I'll write a follow-up post about how individual humans can still be high-leverage, important figures in the age of AI and the internet.

This article was first published on Noah Smith's Noahpinion Substack and is republished with kind permission. Become a Noahpinion subscriber here.

Safer nuclear energy and hydrogen mining offer tech fixes for climate

Newsroom

a day ago

  • Science
  • Newsroom

Safer nuclear energy and hydrogen mining offer tech fixes for climate

Comment: Since coal began fuelling the Industrial Revolution, the carbon dioxide level in the atmosphere has risen by roughly 50 percent, from about 280 to 430 parts per million. That rise is the principal cause of climate warming. However, there are promising signs that the growth of global CO2 emissions may be reaching a plateau, mainly because of a determined switch from fossil fuels to renewables in China, the leading carbon-emitting nation.

Manufacturing versus services: Why privilege one over the other?

Mint

2 days ago

  • Business
  • Mint

Manufacturing versus services: Why privilege one over the other?

This column looks at the relative importance of the manufacturing industry and services in India's growth story and discusses our required policy priorities in that context.

The origin of modern development theory can be traced to the regularities that Simon Kuznets and others observed in the 1950s and 1960s in how the structure of an economy evolves with growth. A key feature he observed is that as per capita GDP (a measure he pioneered) rises, the dominant sector of the economy shifts from agriculture to industry and then services. This regularity, combined with Arthur Lewis's foundational theory about how the transfer of labour and surplus from a traditional agricultural sector to a modern industrial sector constitutes the fundamental process of development, became the core of development economics.

These pillars were supplemented with seminal contributions by many others, but the main focus of enquiry was on the transfer of labour and savings from a traditional agricultural sector to a modern industrial sector, especially manufacturing, which was seen as representative of the entire capitalist non-agricultural sector. The key policy debate at this sectoral boundary between agriculture and the rest of the economy was about the desirability and appropriate scale of the surplus transfer out of agriculture. Theodore Schultz wrote about the importance of transforming traditional agriculture. Mellor and his school of economists at IFPRI developed models of agriculture-led growth. Ishikawa argued that growth in developing economies required a reverse transfer of resources into agriculture. My own doctoral thesis investigated the interaction between inter-sectoral resource transfers and patterns of long-term growth in India.

Since then, the sectoral boundary of interest in India's policy debate has shifted.
The boundary in focus now is between industry, especially manufacturing, and services. There is a broadly held view among economists that manufacturing has to lead development. However, on close questioning of why they think so, the response is usually a cursory reference to historical experience. Manufacturing indeed led the high-growth phase of many countries in Europe after the Industrial Revolution. This is also true of East Asian countries in their high-growth phase. But how much of that growth in Europe is attributable to surplus transfers from colonies, and in East Asia to their strategic and economic alliance with America, remains an open question. Of the 30 most advanced countries in the world today (in per capita GDP terms, excluding some small island economies), manufacturing accounts for 10% or less of GDP in a third and 15% or less in another third. Ireland is the only outlier, where manufacturing accounts for over 29% of GDP.

So, what is the evidence pointing to the special importance of manufacturing? The only evidence-based answers I have seen are those by Professors Veeramani and Nagesh Kumar. Both argue that strong backward and forward linkages unique to manufacturing make it an ideal sector to lead developing economies. Surprisingly, few remember the robust theory of manufacturing-led growth developed by Nicholas Kaldor 50 years ago. Building on the even earlier work of Allyn Young, Kaldor argued that manufacturing typically exhibits increasing returns to scale, driving down costs while increasing demand as multiple industries reinforce one another in an expanding process of cumulative causation. Keynesian demand management can greatly strengthen this process. Thus, there are compelling reasons for expecting manufacturing to play a leading role in an economy like India's.
If so, why has the long-term record of industrial growth been relatively unimpressive? Industry has typically grown at around 5-6% annually during the past 70 years, and its GDP share has risen from around 15% to 29% over this period (see data chart). Actually, much of our high industrial growth in recent decades is attributable to mining, utilities and especially construction; the share of manufacturing alone is only around 17%. In contrast, the share of services in GDP has grown from 20.6% at the outset to 53% today, with average decadal growth during the last 40 years in the range of 7-8% annually. The more dynamic performance of services is also reflected in its rising share of employment and, significantly, in our growing trade surplus in services, in sharp contrast with our trade deficit in goods.

It is sometimes argued that manufacturing has been hamstrung by dysfunctional regulations and undue interference by an overbearing state. But the services sector has performed so much better in the same regulatory ecosystem.

Thus, from a policy perspective, we must ask: why are the slogan of 'Make in India' and its related policy incentives limited to manufacturing, when transport and trade services, financial services, hospitality, education, health and other services are just as important as tangible goods like textiles, steel, cars or pharmaceuticals? We should carefully study the only two decades when industry grew significantly faster than services, 1950-51 to 1960-61 and 2000-01 to 2010-11. What made the difference? Meanwhile, the government would do well to pursue at least an even-handed policy between industry and services, especially if it wishes to maximize employment growth and minimize or eliminate India's trade deficit.

These are the author's personal views. The author is chairman, Centre for Development Studies.

Strategies for Evaluating and Monitoring the Effectiveness of Employee Development Programs

Time Business News

2 days ago

  • Business
  • Time Business News

Strategies for Evaluating and Monitoring the Effectiveness of Employee Development Programs

Abstract

Employee development is a strategic component in contemporary human resource management, particularly in dynamic organizational contexts driven by innovation. This paper analyzes, from both a historical and scientific perspective, the main strategies for evaluating and monitoring the effectiveness of employee development programs. It discusses classical and contemporary theoretical models, such as those by Kirkpatrick, Phillips, and CIPP, as well as applied methodologies for measuring impact and return on investment (ROI). The growing role of data analysis and continuous monitoring in strategic decision-making within HR is also examined. The study concludes that effective evaluation not only validates investments in human development but also guides more precise, customized actions aligned with long-term organizational objectives.

Keywords: Employee Development; Effectiveness Evaluation; Continuous Monitoring; Strategic Human Resources; Decision Making; ROI; Evaluation Models.

Introduction

In an organizational context marked by accelerated transformations, increasing complexity, and constant pressure for innovation, employee development has become one of the strategic pillars of human resource management. The ability to continuously qualify employees, align competencies with business demands, and foster learning cultures has become not only a competitive advantage but also a fundamental condition for organizational sustainability. However, for development programs to generate real and proven value, it is essential that they are systematically evaluated and monitored. The measurement of the effectiveness of these programs has shifted from being a supplementary practice to becoming a central concern on the agendas of HR leaders and decision-makers.
More than simply verifying the achievement of training goals, evaluating effectiveness involves understanding the actual impact of training activities on individual performance, organizational outcomes, and strategic indicators. This paradigm shift requires the adoption of robust theoretical models, consistent evaluation methodologies, and intensive use of data and technologies.

This paper aims to analyze, from a historical and scientific perspective, the main strategies for evaluating and monitoring the effectiveness of employee development programs, highlighting their contributions to strategic decision-making in human resources. By integrating theoretical foundations, organizational practices, and technological innovations, the study seeks to demonstrate how effective evaluation can transform human development into a measurable and sustainable source of competitive advantage.

1. Historical and Evolutionary Overview of Employee Development

Employee development, as an organizational practice, has evolved significantly since the early models focused solely on technical training. During the Industrial Revolution and throughout the 20th century, corporate training aimed to prepare workers for operational roles, focusing on task repetition and standardization. The emphasis was exclusively functional, reflecting the prevailing Taylorist-Fordist management model.

From the 1960s onward, with the rise of humanistic and behavioral theories, development began to consider relational, emotional, and motivational aspects. The role of the employee shifted from merely executing tasks to becoming an individual with needs, aspirations, and growth potential. This shift led to the emergence of programs more focused on leadership, communication, and teamwork, aligning with the idea of more flexible and innovative organizations. In recent decades, technological advancement, globalization, and digitalization have accelerated changes in the ways people work and learn.
As a result, development programs have adopted continuous, personalized approaches based on strategic competencies. The focus has shifted from mere training to lifelong learning, aiming to prepare employees for uncertain and ever-changing environments. Evaluating the effectiveness of these programs has become imperative to ensure a return on investment and strategic direction.

2. Theoretical Foundations of Effectiveness Evaluation in Development Programs

The evaluation of the effectiveness of employee development programs is supported by solid conceptual models that guide the collection and analysis of data on the impact of training activities. One of the most influential models is Donald Kirkpatrick's, which proposes four levels of evaluation: reaction, learning, behavior, and results. Each level expands the complexity of the analysis, requiring specific methodologies to capture the transformations generated by the program.

Despite the widespread adoption of Kirkpatrick's model, critiques point to its linear rigidity and limitations in capturing contextual variables. In response to these gaps, Jack Phillips proposed adding a fifth level to the model: calculating Return on Investment (ROI). This approach allows measurable gains from the program to be set against its financial costs, enhancing the strategic value of the evaluation for top leadership.

Another relevant contribution is the CIPP (Context, Input, Process, Product) model by Stufflebeam, which broadens the scope of evaluation by considering contextual factors, available resources, formative processes, and products generated. This model is particularly useful in formative evaluations, as it allows for adjustments during program execution, optimizing outcomes. The integration of these models contributes to more comprehensive analyses that are better aligned with the complexity of today's organizational environments.
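The Phillips ROI level reduces to simple arithmetic: net program benefits divided by program costs. As a minimal sketch (the dollar figures, test scores, and function names below are hypothetical illustrations, not taken from the paper), the calculation can be expressed as:

```python
def learning_gain(pre_score: float, post_score: float) -> float:
    """Percentage-point gain between pre- and post-training test scores
    (Kirkpatrick's 'learning' level, measured as a simple difference)."""
    return post_score - pre_score

def phillips_roi(program_benefits: float, program_costs: float) -> float:
    """Phillips-style ROI: net benefits expressed as a percentage of costs."""
    net_benefits = program_benefits - program_costs
    return (net_benefits / program_costs) * 100

# Hypothetical example: a program costing $50,000 whose monetized
# benefits (e.g., productivity gains) are estimated at $80,000.
roi = phillips_roi(80_000, 50_000)   # (80000 - 50000) / 50000 * 100 = 60.0
gain = learning_gain(62.0, 81.0)     # 81 - 62 = 19.0 percentage points
print(f"ROI: {roi:.0f}%  learning gain: {gain:.0f} points")
```

On this reading, an ROI of 60% means the program returned $1.60 in estimated benefits for every dollar it cost; the hard part in practice is monetizing the benefits, not the division.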
3. Strategies, Methods, and Evaluation Tools

The practical application of effectiveness evaluation requires the adoption of strategies that integrate different methods, metrics, and tools. The use of reaction and satisfaction surveys is a common practice, typically applied immediately after training sessions. While simple and easy to implement, this method has low correlation with the actual effectiveness of learning and is more useful for making minor adjustments to the format or pedagogy of programs.

Learning evaluation, on the other hand, requires comparing performance before and after training. Knowledge tests, simulations, case studies, and practical activities are commonly used. This approach allows for objective measurement of the knowledge acquired but does not, by itself, ensure that the learning will be transferred to the workplace environment.

Behavioral evaluation and the monitoring of organizational performance indicators are crucial for verifying the transfer of learning and its real effects on company outcomes. This stage involves the collection of observational data, structured feedback, and interviews, as well as the analysis of indicators such as productivity, absenteeism, turnover, and engagement. Integrating this data into performance management systems provides a more strategic view of the impacts of the programs.

4. Continuous Monitoring and Data Intelligence in People Management

Continuous monitoring of program effectiveness represents an advancement over point-in-time evaluations. This approach entails the systematic collection and analysis of data throughout the entire program lifecycle, from design to final outcomes. This enables real-time corrective interventions, promoting immediate gains in efficiency and effectiveness. Digital tools and People Analytics systems play a central role in this process.
By integrating data on performance, behavior, feedback, and learning pathways, interactive dashboards can be created to provide up-to-date insights for managers and HR professionals. This practice contributes to faster decision-making, grounded in concrete evidence. Additionally, continuous monitoring allows for the personalization of development pathways based on identified gaps, promoting more relevant and individualized learning. The use of artificial intelligence and predictive algorithms expands the possibilities for analysis, anticipating future qualification needs and preparing the workforce for the organization's strategic challenges.

5. Challenges in Implementing Effective Evaluations

Despite methodological and technological advancements, many organizations still face significant barriers to the effective implementation of evaluations. One of the main challenges is cultural resistance, both from employees and leadership, to the systematic measurement of results. There is also a tendency to view development programs as 'intangible' investments, making it difficult to create clear metrics and measurable objectives.

Another important obstacle is the fragmentation of information systems, which prevents integration between learning platforms, performance management systems, and data analysis. The lack of adequate technological infrastructure limits HR's ability to conduct predictive analysis, hindering continuous monitoring and the strategic use of collected information. Furthermore, there is a lack of technical training among HR professionals in areas such as statistics, data analysis, and evaluation methodology design.

To overcome these limitations, it is necessary to invest in training, the adoption of technological tools, and the development of an organizational culture oriented toward evidence-based learning. This will allow organizations to advance in analytical maturity and achieve greater returns on investments in human development.

6. Conclusion: Evaluation as a Strategic Axis of People Management

Evaluating and monitoring the effectiveness of employee development programs is more than an operational requirement: it is a strategic practice that directly contributes to organizational sustainability and innovation. Through scientific and technological methods, it is possible to validate the effectiveness of training activities and align development investments with business priorities.

The adoption of robust evaluation models, combined with the use of real-time data, strengthens HR's ability to act as a strategic partner to top management. This means making evidence-based decisions, anticipating scenarios, identifying risks, and promoting a culture of continuous improvement centered on human capital.

The future of people management is directly linked to its ability to generate measurable impact. To achieve this, it is essential that development programs are designed with clear objectives, implemented with appropriate methodologies, and evaluated with scientific rigor. Only then will it be possible to ensure that employee development moves beyond a promise and becomes a true competitive differentiator.

About the Author

Renato Afonso Arraes Menezes Netto Sodré is a professional focused on leadership development and achieving tangible results through employee development. Throughout his career in various organizations, he has distinguished himself by identifying and developing leaders, offering programs focused on team management and leadership skills. His methodology involves creating and implementing training initiatives tailored to the specific needs of employees and the company, with rigorous tracking of employee progress and adjustments to the programs as needed to ensure maximum effectiveness.
In addition to his experience in training and development, as the owner of Sodré Serviços de Promoção de Vendas, Marketing e Treinamento LTDA., Renato also manages strategy, planning, financial management, operations, marketing, and sales, demonstrating a holistic and business-oriented approach.
