
Latest news with #RobertSolow

How AI risks repeating the IT productivity paradox

AU Financial Review

5 days ago



'You can see the computer age everywhere but in the productivity statistics.' This now-famous observation by Nobel Prize-winning economist Robert Solow, made in 1987 in response to the so-called IT productivity paradox, captured one of the most perplexing economic puzzles of the late 20th century. Despite billions invested in IT and computers throughout the 1970s and 1980s, no aggregate productivity effects had shown up in national accounts. Today, artificial intelligence is everywhere, and the world is again investing (many) billions. As Treasurer Jim Chalmers sets out to host an economic reform roundtable focused on rekindling sluggish productivity growth, with AI firmly on the agenda, we had better learn from history.

Productivity puzzle: Solow's paradox has come to haunt AI adoption

Mint

29-06-2025



AI enthusiasts, beware: predictions that the technology will suddenly boost productivity eerily echo those that followed the introduction of computers to the workplace. Back then, we were told that the miraculous new machines would automate vast swathes of white-collar work, leading to a lean, digital-driven economy. Fast forward 60 years, and it's more of the same. Shortly after the debut of ChatGPT in 2022, researchers at the Massachusetts Institute of Technology claimed employees would be 40% more productive than their AI-less counterparts.

These claims may prove to be no more durable than the pollyannish predictions of the Mad Men era. A rigorous study published by the National Bureau of Economic Research in May found only a 3% boost in time saved, while other studies have shown that reliance on AI for high-level cognitive work leads to less motivated, impaired employees. We are witnessing the makings of another 'productivity paradox,' the term coined to describe how productivity unexpectedly stagnated and, in some cases, declined during the first four decades of the information age. The bright side is that the lessons learned then might help us navigate our expectations in the present day.

The invention of transistors, integrated circuits, memory chips and microprocessors fuelled exponential improvements in information technology from the 1960s onward, with computers reliably doubling in power roughly every two years with almost no increase in cost. It quickly became an article of faith that computers would lead to widespread automation (and structural unemployment). A single person armed with the device could handle work that previously required hundreds of employees. Over the next three decades, the service sector decisively embraced computers. Yet the promised gains did not materialize. In fact, studies from the late 1980s revealed that the services sector—what economist Stephen Roach described as 'the most heavily endowed with high-tech capital'—registered the worst productivity performance during this period. In response, economist Robert Solow famously quipped that 'we see computers everywhere except in the productivity statistics.'

Economists advanced multiple explanations for this puzzle (also known as 'Solow's Paradox'). Least satisfying, perhaps, was the claim, still made today, that the whole thing was a mirage of mismeasurement and that the effects of massive automation somehow failed to show up in the economic data. Others have argued that the failure of infotech investments to live up to the hype can be laid at the feet of managers. There's some merit to this argument: studies of infotech adoption have shown that bosses spent indiscriminately on new equipment, all while hiring expensive workers charged with maintaining and constantly upgrading these systems. Computers, far from cutting the workforce, bloated it.

More compelling still was the 'time lag' hypothesis offered by economist Paul A. David. New technological regimes, he contended, generate intense conflict, regulatory battles and struggles for market share. Along the way, older ways of doing things persist alongside the new, even as much of the world is remade to accommodate the new technology. None of this translates into immediate efficiency—in fact, quite the opposite. As evidence, he cited the advent of electricity, a quicker source of manufacturing power than the steam it would eventually replace.
Nonetheless, it took 40 years for the adoption of electricity to lead to increased worker efficiency. Along the way, struggles to establish industry standards, waves of consolidation, regulatory battles and the need to redesign every single factory floor made this a messy, costly and prolonged process. The computer boom would prove to be similar.

These complaints did not disappear, but by the late 1990s, the American economy finally showed a belated uptick in productivity. Some economists credited it to the widespread adoption of information technology. Better late than never, as they say. However, efficiency soon declined once again, despite (or because of) the advent of the internet and all the other innovations of that era.

AI is no different. The new technology will have unintended consequences, many of which will offset or even entirely undermine its efficiency. That doesn't mean AI is useless or that corporations won't embrace it with enthusiasm. Anyone expecting an overnight increase in productivity, though, will be disappointed.

©Bloomberg. The author is a professor of history at the University of Georgia and co-author of 'Crisis Economics: A Crash Course in the Future of Finance'.

The AI Revolution Won't Happen Overnight

Harvard Business Review

24-06-2025



If you believe the frenzied hype, AI is about to tie our shoes, run our businesses, and solve world hunger. McKinsey predicts it will add $17.1–$25.6 trillion to the global economy annually. It's a seductive vision. It's also a hallucination. As a business-first CIO with nearly three decades of experience turning emerging tech into business value, I've seen this movie before. It rarely ends the way the trailer promises.

We've spent 75 years asking whether machines can think. Maybe the better question now is whether we can. Yes, AI is powerful. Yes, it will change how we live and work. But the transformation will be slower, messier, and far less lucrative in the short term than the hype suggests. Companies are collectively pouring billions of dollars into AI without clear ROI. Open-source models like Meta's Llama and DeepSeek are rapidly eroding the competitive advantage of other big tech companies' foundation models (e.g., Gemini, ChatGPT). And the business model for gen AI is full of potential—but missing a clear path to sustainable revenue. AI's transformational impact will come, but it won't be the instant revolution we're being sold. We're getting six fundamental things wrong about how AI will create value and how long it will take.

AI's real impact will take much longer than we think. In 1987, economist Robert Solow famously quipped, 'You can see the computer age everywhere but in the productivity statistics.' Decades later, AI is the latest iteration of this paradox. Despite billions in investment, measurable efficiency gains remain elusive. So far, the Federal Reserve Bank of Kansas City has found that AI's impact on productivity has been modest compared to previous technology-driven shifts. This isn't a failure of AI—it's a failure of expectations.

Generative AIs like large language models are a general-purpose technology (GPT). (Though the 'GPT' in ChatGPT stands for something else.) We've seen many GPTs before—the printing press, electricity, the internet—and they all follow the same pattern. In each case, it took decades before their transformative potential really hit the economy. Electricity revolutionized manufacturing, but it took 40 years before factory design caught up. The internet existed in the 1970s, but it wasn't until the 2000s that it rewrote business models. There are compelling reasons to think that AI will follow the same slow but inevitable trajectory. For example, MIT economist and Nobel laureate Daron Acemoglu argues that only 5% of tasks will be profitably automated in the next decade, adding just 1% to U.S. GDP—a far cry from the seismic shift many expect. The challenge, he argues, is that for most organizations, the costs of disruption, retraining, integration, and computing will outweigh the returns for most tasks.

Moreover, we've already picked the low-hanging fruit of digital transformation—automating operational work, digitizing information, moving customers online, and migrating core infrastructure to the cloud. These early wins delivered efficiency gains. But each new leap delivers diminishing returns, making it harder for AI—or any technology—to drive economy-wide productivity gains. Despite breakthrough technologies like smartphones, social media, and cloud computing, U.S. total factor productivity (TFP) growth has been sluggish for five decades. From 1974 to 2024, TFP growth was less than half the rate of the post-war boom. AI might boost personal productivity, but it won't deliver productivity gains at scale anytime soon—if at all.
A study by the National Bureau of Economic Research recently demonstrated the difference between adoption and intensity. It showed that while 40% of U.S. adults used generative AI, most people used it infrequently. That infrequent use translated to 1–5% of total work time. When combined with the users' estimated time savings, this resulted in a productivity gain of less than 1% (a back-of-the-envelope version of this arithmetic appears at the end of this article). That doesn't mean AI is useless. It just means its value won't come from sweeping, instant disruption, but from targeted, deliberate integration. Betting on a short timeline and quick ROI risks wasted capital, failed automation, and unnecessary workforce disruption. Instead, companies should focus on the long game: build the right systems, train your team, and figure out how to make AI work for your business.

We're being wildly optimistic about enterprise AI adoption. When ChatGPT launched, AI felt like magic—an overnight revolution. Earnings calls were flooded with AI mentions. Venture capital shifted into overdrive. Headlines promised AI's transformation would be instant and all-encompassing. We've seen this kind of overheated hype cycle before—with early personal computers, the dot-com bubble, the blockchain boom, and even the very early days of cloud computing—and we'll likely make this mistake again. We misjudge technological change because of three cognitive biases. The planning fallacy makes us underestimate how long transformation takes. Optimism bias convinces us adoption will be smooth and easy. Recency bias leads us to believe AI's viral consumer adoption will translate seamlessly into the enterprise. For all of the concern about AI's biases, we tend to overlook our own, and this might be especially true in enterprise adoption.

Enterprise AI isn't plug-and-play. It collides with outdated systems, regulatory roadblocks, risk-averse corporate cultures, AI talent shortages, and procurement bottlenecks. The barriers aren't technical; they're systemic. It took us 100 years to add wheels to luggage; don't underestimate the forces that shape the pace of technology diffusion. IBM Watson Health is a cautionary tale. IBM promised to 'outthink cancer,' betting big that AI would transform healthcare. But by 2022, Watson was sold for parts, its potential crushed by messy, fragmented medical data, regulatory red tape, and real-world complexity. Hospitals found it unreliable. Doctors found it impractical. Ethical concerns mounted. Watson didn't fail because of AI—it failed because IBM underestimated how difficult real-world implementation would be.

AI will transform industries, just not at Silicon Valley speed. It will happen on enterprise time: longer, slower, and with far more friction than most expect. Companies that fall victim to bias and ignore these realities will waste resources, overpromise results, and erode trust. The winners in AI won't be the ones making the boldest claims. They'll be the ones with the patience to build real, lasting change.

The market is overestimating the value of AI companies. Investors are making a critical error around AI: They're treating AI companies like high-growth, asset-light software firms, when in reality they're capital-intensive, high-cost, and infrastructure-heavy. AI-heavy tech stocks have traded at a 20–40% premium, assuming future profits that haven't materialized. For executives, this disconnect isn't just a market misread—it's an execution trap.
Inflated valuations set unrealistic expectations that trickle down into the enterprise: pressure to move fast, to pilot something flashy, to be seen 'doing AI.' The result? Rushed rollouts, misaligned priorities, and investments in the magic rather than margin performance. In a market priced for miracles, the real advantage lies in restraint—leaders who prioritize integration over spectacle and long-term value over short-term visibility.

Consider OpenAI. It's chasing a $300 billion valuation—double Facebook at IPO and eight times Google at IPO (adjusted for inflation). Investors are pricing it like a cloud software company with expanding margins. But AI isn't SaaS. OpenAI's costs don't shrink with scale; they rise with demand. Every query has a price. Every customer adds costs. OpenAI itself expected a $5 billion loss on $3.7 billion in revenue in 2024.

The problem is that the infrastructure demands of AI are staggering. Meta, Alphabet, Amazon, and Microsoft plan to spend a combined $300 billion this year. Analysis of cash flow statements and public statements shows their AI-related capital expenditures have increased 40–60% in just two years. Microsoft alone is spending $80 billion this year. By 2028, Microsoft's compute needs could rival an entire country's electricity demand. This infrastructure build has created an estimated $125 billion annual revenue gap to fill.

Competition is further squeezing AI's margins. Open-source models like LLaMA, Mistral, and DeepSeek-V3 are rapidly eating into market share. Meta's LLaMA 3 already reaches over a billion users across Instagram, WhatsApp, and Facebook—at zero cost to consumers. Meanwhile, OpenAI pays for every user and lacks a built-in distribution ecosystem. AI is commoditizing faster than any previous technology cycle, a reality even OpenAI's board chair has acknowledged.

For industry leaders, the implications are real and immediate. Many are making high-stakes investment decisions based on tools built by companies whose AI business models may not be sustainable. If those partners face cost overruns, slowed R&D, or collapse altogether, it could leave enterprise roadmaps stranded mid-implementation. The risk isn't just financial—it's operational. The real winners in AI won't be those chasing sky-high valuations. They'll be the companies embedding AI where it creates durable economic advantage—places where it speeds up business decision cycles, improves decision quality, or reimagines products—all with measurable ROIs. AI's transformation will be a test of leadership stamina, not speculation.

The real money isn't in the models. Even if AI model companies turn a profit, they won't be able to defend their advantage. AI's biggest breakthroughs—like neural networks and attention mechanisms—are just math, and math can't be patented. That's the critical difference between invention and innovation. Invention delivers the breakthrough—the transformer architecture, the novel algorithm. But innovation at scale requires more: distribution, margin, and market fit. The real test of AI isn't whether we can build something new. It's whether we can embed it deeply enough into business systems to generate durable, measurable value. And that's exactly why models, no matter how advanced, won't hold the moat. Open-source collaboration and government-backed research will continue to push AI toward commoditization. Once AI is cheap and everywhere, no one will own it. The real value isn't in building AI—it's in using it. It's in applications, not models.
AI is already moving to 'the edge,' shifting from the cloud to personal devices where users don't need to pay for access. Apple Intelligence, though early in the market, is embedded into iPhones. Some Meta LLaMA models run on laptops. This is the same trajectory cloud computing followed. Investors first bet on infrastructure—AWS, Azure, Google Cloud. But over time, the winners weren't the cloud 'infrastructure' providers. They were the application companies embedding cloud into business workflows. By 2030, Goldman Sachs expects cloud infrastructure to be a $580 billion market, while cloud applications will be more than double that at $1.38 trillion. It stands to reason that AI will follow the same pattern.

Apps move AI from theory to reality, from the lab to the customer. Turning a model into a real business solution is an engineering challenge far beyond just running a model with chat on top. The companies solving complex, industry-specific problems with custom AI architectures are the ones who will create the most lasting value. This shift is already beginning as we see AI agents cropping up across industries. Harvey is an AI lawyer. Glean is an AI work assistant. Factory is an AI software engineer. Abridge is an AI medical scribe. AI's real value is in transforming human-dependent services into scalable, always-on applications.

And that's exactly where enterprise companies should focus—not on building models, but on applying them with precision. The opportunity isn't to create the next GPT. It's to embed AI into the backbone of the business—product design, operations, compliance, HR, finance—where small changes will add up. Too many enterprises assume foundation models will deliver value out of the box. But without serious investment in the hard stuff—applications, integration, data infrastructure, workflow redesign and change management—AI remains a flashy prototype: impressive in demos, but ineffective at scale. Ironically, the companies that win will be the ones that make AI boring: seamlessly embedded, consistently reliable, and quietly transformative where the real work happens.

We're over-indexing on startups. The market hype is fixated on AI startups, but big incumbents have the real advantage in the enterprise. AI isn't about disruption; it's about distribution. Look at Microsoft Teams. Microsoft didn't build the best video conferencing tool—Zoom did. But Microsoft won in the enterprise by bundling Teams into Office 365. Businesses didn't pick Teams because it was better; they picked it because it was already there. The same playbook is unfolding in AI. Startups may push innovation forward, but incumbents control enterprise budgets, IT integration, and distribution. Microsoft, Google, and Salesforce don't need the best AI models—they just need good-enough AI, seamlessly embedded into their existing enterprise stack. That's how AI adoption happens—whoever owns the enterprise and consumer workflow wins.

This is why AI isn't another e-commerce disruption story. In the late 1990s, online upstarts like PayPal, Amazon, and eBay toppled brick-and-mortar giants because the internet leveled the playing field. But AI is different. It's not low-cost, high-speed disruption. It's capital-intensive, infrastructure-heavy, and favors scale. And Big Tech already owns the data, compute power, and enterprise relationships. That last point is critical. Proprietary, real-time enterprise data is the last true moat in AI.
Today's AI models are trained on 300 trillion tokens of publicly available text—but that data is running out. Epoch AI estimates that between 2026 and 2032, developers will hit a wall—there won't be enough high-quality public training data left. Large incumbents have the edge—but it's not automatic. They sit on the distribution rails, the enterprise relationships, and the proprietary data that startups can only dream of. But advantage without action is inertia. Now is the time to double down: integrate AI into existing systems, leverage data as a strategic asset, and partner where it adds speed or specificity. This isn't about chasing the next big thing—it's about making the last big thing work at scale.

We're obsessed with generative AI, but it's not the future. We're fixated on generative AI, but the future lies beyond chat-based models. Today's AI excels at summarizing reports and drafting emails but struggles with real-world complexity. It lacks situational awareness, complex reasoning, and the ability to synthesize multiple types of changing information in real time. That's why AI adoption lags in fields like medicine and logistics—where decisions require more than historical text. A chatbot can draft a contract, but it can't diagnose every patient or optimize a failing supply chain.

The next evolution is multimodal AI and compound AI systems—technologies that process multiple types of input and work together like human cognition. A self-driving car doesn't rely on a single data source; it integrates LiDAR, radar, GPS, and live sensors to navigate. AI will need to do the same, layering models that analyze vision, sound, text, and real-time data. Compound AI systems take this further, combining multiple models to create intelligence that learns, plans, and acts autonomously. Today, AI operates in silos—one model generates text, another detects fraud. Future AI will orchestrate these capabilities like an ensemble of specialists working together.

This is a signal for companies to plan ahead. The current generation of AI tools can offer some wins—but those wins are relatively narrow. Leaders should avoid overinvesting in single-purpose solutions and start building toward infrastructure that can support integrated, multimodal systems. That means investing in data architecture, workflow flexibility, and AI governance that can evolve as the technology does. The future of AI isn't about building a better chatbot. It's about designing systems that see, hear, analyze, and act in concert—at scale, and in sync with the complexity of the real world.

Can we think smartly about machines? In 1950, Alan Turing posed the now-famous question: 'Can machines think?' Seventy-five years later, we're evaluating AI on how well it reasons, predicts, and generates. Maybe it's time to turn that same lens on ourselves. Right now, we're collectively hallucinating our way into bad bets, misplaced priorities, and unrealistic timelines. Companies are treating AI as if it's a silver bullet, throwing billions at models while neglecting the harder work of integration, infrastructure, and real business value. Ultimately, the market will determine which companies and sectors capture AI's value. But one thing is certain: AI's ubiquity will erode its exclusivity. Its impact won't be in who owns it, but in how we use it. Turing's original question is still relevant. But today, the more important one is: 'Can we think smartly about machines?'
For enterprise leaders, that means shifting the focus from potential to performance. It means asking fewer questions about what AI might do—and more about what it's actually doing in your business. It means building for endurance, not headlines—investing in architecture, talent, and systems that can turn today's tools into tomorrow's competitive advantage.
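
The adoption-versus-intensity arithmetic cited earlier from the NBER study can be made concrete with a minimal back-of-the-envelope sketch. The decomposition and the specific input values below are illustrative assumptions chosen to match the rough magnitudes quoted above (about 40% adoption, use covering a few percent of work hours, a modest fraction of time saved on assisted tasks); they are not the study's published methodology or data.

```python
# Back-of-the-envelope: why broad adoption plus low intensity still yields
# an aggregate gain of well under 1%. Inputs are illustrative assumptions.

adoption_rate = 0.40            # share of workers using generative AI at all
assisted_share_of_hours = 0.03  # share of a user's work hours that are AI-assisted (assumed, within the 1-5% range)
time_saved_on_assisted = 0.30   # fraction of time saved on those assisted hours (assumed)

# Aggregate labour-time saved across the whole workforce is the product of the three fractions.
aggregate_gain = adoption_rate * assisted_share_of_hours * time_saved_on_assisted

print(f"Aggregate time saved: {aggregate_gain:.2%} of total work hours")
# -> roughly 0.4%: visible in headlines, barely visible in the productivity statistics
```

The point of the decomposition is simply that each factor is a fraction well below one, so even very wide adoption multiplies out to a small economy-wide effect unless intensity and per-task savings rise substantially.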

AI will eventually lead to a more extreme society of haves and have-nots

Globe and Mail

03-06-2025



The discussion these days is all about whether artificial intelligence will lead to a surge in productivity, lower inflation, higher economic growth, better living standards and value creation. The stock market seems to think so, as the shares of many AI-related companies have surged. This is leading many to believe that a bubble is forming. Such bubbles are not atypical. A 2018 paper published in Marketing Science titled Two Centuries of Innovations and Stock Market Bubbles shows that groundbreaking innovation tends to be linked to bubbles in the stock prices of companies commercializing the innovation. Those investors who remain overallocated to an innovative company after the bubble has ended suffer the effects of long-term reversals.

My sense is that AI will not improve productivity as much as markets expect. I tend to agree with Nobel Prize winner Daron Acemoglu, who believes that the AI-related frenzy will eventually lead to a tech crash that will leave everyone disillusioned with the technology. Historically, new technologies have been disappointing in terms of increasing productivity. There is no clear link between technological innovation and productivity growth, as defined by gross domestic product per worker. A recent example: Nobel Prize winner Robert Solow has written that the computer age was everywhere except in productivity statistics.

Here are three points to counter too much optimism about AI. First, it is hard to see the big societal problem that AI will solve. Second, AI may lower inflation but at the same time will increase demand for capital because of the huge investments and funding it will require in its early stages. This will push real interest rates up, leaving nominal interest rates little changed. Finally, AI may be hurting more than helping society. For example, researchers at Microsoft published a paper recently arguing that while AI may improve efficiency, it can also reduce human critical thinking capabilities and diminish independent problem solving. 'Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved,' it found.

Despite all this, I do believe that AI needs to be embedded in our day-to-day lives, as it is not going away. Whether we like it or not, we are headed full speed toward an AI-powered future. The key question is, will AI benefit and reach all people? Historically, that hasn't been the case when it comes to new technology, which is usually controlled by a few people. And it may be worse this time. In my opinion, AI will most likely create a more rigid class-based society: the upper class, which will include people who are on top of AI knowledge and applications, and the lower class – those who are not, and who will be left behind without embedding AI into their everyday life.

A recent article in The Globe and Mail hit the nail on the head. Author Don Tapscott, who is co-founder of the Blockchain Research Institute, said, 'AI will become a new social fault line. Those with intelligent agents [i.e., large language model-powered systems] will be superpowered; those without them will fall behind. A small class of enhanced individuals could dominate productivity, creativity and influence.' An AI-powered future and an AI class-based society remind me of the landlord-serf relationship in medieval Europe, which was a central feature of the feudal system and could end up being a central feature of the AI-powered future.
'Landlords' were typically those who controlled large estates, while 'serfs' were peasants bound to the land, obligated to work for the landlord and lacking many freedoms. The land cultivated by serfs was owned by a landlord. A large portion of what serfs produced had to be given to their landlord. Serfs lacked freedom of movement; they could not permanently leave their village, marry, change occupation, or dispose of their property without their landlord's permission. In the future, these landlords will be those in control of AI, and the serfs will be those without much knowledge about AI. This looks like a scary dystopian future, one that should push us all to prepare ourselves in AI ahead of time and to be in full command of AI agents, irrespective of whether we believe the technology will solve society's productivity problems or not.

George Athanassakos is a professor of finance and holds the Ben Graham Chair in Value Investing at the Ivey Business School, Western University. His latest book is Value Investing: From Theory to Practice.

Beyond Solow: Rethinking growth in the age of AI

Economic Times

17-05-2025



(Disclaimer: The opinions expressed in this column are those of the writer. The facts and opinions expressed here do not reflect the views of this publication.)

Long-run economic growth hinges on technological progress, a core insight of Robert Solow's renowned Growth Model. The model argues that once an economy reaches a "steady state," growth can't be sustained through capital or labour alone. Instead, ongoing technological advancements are essential for higher output. A key assumption in this model is that technology enhances labour productivity without replacing workers. However, the rise of artificial intelligence challenges this assumption, potentially reshaping our understanding of economic growth.

The Solow Model was developed in the 20th century, long before the emergence of advanced large language models. At that time, it was reasonable to assume that technological progress would boost productivity by enhancing rather than replacing human labour. This assumption matched the realities of that era. However, as artificial intelligence evolves, the idea that it might replace rather than simply support human labour is no longer speculative. It is becoming a visible trend. Leading economists have already begun to acknowledge this shift. In a 2019 study, Daron Acemoglu and Pascual Restrepo pointed to the rising wave of automation that could displace workers instead of making them more productive. Daniel Susskind, in his 2020 book A World Without Work, examined how machines might render large parts of the workforce unnecessary. Futurist Martin Ford made a similar case in his 2021 book Rule of the Robots, where he predicted that AI would transform nearly every aspect of life.

Clearly, economists and thinkers are increasingly warning of a future shaped by AI, where new jobs may not appear quickly enough to replace those lost, and the transition could be long and difficult. While some still hope for mostly positive outcomes, that seems less likely as AI becomes more capable and less limited to repetitive tasks. In this new environment, the assumption that technology only augments labour, as embedded in the Solow Model, may no longer hold.

Whether AI functions as a labour-augmenting or labour-replacing technology largely depends on the context and era in which it is deployed. If social and economic constraints make large-scale implementation of AI more costly than the economic benefits of replacing labour, then even highly capable AI, comparable to the average worker, may end up serving primarily as a tool to augment human labour. This would be a blessing in disguise for many workers whose jobs are otherwise at risk of automation. However, if the scalability of AI improves to the point where its labour-replacing benefits outweigh implementation costs, then the foundational assumption of the Solow Model begins to collapse. In such a scenario, the production function would continue to shift upward, signalling higher output, but with reduced labour input. As a result, we would need broader measures of prosperity beyond indicators like GDP per capita to accurately assess our economic well-being, especially as a growing share of output will get concentrated in the hands of a small elite made up primarily of business owners and top-tier technical specialists.
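To make the contrast concrete, a stylised sketch in standard growth-model notation follows. The Cobb-Douglas form and the simple "machine labour" automation variant are illustrative choices made here for exposition, in the spirit of (but not taken from) the task-based work of Acemoglu and Restrepo: in the Solow benchmark, technology multiplies labour, so output per worker grows with technology; in the automation variant, machines substitute directly for workers, so output can rise while employment and labour's income share fall.

```latex
% Solow benchmark: technology A is labour-augmenting (it multiplies L).
\[
  Y = K^{\alpha}\,(A\,L)^{1-\alpha}, \qquad 0 < \alpha < 1 .
\]
% On the balanced growth path, K/(AL) is constant, so output per worker grows
% at the rate of technological progress:
\[
  \frac{\dot{y}}{y} = \frac{\dot{A}}{A}, \qquad y \equiv \frac{Y}{L}.
\]

% Stylised labour-replacing variant: "machine labour" M (AI) substitutes
% one-for-one for human labour instead of augmenting it.
\[
  Y = K^{\alpha}\,(M + L)^{1-\alpha},
  \qquad \text{labour share} = (1-\alpha)\,\frac{L}{M+L}.
\]
% As M grows, Y can keep rising even if L shrinks, and labour's share of
% income falls -- the case in which the Solow assumption no longer holds.
```

In the first case technology raises what each worker produces; in the second, output can grow even as employment and labour's income share shrink, which is precisely the scenario in which GDP per capita stops being a good proxy for broad-based prosperity.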
At this stage, governments and societies may find themselves at a crossroads. Technological progress is irreversible, and businesses will inevitably adopt AI to remain competitive. Yet this path could lead to a troubling outcome, one where machines generate ever-increasing wealth but human participation in economic production shrinks.

The larger question is: where do these dynamics leave India? What kind of future should we realistically anticipate? If we take a step back and consider the broader implications, India could find itself at a complex and uncertain crossroads. On one hand, it is an economic, social, and political imperative to foster an environment that supports AI adoption to remain globally competitive. On the other hand, this path comes with significant costs. As AI becomes more capable, labour input is likely to decline. A small minority of highly paid technical specialists could come to dominate the already prestigious IT industry. While output may increase due to AI's capabilities, the gains are likely to accumulate in the hands of top-tier investors and business elites, thereby increasing inequality to unprecedented levels.

This makes collaboration between the government and the private sector crucial. First, we must collectively recognize that the global AI landscape is currently dominated by Western nations. Even if AI improves productivity in Indian firms, a significant portion of the value created could end up flowing abroad. To safeguard economic gains, the government must foster an environment that encourages private investors in India to develop their own large language models and AI infrastructure. Second, India should identify the sectors most vulnerable to AI-driven disruption. The country is still far from deploying AI at scale, particularly in labour-intensive industries such as agriculture and construction. These, along with manufacturing and textiles, remain relatively insulated for now and must be central to job creation strategies. However, according to the 2023–24 Economic Survey, agriculture employs 45% of the workforce, services 28%, construction 13%, and manufacturing 11%, which is in sharp contrast to China, where industrial employment remains around 30%. Compounding this is the fact that India's capital-to-labour ratio doubled between 1994–2002 and 2003–2017, reflecting a growing tendency among firms to favour capital investments over labour. This trend strengthens the economic incentive to adopt AI, further raising the risk of labour displacement. The imbalance is troubling because more young Indians are entering IT, finance, and consulting, which are sectors highly exposed to automation. If AI adoption leads to widespread job losses here, India could face a severe employment crisis, with limited fallback options.

Thus, we need a new paradigm of economic growth, one that moves beyond the Solow model's assumption of labour-augmenting technology. Emerging models, such as modern extensions of Romer's endogenous growth theory and Aghion and Howitt's Schumpeterian framework, begin to account for labour-replacing technologies. Though still evolving, these models offer a necessary foundation for deeper debates on India's economic future in the age of AI. Ultimately, India must tread carefully in its transition to AI. Non-IT sectors, long overlooked, may offer a crucial fallback for the country's youth. However, their prolonged neglect could undermine our economic ambitions at the very moment we need them most.

(Amit Kapoor is Chair and Mohammad Saad is a Researcher at the Institute for Competitiveness.)
