
Rethinking AI: The lessons for India
The 2025 Human Development Report (HDR) of the United Nations Development Programme (UNDP) offers more than statistics and rankings. While the HDR is usually reported in the media for India's position on the Human Development Index (HDI), this year's report demands a much deeper engagement. Titled 'A Matter of Choice: People and Possibilities in the Age of AI', the report invites India, and the world, to reflect on how we are going to deal with transformative technologies such as AI, which will shape the future of the world. It forces nations to confront the urgent ethical, social, and political questions of our times, especially around AI. It places AI at the centre of the development discourse and raises a pressing question: will AI empower humanity or deepen inequality? In India, where rapid digital growth coexists with vast socio-economic gaps, the answer will be decided by the choices we make now.

A key section of the report highlights a troubling paradox. Despite unprecedented technological advances, global human development is stagnating. The rebound from the 2020-21 decline in the HDI is weak, and the gaps between high- and low-HDI countries are widening (a brief sketch of how the index itself is computed appears at the end of this section). AI is hailed as a transformative force – 'the new electricity' – and yet the lived reality for millions remains unchanged or has, in fact, worsened.

This paradox is highly relevant for India. Though it is the fastest-growing major economy and home to an expanding digital infrastructure, it faces persistent inequalities in education, healthcare, gender equity, and digital access. Without intentional and inclusive policy design, AI may deepen these divides.

Rather than treating AI as inherently good or bad, the HDR calls for a people-centric approach that gives primacy to human agency. The future of AI, it argues, must be guided by democratic values, ethical governance, and shared responsibility. If not, we risk replacing human agency with algorithmic control.

For India, this means building AI tools and institutions that serve the many, not just the few. India's growing digital platforms, along with its startup ecosystem, give it a strong foundation. But realising the full potential of AI will require conscious efforts to embed human rights, privacy, fairness, and inclusion into AI design, deployment, and governance.

The report makes a strong case for 'AI-augmented human development' rather than AI-led automation. It urges nations to create 'complementary economies' in which AI enhances human creativity and productivity rather than replacing them. This is critical for a labour-rich country like India, where the real challenge lies in generating decent-quality jobs.

The HDR also warns of rising geopolitical tensions and the growing weaponisation of AI. With China and the US competing to dominate AI development and markets, developing countries risk becoming dependent 'data colonies'. But AI is not merely an industrial or strategic arms race; it is a political and ethical choice. For India, the goal should not be dominance but dignity: building an AI model that respects its constitutional values, protects diversity, and serves all sections of society.

India has no choice but to tread carefully. As a founding member of the Global Partnership on AI (GPAI) and a leader of the Global South, it is well positioned to champion multilateral governance of AI that is inclusive and accountable. But it must avoid falling into techno-nationalism or strategic alignments that compromise its sovereignty or developmental goals.
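For readers who want to see what the HDI figures cited above actually measure, here is a minimal sketch of the standard UNDP method: the index is the geometric mean of health, education, and income sub-indices, each rescaled between fixed 'goalposts'. The input values in the example are illustrative placeholders, not figures from the 2025 report, and the function name is ours.

import math

# Goalposts from the UNDP technical notes; each dimension is rescaled to [0, 1].
LIFE_EXP_MIN, LIFE_EXP_MAX = 20.0, 85.0      # life expectancy at birth, years
MYS_MAX, EYS_MAX = 15.0, 18.0                # mean / expected years of schooling
GNI_MIN, GNI_MAX = 100.0, 75_000.0           # GNI per capita, PPP dollars

def hdi(life_expectancy, mean_years_schooling, expected_years_schooling, gni_per_capita):
    # Human Development Index: geometric mean of three dimension indices.
    health = (life_expectancy - LIFE_EXP_MIN) / (LIFE_EXP_MAX - LIFE_EXP_MIN)
    education = (mean_years_schooling / MYS_MAX + expected_years_schooling / EYS_MAX) / 2
    income = math.log(gni_per_capita / GNI_MIN) / math.log(GNI_MAX / GNI_MIN)
    return (health * education * income) ** (1 / 3)

# Illustrative placeholder inputs, not data from the HDR.
print(round(hdi(70.0, 6.5, 12.0, 7_000.0), 3))   # roughly 0.65

Because the index is a geometric mean, a weak score in any one dimension pulls the overall figure down more than an arithmetic average would, so income growth alone cannot fully compensate for lagging health or education.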
From user to leader

Another critical insight in the HDR is the emergence of an AI divide – a new form of inequality layered over existing development gaps. Countries at the AI frontier are moving at jet speed, while others are falling behind. India, though ambitious, lags in investment, infrastructure, and global influence in AI.

Interestingly, the report cites LinkedIn data showing that India has the world's highest self-reported AI skill penetration. But this alone is not enough. Are we producing AI creators or merely users? Are we building indigenous technologies or relying on foreign platforms? To move from aspiration to leadership, India must invest in research, computing capacity, open data frameworks, and talent retention.

The HDR rightly identifies the vacuum in AI governance. It calls for new models of regulation that are transparent, flexible, and responsive to societal needs. As our earlier experience suggests, in the absence of strong public institutions, private tech companies set the rules. This is a global problem but also a local opportunity. India must lead by example. As the world's largest democracy, it can propose frameworks rooted in constitutional rights, participatory governance, and public accountability. India can advocate for global AI standards that reflect the priorities of the Global South.

Finally, perhaps the most important contribution of the HDR is its emphasis on narrative. The way AI is discussed – as destiny, disruption, or deliverance – shapes public policy. The report warns against surrendering to narratives that glorify automation and ignore the social consequences of unchecked innovation. In India, the media, civil society, and academia must foster informed debates on AI. They must question hype, expose harm, and amplify marginal voices. India's rich democratic tradition offers fertile ground for such discourse. But this requires vigilance and active engagement, not passive adoption.

In a way, the UNDP's 2025 HDR offers a sobering but powerful message: human development is determined not by machines but by choices. The age of AI is a test not just of our intelligence but of our wisdom. India, with its unique demographic, technological, and democratic mix, has the opportunity to craft an alternative AI path – one that is inclusive, ethical, and globally relevant. In the end, the question is not whether AI will define our future. The question is: will we define AI to serve a future we believe in?

(The writer is a professor of journalism and Regional Director at the Indian Institute of Mass Communication, Dhenkanal)