
Minds, machines & morality: Principles & paradoxes in

New Indian Express | 03-07-2025

Transparency and explainability are essential for building trust in AI. Yet deep learning models like GPT-4 are fundamentally 'black boxes': they generate outputs by computing complex probabilistic relationships across billions of parameters. In such cases, forcing machines to produce human-understandable explanations for their decisions is not just difficult; it can also be misleading and counterproductive. Accountability, too, becomes murky in distributed AI systems.

Meanwhile, the push for risk-based regulation, most visible in the EU AI Act, 2024, offers a pragmatic pathway but also suffers from rigidity. A tool categorized as low-risk today might be repurposed into a high-risk context tomorrow. Generative models that began as everyday assistants now offer professional-level medical and legal advice. Without continuous and rigorous reassessment, risk tiers may become outdated, allowing 'risk-washing' by actors who underreport capabilities to evade scrutiny.

So, where do we go from here to ensure that AI is regulated and serves humanity? First, AI regulations must be adaptive: built-in sunset clauses, regularly updated risk tiers and supervised sandboxes for testing innovation can act as dynamic components. Second, traceable causality should be combined with tiered liability across the AI lifecycle to assign accountability: developers for design flaws, deployers for misuse and auditors for systemic bias. Third, the law must promote ethical pluralism, since fairness cannot be standardized globally but can be localized. Fourth, AI oversight may be integrated with sectoral regulators such as finance, health and transport, rather than building new siloed verticals. And fifth, we need global governance coalitions that bridge nations on core values and build consensus.

In conclusion, regulating AI is not just a legal exercise; it is an ideological balancing act for human civilization. Like the samudra manthan in Hindu mythology, AI law must separate promise from peril, efficiency from exploitation and progress from predation. To do this, we must move towards responsive, adaptive, rights-anchored and risk-sensitive regulation.

Kunal Srivastava | Director (Finance), Department of Telecommunications

(Views are personal)
