
India prepares AI incident reporting standard as failures may hold clues to managing risks
India is framing guidelines for companies, developers and public institutions to report artificial intelligence-related incidents as the government seeks to create a database to understand and manage the risks AI poses to critical infrastructure.
The proposed standard aims to record and classify problems such as AI system failures, unexpected results, or harmful effects of automated decisions, according to a new draft from the Telecommunications Engineering Centre (TEC). Mint has reviewed the document released by the technical arm of the Department of Telecommunications (DoT).
The guidelines will ask stakeholders to report events such as telecom network outages, power grid failures, security breaches, and AI mismanagement, and document their impact, according to the draft.
"Consultations with stakeholders are going on pertaining to the draft standard to document such AI-related incidents. TEC's focus is primarily on the telecom and other critical digital infrastructure sectors such as energy and power," said a government official, speaking on the condition of anonymity. "However, once a standard to record such incidents is framed, it can be used interoperably in other sectors as AI is being used everywhere."
The plan is to create a central repository and pitch the standard globally to the United Nations' International Telecommunication Union, the official said.
Recording and analysing AI incidents is important because system failures, bias, privacy breaches, and unexpected results have raised concerns about how the technology affects people and society.
"AI systems are now instrumental in making decisions that affect individuals and society at large," TEC said in the document proposing the draft standard. "Despite their numerous benefits, these systems are not without risks and challenges."
Queries emailed to TEC didn't elicit a response till press time.
Incidents similar to the recent CrowdStrike outage, the largest IT outage in history, can be reported under India's proposed standard. Malfunctions in chatbots, cyber breaches, telecom service-quality degradation, IoT sensor failures and the like will also be covered.
The draft requires developers, companies, regulators, and other entities to report the name of the AI application involved in an incident, the cause, location, and industry/sector affected, as well as the severity and kind of harm it caused.
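The draft itself has not been published in full, so the exact field names and submission format are not public. The snippet below is only a minimal, hypothetical sketch in Python of what a single incident record capturing those reported attributes might look like; every class and field name is an assumption made for illustration, not taken from the TEC document.

```python
# Hypothetical sketch of an AI incident record based on the attributes the draft
# reportedly asks for; names and structure are illustrative, not from the TEC draft.
from dataclasses import dataclass, asdict
from datetime import date
import json


@dataclass
class AIIncidentRecord:
    application_name: str   # AI application involved in the incident
    cause: str              # suspected root cause
    location: str           # where the incident occurred
    sector: str             # industry/sector affected, e.g. "telecom", "power"
    severity: str           # e.g. "low", "medium", "high", "critical"
    harm_type: str          # kind of harm caused, e.g. "service outage", "bias"
    occurred_on: date       # date of the incident
    description: str = ""   # free-text summary of the impact

    def to_json(self) -> str:
        """Serialise the record for submission to a central repository."""
        payload = asdict(self)
        payload["occurred_on"] = self.occurred_on.isoformat()
        return json.dumps(payload, indent=2)


# Example: a telecom service-degradation incident
record = AIIncidentRecord(
    application_name="network-traffic-optimiser",
    cause="model drift after a software update",
    location="Mumbai",
    sector="telecom",
    severity="high",
    harm_type="service quality degradation",
    occurred_on=date(2025, 6, 1),
    description="Automated traffic steering degraded call quality for about two hours.",
)
print(record.to_json())
```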
Like OECD AI Monitor
The TEC's proposal builds on a recommendation from a sub-committee of the Ministry of Electronics and Information Technology (MeitY) on 'AI Governance and Guidelines Development'. The panel's report in January had called for the creation of a national AI incident database to improve transparency, oversight, and accountability. MeitY is also developing a comprehensive AI governance framework for the country, with a focus on fostering innovation while ensuring responsible and ethical development and deployment of AI.
According to the TEC, the draft defines a standardized schema for AI incident databases in telecommunications and critical digital infrastructure. "It also establishes a structured taxonomy for classifying AI incidents systematically. The schema ensures consistency in how incidents are recorded, making data collection and exchange more uniform across different systems," the draft document said.
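TEC has not disclosed the taxonomy itself. The sketch below only illustrates, using assumed category and severity names drawn from the examples cited earlier (chatbot malfunctions, cyber breaches, telecom quality degradation, IoT sensor failures), how a structured classification could keep records uniform across systems; none of these names or rules come from the draft.

```python
# Hypothetical classification taxonomy; categories, levels, and the toy
# severity rule are illustrative assumptions, not taken from the TEC draft.
from enum import Enum


class IncidentCategory(Enum):
    CHATBOT_MALFUNCTION = "chatbot_malfunction"
    CYBER_BREACH = "cyber_breach"
    TELECOM_QOS_DEGRADATION = "telecom_qos_degradation"
    IOT_SENSOR_FAILURE = "iot_sensor_failure"
    POWER_GRID_FAILURE = "power_grid_failure"
    OTHER = "other"


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


def classify(category: IncidentCategory, affected_users: int) -> Severity:
    """Toy rule mapping an incident to a severity level by scale of impact."""
    if category in (IncidentCategory.POWER_GRID_FAILURE, IncidentCategory.CYBER_BREACH):
        return Severity.CRITICAL
    if affected_users > 100_000:
        return Severity.HIGH
    if affected_users > 1_000:
        return Severity.MEDIUM
    return Severity.LOW


print(classify(IncidentCategory.TELECOM_QOS_DEGRADATION, 250_000).name)  # HIGH
```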
India's proposed framework is similar to the AI Incidents Monitor of the Organization for Economic Co-operation and Development (OECD), which documents incidents to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable information about the real-world risks and harms posed by the technology.
"So far, most of the conversations have been primarily around first principles of ethical and responsible AI. However, there is a need to have domain and sector-specific discussions around AI safety," said Dhruv Garg, a tech policy lawyer and partner at the Indian Governance and Policy Project (IGAP).
"We need domain specialist technical bodies like TEC for setting up a standardized approach to AI incidents and risks of AI for their own sectoral use cases," Garg said. "Ideally, the sectoral approach may feed into the objective of the proposed AI Safety Institute at the national level and may also be discussed internationally through the network of AI Safety Institutes."
Need for self-regulation
In January, MeitY announced the IndiaAI Safety Institute under the ₹10,000 crore IndiaAI Mission to address AI risks and safety challenges. The institute focuses on risk assessment and management, ethical frameworks, deepfake detection tools, and stress testing tools.
"Standardisation is always beneficial as it has generic advantages," said Satya N. Gupta, former principal advisor at the Telecom Regulatory Authority of India (Trai). "Telecom and Information and Communication Technology (ICT) cuts across all sectors and, therefore, once standards to mitigate AI risks are formed here, then other sectors can also take a cue."
According to Gupta, recording AI issues should start with guidelines and self-regulation, as enforcing these norms would increase the compliance burden on telecom operators and other companies.
The MeitY sub-committee had recommended that the AI incident database should not be started as an enforcement tool and its objective should not be to penalise people who report AI incidents. "There is a clarity within the government that the plan is not to do fault finding with this exercise but help policy makers, researchers, AI practitioners, etc., learn from the incidents to minimize or prevent future AI harms," the official cited above said.
