
Be on your toes always: TGSPDCL boss to babus
Speaking at a meeting held recently at the company's headquarters in Mint Compound with Chief Engineers, Superintending Engineers, and Divisional Engineers, the CMD stated that of the 8,681 11kV feeders under the company's jurisdiction, 6,885 (about 79 per cent) are currently monitored through the Feeder Outage Management System. Efforts are underway to bring the remaining feeders under the system as well.
He announced that Artificial Intelligence (AI)-based services are being introduced at the distribution level to record details of power demand, supply, and interruptions online, providing immediate alerts to relevant engineers. Analysing these details, he noted, will enable the delivery of more efficient service.
While the Feeder Outage Management System is already operational at substations and feeders, the integration of AI-based services is expected to further assist in identifying supply issues and defects at the field level, leading to enhanced service provision for consumers.
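The article does not describe how the AI layer flags interruptions internally. As a rough illustration only, here is a minimal sketch of a rule-based alert of the kind described, assuming per-feeder load readings compared against a demand forecast; the reading format, threshold, and alert hook are all hypothetical, not details of TGSPDCL's actual system.

```python
from dataclasses import dataclass

# Illustrative sketch of a feeder-interruption alert. The reading format,
# dip_ratio threshold, and notify hook are assumptions, not TGSPDCL specifics.

@dataclass
class FeederReading:
    feeder_id: str        # e.g. an 11kV feeder identifier
    load_mw: float        # measured load for this interval
    expected_mw: float    # forecast demand for this interval

def notify_engineer(reading: FeederReading) -> None:
    """Stand-in for the 'immediate alert' channel (SMS, app push, etc.)."""
    shortfall = reading.expected_mw - reading.load_mw
    print(f"ALERT {reading.feeder_id}: load {reading.load_mw} MW vs "
          f"expected {reading.expected_mw} MW (shortfall {shortfall:.1f} MW)")

def check_feeder(reading: FeederReading, dip_ratio: float = 0.5) -> None:
    """Flag a likely interruption when load falls far below forecast."""
    if reading.expected_mw > 0 and reading.load_mw < dip_ratio * reading.expected_mw:
        notify_engineer(reading)

# Example: a feeder carrying a third of its forecast load trips an alert.
check_feeder(FeederReading("FDR-0421", load_mw=1.2, expected_mw=3.6))
```

In practice such a rule would run on each polling interval per feeder, with the forecast coming from the demand analysis the CMD describes; the simple ratio test above is only a stand-in for whatever model the utility actually deploys.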
The CMD also issued clear directives to officials, instructing them to monitor power supply daily at the Distribution Transformer (DTR) level. Divisional Engineers and Superintending Engineers were specifically told to focus on feeders and DTRs experiencing frequent issues, conduct thorough field visits, and take all necessary remedial actions promptly.

Noting that power demand, particularly within Greater Hyderabad, is increasing significantly each year, the CMD instructed officials to prepare detailed reports by 15 August outlining the measures required to effectively address this growing demand. He further emphasised that services such as sanctioning new connections must be processed strictly as per Standards of Performance (SOP) within the stipulated timeframes, ensuring that no consumer complaints arise.
Related Articles


Hindustan Times, 4 days ago
UP uses AI, drones, and satellites to stop illegal mining
The Uttar Pradesh government is now using modern technology, including Artificial Intelligence (AI), drones, and satellites, to stop illegal mining and the illegal transport of minerals in the state. Officials said that more than 21,477 vehicles involved in illegal transport have been blacklisted.

To monitor mining vehicles, the state has set up 57 smart check gates with help from the transport department. These gates use AI and Internet of Things (IoT) technology, along with Weigh-In-Motion (WIM) sensors, to detect and stop overloaded vehicles.

The directorate of geology and mining is also using satellite images and digital mapping tools like Google Earth, Arc-GIS, and LISS-IV data. These tools help it find illegal mining areas and discover new zones with untapped minerals. A dedicated remote sensing lab (PGRS Lab) is working on geological maps and watching over approved mining areas.

To improve tracking of mineral transport, the government is installing GPS devices in vehicles under the Vehicle Tracking System (VTS). These devices follow the AIS 140 standard and send real-time data to the VTS, providing alerts if a vehicle goes off-route and full reports to catch illegal activity. Also, for the first time, transporters are being registered as official stakeholders to make the system more transparent.

Drones are also playing a big role. They are used to measure the length, width, and depth of mining sites, which tells the government exactly how much mining has taken place. Drones are also used to check how much mineral is stored and to mark out areas for legal mining.

The new system has improved transparency and has stopped the operations of illegal mining groups. Officials said that this strong and smart use of technology is helping protect the environment and improve governance. Uttar Pradesh's methods are now being seen as a model for other states to follow.
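The off-route alert the report mentions reduces to a geometric check: how far is the latest GPS fix from the sanctioned route? Below is a minimal sketch, assuming the route is a polyline of lat/lon waypoints and a hypothetical 500 m tolerance; UP's actual VTS parameters are not public.

```python
import math

# Illustrative off-route check on AIS 140-style GPS pings. The route format
# and 500 m threshold are assumptions, not details from the article.

EARTH_RADIUS_M = 6_371_000

def _to_xy(lat, lon, ref_lat):
    """Project lat/lon to local metres (equirectangular approximation)."""
    x = math.radians(lon) * EARTH_RADIUS_M * math.cos(math.radians(ref_lat))
    y = math.radians(lat) * EARTH_RADIUS_M
    return x, y

def _dist_to_segment(p, a, b):
    """Distance in metres from point p to segment a-b (all in local xy)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def off_route(position, route, threshold_m=500):
    """True if a GPS fix lies more than threshold_m from the planned route."""
    ref_lat = position[0]
    p = _to_xy(*position, ref_lat)
    pts = [_to_xy(lat, lon, ref_lat) for lat, lon in route]
    nearest = min(_dist_to_segment(p, a, b) for a, b in zip(pts, pts[1:]))
    return nearest > threshold_m

# Example: a truck pinging ~7 km from its sanctioned route trips the alert.
route = [(26.85, 80.95), (26.90, 81.00), (26.95, 81.05)]
print(off_route((26.90, 81.10), route))  # True -> raise off-route alert
```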


Economic Times, 4 days ago
Biased output, privacy violation, unauthorised data sharing due to AI can have serious ramifications for India Inc
Mumbai: As India Inc rushes to adopt AI, internal auditors are sounding the alarm over a perceived lack of robust controls, ethical safeguards, and governance frameworks. Auditors are warning that many companies may be inadvertently exposing themselves to serious risks such as biased outputs, privacy violations, unauthorised data sharing, and opaque decision-making by AI models, potentially causing legal and financial liabilities and operational failures. They are worried that boards and top management are largely unaware of the scale of AI experimentation taking place at the ground level in Indian companies.

A Microsoft study showed that India's adoption of Artificial Intelligence (AI) is rapid, with 65% of surveyed Indians having used AI, more than double the global average of 31%. "Most organisations are using AI with the best intentions, whether for efficiency, productivity, faster time to market, staying ahead of the curve, or market defence. But, in their pursuit of these goals, governance may not have received as much attention," said Ritesh Tiwari, partner and national leader for governance, risk & compliance services at KPMG in India.

Auditors said the biggest concern currently is the lack of adequate board-level engagement on the issue. "Ethical usage of AI has to be a boardroom topic. Without the top-down commitment, it's a ticking time bomb. One wrong deployment, one biased output, and the reputational damage could be massive," said Tiwari. "We do tests for bias where mandated, but in many companies, it's not yet part of their risk culture. That's something we're pushing to change."

Globally, there have been several high-profile instances of AI failures, from Amazon's recruiting tool showing bias against women to Microsoft's Tay chatbot generating offensive content, Facebook's Cambridge Analytica data scandal, and McDonald's drive-thru AI mishaps. As Indian organisations become increasingly aware of the risks associated with AI deployments, they are also strengthening their oversight and governance mechanisms to manage these challenges.

"Organisations have now started to ask that internal audits specifically cover the AI adoption lifecycle, looking at everything from pre-development planning to post-deployment performance," said Sunil Bhadu, partner and India GRC leader at PwC India. "The focus is typically on the governance framework and policies, design compliance with industry standards, workflow integrity, and robust working processes to ensure that these models aren't hallucinating or generating misleading results."

Internal auditors emphasise that Indian firms have to realise the value of building trustworthy AI by design. "With global incidents highlighting the risks, mature organisations are proactively seeking guidance on managing AI-related risks. They're turning to recent frameworks like the EU AI Act, the NIST AI Risk Management Framework, and Deloitte's own Trustworthy AI model, asking: can our systems be reviewed and benchmarked against these frameworks to ensure responsible development and deployment?" said Anthony Crasto, president, assurance at Deloitte South Asia.

Another crucial factor in internal audits of AI models is cybersecurity and privacy, given that AI systems often process sensitive personal and business data, use complex codebases, and can be vulnerable to adversarial attacks or prompt injection if not properly secured.
"From a security and privacy lens, we're also testing the algorithm logic, reviewing adherence to secure coding principles and business objectives, and data protection norms, and carrying out vulnerability assessments, etc. Data privacy is a growing area of concern," said Crasto. On their part, auditors are doing closer scrutiny of aspects like training, testing, and validation of these models, since flaws at the foundational stages could lead to biased, unreliable, or even unsafe outputs once deployed. "We ask the companies: Is there a documented AI policy? Is it aligned with global frameworks? We assess the policy's treatment of privacy, security, responsibility, human-in-the-loop accountability, transparency, model bias and explainability," said Crasto.


Times of India, 4 days ago
Inside Meta's Superintelligence Lab: The scientists Mark Zuckerberg handpicked; the race to build real AGI
Mark Zuckerberg has rarely been accused of thinking small. After attempting to redefine the internet through the metaverse, he's now set his sights on a more ambitious frontier: superintelligence, the idea that machines can one day match, or even surpass, the general intelligence of humans. To that end, Meta has created an elite unit with a name that sounds like it belongs in a sci-fi script: Meta Superintelligence Lab (MSL). But this isn't fiction. It's a real-world, founder-led moonshot, powered by aggressive hiring, audacious capital, and a cast of technologists who've quietly shaped today's AI landscape. This is not just a story of algorithms and GPUs. It's about power, persuasion, and the elite brains Zuckerberg believes will push Meta into the next epoch of intelligence.

The architects: Who's running Meta's AGI ambitions?
Zuckerberg has never been one to let bureaucracy slow him down. So he didn't delegate the hiring for MSL; he did it himself. The three minds now driving this initiative are not traditional corporate executives. They are product-obsessed builders, technologists who operate with startup urgency and an almost missionary belief in Artificial General Intelligence (AGI).

Name | Role at MSL | Past lives | Education
Alexandr Wang | Chief AI Officer, Head of MSL | Founder, Scale AI | MIT dropout (Computer Science)
Nat Friedman | Co-lead, Product & Applied AI | CEO, GitHub; Microsoft executive | B.S. Computer Science & Math, MIT
Daniel Gross | Joining soon, role TBD | Co-founder, Safe Superintelligence; ex-Apple, YC | No degree; accepted into Y Combinator at 18

Wang, once dubbed the world's youngest self-made billionaire, is a data infrastructure prodigy who understands what it takes to feed modern AI. Friedman, a revered figure in the open-source community, knows how to productise deep tech. And Gross, who reportedly shares Zuckerberg's intensity, brings a perspective grounded in AI alignment and risk. Together, they form a high-agency, no-nonsense leadership core: Zuckerberg's version of a Manhattan Project trio.

The scientists: 11 defections that shook the AI world
If leadership provides the vision, the next 11 are the ones expected to engineer it. In a hiring spree that rattled OpenAI, DeepMind, and Anthropic, Meta recruited some of the world's most sought-after researchers, those who helped build GPT-4, Gemini, and several of the most important multimodal models of the decade.

Name | Recruited from | Expertise | Education
Jack Rae | DeepMind | LLMs, long-term memory in AI | CMU, UCL
Pei Sun | DeepMind | Structured reasoning (Gemini project) | Tsinghua, CMU
Trapit Bansal | OpenAI | Chain-of-thought prompting, model alignment | IIT Kanpur, UMass Amherst
Shengjia Zhao | OpenAI | Alignment, co-creator of ChatGPT, GPT-4 | Tsinghua, Stanford
Ji Lin | OpenAI | Model optimization, GPT-4 scaling | Tsinghua, MIT
Shuchao Bi | OpenAI | Speech-text integration | Zhejiang, UC Berkeley
Jiahui Yu | OpenAI/Google | Gemini vision, GPT-4 multimodal | USTC, UIUC
Hongyu Ren | OpenAI | Robustness and safety in LLMs | Peking Univ., Stanford
Huiwen Chang | Google | Muse, MaskIT (next-gen image generation) | Tsinghua, Princeton
Johan Schalkwyk | Sesame AI/Google | Voice AI, led Google's voice search efforts | Univ. of Pretoria
Joel Pobar | Anthropic/Meta | Infrastructure, PyTorch optimization | QUT (Australia)

This roster isn't just impressive on paper; it's a coup.
Several were responsible for core components of GPT-4's reasoning, efficiency, and voice capabilities. Others led image generation innovations like Muse or built memory modules crucial for scaling up AI's attention spans. Meta's hires reflect a global brain gain: most completed their undergraduate education in China or India and pursued PhDs in the US or UK. It's a clear signal to students: brilliance isn't constrained by geography.

What Meta offered: Money, mission, and total autonomy
Convincing this calibre of talent to switch sides wasn't easy. Meta offered more than mission; it offered unprecedented compensation.
• Some were offered up to $300 million over four years.
• Sign-on bonuses of $50–100 million were on the table for top OpenAI researchers.
• The first year's payout alone reportedly crossed $100 million for certain hires.
This level of compensation places them above most Fortune 500 CEOs, not for running a company, but for building the future. It's also part of a broader message: Zuckerberg is willing to spend aggressively to win this race. OpenAI's Sam Altman called it "distasteful." Others at Anthropic and DeepMind described the talent raid as "alarming." Meta, meanwhile, has made no apologies. In the words of one insider: "This is the team that gets to skip the red tape. They sit near Mark. They move faster than anyone else at Meta."

The AGI problem: Bigger than just scaling up
But even with all the talent and capital in the world, AGI remains the toughest problem in computer science. The goal isn't to make better chatbots or faster image generators. It's to build machines that can reason, plan, and learn like humans. Why is that so hard?
• Generalisation: Today's models excel at pattern recognition, not abstract reasoning. They still lack true understanding.
• Lack of theory: There is no grand unified theory of intelligence. Researchers are working without a blueprint.
• Massive compute: AGI may require an order of magnitude more compute than even GPT-4 or Gemini.
• Safety and alignment: Powerful models can behave in unexpected, even dangerous ways. Getting them to want what humans want remains an unsolved puzzle.
To solve these, Meta isn't just scaling up; it's betting on new architectures, new training methods, and new safety frameworks. It's also why several of its new hires have deep expertise in AI alignment and multimodal reasoning.

What this means for students aiming for a future in AI
This story isn't just about Meta. It's about the direction AI is heading, and what it takes to get to the frontier. If you're a student in India wondering how to break into this world, take notes:
• Strong math and computer science foundations matter. Most researchers began with robust undergraduate training before diving into AI.
• Multimodality, alignment, and efficiency are key emerging areas. Learn to work across language, vision, and reasoning.
• Internships, open-source contributions, and research papers still open doors faster than flashy resumes.
• And above all, remember: AI is as much about values as it is about logic. The future won't just be built by engineers; it'll be shaped by ethicists, philosophers, and policy thinkers too.