
Advanced Transport Management System to start on three highways
These three routes include the Pune-Solapur Highway, Pune-Nashik Highway, and Pune-Satara Highway.
The ATMS is a set of technologies and software used to improve the efficiency, safety, and reliability of transport networks. It integrates systems such as traffic sensors, cameras, and communication networks to provide real-time traffic information. "A proposal pertaining to this has been sent to the Union ministry of road transport and highways, and we expect approval soon. With this system being introduced, we aim to check for different traffic violations and identify the exact causes of congestion on these highways," a senior NHAI official in Pune told TOI.
According to officials, high-end CCTV cameras capable of rotating 360 degrees will be installed every 1km along all three highways.
"These cameras will have a range of 500m on all sides. In addition, modern cameras will be placed every 500m at gantries of the road.
These AI-enabled cameras will be able to spot any violations and read the registration number of the vehicles involved, following which an e-challan will be generated automatically without any manual intervention," the official added.
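The flow the official describes (detect a violation, read the number plate, issue an e-challan with no manual step) can be pictured as a simple pipeline. The sketch below is illustrative only, assuming hypothetical stand-ins for the camera feed, plate reader, and challan issuer; NHAI's actual ATMS software is not publicly documented.

```python
# Illustrative sketch of an automated violation-to-e-challan pipeline.
# All names (ViolationEvent, read_plate, issue_echallan) are hypothetical
# stand-ins, not NHAI's real ANPR/ATMS stack.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ViolationEvent:
    camera_id: str        # gantry or pole camera that flagged the event
    violation: str        # e.g. "overspeeding", "lane cutting"
    plate_image: bytes    # cropped number-plate image from the camera

def read_plate(plate_image: bytes) -> str:
    """Stand-in for an ANPR/OCR model that reads the registration number."""
    return "MH12AB1234"   # dummy result for illustration

def issue_echallan(registration: str, violation: str, camera_id: str) -> dict:
    """Stand-in for the step that files an e-challan with the authority."""
    return {
        "registration": registration,
        "violation": violation,
        "camera_id": camera_id,
        "issued_at": datetime.now(timezone.utc).isoformat(),
    }

def process(event: ViolationEvent) -> dict:
    # No manual intervention: detection -> plate read -> challan.
    registration = read_plate(event.plate_image)
    return issue_echallan(registration, event.violation, event.camera_id)

if __name__ == "__main__":
    event = ViolationEvent("gantry-042", "overspeeding", b"<jpeg bytes>")
    print(process(event))
```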
The Pune-Nashik Highway is approximately 213km long, the Pune-Satara Highway around 141km, and the Pune-Solapur Highway 246km.
Another NHAI official said the new system is similar to the Intelligent Traffic Management System (ITMS) deployed on the Pune-Mumbai Expressway by the Maharashtra State Road Development Corporation Limited (MSRDC).

Related Articles

Business Standard
Godrej Capital eyes 2-fold AUM rise by FY28, rules out capital infusion
Press Trust of India, Mumbai

Non-bank lender Godrej Capital's assets under management have grown to about ₹18,000 crore, and the company is targeting to close FY26 at over ₹25,000 crore, a top official has said. Over the next two years, FY27 and FY28, it plans to nearly double the AUM to ₹50,000 crore, its managing director and chief executive Manish Shah told PTI.

The company, in which parent Godrej Industries owns over 90 per cent stake, has sufficient capital right now as most of the capital committed by the group has already come in, he said. "Until such time as we list, the capital will come from the group. Most of it we have already received. We don't need a lot more capital over the next few years," Shah said. The parent firm has infused an additional ₹285 crore into the company to increase its stake by 1.41 per cent to a total of 90.89 per cent.

Shah said the company is also delivering profits now, which can be ploughed back into the business for asset growth, but made it clear that the ultimate aim of the company is to list on the exchanges like fellow group companies such as Godrej Properties. At present, about ₹3,000 crore of assets, or about a sixth of the overall AUM, has been sourced from group companies, Shah said, adding that this reliance will keep going down over time. Godrej Capital is looking at entering the supply chain finance segment, and this is where it may turn to fellow group companies for help in sourcing customers looking for finance, he said. As per a recent regulatory disclosure, its consolidated income grew to ₹1,620 crore in FY25, from ₹889.14 crore in the year-ago period.

Shah, who was speaking on the sidelines of a company event to announce a partnership with customer relationship management company Salesforce, said it is very difficult for businesses to get the technology architecture right. Service of the customer should be the first motivation while embarking on a technology journey, he said, adding that the right balance, keeping in mind requirements like data privacy, can ensure a win-win for all stakeholders. It is taking Salesforce's help to streamline the front end used by customer-facing executives, Shah said, adding that the core system on which the company runs remains intact. Salesforce will help Godrej put in place the digital lending infrastructure needed to make loan processing faster and improve the customer experience, the CRM company's president and chief executive for South Asia, Arundhati Bhattacharya, said.

Replying to concerns about AI taking away jobs, Bhattacharya said Indians should not resist such a technology transition but leap into it instead. She said there will be some pain as AI adoption increases, but added that she does not believe AI will lead to fewer jobs. However, the nature of jobs may undergo a change, the career SBI banker, who joined Salesforce after retiring as chairman, said.


Time of India
AI might now be as good as humans at detecting emotion, political leaning, sarcasm in online conversations
When we write something to another person, over email or perhaps on social media, we may not state things directly, but our words may instead convey a latent meaning, an underlying subtext. We also often hope that this meaning will come through to the reader. But what happens if an artificial intelligence (AI) system is at the other end, rather than a person? Can AI, especially conversational AI, understand the latent meaning in our text? And if so, what does this mean for us?

Latent content analysis is an area of study concerned with uncovering the deeper meanings, sentiments and subtleties embedded in text. For example, this type of analysis can help us grasp political leanings present in communications that are perhaps not obvious to everyone. Understanding how intense someone's emotions are, or whether they are being sarcastic, can be crucial in supporting a person's mental health, improving customer service, and even keeping people safe at a national level. These are only some examples. We can imagine benefits in other areas of life, like social science research, policy-making and business.

Given how important these tasks are, and how quickly conversational AI is improving, it is essential to explore what these technologies can (and can't) do in this regard. Research on this issue is only just starting. Current work shows that ChatGPT has had limited success in detecting political leanings on news websites. Another study, which focused on differences in sarcasm detection between different large language models - the technology behind AI chatbots such as ChatGPT - showed that some are better than others. Additionally, a study showed that LLMs can guess the emotional "valence" of words - the inherent positive or negative "feeling" associated with them.

Our new study, published in Scientific Reports, tested whether conversational AI, inclusive of GPT-4 - a relatively recent version of ChatGPT - can read between the lines of human-written texts. The goal was to find out how well LLMs simulate understanding of sentiment, political leaning, emotional intensity and sarcasm, thus encompassing multiple latent meanings in one study. The study evaluated the reliability, consistency and quality of seven LLMs, including GPT-4, Gemini, Llama-3.1-70B and Mixtral 8x7B, and found that these LLMs are about as good as humans at analysing sentiment, political leaning, emotional intensity and sarcasm detection. The study involved 33 human subjects and assessed 100 curated items of text.

For spotting political leanings, GPT-4 was more consistent than humans. That matters in fields like journalism, political science, or public health, where inconsistent judgement can skew findings or miss patterns. GPT-4 also proved capable of picking up on emotional intensity and especially valence. Whether a tweet was composed by someone who was mildly annoyed or deeply outraged, the AI could tell, although someone still had to confirm if the AI was correct in its assessment. This was because AI tends to downplay emotions. Sarcasm remained a stumbling block both for humans and machines; the study found no clear winner there, hence using human raters doesn't help much with sarcasm detection.

Why does this matter? For one, AI like GPT-4 could dramatically cut the time and cost of analysing large volumes of online content. Social scientists often spend months analysing user-generated text to detect trends. GPT-4, on the other hand, opens the door to faster, more responsive research, especially important during crises, elections or public health emergencies. Journalists and fact-checkers might also benefit. Tools powered by GPT-4 could help flag emotionally charged or politically slanted posts in real time, giving newsrooms a head start.

There are still concerns. Transparency, fairness and political leanings in AI remain issues. However, studies like this one suggest that when it comes to understanding language, machines are catching up to us fast, and may soon be valuable teammates rather than mere tools. While this work doesn't claim conversational AI can replace human raters completely, it does challenge the idea that machines are hopeless at detecting nuance.

The study's findings do raise follow-up questions. If a user asks the same question of an AI in multiple ways, perhaps by subtly rewording prompts, changing the order of information, or tweaking the amount of context provided, will the model's underlying judgements and ratings remain consistent? Further research should include a systematic and rigorous analysis of how stable the models' outputs are. Ultimately, understanding and improving consistency is essential for deploying LLMs at scale, especially in high-stakes settings.
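As a rough illustration of how such a latent-content rating task can be posed to an LLM, the sketch below asks a chat model to score a text for sentiment, political leaning, emotional intensity and sarcasm. It uses the openai Python SDK as one example client; the prompt wording, model name and output schema are assumptions for illustration, not the methodology of the study described above.

```python
# Sketch: asking a chat LLM to rate latent content in a text.
# The prompt, model name and JSON schema are illustrative assumptions,
# not the protocol used in the Scientific Reports study.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = (
    "Rate the following text and reply with JSON only, using these keys: "
    "sentiment (-1 to 1), political_leaning (left/center/right), "
    "emotional_intensity (0 to 1), sarcastic (true/false).\n\nText: {text}"
)

def rate_latent_content(text: str, model: str = "gpt-4o-mini") -> dict:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
        temperature=0,  # near-deterministic ratings aid consistency checks
    )
    # Assumes the model honours the JSON-only instruction.
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    print(rate_latent_content("Oh great, another Monday. Just what I needed."))
```

Running the same item through the model several times, with reworded prompts, is one simple way to probe the consistency question the article raises.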


Hans India
Why responsible AI is key to building a better, brighter future for the world
Artificial intelligence has become the defining technology of our era, with recent years marking remarkable milestones in AI development. As the debate around AI technology heats up, people are actively discussing its usage, impact, and transformative potential across industries. However, amidst all the excitement and concern, there is notably less conversation about the responsibility that comes with this powerful technology. This gap in discourse presents both a challenge and an opportunity to shape how we approach AI governance.

Breakthrough models like OpenAI's o3 achieved near-human performance on complex reasoning tasks, while AI applications continue to expand across industries in ways that seemed impossible just a few years ago. Yet as we witness this extraordinary progress, we find ourselves at a crucial juncture where thoughtful governance can shape AI's trajectory for maximum human benefit. The question isn't whether AI will transform our world; it's how we'll guide that transformation.

Historically, humanity has taken a reactive approach to governing powerful technologies. The atomic bombing and destruction of two cities during World War II ultimately paved the way for the formation of the United Nations, a response born from tragedy rather than foresight. Rather than repeating this pattern of waiting for catastrophe to drive action, we have an unprecedented opportunity with AI to be proactive. This is the right time for us to debate and establish a world governing council for responsible AI, creating international governance frameworks while embracing AI's transformative potential. This proactive approach represents a fundamental shift from reactive crisis management to thoughtful stewardship of emerging technology.

Frankly speaking, recent developments highlight both AI's transformative power and the importance of responsible deployment. Google's Gemini 2.0 and Anthropic's Claude 4 demonstrate unprecedented capabilities in autonomous planning and execution, opening new possibilities for scientific research and problem-solving. However, this rapid advancement brings important considerations that deserve our attention. Workforce transitions affecting 14 per cent of workers require thoughtful retraining programs and support systems. Meanwhile, AI's growing energy footprint presents ongoing opportunities for innovation in sustainable computing and renewable energy integration.

Artificial intelligence and ethical considerations

The responsible development of AI requires addressing important ethical considerations with the same rigor we apply to the technology itself. Workforce transition support becomes crucial as industries evolve, creating opportunities for new skilled roles while ensuring affected workers receive retraining and meaningful career paths. Environmental stewardship is equally important. While major tech companies have seen significant emissions increases, largely due to AI infrastructure expansion, this challenge drives innovation in green computing and renewable energy solutions. Additionally, securing personal data becomes paramount as AI systems grow more sophisticated, with recent studies showing 77 per cent of businesses experiencing AI-related data challenges, highlighting the need for robust privacy frameworks that protect individuals while enabling beneficial AI applications. Current data protection frameworks like Europe's GDPR provide excellent foundations, but AI's unique characteristics require evolved approaches.

Unlike traditional data processing, AI systems integrate information in ways that make conventional privacy protections challenging to implement. This creates opportunities to develop next-generation privacy frameworks that address AI's permanent data integration while maintaining cross-border collaboration. The goal is to create comprehensive global standards that protect individual privacy while enabling AI's beneficial applications across borders. Understanding how personal data interacts with AI systems empowers users to make informed choices about their digital interactions. When individuals interact with ChatGPT or similar systems, they contribute to model training, creating opportunities for both personalized assistance and privacy considerations. This presents an opportunity to develop AI systems that provide personalized benefits while maintaining strong privacy protections through advanced techniques like differential privacy and federated learning, as the sketch below illustrates.
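To make differential privacy concrete: the core idea is to add calibrated random noise to aggregate statistics so that no single individual's data can be inferred from the output. A minimal sketch follows, assuming the standard Laplace mechanism; the function name and parameters are illustrative, not drawn from any specific framework.

```python
# Minimal sketch of the Laplace mechanism for differential privacy:
# noise scaled to sensitivity/epsilon hides any one person's contribution.
import random

def dp_count(values: list[bool], epsilon: float = 0.5) -> float:
    """Differentially private count of True entries.

    The sensitivity of a count is 1 (one person changes it by at most 1),
    so Laplace noise with scale 1/epsilon yields epsilon-DP.
    """
    true_count = sum(values)
    # Difference of two Exp(epsilon) draws is Laplace with scale 1/epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

if __name__ == "__main__":
    opted_in = [True, False, True, True, False, True]
    print(dp_count(opted_in))  # noisy count; the exact tally stays protected
```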
AI's extraordinary potential

However, the story doesn't end with challenges; it begins with extraordinary potential. AI's capacity for human advancement continues to inspire remarkable breakthroughs. NASA's Perseverance rover uses AI for autonomous Mars exploration, opening new frontiers for space discovery. AI-powered stroke detection systems prove twice as accurate as traditional methods, saving lives through early intervention. The AI healthcare market has grown to over $32 billion, with applications reducing emergency room wait times and accelerating drug discovery for previously incurable diseases. These beneficial applications, from climate monitoring to space exploration, demonstrate AI's extraordinary capacity to address humanity's greatest challenges.

The 57 countries that signed the Paris Declaration on AI governance last year recognize this tremendous opportunity. Building on this foundation, we can establish a global governing body that fosters international cooperation while ensuring AI development serves human flourishing. Like the international frameworks for nuclear technology that enable peaceful energy production while preventing harmful applications, AI governance can maximize benefits while addressing potential risks through shared standards and collaborative oversight.

No doubt, the world needs responsible AI that can enhance our quality of life in all spheres and spaces. And the opportunity before us is clear: proactive governance now can unlock AI's full potential for a better tomorrow. That's the bottom line.

(Krishna Kumar is a technology explorer & strategist based in Austin, Texas, in the US. Rakshitha Reddy is an AI developer based in Atlanta, US)