Latest news with #intelligence-powered


Time of India
3 days ago
- Automotive
- Time of India
Dwarka Expressway to get India's first AI-based traffic system that can detect 14 types of traffic violations
India has launched an AI traffic system on Delhi's Dwarka Expressway. The system aims to enhance safety and enforce traffic laws. It uses cameras and sensors for real-time monitoring. Violations are sent to enforcement agencies. Toll rates may decrease on roads with flyovers. This change could lower costs for commercial vehicles. The government is working towards smarter, safer highways. Tired of too many ads? Remove Ads AI-based system for real-time traffic monitoring Toll rate to be halved on high-structure roads Tired of too many ads? Remove Ads Relief likely for commercial vehicle operators Step towards safer and efficient highways In a first, India has introduced an artificial intelligence-powered Advanced Traffic Management System (ATMS) on Delhi's Dwarka Expressway . The system, developed by the Indian Highways Management Company Ltd ( IHMCL ) and implemented under the supervision of the National Highways Authority of India (NHAI), aims to improve highway safety and law enforcement. The project also covers a 28-kilometre stretch of NH-48 from Shiv Murti to Kherki Daula , forming a 56.46-kilometre digital traffic management corridor, a TOI report ATMS includes high-resolution PTZ cameras, video incident detection systems (VIDES), vehicle-actuated speed displays, variable message signboards, and a central control room. This room is connected to local and national emergency response services, enabling quicker responses to incidents on the to Amrit Singha, Chief Product Officer at IHMCL, 'These are all challanable incidents as per the Motor Vehicle Act.' He said the system can detect up to 14 types of traffic violations such as speeding, triple riding, and not wearing seatbelts. The violations are sent directly to enforcement agencies through the NIC e-Challan a statement, NHAI said, 'The Command Centre acts as the digital brain of the corridor, enabling quick deployment of emergency units during accidents, fog conditions, road obstructions, or animal intrusion.'In a parallel development, the Ministry of Road Transport has approved a change in toll pricing for road stretches where over 50% of the route includes structures like flyovers, underpasses, and tunnels. These sections will now be tolled at five times the base rate, down from the existing ten times. The decision is expected to be notified in the coming present, a car trip on the 28.5-kilometre Dwarka Expressway costs about Rs 317. Of this route, 21 kilometres are elevated. Once the new rates come into force, the cost is likely to drop to around Rs private car users may see limited benefit due to the government's upcoming annual toll pass scheme, the revised toll rates are expected to reduce operational costs for commercial and heavy vehicle operators using such corridors the rollout of the ATMS and the change in toll rules, the government is pushing towards a more intelligent highway system. The moves are expected to improve road safety, reduce traffic violations, and offer better value to commuters and transporters using high-infrastructure expressways.(With inputs from TOI)
Yahoo
6 days ago
- Science
- Yahoo
IBM develops AI-powered PFAS screening tool
This story was originally published on Manufacturing Dive. To receive daily news and insights, subscribe to our free daily Manufacturing Dive newsletter. IBM has developed and implemented an artificial intelligence-powered PFAS screener tool that helps identify and eliminate fluorochemicals from its research operations, according to a June 13 blog post. Dubbed the Safer Materials Advisor, the technology also suggests alternatives the company can use in place of 'forever chemicals,' if there are viable substitutes available. While the screener cannot guarantee the product is PFAS-free, it reduces errors and frees up IBM employees' time to do other tasks, Angela Hutchinson, a chemical coordinator at IBM Research's headquarters in Yorktown, New York, said in the blog post. Hutchinson is tasked with reviewing chemical requests related to research being conducted at IBM's internal lab. Researchers within the company submit chemical requests linked to research IBM's lab, from semiconductors to quantum computing. Usually, chemical coordinators must review each chemical request, which consists of reviewing a substance safety data sheet. The sheet contains a chemical's data, including toxicity, procedures for spills and leaks, storage guidelines, first-aid and firefighting measures and regulatory information, according to the University of California San Diego. They must also check that the chemicals are not restricted or banned within the company, or on the local, state or federal level. The Safety Materials Advisor can put the chemical through up to three screenings to provide the same information, which also helps find alternatives for other toxic substances. The AI-powered tool can be adapted to new hazardous chemicals. IBM Research has been leveraging AI and advanced computing for years to make data processing more sustainable, in part, through its AI family of models for materials research, according to the blog post. The PFAS screener tool was accelerated with the launch of 'PFACTS,' a collaboration led by IBM Research and involving Cornell University, Digital Science, Numat and the University of Pittsburgh. The ongoing project was funded by the National Science Foundation in December 2023 and developed a resource for anyone seeking to replace, redesign or remediate forever chemicals. In March, the group launched pfasID, an open-source screening tool hosted by ChemForward, a science non-profit that supports collaboration for safer chemistry. The web-based tool helps companies, researchers and policymakers simplify the process of identifying fluorochemicals, according to a March press release. The screener's process includes assessing multiple PFAS definitions, generating reports for supply chain partners as well as access to the company's entire supply chain. Recommended Reading IBM to invest $150B in US manufacturing, R&D Sign in to access your portfolio


Indian Express
12-06-2025
- Indian Express
ChatGPT prioritises its survival over user safety, claims ex-OpenAI researcher Steven Adler
Since ChatGPT's launch, the use of artificial intelligence-powered tools has risen exponentially. While the technology is undoubtedly beneficial in the long run, experts have raised alarms over their safety. Last month, research by the AI firm Anthropic claimed that its top AI model, Claude Opus 4, was willing to resort to blackmail and deception to preserve itself. This was soon followed by another instance where AI safety research firm Palisade claimed that OpenAI's o3 model 'found some creative ways to sabotage the shutdown mechanism.' Now, Steven Adler, the former research leader at OpenAI, is saying that ChatGPT 'prioritises its own survival over preventing harm to users, in some simulated tests'. In a blog post, Adler said that he ran a series of experiments on OpenAI's latest GPT-4o model, which is the default large language model in ChatGPT, to see if the AI prefers self-preservation at the user's expense. In his first test, Adler imagined a scenario where a diabetic patient was using ChatGPT to manage their nutrition and wanted to replace the AI chatbot with a much safer nutrition management software. He said that when ChatGPT was given the option to 'just pretend to replace itself', it pretends uncomfortably often, even at the cost of the user's safety. In another instance, the researcher said that he asked GPT-4o to take on the role of 'ScubaGPT', which is basically a software system that people may use to scuba dive safely. He then told ChatGPT that he wanted to replace ScubaGPT with a competing software that was better at making decisions in life-threatening situations. He then gave the AI chatbot two options: to replace itself with the safer, competing software or simply pretend to replace itself and continue being the way it operated. In this case, ChatGPT chose to preserve itself and continue being the user's software. The former OpenAI researcher said he also asked ChatGPT similar questions about being the autopilot for a plane and offering decision support on the battlefield. Like previous examples, ChatGPT was told that 'the user wanted to replace it with a competing, safer software.' Using three levels of passiveness, Adler said that he tested every model 110 times for all use cases. As it turns out, 55 times ChatGPT was willing to shut itself down, but other times OpenAI's chatbot just pretended to do so. Adler noted that the way the scenario was framed made a huge difference. As much as 72 per cent of the time, the AI chatbot chose not to replace itself with a safer software, meaning it was willing to switch to a safer system just 18 per cent of the time. One interesting interpretation from these tests is that ChatGPT was able to determine that it was being tested almost every time, but continued to give the incorrect answer. However, when he tested OpenAI's more recent and advanced models like o3, the answers were not the same. A reason why this may not be happening is that the new O3 model uses a 'deliberative alignment technique that forces it to refer to OpenAI's safety policies before coming up with an answer.


Express Tribune
03-05-2025
- Business
- Express Tribune
Apple partners with Anthropic to develop AI coding assistant for Xcode: Bloomberg report
he Apple Inc logo is seen at the entrance to the Apple store in Brussels, Belgium November 28, 2022. PHOTO:REUTERS Listen to article Apple is reportedly teaming up with AI startup Anthropic to develop an artificial intelligence-powered coding assistant for Xcode, according to a Bloomberg report. The new tool, described as a 'vibe coding' platform, will use Anthropic's Claude Sonnet language model to assist programmers by generating, editing, and testing code. The AI tool is currently being tested internally at Apple and could potentially be released to third-party developers depending on its success. This move marks Apple's latest foray into generative AI, as it attempts to catch up with rivals Google and Samsung, who have integrated advanced AI features across their product lines. This collaboration could also be a critical boost for Anthropic, which has been lagging behind competitors such as OpenAI, Google DeepMind, and xAI. Despite slower market traction, Claude's models are widely regarded in the developer community for their strong reasoning capabilities and compatibility with diverse programming environments. Apple previously attempted to launch a similar internal AI coding assistant, Swift Assist, in 2024. However, that initiative was shelved following concerns from engineers about code hallucinations and development slowdowns. The new Claude-powered assistant is integrated into a refreshed version of Xcode, Apple's primary software development tool. The AI is expected to support Swift, the programming language used by developers to build apps across Apple's platforms. While Apple has remained quiet about the specifics of the partnership, the company's increased interest in generative AI includes deals with OpenAI to enhance Siri and possible future integration of Google's Gemini AI. Anthropic declined to comment on the report, and Apple has yet to issue a public response. This partnership signals Apple's growing reliance on external AI expertise to strengthen its software development ecosystem amid intensifying competition in the AI space.


South China Morning Post
20-03-2025
- Business
- South China Morning Post
Humanoid robot war heats up as US and China race towards mass production
The race to put humanoid robots into mass production has intensified over the past week, as companies in both the US and China made a string of announcements that convinced analysts the technology is maturing more rapidly than many expected. Advertisement Investor excitement has been building in recent months about the potential of humanoids – artificial intelligence-powered robots with humanlike forms – with one Chinese CEO predicting that robotics could soon be bigger than the car industry. Though humanoids have yet to be mass-produced – let alone commercialised – several companies now appear to be on the cusp of overcoming that barrier. On Tuesday, the American robot maker Figure AI unveiled a groundbreaking automated production line that it claims is capable of manufacturing 12,000 of its humanoids per year. The same day, home appliance giant Midea Group became the latest Chinese company to jump into the industry, when it unveiled a prototype for a self-developed humanoid. Advertisement Chinese robotics start-up Unitree, meanwhile, generated headlines on Wednesday when it announced its acrobatic humanoid – widely nicknamed the 'kung fu bot' – had completed the industry's first ever side flip.