AI was supposed to speed up coders, new study says it did the opposite

India Today · 5 days ago
Contrary to popular belief, new research has found that using AI tools can actually slow down experienced software developers, especially when they are working in codebases they already know well. The study, conducted by the nonprofit research group METR, found that seasoned open-source developers took 19 per cent longer to complete tasks when using Cursor, a widely used AI-powered coding assistant. The result came from a randomised controlled trial in which contributors worked on their own open-source projects.

Before the trial began, the developers believed AI would significantly increase their speed, forecasting a 24 per cent improvement in task completion time. Even after finishing their tasks, many still believed the AI had helped them work faster, estimating a 20 per cent improvement. The real data showed otherwise. 'We found that when developers use AI tools, they take 19 per cent longer than without. AI makes them slower,' the researchers wrote.

The lead authors of the study, Joel Becker and Nate Rush, admitted the results came as a surprise. Rush had initially predicted 'a 2x speed up, somewhat obviously.' But the study told a different story.
The findings challenge the widespread notion that AI tools automatically make human coders more efficient, a belief that has attracted billions of dollars in investment and sparked predictions that AI could soon replace many junior engineering roles.

Past studies have shown strong productivity gains with AI. One found that AI helped developers complete 56 per cent more code, while another claimed a 26 per cent boost in task volume. But the METR study suggests those gains do not apply to all situations, especially where developers already have deep familiarity with the code.

Instead of streamlining work, the AI often made suggestions that were only 'directionally correct', said Becker. 'When we watched the videos, we found that the AIs made some suggestions about their work, and the suggestions were often directionally correct, but not exactly what's needed.' As a result, developers spent additional time reviewing and correcting AI-generated code, which ultimately slowed them down. However, the researchers do not believe this slowdown would apply to all coding scenarios, such as those involving junior developers or unfamiliar codebases.

Despite the results, both the study's authors and most participants continue to use Cursor. Becker suggested that while the tool may not speed up work, it can still make development feel easier and more enjoyable. 'Developers have goals other than completing the task as soon as possible,' he said. 'So they're going with this less effortful route.'

The authors also emphasised that their findings should not be over-generalised. The slowdown reflects only a snapshot of AI's capabilities as of early 2025, and further improvements in prompting, training, and tool design could lead to different outcomes in future. As AI systems continue to evolve, METR plans to repeat such studies to better understand how AI might accelerate, or hinder, human productivity in real-world development settings.

Related Articles

China PM warns against a global AI 'monopoly'

Time of India · 2 hours ago

China will spearhead the creation of an international organisation to jointly develop AI, the country's premier said, seeking to ensure that the world-changing technology doesn't become the province of just a few nations or companies. Artificial intelligence harbours risks, from widespread job losses to economic upheaval, that nations must work together to address, Premier Li Qiang told the World Artificial Intelligence Conference in Shanghai on Saturday. That means more international exchanges, Beijing's No 2 official said during China's most important annual technology summit.

Li didn't name any countries in his short address to kick off the event. But Chinese executives and officials have taken aim at Washington's efforts to curtail the Asian country's tech sector, including by slapping restrictions on the export of Nvidia chips crucial to AI development. On Saturday, Li acknowledged that a shortage of semiconductors was a major bottleneck, but reaffirmed President Xi Jinping's call to establish policies to propel Beijing's ambitions. The govt will now help create a body - loosely translated as the World AI Cooperation Organization - through which countries can share insights and talent.

"Currently, key resources and capabilities are concentrated in a few countries and a few enterprises. If we engage in technological monopoly, controls and restrictions, AI will become an exclusive game for a small number of countries and enterprises," Li told hundreds of delegates huddled at the conference venue on the banks of Shanghai's iconic Huangpu river.

China and the US are locked in a race to develop a technology with the potential to turbocharge economies and - over the long run - tip the balance of geopolitical power. This week, US President Donald Trump signed executive orders to loosen regulations and expand energy supplies for data centers - a call to arms to ensure companies like OpenAI and Google help safeguard America's lead in the post-ChatGPT era. At the same time, the breakout success of DeepSeek has inspired Chinese tech leaders and startups to accelerate research and roll out AI products.

The weekend conference in Shanghai - gathering star founders, Beijing officials and deep-pocketed financiers by the thousands - is designed to catalyze that movement. The event, which has featured Elon Musk and Jack Ma in years past, was launched in 2018. This year's attendance may hit a record because it's taking place at a critical juncture in the global race to lead GenAI development. It's already drawn some notable figures: Nobel Prize laureate Geoffrey Hinton and former Google chief Eric Schmidt were among the heavyweights who met Shanghai party boss Chen Jining on Thursday, before they were due to speak at the event.

Your ChatGPT Therapy Sessions Are Not Confidential, Warns OpenAI CEO Sam Altman

News18 · 4 hours ago

Sam Altman has raised concerns about user data confidentiality with AI chatbots like ChatGPT, especially when they are used for therapy, citing the lack of legal frameworks to protect sensitive information.

OpenAI CEO Sam Altman has raised concerns about maintaining user data confidentiality in sensitive conversations, as millions of people, including children, turn to AI chatbots like ChatGPT for therapy and emotional support. In a recent episode of This Past Weekend, a podcast hosted by Theo Von on YouTube, Altman replied to a question about how AI works within the current legal system, cautioning that users shouldn't expect confidentiality in their conversations with ChatGPT, given the lack of a legal or policy framework to protect sensitive information shared with the chatbot.

'People talk about the most personal sh*t in their lives to ChatGPT. People use it – young people, especially, use it – as a therapist, a life coach; having these relationship problems and [asking] what should I do? And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT.'

Altman went on to say that confidentiality and privacy for conversations with AI should be addressed urgently. 'So if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up,' the Indian Express quoted Altman as saying.

This means your conversations with ChatGPT about mental health, emotional advice, or companionship are not private: they can be produced in court or shared with others in the event of a lawsuit. Unlike chats on end-to-end encrypted apps such as WhatsApp or Signal, which third parties cannot read, your ChatGPT conversations can be accessed by OpenAI, which uses them to improve the AI model and detect misuse. OpenAI says it deletes free-tier ChatGPT conversations within 30 days, but it may retain them for legal or security reasons. Adding to the privacy concerns, OpenAI is currently in the middle of a lawsuit with The New York Times that requires the company to preserve the conversations of millions of ChatGPT users, excluding enterprise customers.

First Published: July 26, 2025, 22:27 IST

Telling secrets to ChatGPT? Using it as a therapist? Your AI chats aren't legally private, warns Sam Altman

Economic Times · 4 hours ago

OpenAI CEO Sam Altman flags a privacy loophole in ChatGPT's use as a digital confidant. (Image: YouTube/@Theo Von)

Synopsis: OpenAI CEO Sam Altman has warned that conversations with ChatGPT are not legally protected, unlike those with therapists, doctors, or lawyers. In a podcast with Theo Von, Altman explained that users often share deeply personal information with the AI, but current laws do not offer confidentiality. This means OpenAI could be required to hand over user chats in legal cases. He stressed the need for urgent privacy regulation, as the legal system has yet to catch up with AI's growing role in users' personal lives.

Many users treat ChatGPT like a trusted confidant, asking for relationship advice, sharing emotional struggles, or even seeking guidance during personal crises. But OpenAI CEO Sam Altman has warned that unlike conversations with a therapist, doctor, or lawyer, chats with the AI tool carry no legal confidentiality.

During a recent appearance on This Past Weekend, a podcast hosted by comedian Theo Von, Altman said that users, particularly younger ones, often treat ChatGPT like a therapist or life coach. However, he cautioned that the legal safeguards that protect personal conversations in professional settings do not extend to AI. Legal privileges, such as doctor-patient or attorney-client confidentiality, do not apply when using ChatGPT; if there is a lawsuit, OpenAI could be compelled to turn over user chats, including the most sensitive ones. 'That's very screwed up,' Altman admitted, adding that the lack of legal protection is a major gap that needs urgent attention.

Altman believes that conversations with AI should eventually be treated with the same privacy standards as those with human professionals. He pointed out that the rapid adoption of generative AI has raised legal and ethical questions that did not exist even a year ago. Von said he was hesitant to use ChatGPT because of those privacy concerns, and the OpenAI chief acknowledged that the absence of clear regulations could be a barrier for users who might otherwise benefit from the chatbot's assistance. 'It makes sense to want privacy clarity before you use it a lot,' Altman said, agreeing with Von.

According to OpenAI's own policies, conversations from users on the free tier can be retained for up to 30 days for safety and system improvement, though they may sometimes be kept longer for legal reasons. Chats are therefore not end-to-end encrypted in the way they are on messaging platforms such as WhatsApp or Signal, and OpenAI staff may access user inputs to optimize the AI model or monitor misuse.

The privacy issue is not just theoretical. OpenAI is currently involved in a lawsuit with The New York Times, which has brought the company's data storage practices under scrutiny. A court order related to the case has reportedly required OpenAI to retain and potentially produce user conversations, excluding those from its ChatGPT Enterprise customers. OpenAI is appealing the order, calling it an overreach. Altman also highlighted that tech companies are increasingly facing demands to produce user data in legal or criminal cases, drawing a parallel to how people shifted to encrypted health-tracking apps after the US Supreme Court's reversal of Roe v. Wade raised fears about digital privacy around personal choices.

While AI chatbots like ChatGPT have become a popular tool for emotional support, the legal framework surrounding their use hasn't caught up. Until it does, Altman's message is clear: users should be cautious about what they choose to share.
