
Latest news with #AIs

AI slows down some experienced software developers, study finds

Straits Times

11-07-2025

  • Business


SAN FRANCISCO - Contrary to popular belief, using cutting-edge artificial intelligence tools slowed down experienced software developers working in codebases familiar to them, rather than supercharging their work, a new study found.

AI research nonprofit METR conducted the in-depth study earlier this year on a group of seasoned developers as they used Cursor, a popular AI coding assistant, to complete tasks in open-source projects they knew well.

Before the study, the developers believed using AI would speed them up, estimating it would decrease task completion time by 24 per cent. Even after completing the tasks with AI, they believed it had decreased task times by 20 per cent. But the study found the opposite: using AI increased task completion time by 19 per cent.

The study's lead authors, Mr Joel Becker and Mr Nate Rush, said they were shocked by the results: before the study, Mr Rush had written down that he expected 'a 2x speed up, somewhat obviously'.

The findings challenge the belief that AI always makes expensive human engineers much more productive, a factor that has attracted substantial investment into companies selling AI products to aid software development. AI is also expected to replace entry-level coding positions. Mr Dario Amodei, CEO of Anthropic, recently told Axios that AI could wipe out half of all entry-level white-collar jobs in the next one to five years.

Prior literature on productivity improvements has found significant gains: one study found using AI sped up coders by 56 per cent, and another found developers were able to complete 26 per cent more tasks in a given time. But the new METR study shows those gains do not apply to all software development scenarios. In particular, it showed that experienced developers intimately familiar with the quirks and requirements of large, established open-source codebases were slowed down. Other studies often rely on software development benchmarks for AI, which sometimes misrepresent real-world tasks, the study's authors said.

The slowdown stemmed from developers needing to spend time reviewing and correcting what the AI models suggested. 'When we watched the videos, we found that the AIs made some suggestions about their work, and the suggestions were often directionally correct, but not exactly what's needed,' Mr Becker said.

The authors cautioned that they do not expect the slowdown to apply in other scenarios, such as for junior engineers or engineers working in codebases they are not familiar with. Still, most of the study's participants, as well as the authors, continue to use Cursor today. The authors believe it is because AI makes the development experience easier and, in turn, more pleasant, akin to editing an essay instead of staring at a blank page. 'Developers have goals other than completing the task as soon as possible,' Mr Becker said. 'So they're going with this less effortful route.' REUTERS

In the Loop: Is AI Making the Next Pandemic More Likely?

Time Magazine

01-07-2025

  • Science


Welcome back to In the Loop, TIME's new twice-weekly newsletter about AI. Starting today, we'll be publishing these editions both as stories on our website and as emails. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox?

What to Know

If you talk to staff at the top AI labs, you'll hear a lot of stories about how the future could go fantastically well—or terribly badly. And of all the ways that AI might cause harm to the human race, there's one that scientists in the industry are particularly worried about today: the possibility of AI helping bad actors to start a new pandemic. 'You could try to synthesize something like COVID or a more dangerous version of the flu—and basically, our modeling suggests that this might be possible,' Anthropic's chief scientist, Jared Kaplan, told me in May.

Measuring the risk — In a new study published this morning, and shared exclusively with TIME ahead of its release, we got the first hard numbers on how experts think the risk of a new pandemic might have increased thanks to AI. The Forecasting Research Institute polled experts earlier this year, asking them how likely a human-caused pandemic might be—and how likely it might become if humans had access to AI that could reliably give advice on how to build a bioweapon.

What they found — Experts, who were polled between December and February, put the risk of a human-caused pandemic at 0.3% per year. But, they said, that risk would jump fivefold, to 1.5% per year, if AI were able to provide human-level virology advice.

You can guess where this is going — Then, in April, the researchers tested today's AI tools on a new virology troubleshooting benchmark. They found that today's AI tools outperform PhD-level virologists at complex troubleshooting tasks in the lab. In other words, AI can now do the very thing that forecasters warned would increase the risk of a human-caused pandemic fivefold. We just published the full story; you can read it here.

Who to Know

Person in the news — Matthew Prince, CEO of Cloudflare. Since its founding in 2009, Cloudflare has been protecting sites on the internet from being knocked offline by large influxes of traffic, or indeed coordinated attacks. Now, some 20% of the internet is covered by its network. And today, Cloudflare announced that this network would begin to block AI crawlers by default—essentially putting a fifth of the internet behind a paywall for the bots that harvest info to train AIs like ChatGPT and Claude.

Step back — Today's AI is so powerful because it has essentially inhaled the whole of the internet—from my articles to your profile photos. By running neural networks over that data using immense quantities of computing power, AI companies have taught these systems the texture of the world at such an enormous scale that it has given rise to new AI capabilities, like the ability to answer questions on almost any topic, or to generate photorealistic images. But this scraping has sparked a huge backlash from publishers, artists and writers, who complain that it has been done without any consent or compensation.

A new model — Cloudflare says the move will 'fundamentally change how AI companies access web content going forward.' Major publishers, including TIME, have expressed their support for the shift toward an 'opt-in' rather than an 'opt-out' system, the company says.
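For context on the "opt-out" status quo the company describes: the longstanding convention is a site's robots.txt file, which cooperative crawlers are expected to consult before scraping. Below is a minimal sketch of checking such a policy with Python's standard library. It illustrates the convention only, not Cloudflare's network-level enforcement; the domain is a placeholder, and GPTBot and ClaudeBot are the publicly documented user agents of OpenAI's and Anthropic's crawlers.

```python
from urllib import robotparser

# Fetch and parse a site's robots.txt, the traditional "opt-out"
# mechanism publishers use to turn crawlers away.
# example.com is a placeholder domain for illustration.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# GPTBot and ClaudeBot are the documented user agents of OpenAI's and
# Anthropic's crawlers. Compliance with robots.txt is voluntary, which
# is why a default-deny block at the network edge is a stronger
# guarantee than these rules alone.
for agent in ("GPTBot", "ClaudeBot", "*"):
    allowed = rp.can_fetch(agent, "https://example.com/articles/")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```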
Cloudflare also says it is working on a new initiative, called Pay Per Crawl, in which creators will have the option of setting a price on their data in return for making it available to train AI.

Fighting words — Prince was not available for an interview this week. But at a recent conference, he disclosed that traffic to news sites had dropped precipitously across the board thanks to AI, a shift that many worry will imperil the existence of the free press. 'I go to war every single day with the Chinese government, the Russian government, the Iranians, the North Koreans, probably Americans, the Israelis—all of them who are trying to hack into our customer sites,' Prince said. 'And you're telling me I can't stop some nerd with a C-corporation in Palo Alto?'

AI in Action

61% of U.S. adults have used AI in the last six months, and 19% interact with it daily, according to a new survey of AI adoption by the venture capital firm Menlo Ventures. But just 3% of those users pay for access to the software, Menlo estimated based on the survey's results—suggesting 97% of users only use the free tier of AI tools.

AI usage figures are higher for Americans in the workforce than for other groups. Some 75% of employed adults have used AI in the last six months, including 26% who report using it daily, according to the survey. Students also report high AI usage: 85% have used it in the last six months, and 22% say they use it daily.

The statistics suggest that some students and workers are growing dependent on free AI tools—a usage pattern that might become lucrative if AI companies were to begin restricting access or raising prices. However, the proliferation of open-source AI models has created intense price competition that may limit any single company's ability to dramatically raise prices. As always, if you have an interesting story of AI in Action, we'd love to hear it. Email us at: intheloop@

What we're reading

'The Dead Have Never Been This Talkative': The Rise of AI Resurrection, by Tharin Pillay in TIME. With the rise of image-to-video tools like the newest version of Midjourney, the world recently crossed a threshold: it's now possible, in just a few clicks, to reanimate a photo of your dead relative. You can train a chatbot on snippets of their writing to replicate their patterns of speech; if you have a long enough clip of them speaking, you can also replicate their voice. Will these tools make it easier to process the heart-rending pain of bereavement? Or might their allure in fact make it harder to move forward? My colleague Tharin published a deeply insightful piece last week about the rise of this new technology. It's certainly a weird time to be alive. Or, indeed, to be dead.

Colleagues or overlords? The debate over AI bots has been raging but needn't

Mint

23-06-2025

  • Science


There's the Terminator school of perceiving artificial intelligence (AI) risks, in which we'll all be killed by our robot overlords. And then there's one where, if not friends exactly, the machines serve as valued colleagues. A Japanese tech researcher is arguing that our global AI safety approach hinges on reframing efforts to achieve this benign partnership.

In 2023, as the world was shaken by the release of ChatGPT, a pair of successive warnings came from Silicon Valley of existential threats from powerful AI tools. Elon Musk led a group of experts and industry executives in calling for a six-month pause in developing advanced systems until we figured out how to manage the risks. Then hundreds of AI leaders—including Sam Altman of OpenAI and Demis Hassabis of Alphabet's DeepMind—sent shockwaves with a statement that warned: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war."

Despite all the attention paid to the potentially catastrophic dangers, the years since have been marked by AI "accelerationists" largely drowning out AI doomers. Companies and countries have raced towards being the first to achieve superhuman AI, brushing off the early calls to prioritise safety. And it has all left the public very confused. But maybe we've been viewing this all wrong.

Hiroshi Yamakawa, a prominent AI scholar from the University of Tokyo who has spent the past three decades studying the technology, is now arguing that the most promising route to a sustainable future is to let humans and AIs "live in symbiosis and flourish together, protecting each other's well-being and averting catastrophic risks."

Yamakawa hit a nerve because, while he recognises the threats noted in 2023, he argues for a workable path toward coexistence with super-intelligent machines—especially at a time when nobody is halting development over fears of falling behind. In other words, if we can't stop AI from becoming smarter than us, we're better off joining it as an equal partner. "Equality" is the sensitive part: humans want to keep believing they are superior, not equal to machines.

His statement has generated a lot of buzz in Japanese academic circles, receiving dozens of signatories so far, including some influential AI safety researchers overseas. In an interview with Nikkei Asia, he argued that cultural differences make Asia more likely to see machines as peers instead of adversaries. While the United States has produced AI-inspired characters like the Terminator from the eponymous Hollywood movie, the Japanese have envisioned friendlier companions like Astro Boy or Doraemon, he told the news outlet.

Beyond pop culture, there's some truth to this cultural embrace. At just 25%, Japan had the lowest share of respondents who say products using AI make them nervous, according to a global Ipsos survey last June, compared with 64% of Americans.

It's likely his comments will fall on deaf ears, though, like so many of the other AI risk warnings. Development has its own momentum. And whether the machines will ever get to a point where they could spur "civilization extinction" remains an extremely heated debate.

It's fair to say that some of the industry's focus on far-off, science-fiction scenarios is meant to distract from the more immediate harms the technology could bring—whether that's job displacement, allegations of copyright infringement or reneging on climate change goals.

Still, Yamakawa's proposal is a timely re-up of an AI safety debate that has languished in recent years. These discussions can't just rely on eyebrow-raising warnings and the absence of governance. With the exception of Europe, most jurisdictions have focused on loosening regulations in the hope of not falling behind. Policymakers can't afford to turn a blind eye until it's too late.

It also shows the need for more safety research beyond just the companies trying to create and sell these products. As in the social-media era, such platforms are less incentivised to share their findings with the public. Governments and universities must prioritise independent analysis of large-scale AI risks.

Meanwhile, as the global tech industry races to create computer systems that are smarter than humans, it remains to be seen whether we'll ever get there. But setting godlike AI as the goalpost has created a lot of counterproductive fear-mongering. There might be merit in seeing these machines as colleagues and not overlords. ©Bloomberg

The author is a Bloomberg Opinion columnist covering Asia tech.

Opinion: Make the Robot Your Colleague, Not Overlord

NDTV

19-06-2025

  • Science


There's the Terminator school of perceiving artificial intelligence risks, in which we'll all be killed by our robot overlords. And then there's one where, if not friends exactly, the machines serve as valued colleagues. A Japanese tech researcher is arguing that our global AI safety approach hinges on reframing efforts to achieve this benign partnership.

In 2023, as the world was shaken by the release of ChatGPT, a pair of successive warnings came from Silicon Valley of existential threats from powerful AI tools. Elon Musk led a group of experts and industry executives in calling for a six-month pause in developing advanced systems until we figured out how to manage the risks. Then hundreds of AI leaders - including Sam Altman of OpenAI and Demis Hassabis of Alphabet Inc.'s DeepMind - sent shockwaves with a statement that warned: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war."

Despite all the attention paid to the potentially catastrophic dangers, the years since have been marked by "accelerationists" largely drowning out the doomers. Companies and countries have raced toward being the first to achieve superhuman AI, brushing off the early calls to prioritize safety. And it has all left the public very confused. But maybe we've been viewing this all wrong.

Hiroshi Yamakawa, a prominent AI scholar from the University of Tokyo who has spent the past three decades researching the technology, is now arguing that the most promising route to a sustainable future is to let humans and AIs "live in symbiosis and flourish together, protecting each other's well-being and averting catastrophic risks." Well, kumbaya.

Yamakawa hit a nerve because, while he recognizes the threats noted in 2023, he argues for a workable path toward coexistence with super-intelligent machines - especially at a time when nobody is halting development over fears of falling behind. In other words, if we can't stop AI from becoming smarter than us, we're better off joining it as an equal partner. "Equality" is the sensitive part: humans want to keep believing they are superior, not equal to the machines.

His statement has generated a lot of buzz in Japanese academic circles, receiving dozens of signatories so far, including some influential AI safety researchers overseas. In an interview with Nikkei Asia, he argued that cultural differences make Asia more likely to see machines as peers instead of adversaries. While the US has produced AI-inspired characters like the Terminator, the Japanese have envisioned friendlier companions like Astro Boy or Doraemon, he told the news outlet.

Beyond pop culture, there's some truth to this cultural embrace. At just 25%, Japan had the lowest share of respondents who say products using AI make them nervous, according to a global Ipsos survey last June, compared with 64% of Americans.

It's likely his comments will fall on deaf ears, though, like so many of the other AI risk warnings. Development has its own momentum. And whether the machines will ever get to a point where they could spur "civilization extinction" remains an extremely heated debate.

It's fair to say that some of the industry's focus on far-off, science-fiction scenarios is meant to distract from the more immediate harms the technology could bring - whether that's job displacement, allegations of copyright infringement or reneging on climate change goals.

Still, Yamakawa's proposal is a timely re-up of an AI safety debate that has languished in recent years. These discussions can't just rely on eyebrow-raising warnings and the absence of governance. With the exception of Europe, most jurisdictions have focused on loosening regulations in the hope of not falling behind. Policymakers can't afford to turn a blind eye until it's too late.

It also shows the need for more safety research beyond just the companies trying to create and sell these products. As in the social-media era, such platforms are less incentivized to share their findings with the public. Governments and universities must prioritize independent analysis of large-scale AI risks.

Meanwhile, as the global tech industry races to create computer systems that are smarter than humans, it remains to be seen whether we'll ever get there. But setting godlike AI as the goalpost has created a lot of counterproductive fearmongering. There might be merit in viewing these machines as colleagues and not overlords.

Reddit sues Anthropic over unauthorized data use: What to know

Yahoo

05-06-2025

  • Business


Artificial intelligence (AI) firms may face new legal hurdles as data licensing disputes heat up. Yahoo Finance Tech Editor Dan Howley joins Market Domination to explain why Reddit (RDDT) is suing Anthropic over unauthorized use of its content for AI training. To watch more expert insights and analysis on the latest market action, check out more Market Domination here.

Let us talk about another side of AI, and that is the licensing and training side, because there's a story out today that Reddit is suing Anthropic. It has to do with the use of its data. And this is really interesting, because one of Reddit's revenue sources is licensing its data to train AIs. But Reddit says Anthropic wasn't supposed to be doing that.

Yeah, this is something that, unless there's some rule put in place, some kind of blanket regulation or agreement across the industry, is going to continue to rear its head, right? We had The New York Times obviously suing OpenAI, but then they're also working with AI companies. So it's this constant back and forth of: can you train on data without asking the owner of that data for permission? The larger thinking is that AI companies just went ahead and did that, and now that there's backlash, they have to figure out who they can work with and who they can't work with.

Reddit is a huge repository of knowledge, right? Not that it's all accurate, don't get me wrong. Look, you can go on Reddit to find out how to change your tires, but you can't go on there to figure out, I don't know, advanced math. Or maybe you can, I don't know. I'm not using it for that.

Bird IDs are great on there.

But it's a huge repository of data, and for AI companies to have access to that to train on is massive. So you understand why they want access to it. But you can't just go into someone's house without knocking first, you know what I'm saying?

By the way, Merlin for bird IDs.

Yes. Excellent. I'm looking for a painted bunting.

Beautiful birds. Okay. Always something new with AI.

Always something new. Yeah. Thank you, Dan.
