Why drones and AI can't quickly find missing flood victims, yet

Yahoo, 17 July 2025
For search and rescue, AI is not more accurate than humans, but it is far faster.
Recent successes in applying computer vision and machine learning to drone imagery for rapidly determining building and road damage after hurricanes or shifting wildfire lines suggest that artificial intelligence could be valuable in searching for missing persons after a flood.
Machine learning systems typically take less than one second to scan a high-resolution image from a drone versus one to three minutes for a person. Plus, drones often produce more imagery to view than is humanly possible in the critical first hours of a search when survivors may still be alive.
Unfortunately, today's AI systems are not up to the task.
We are robotics researchers who study the use of drones in disasters. Our experiences searching for victims of flooding and numerous other events show that current implementations of AI fall short.
However, the technology can play a role in searching for flood victims. The key is AI-human collaboration.
AI's potential
Searching for flood victims is a type of wilderness search and rescue that presents unique challenges. The goal for machine learning scientists is to rank which images have signs of victims and indicate where in those images search-and-rescue personnel should focus. If the responder sees signs of a victim, they pass the GPS location in the image to search teams in the field to check.
The ranking is done by a classifier, which is an algorithm that learns to identify similar instances of objects – cats, cars, trees – from training data in order to recognize those objects in new images. For example, in a search-and-rescue context, a classifier would spot instances of human activity such as garbage or backpacks to pass to wilderness search-and-rescue teams, or even identify the missing person themselves.
A classifier is needed because of the sheer volume of imagery that drones can produce. For example, a single 20-minute flight can produce over 800 high-resolution images. If there are 10 flights – a small number – there would be over 8,000 images. If a responder spends only 10 seconds looking at each image, it would take over 22 hours of effort. Even if the task is divided among a group of 'squinters,' humans tend to miss areas of images and show cognitive fatigue.
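The article's arithmetic can be checked with a quick back-of-the-envelope calculation (the figures below are the ones quoted above, not measured values):

```python
# Estimate of human review effort for drone search imagery,
# using the figures quoted in the article.
images_per_flight = 800   # one 20-minute flight
flights = 10              # a small search effort
seconds_per_image = 10    # optimistic per-image review time

total_images = images_per_flight * flights
total_hours = total_images * seconds_per_image / 3600

print(total_images)            # 8000 images
print(round(total_hours, 1))   # about 22.2 hours of review
```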
The ideal solution is an AI system that scans the entire image, prioritizes images that have the strongest signs of victims, and highlights the area of the image for a responder to inspect. It could also decide whether the location should be flagged for special attention by search-and-rescue crews.
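The triage step described above can be sketched in a few lines. This is a minimal illustration, not a real detector: the `fake_classify` function is a stand-in for whatever computer vision model a team actually uses, and the data structures are invented for the example.

```python
# Hypothetical triage loop: a classifier (any model that returns a
# confidence score and a region of interest) ranks drone images so
# responders review the most promising ones first.
from dataclasses import dataclass

@dataclass
class Detection:
    image_id: str
    score: float   # classifier confidence that the image shows signs of a victim
    box: tuple     # (x, y, w, h) region to highlight for the human reviewer

def triage(images, classify):
    """Score every image and return detections sorted best-first."""
    detections = [classify(img) for img in images]
    return sorted(detections, key=lambda d: d.score, reverse=True)

# Toy stand-in classifier for illustration only.
def fake_classify(img):
    return Detection(img["id"], img["signal"], (0, 0, 64, 64))

imgs = [{"id": "a", "signal": 0.2}, {"id": "b", "signal": 0.9}]
ranked = triage(imgs, fake_classify)
print([d.image_id for d in ranked])  # ['b', 'a']
```

The human stays in the loop: the ranking only decides which image a responder inspects next, not whether a location is searched.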
Where AI falls short
While this seems to be a perfect opportunity for computer vision and machine learning, modern systems have a high error rate. If the system is programmed to overestimate the number of candidate locations in hopes of not missing any victims, it will likely produce too many false candidates. That would mean overloading squinters or, worse, the search-and-rescue teams, which would have to navigate through debris and muck to check the candidate locations.
Developing computer vision and machine learning systems for finding flood victims is difficult for three reasons.
One is that while existing computer vision systems are certainly capable of identifying people visible in aerial imagery, the visual indicators of a flood victim are often very different compared with those for a lost hiker or fugitive. Flood victims are often obscured, camouflaged, entangled in debris or submerged in water. These visual challenges increase the possibility that existing classifiers will miss victims.
Second, machine learning requires training data, but there are no datasets of aerial imagery where humans are tangled in debris, covered in mud and not in normal postures. This lack also increases the possibility of errors in classification.
Third, many of the drone images captured by searchers are oblique views rather than straight-down shots. This means the GPS location of a candidate area is not the same as the GPS location of the drone. It is possible to compute the candidate's GPS location if the drone's altitude and camera angle are known, but unfortunately those attributes are rarely recorded. The imprecise GPS location means teams have to spend extra time searching.
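A simplified geometry sketch shows why the altitude and camera angle matter. Assuming flat terrain and a point at the image center, the ground point lies ahead of the drone by a distance that depends on both values; without them, that offset cannot be recovered from the drone's own GPS fix.

```python
# For an oblique view over flat ground, the ground point at the image
# center lies ahead of the drone by d = altitude / tan(pitch), where
# pitch is the camera's angle below horizontal. This is a simplified
# illustration; real photogrammetry also accounts for terrain and lens.
import math

def ground_offset_m(altitude_m, pitch_deg):
    """Horizontal distance from the drone to the image-center ground point."""
    return altitude_m / math.tan(math.radians(pitch_deg))

# A drone at 100 m with the camera tilted 30 degrees below horizontal
# images ground roughly 173 m ahead of its own GPS position.
print(round(ground_offset_m(100, 30)))  # 173
```

An error of even a few degrees in the assumed camera angle shifts the computed ground point by tens of meters, which is why teams end up searching a wider area.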
How AI can help
Fortunately, with humans and AI working together, search-and-rescue teams can successfully use existing systems to help narrow down and prioritize imagery for further inspection.
In the case of flooding, human remains may be tangled among vegetation and debris. Therefore, a system could identify clumps of debris big enough to contain remains. A common search strategy is to identify the GPS locations of where flotsam has gathered, because victims may be part of these same deposits.
An AI classifier could find debris commonly associated with remains, such as artificial colors and construction debris with straight lines or 90-degree corners. Responders find these signs as they systematically walk the riverbanks and flood plains, but a classifier could help prioritize areas in the first few hours and days, when there may be survivors, and later could confirm that teams didn't miss any areas of interest as they navigated the difficult landscape on foot.
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Robin R. Murphy, Texas A&M University and Thomas Manzini, Texas A&M University
Read more:
FEMA's flood maps often miss dangerous flash flood risks, leaving homeowners unprepared
California wildfires force students to think about the connections between STEM and society
An expert on search and rescue robots explains the technologies used in disasters like the Florida condo collapse
Robin R. Murphy receives funding from the National Science Foundation. She is affiliated with the Center for Robot-Assisted Search and Rescue.
Thomas Manzini is affiliated with the Center for Robot Assisted Search & Rescue (CRASAR), and his work is funded by the National Science Foundation's AI Institute for Societal Decision Making (AI-SDM).