Amid setback, South Korea pushes forward on drones, loyal wingman
The accident occurred on March 17, when an IAI Heron-1 drone belonging to Korea's army veered off a runway upon landing at Yangju, subsequently colliding with a parked Surion helicopter. Both aircraft were written off, meaning the army has now lost all three of its Herons in accidents.
Nonetheless, the country is under the gun to accelerate its drone plans – and for reasons outside immediate military-equipment considerations.
Kim Jae Yeop, senior researcher at the Sungkyun Institute for Global Strategy in Seoul, told Defense News that South Korea's low birth rate, among the lowest in the world, is looming large.
'The number of regular troops in the armed forces, which is now roughly 500,000, will highly likely decrease to fewer than 400,000 in the next decade,' he said.
'As a result,' Kim explained, 'Seoul is taking active measures to expand the role of military unmanned systems to offset the reduction in troops. They can be acquired at significant scale at a lower cost and without risk to life in missions.'
One important program saw Korean Air roll out a new loyal wingman technology demonstrator – called the Low Observable Unmanned Wingman System, or LOWUS – on Feb. 25.
The stealthy turbofan-powered LOWUS, funded by the Agency for Defense Development since 2021, was unveiled at the Korean Air Tech Center in Busan. Its maiden flight is expected later this year, ahead of manned-unmanned teaming flight tests in 2027.
The aircraft features an internal weapons bay and resembles the American XQ-58A Valkyrie; Korean Air lists a length of 10.4 m and a wingspan of 9.4 m.
As with similar loyal wingman concepts pursued by other major powers, the idea is for the drone sidekicks to fly missions ranging from strike to surveillance, jamming and escort.
The LOWUS will likely have a domestic engine and active electronically scanned array radar. Korean Air gained experience with requisite stealth technologies when developing the blended-wing KUS-FC, or Kaori-X, drone that first flew in 2015.
In the future, Korea's air force is expected to introduce composite squadrons of manned fighters and loyal wingmen.
'Considering the fact that only a small number of countries like the U.S., Australia and Russia have been producing and testing similar kinds of systems, the LOWUS highlights Seoul's technological achievements,' said Kim.
Another program currently underway is a search for loitering munitions for South Korean special forces units. A platform is due to be selected later this year, and Seoul is allocating around $22 million to the acquisition.
Foreign types like the Switchblade 600 and Hero 120 are under consideration, with the aim of giving special forces strike drones they can use independently against North Korean invaders without calling in external fire support.