
After 60 years, the search for a missing plane in Lake Superior continues
Search team that found missing plane in Michigan's Lake Huron aims to help more families
Experts searching for plane wreckage in Michigan's Lake Superior turned up logs and rocks on the bottom but no debris from an aircraft that crashed nearly 60 years ago, carrying three people on a scientific assignment.
A team from Michigan Technological University returned last week by boat to get closer to 16 targets that appeared on sonar last fall, more than 200 feet below the surface of the vast lake. The crew used side-scan sonar and other remote technology.
"We did not locate any sign of the wreckage of the missing aircraft," said Travis White, a research engineer at the Great Lakes Research Center at Michigan Tech. "However, we did validate our technical approach, as we found physical objects in each target location."
The Beechcraft plane carrying pilot Robert Carew, co-pilot Gordon Jones and graduate student Velayudh Krishna Menon left Madison, Wisconsin, for Lake Superior on Oct. 23, 1968. They were collecting data on temperature and other lake conditions for the National Center for Atmospheric Research.
Seat cushions and pieces of stray metal have washed ashore over the years along the Keweenaw Peninsula. But the wreckage and the remains of the men have never been found.
"We're probably not going to find a fully intact airplane," said Wayne Lusardi, state maritime archaeologist.
An autonomous vessel was launched last September, recording sonar readings and other data. After studying those findings over the winter, White, Lusardi and others returned to Lake Superior.
"Unfortunately, the targets turned out to be mostly natural: large sunken trees, logs, rocks," White said by email.
Metal cans on the lake bottom, believed to be 75 years old, give "hope that the plane wreckage may be reasonably well-preserved and not buried," he said.
White said the next challenge will be how to continue the work.
"We may attempt a crowdfunding model to see if we can raise some funds for future mapping activities that could help us locate the plane or other historic wrecks," he said.
The initial search last fall was organized by the Smart Ships Coalition, a group of more than 60 universities, government agencies, companies and international organizations interested in maritime autonomous technologies.
The video above was first published on Aug. 30, 2024.
Related Articles
Yahoo
7 hours ago
JetBlue posts smaller-than-expected loss as U.S. demand recovers
(Reuters) - JetBlue Airways on Tuesday posted an adjusted loss for the second quarter that was smaller than Wall Street expected, helped by cost-cutting measures and recovering demand for travel in the U.S.

Over the past month, larger peers Delta and United have signaled that bookings are starting to stabilize, though at lower-than-expected levels, pointing to an uneven recovery. In April, JetBlue joined several major airlines in pulling its 2025 financial forecast, citing uncertainty tied to the Trump administration's sweeping tariff policies and federal spending cuts that weighed on consumer travel.

"Demand for air travel improved as the quarter progressed, resulting in significant strength for bookings within 14 days of travel, as well as for peak travel periods," said Marty St. George, JetBlue's president, adding that the momentum continued into July.

However, the carrier said it expects third-quarter revenue per available seat mile (RASM), an industry metric commonly known as unit revenue and a proxy for pricing power, to decline between 2% and 6%. It also reinstated its 2025 unit cost forecast and expects unit costs to rise between 5% and 7%.

The carrier reported an adjusted loss of 16 cents per share for the quarter ended June 30, compared with analysts' estimate of a 33-cent loss. Operating revenue was $2.18 billion; analysts, on average, were expecting $2.28 billion, according to data compiled by LSEG.


CNET
7 hours ago
What Is Superintelligence? Everything You Need to Know About AI's Endgame
You've probably chatted with ChatGPT, experimented with Gemini, Claude or Perplexity, or even asked Grok to verify a post on X. These tools are impressive, but they're just the tip of the artificial intelligence iceberg. Lurking beneath is something far bigger that has been all the talk in recent weeks: artificial superintelligence.

Some people use the term "superintelligence" interchangeably with artificial general intelligence or sci-fi-level sentience. Others, like Meta CEO Mark Zuckerberg, use it to signal their next big moonshot. ASI has a more specific meaning in AI circles. It refers to an intelligence that doesn't just answer questions but could outthink humans in every field: medicine, physics, strategy, creativity, reasoning, emotional intelligence and more.

We're not there yet, but the race has already started. In July, Zuckerberg said during an interview with The Information that his company is chasing "personal superintelligence" to "put the power of AI directly into individuals' hands." Or, in Meta's case, probably in everyone's smart glasses. That desire kicked off a recruiting spree for top researchers in Silicon Valley and a reshuffling inside Meta's FAIR team (now Meta AI) to push Meta closer to AGI and eventually ASI.

So, what exactly is superintelligence, how close are we to it, and should we be excited or terrified? Let's break it down.

What is superintelligence?

Superintelligence doesn't have a formal definition, but it's generally described as a hypothetical AI system that would outperform humans at every cognitive task. It could process vast amounts of data instantly, reason across domains, learn from mistakes, self-improve, develop new scientific theories, write flawless code, and maybe even make emotional or ethical judgments.
The idea was popularized by philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies, which warned of a scenario where an AI becomes smarter than humans, self-improves rapidly and then escapes our control. That vision sparked both excitement and fear among tech experts.

Speaking to CNET, Bostrom says many of his 2014 warnings "have proven quite prescient." What has surprised him, he says, is "how anthropomorphic current AI systems are," with large language models behaving in surprisingly humanlike ways. Bostrom says he's now shifting his attention toward deeper issues, including "the moral status of digital minds and the relationship between the superintelligence we build with other superintelligences," which he refers to as "the cosmic host."

For some, ASI represents the pinnacle of progress, a tool to cure disease, reverse climate change and crack the secrets of the universe. For others, it's a ticking time bomb -- one wrong move and we're outmatched by a machine we can't control. It's sometimes called the last human invention, not because it's final, but because ASI could invent everything else we need. British mathematician Irving John Good described it as an "intelligence explosion."

Superintelligence doesn't exist yet. We're still in the early stages of what's called artificial narrow intelligence: AI systems that are great at specific tasks like translation, summarization and image generation, but not capable of broader reasoning. Tools like ChatGPT, Gemini, Copilot, Claude and Grok fall into this category. They're good at some tasks, but still flawed, prone to hallucinations and incapable of true reasoning or understanding. To reach ASI, AI needs to first pass through another stage: artificial general intelligence.

What is AGI?

AGI, or artificial general intelligence, refers to a system that can learn and reason across a wide range of tasks, not just one domain.
It could match human-level versatility, such as learning new skills, adapting to unfamiliar problems and transferring knowledge across fields. Unlike current chatbots, which rely heavily on training data and struggle outside of predefined rules, AGI would handle complex problems flexibly. It wouldn't just answer questions about math and history; it could invent new solutions, explain them and apply them elsewhere.

Current models hint at AGI traits, like multimodal systems that handle text, images and video. But true AGI requires breakthroughs in continual learning (updating knowledge without forgetting old material) and real-world grounding (understanding context beyond data). None of the major models today qualify as true AGI, though many AI labs, including OpenAI, Google DeepMind and Meta, list it as their long-term target. Once AGI arrives and can self-improve, ASI could follow quickly as a system smarter than any human in every area.

How close are we to superintelligence?

That depends on who you ask. A 2024 survey of 2,778 AI researchers paints a sobering picture. The aggregate forecasts give a 50% chance of machines outperforming humans in every possible task by 2047, 13 years sooner than a 2022 poll predicted, and a 10% chance this could happen as early as 2027. For job automation specifically, researchers estimate a 10% chance that all human occupations become fully automatable by 2037, reaching 50% probability by 2116. Most concerning, 38% to 51% of the experts surveyed assign at least a 10% risk of advanced AI causing human extinction.

Geoffrey Hinton, often called the Godfather of AI, warned in a recent YouTube podcast that if superintelligent AI ever turned against us, it might unleash a biological threat like a custom virus -- super contagious, deadly and slow to show symptoms -- without risking itself.
Resistance would be pointless, he said, because "there's no way we're going to prevent it from getting rid of us if it wants to." Instead, he argued that the focus should be on building safeguards early. "What you have to do is prevent it ever wanting to," he said in the podcast, adding that this could be done by pouring resources into AI that stays friendly. Still, Hinton confessed he's struggling with the implications: "I haven't come to terms with what the development of superintelligence could do to my children's future. I just don't like to think about what could happen."

Factors like faster computing, quantum AI and self-improving models could accelerate things. Hinton expects superintelligence in 10 to 20 years. Zuckerberg has said he believes ASI could arrive within the next two to three years, and OpenAI CEO Sam Altman's estimate falls somewhere in between those time frames. Most researchers agree we're still missing key ingredients, like more advanced learning algorithms, better hardware and the ability to generalize knowledge like a human brain. IBM points to areas like neuromorphic computing (hardware inspired by human neurons), evolutionary algorithms and multisensory AI as building blocks that might get us there.

Meta's quest for 'personal superintelligence'

Meta launched its Superintelligence Labs in June, led by Alexandr Wang (ex-Scale AI CEO) and Nat Friedman (ex-GitHub CEO), with $14.3 billion invested in Scale AI and $64 billion to $72 billion earmarked for data centers and AI infrastructure. Zuckerberg doesn't shy away from Greek mythology, with names like Prometheus and Hyperion for his two AI data superclusters (massive computing centers). He also doesn't talk about artificial superintelligence in abstract terms. Instead, he claims that Meta's specific focus is on delivering "personal superintelligence to everyone in the world."
This vision, according to Zuckerberg, sets Meta apart from other research labs that he says primarily concentrate on "automating economically productive work." Bostrom thinks this isn't mere hype. "It's possible we're only a small number of years away from this," he said of Meta's plans, noting that today's frontier labs "are quite serious about aiming for superintelligence, so it is not just marketing moves." Though still in its early stages, Meta is actively recruiting top talent from companies like OpenAI and Google. Zuckerberg explained in his interview with The Information that the market is extremely competitive because so few people possess the requisite level of skill. Facebook and Zuckerberg didn't respond to requests for comment.

Should humans subscribe to the idea of superintelligent AI?

There are two camps in the AI world: the enthusiasts, who inflate AI's benefits and seemingly ignore its downsides, and the doomers, who believe AI will inevitably take over and end humanity. The truth probably lands somewhere in the middle. Widespread public fear and resistance, fueled by dystopian sci-fi and very real concerns over job loss and massive economic disruption, could slow progress toward superintelligence.

One of the biggest problems is that we don't really know what even AGI looks like in machines, much less ASI. Is it the ability to reason across domains? Hold long conversations? Form intentions? Build theories? None of the current models, including Meta's Llama 4 and Grok 4, can reliably do any of this. There's also no agreement on what counts as "smarter than humans." Does it mean acing every test, inventing new math and physics theorems or solving climate change?

And even if we get there -- should we? Building systems vastly more intelligent than us could pose serious risks, especially if they act unpredictably or pursue goals misaligned with ours.
Without strict control, a superintelligent system could manipulate other systems or even act autonomously in ways we don't fully understand. Brendan Englot, director of the Stevens Institute for Artificial Intelligence, told CNET that he believes "an important first step is to approach cyber-physical security similarly to how we would prepare for malicious human-engineered threats, except with the expectation that they can be generated and launched with much greater ease and frequency than ever before." That said, Englot isn't convinced that current AI can truly outpace human understanding. "AI is limited to acting within the boundaries of our existing knowledge base," Englot tells CNET. "It is unclear when and how that will change."

Regulations like the EU AI Act aim to help, but global alignment is tricky; China's approach, for example, differs wildly from the West's. Trust is one of the biggest open questions. A superintelligent system might be incredibly useful, but also nearly impossible to audit or constrain. And when AI systems draw from biased or chaotic data like real-time social media, those problems compound.

Some researchers believe that given enough data, computing power and clever model design, we'll eventually reach AGI and ASI. Others argue that current AI approaches, especially LLMs, are fundamentally limited and won't scale to true general or superhuman intelligence: the human brain, with its roughly 100 trillion connections, remains far more complex, to say nothing of our capacity for emotional experience and depth, arguably humanity's most distinctive attribute.

But progress moves fast, and it would be naive to dismiss ASI as impossible. If it does arrive, it could reshape science, economics and politics -- or threaten them all. Until then, general intelligence remains the milestone to watch. If and when superintelligence does become a reality, it could profoundly redefine human life itself.
According to Bostrom, we'd enter what he calls a "post-instrumental condition," fundamentally rethinking what it means to be human. Still, he's ultimately optimistic about what lies on the other side, exploring these ideas further in his most recent book, Deep Utopia. "It will be a profound transformation," Bostrom tells CNET.
Yahoo
8 hours ago
Virginia Task Force 1 returns home after victim recovery efforts in Texas flood zone
CHANTILLY, Va. — Virginia Task Force 1 (VA TF-1), the commonwealth's specialized search and rescue team, is back home after working victim recovery operations following deadly floods in Texas. The crew of four people and three dogs returned to their home base in Chantilly just before noon Monday after a 17-day deployment.

Special handlers and human remains detection dogs from VA TF-1 searched tough terrain, through debris, floodwaters and riverbeds, every day for more than two weeks, working to recover people missing in the devastating floods. The highly trained team included canine specialists Kristi Bartlett and Charlotte Grove and their human remains detection dogs, Athena and Ivy.

"When you're searching 60 miles of shoreline, you're like, 'Okay, I'm trying to find a needle in a haystack.' But every day we're still giving it our all, really searching and gridding out our areas," Bartlett said.

Grove and Ivy have been paired up on past deployments, working together in search and recovery efforts after Hurricane Ian ravaged Florida in 2022. "You still get surprised when you get there, at the amount of devastation that there actually was," Grove said of her arrival in Texas. This time, the pair worked 12-plus-hour days sniffing and searching through debris and floodwater in the Texas heat. "We just keep working. We want to keep working until every last person has been brought home," Grove said.

"We're definitely focused on the mission. Just trying to make sure that we bring closure for everybody and their loved ones," Bartlett said. "We're definitely tired. We want to get our life back to normal, but also do more training. So when the next disaster happens, [Athena] is ready to go back out the door."

While 10-year-old Athena may have more training ahead, 11-year-old Ivy is a bit older.
Grove said this may have been Ivy's final deployment before she heads into retirement.

Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.