A 'new star' has exploded into the night sky — and you can see it from North America

Yahoo · 20-06-2025
A "new star" is shining in the constellation Lupus thanks to an unexpected stellar explosion within the Milky Way — and it can currently be seen with the naked eye from parts of North America.
On June 12, astronomers from the All-Sky Automated Survey for Supernovae at Ohio State University first spotted the new point of light, which had an apparent magnitude of +8.7 at the time, still too dim to be seen with the naked eye, Sky & Telescope originally reported. (A smaller magnitude signifies a brighter object; for example, the full moon has an apparent magnitude of about -12.7.)
Over the next few days, the rapidly brightening object took on several temporary names — including AT 2025nlr, ASASSN-25cm, and N Lup 2025 — as researchers scrambled to determine its identity.
Astronomer Yusuke Tampo, from the South African Astronomical Observatory at the University of Cape Town, then analyzed the light coming from the object and determined that it is likely a classical nova: a stellar explosion that temporarily shines brightly in the night sky. On June 16, it was given the official designation V462 Lupi.
By June 18, V462 Lupi had brightened to an apparent magnitude of +5.7, which makes it just visible to the naked eye. This also makes it around 4 million times brighter than its extremely dim progenitor star was before June 12, according to Spaceweather.com.
Related: Nearly 900 years ago, astronomers spotted a strange, bright light in the sky. We finally know what caused it.
There is a chance that the nova will continue to brighten in the coming days, making it even easier to spot.
The Lupus constellation is located in the southern sky, meaning that V462 Lupi is most easily visible from the Southern Hemisphere. However, it can also be seen from North America, close to the southern horizon, just after sunset. Amateur astronomers from the U.S. have reported seeing it in places such as Arizona and California, and as far north as Lake Superior, according to Sky & Telescope.
You may be able to spot it without any additional equipment, but a decent telescope or a pair of stargazing binoculars will make it easier to find, especially if you are viewing from the U.S. or if the explosion starts to dim over the coming days.
Unlike supernovas, which are so powerful that they completely rip stars apart, a nova only affects the outer layer of a star. Classical novas, such as V462 Lupi, occur in a specific type of binary system, where a more massive white dwarf star is pulling material away from its larger partner. When enough material has been accreted onto the dwarf star's surface, the pressure builds up and triggers an explosion that burns up most of the stolen gas and shoots pulses of bright light toward Earth.
Naked-eye classical novas are rare. They appear "no more than once a year," Spaceweather.com representatives wrote, "and most are so close to the limit of naked-eye sensitivity that they can be invisible despite being technically [visible]."
RELATED STORIES
—Supernova that lit up Earth's skies 843 years ago has a flowering 'zombie star' at its heart — and it's still exploding
—Mystery explosion 1,000 years ago may be a rare, third type of supernova
—Rare quadruple supernova on our 'cosmic doorstep' will shine brighter than the moon when it blows up in 23 billion years
Some novas are also recurring events, blowing their tops at regular intervals: For example, the long-awaited T Coronae Borealis nova, also known as the "Blaze Star," lights up our skies roughly every 80 years. However, astronomers have been predicting that the Blaze Star will reappear imminently for the last 15 months, and it has yet to emerge, which shows that forecasting these outbursts is not an exact science.
As this is the first recorded appearance of V462 Lupi, we have no idea if or when it will explode again in the future.

Related Articles

What Is Superintelligence? Everything You Need to Know About AI's Endgame

CNET · 7 hours ago

You've probably chatted with ChatGPT, experimented with Gemini, Claude or Perplexity, or even asked Grok to verify a post on X. These tools are impressive, but they're just the tip of the artificial intelligence iceberg. Lurking beneath is something far bigger that has been all the talk in recent weeks: artificial superintelligence.

Some people use the term "superintelligence" interchangeably with artificial general intelligence or sci-fi-level sentience. Others, like Meta CEO Mark Zuckerberg, use it to signal their next big moonshot. ASI has a more specific meaning in AI circles. It refers to an intelligence that doesn't just answer questions but could outthink humans in every field: medicine, physics, strategy, creativity, reasoning, emotional intelligence and more.

We're not there yet, but the race has already started. In July, Zuckerberg said during an interview with The Information that his company is chasing "personal superintelligence" to "put the power of AI directly into individuals' hands." Or, in Meta's case, probably in everyone's smart glasses. That desire kicked off a recruiting spree for top researchers in Silicon Valley and a reshuffling inside Meta's FAIR team (now Meta AI) to push Meta closer to AGI and eventually ASI.

So, what exactly is superintelligence, how close are we to it, and should we be excited or terrified? Let's break it down.

What is superintelligence?

Superintelligence doesn't have a formal definition, but it's generally described as a hypothetical AI system that would outperform humans at every cognitive task. It could process vast amounts of data instantly, reason across domains, learn from mistakes, self-improve, develop new scientific theories, write flawless code, and maybe even make emotional or ethical judgments.

The idea was popularized by philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies, which warned of a scenario where an AI bot becomes smarter than humans, self-improves rapidly and then escapes our control. That vision sparked both excitement and fear among tech experts.

Speaking to CNET, Bostrom says many of his 2014 warnings "have proven quite prescient." What has surprised him, he says, is "how anthropomorphic current AI systems are," with large language models behaving in surprisingly humanlike ways. Bostrom says he's now shifting his attention toward deeper issues, including "the moral status of digital minds and the relationship between the superintelligence we build with other superintelligences," which he refers to as "the cosmic host."

For some, ASI represents the pinnacle of progress, a tool to cure disease, reverse climate change and crack the secrets of the universe. For others, it's a ticking time bomb -- one wrong move and we're outmatched by a machine we can't control. It's sometimes called the last human invention, not because it's final, but because ASI could invent everything else we need. British mathematician Irving John Good described it as an "intelligence explosion."

Superintelligence doesn't exist yet. We're still in the early stages of what's called artificial narrow intelligence: AI systems that are great at specific tasks like translation, summarization and image generation, but not capable of broader reasoning. Tools like ChatGPT, Gemini, Copilot, Claude and Grok fall into this category. They're good at some tasks, but still flawed, prone to hallucinations and incapable of true reasoning or understanding.
To reach ASI, AI needs to first pass through another stage: artificial general intelligence.

What is AGI?

AGI, or artificial general intelligence, refers to a system that can learn and reason across a wide range of tasks, not just one domain. It could match human-level versatility, such as learning new skills, adapting to unfamiliar problems and transferring knowledge across fields. Unlike current chatbots, which rely heavily on training data and struggle outside of predefined rules, AGI would handle complex problems flexibly. It wouldn't just answer questions about math and history; it could invent new solutions, explain them and apply them elsewhere.

Current models hint at AGI traits, like multimodal systems that handle text, images and video. But true AGI requires breakthroughs in continual learning (updating knowledge without forgetting old stuff) and real-world grounding (understanding context beyond data). And none of the major models today qualify as true AGI, though many AI labs, including OpenAI, Google DeepMind and Meta, list it as their long-term target. Once AGI arrives and self-improves, ASI could follow quickly as a system smarter than any human in every area.

How close are we to superintelligence?

(Image: "A superintelligent future concept I generated using Grok AI." Grok / Screenshot by CNET)

That depends on who you ask. A 2024 survey of 2,778 AI researchers paints a sobering picture. The aggregate forecasts give a 50% chance of machines outperforming humans in every possible task by 2047. That's 13 years sooner than a 2022 poll predicted. There's a 10% chance this could happen as early as 2027, according to the survey. For job automation specifically, researchers estimate a 10% chance that all human occupations become fully automatable by 2037, reaching 50% probability by 2116. Most concerning, 38% to 51% of experts assign at least a 10% risk of advanced AI causing human extinction.

Geoffrey Hinton, often called the Godfather of AI, warned in a recent YouTube podcast that if superintelligent AI ever turned against us, it might unleash a biological threat like a custom virus -- super contagious, deadly and slow to show symptoms -- without risking itself. Resistance would be pointless, he said, because "there's no way we're going to prevent it from getting rid of us if it wants to." Instead, he argued that the focus should be on building safeguards early. "What you have to do is prevent it ever wanting to," he said in the podcast. He said this could be done by pouring resources into AI that stays friendly. Still, Hinton confessed he's struggling with the implications: "I haven't come to terms with what the development of superintelligence could do to my children's future. I just don't like to think about what could happen."

Factors like faster computing, quantum AI and self-improving models could accelerate things. Hinton expects superintelligence in 10 to 20 years. Zuckerberg said during that podcast that he believes ASI could arrive within the next two to three years, and OpenAI CEO Sam Altman estimates it'll be somewhere in between those time frames. Most researchers agree we're still missing key ingredients, like more advanced learning algorithms, better hardware and the ability to generalize knowledge like a human brain. IBM points to areas like neuromorphic computing (hardware inspired by human neurons), evolutionary algorithms and multisensory AI as building blocks that might get us there.
Meta's quest for 'personal superintelligence'

Meta launched its Superintelligence Labs in June, led by Alexandr Wang (ex-Scale AI CEO) and Nat Friedman (ex-GitHub CEO), with $14.3 billion invested in Scale AI and $64 billion to $72 billion for data centers and AI infrastructure. Zuckerberg doesn't shy away from Greek mythology, with names like Prometheus and Hyperion for his two AI data superclusters (massive computing centers). He also doesn't talk about artificial superintelligence in abstract terms. Instead, he claims that Meta's specific focus is on delivering "personal super intelligence to everyone in the world." This vision, according to Zuckerberg, sets Meta apart from other research labs that he says primarily concentrate on "automating economically productive work."

Bostrom thinks this isn't mere hype. "It's possible we're only a small number of years away from this," he said of Meta's plans, noting that today's frontier labs "are quite serious about aiming for superintelligence, so it is not just marketing moves." Though still in its early stages, Meta is actively recruiting top talent from companies like OpenAI and Google. Zuckerberg explained in his interview with The Information that the market is extremely competitive because so few people possess the requisite high level of skills. Facebook and Zuckerberg didn't respond to requests for comment.

Should humans subscribe to the idea of superintelligent AI?

There are two camps in the AI world: those who are overly enthusiastic, inflating its benefits and seemingly ignoring its downsides; and the doomers who believe AI will inevitably take over and end humanity. The truth probably lands somewhere in the middle. Widespread public fear and resistance, fueled by dystopian sci-fi and very real concerns over job loss and massive economic disruption, could slow progress toward superintelligence.

One of the biggest problems is that we don't really know what even AGI looks like in machines, much less ASI. Is it the ability to reason across domains? Hold long conversations? Form intentions? Build theories? None of the current models, including Meta's Llama 4 and Grok 4, can reliably do any of this. There's also no agreement on what counts as "smarter than humans." Does it mean acing every test, inventing new math and physics theorems or solving climate change?

And even if we get there -- should we? Building systems vastly more intelligent than us could pose serious risks, especially if they act unpredictably or pursue goals misaligned with ours. Without strict control, such a system could manipulate other systems or even act autonomously in ways we don't fully understand.

Brendan Englot, director of the Stevens Institute for Artificial Intelligence, shared with CNET that he believes "an important first step is to approach cyber-physical security similarly to how we would prepare for malicious human-engineered threats, except with the expectation that they can be generated and launched with much greater ease and frequency than ever before." That said, Englot isn't convinced that current AI can truly outpace human understanding. "AI is limited to acting within the boundaries of our existing knowledge base," Englot tells CNET. "It is unclear when and how that will change."

Regulations like the EU AI Act aim to help, but global alignment is tricky. For example, China's approach differs wildly from the West's. Trust is one of the biggest open questions. A superintelligent system might be incredibly useful, but also nearly impossible to audit or constrain.
And when AI systems draw from biased or chaotic data like real-time social media, those problems compound.

Some researchers believe that given enough data, computing power and clever model design, we'll eventually reach AGI and ASI. Others argue that current AI approaches (especially LLMs) are fundamentally limited and won't scale to true general or superhuman intelligence, pointing out that the human brain, with its roughly 100 trillion connections, remains far more complex than any current model. That's not even accounting for our capacity for emotional experience and depth, arguably humanity's strongest and most distinctive attribute.

But progress moves fast, and it would be naive to dismiss ASI as impossible. If it does arrive, it could reshape science, economics and politics -- or threaten them all. Until then, general intelligence remains the milestone to watch.

If and when superintelligence does become a reality, it could profoundly redefine human life itself. According to Bostrom, we'd enter what he calls a "post-instrumental condition," fundamentally rethinking what it means to be human. Still, he's ultimately optimistic about what lies on the other side, exploring these ideas further in his most recent book, Deep Utopia. "It will be a profound transformation," Bostrom tells CNET.

Castle Biosciences to Present at the Canaccord Genuity 45th Annual Growth Conference

Yahoo · 8 hours ago

FRIENDSWOOD, Texas, July 29, 2025 (GLOBE NEWSWIRE) -- Castle Biosciences, Inc. (Nasdaq: CSTL), a company improving health through innovative tests that guide patient care, today announced that its executive management is scheduled to present a company overview at the Canaccord Genuity 45th Annual Growth Conference on Tuesday, Aug. 12, 2025, at 12:30 p.m. Eastern time.

A live audio webcast of the Company's presentation will be available on Castle Biosciences' website. A replay of the webcast will be available following the conclusion of the live broadcast.

About Castle Biosciences
Castle Biosciences (Nasdaq: CSTL) is a leading diagnostics company improving health through innovative tests that guide patient care. The Company aims to transform disease management by keeping people first: patients, clinicians, employees and investors.

Castle's current portfolio consists of tests for skin cancers, Barrett's esophagus and uveal melanoma. Additionally, the Company has active research and development programs for tests in these and other diseases with high clinical need, including its test in development to help guide systemic therapy selection for patients with moderate-to-severe atopic dermatitis seeking biologic treatment. To learn more, please visit the Company's website and connect with us on LinkedIn, Facebook, X and Instagram.

DecisionDx-Melanoma, DecisionDx-CMSeq, i31-SLNB, i31-ROR, DecisionDx-SCC, MyPath Melanoma, TissueCypher, DecisionDx-UM, DecisionDx-PRAME and DecisionDx-UMSeq are trademarks of Castle Biosciences, Inc.

Investor Contact: Camilla Zuckero, czuckero@
Media Contact: Allison Marshall, amarshall@

Source: Castle Biosciences

Creyos Featured in Alzheimer's Research at AAIC 2025

Yahoo · 8 hours ago

Findings from Western University, QIMR Berghofer, and Rotman Research Institute underscore the power of Creyos in detecting early risk, validating digital testing, and uncovering markers of cognitive resilience. Used in over 400 peer-reviewed studies, Creyos continues to power cutting-edge brain health research around the world.

TORONTO, July 29, 2025 /PRNewswire/ -- Creyos, the digital platform trusted by clinicians and researchers to assess brain health with precision and ease, was featured in three independent research presentations at the 2025 Alzheimer's Association International Conference (AAIC). The studies demonstrate how Creyos is helping researchers advance the science of early detection, understand risk, and explore what protects cognitive health as we age.

Built on more than 30 years of neuroscience research and validated in over 400 peer-reviewed studies, Creyos enables fast, reliable measurement of key cognitive domains, including memory, attention, reasoning, and executive function. For decades, leading research institutions have leveraged the platform to explore large-scale questions that demand high-quality, scientifically reliable data, with three notable studies chosen for presentation at AAIC this year.

"Creyos wasn't born in a boardroom -- it came out of necessity in the lab," said Adrian Owen, Professor of Cognitive Neuroscience and Imaging at the University of Western Ontario, Chief Scientific Officer at Creyos, and co-author of one of the featured AAIC posters. "It started as a way to solve a problem I faced in my own research -- how to measure cognition in a way that's both rigorous and scalable. Since then, it's been used by hundreds of colleagues around the world, contributing to a growing body of work aimed at understanding cognitive health. It's rewarding to see the role it's playing in advancing research across the field."

Highlights from AAIC presentations included:

1. Early Screening Through Digital Tasks (Western University – Adrian Owen, PhD)
Using data from over 4,000 older adults, researchers applied machine learning to identify the two most predictive Creyos tasks for detecting age-related cognitive impairment. A screener using attention and working memory tasks matched mild cognitive impairment rates in a validation sample of over 9,000 adults, and correctly identified 100% of 14 participants with a clinical Alzheimer's disease diagnosis. The study positions Creyos as a powerful digital alternative to traditional dementia screening tools.

2. Genetics, Risk, and Online Testing in the PISA Study (QIMR Berghofer – Michelle Lupton, PhD)
This study, conducted within the Prospective Imaging Study of Aging (PISA), one of the world's largest cohorts focused on early Alzheimer's detection, validated the use of the Creyos platform for online cognitive testing in adults aged 42–75. Researchers compared self-administered Creyos assessments with traditional in-person testing and MRI-derived brain morphology measures. Findings showed strong alignment between online and in-person results, including associations with Alzheimer's-related brain changes and genetic risk. The study underscores the potential of online cognitive testing as a scalable, cost-effective tool for early detection and large-scale research in Alzheimer's disease.

3. Cognitive Resilience in Aging Adults (Rotman Research Institute – Brian Levine, PhD)
Why do some people maintain cognitive function despite age-related pathology or trauma? This study used Creyos to assess over 3,300 individuals across three cohorts. The Grammatical Reasoning task emerged as a potential marker of resilience: strong performance was associated with less excessive reliance on episodic memory strategies and greater resilience following PTSD. These insights point to reasoning ability as a potential buffer against cognitive decline.

These studies demonstrate that Creyos is no stranger to rigorous science. The platform's role in the research being showcased at AAIC reflects not only continued trust among the global scientific community but also growing momentum in how cognitive data can support early detection, care planning, and treatment across settings.

Creyos is used by healthcare providers in primary care, neurology, and behavioral health to screen for cognitive impairment, monitor longitudinal change, and inform care decisions. With nearly 20 million assessments completed and more than 10,000 providers actively using the platform, Creyos is reshaping how brain health is measured, bridging the gap between research insights and real-world care.

About Creyos
Creyos, formerly known as Cambridge Brain Sciences, is a pioneering healthcare technology company dedicated to transforming how healthcare providers assess and manage patient brain health. Supporting clinicians and health systems worldwide, the Creyos platform includes objective online tasks, digital behavioral health screeners, and condition-specific assessments that deliver actionable insights, promote early intervention, and enable evidence-based clinical decisions for various cognitive and behavioral health conditions, including dementia, ADHD, depression, anxiety, and others. Backed by 30 years of research and a normative database of over 85,000 participants, the FDA-registered Creyos platform has been published in over 400 peer-reviewed studies and is recognized as a scientifically validated solution for measuring and monitoring patient brain health. For more information about Creyos, visit the company's website.

Media Contact: creyos@

SOURCE Creyos
