
Aurora Borealis May Hit These 16 States Over the Next Two Days
Of the two days, Tuesday night into Wednesday morning will be your best shot. The Space Weather Prediction Center is forecasting a Kp 5 geomagnetic storm expected to last about six hours starting late Tuesday evening. The Kp index measures disturbances in the horizontal component of Earth's magnetic field, and a Kp 5 rating produces what NOAA calls a "moderate" aurora.
Things will calm down a bit Wednesday night going into Thursday morning, with the aurora pulling back and becoming harder to see unless you're along the northern border with Canada. NOAA is forecasting a Kp 4 geomagnetic storm that night.
NOAA predicts the northern lights will reach into the US on Tuesday and Wednesday.
NOAA
Which states could see the aurora borealis?
Per the Space Weather Prediction Center, Tuesday night's aurora will be visible in Washington state, Idaho, Montana, Wyoming, North and South Dakota, Minnesota, Iowa, Wisconsin, Michigan, the northernmost sections of New York, Vermont, and New Hampshire, and most of Maine.
Alaska and Canada will have the best views by a wide margin, with the entire state of Alaska getting coverage. Technically, a slice of Oregon is also included, bringing the full count to 16 states, but unless you live on the state's northeastern tip, you likely won't see anything.
Much like Earth's weather, space weather prediction can be hit or miss. So, if you're in any of the above states, it's worth taking a look if you're up that late. The storm may be slightly stronger or weaker than forecast, which will affect how far south the northern lights reach. It won't be as strong as the epic show we saw in May 2024.
Tips on viewing the northern lights
The standard space viewing tips all apply here. You'll get a better view if you get away from the city and suburbs to avoid light pollution. Weather will play a role as well, since clouds will obscure the view. If you attempt to photograph the aurora, we recommend using long exposure times to give your camera more time to soak in the light.
Other than that, you'll want to look toward the northern horizon to give yourself the best chance at a good view since that's where the northern lights originate.
Yahoo
an hour ago
Mass Burn: SpaceX Deorbits Nearly 500 Starlink Satellites in 6 Months
Over a six-month stretch, hundreds of Starlink satellites met a fiery end in the Earth's atmosphere as SpaceX retired its aging hardware. From December to May, SpaceX deorbited 472 Starlink satellites, according to a new filing with the Federal Communications Commission. That means the company deorbited about 2.6 satellites per day, a notable increase after having deorbited only 73 satellites in the previous six-month period.

SpaceX's satellites are designed to orbit the Earth for five years before they're retired and burn up in the Earth's atmosphere. In the FCC filing, SpaceX said 430 of the deorbited satellites belonged to the first-generation Starlink network. However, most of the satellites that reentered the atmosphere did so less than five years after beginning operations. The remaining deorbited satellites belonged to the second-generation Starlink network.

SpaceX didn't immediately respond to a request for comment, making it unclear why so many satellites were sent plunging back to Earth. However, the Starlink network has grown to nearly 8,000 satellites, according to astronomer Jonathan McDowell, who tracks satellite launches. In January, McDowell noticed the drastic spike in Starlink deorbits and estimated the company was "retiring and incinerating about 4 or 5 Starlinks every day." Since then, the number of Starlink satellites burning up in the atmosphere has declined, he told PCMag, although his stats show another 200 satellites are facing disposal.

The large number of Starlink disposals raises questions about the potential impact. While Starlink satellites are designed to disintegrate as they burn up in the atmosphere, SpaceX in February revealed that small and mainly harmless debris fragments can continue flying toward Earth.
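As a back-of-the-envelope check on the figures above (assuming a six-month reporting window of roughly 182 days, which is an approximation, not a number from the filing), the reported deorbit rates work out as follows:

```python
# Rough deorbit-rate arithmetic from the figures reported above.
DAYS = 182  # approximate length of a six-month period (assumption)

recent = 472    # satellites deorbited December through May
previous = 73   # satellites deorbited in the prior six months

recent_rate = recent / DAYS
previous_rate = previous / DAYS

print(f"Recent rate:   {recent_rate:.1f} satellites/day")    # ~2.6
print(f"Previous rate: {previous_rate:.1f} satellites/day")  # ~0.4
print(f"Increase:      {recent / previous:.1f}x")            # ~6.5x
```

That sixfold-plus jump between reporting periods is what prompted McDowell's January observation.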
In rare instances, they slip through, like the "2.5 kg piece of aluminum" from a Starlink satellite that fell near a farm in Canada. Still, the company says the chance of its newest Starlink satellites causing human harm through falling debris is "less than 1 in 100 million," thanks to improved designs and ongoing changes meant to make the hardware fully disintegrate during reentry.

Nevertheless, astronomers and scientists have been concerned about those same Starlink satellites releasing chemicals into the atmosphere as they burn up, which might have unforeseen climate or ozone-layer impacts. In response, a group of astronomers last year called on the FCC to pause Starlink launches until the environmental effects of large satellite constellations can be fully investigated. So far, the new Republican-led FCC hasn't said whether it'll embark on such a study. Meanwhile, SpaceX has previously said it designed the Starlink network to minimize any environmental impact.


Medscape
an hour ago
Exercise Intervention Boosts Colon Cancer Survival Benefits
This transcript has been edited for clarity.

Hello, everyone. I'm Dr Bishal Gyawali, associate professor of oncology at Queen's University, Kingston, Canada. I'm very happy to share with you some of the most exciting data that I just saw at the plenary session at ASCO 2025. Before that, I'm going to talk to you about a fantastic new drug called exercisumab. I'm joking, of course. Exercise has been shown to improve the lives of patients with colon cancer. I'm joking that if there were a drug called exercisumab, the data would be so compelling that we'd all want to use it and fund it today. Because this is not a drug and it's about exercise, I see some challenges in implementation. I hope that I'm able to convince you that the data are really compelling and that we should make an effort so that our health systems integrate this as a part of cancer care for patients with high-risk stage II and stage III colon cancer who receive adjuvant chemotherapy.

The trial I'm talking about is called the CHALLENGE trial, which was not presented at the plenary but should have been. In this trial, patients with high-risk stage II and stage III colon cancer who had completed adjuvant chemotherapy were randomized to a structured exercise program vs a standard-of-care arm. The standard-of-care arm patients received health education but did not receive a structured exercise program. The goal of the structured exercise program was to improve physical activity by at least 10 MET-hours compared to the patients' baseline. The primary endpoint was disease-free survival.

Disease-free survival was significantly improved, and overall survival was also significantly improved. The 5-year disease-free survival rates improved by almost 7%, and the 8-year overall survival rates improved by a similar amount. The hazard ratio for disease-free survival was 0.72, and the hazard ratio for overall survival was 0.63. These are very compelling results.
If you compare these results with results from other trials, you'll see that this is a no-brainer. If this were a drug, you would want to use it today.

There are some nuances about this trial that I want to highlight. When we talk about the results, some of the comments were, 'Oh yes, I have been asking my patients to exercise anyway.' Exercise improves quality of life, it reduces weight, and these are all known to benefit patients. I have been telling my patients to exercise, but this trial is not about telling patients to exercise. This trial is about having a formal, structured exercise program. There are particular details. Patients need to have an in-person visit with a therapist every 2 weeks for the first year and then every month for the next 2 years, so it's a 3-year program. It's a scientifically designed and tailored program. It's not just saying, oh, you should exercise. In fact, saying you should exercise and giving some health education was the control arm of this trial, not the interventional arm. The control arm patients were told about the trial, the potential benefits of exercise, and why they should enroll, and they were given health education materials. An interesting observation is that even the control arm patients had improvements in their physical functioning, VO2, and all those parameters from baseline to subsequent visits.

One limitation is the adherence rate to exercise. We see that the adherence rate kept falling with time. I think that by the end of 3 years, the adherence rate to the exercise program was around 60%-65%, in that ballpark, which is a limitation. Having said that, the analysis accounts for all of that. Despite that limitation, we are seeing this substantial benefit.
If you want to compare that with the ATOMIC trial, which was a plenary presentation of immunotherapy plus FOLFOX for patients who needed adjuvant FOLFOX for stage III colon cancer, of course, the addition of atezolizumab to FOLFOX improved disease-free survival rates. The primary endpoint here was 3-year disease-free survival, and it improved significantly. It was a plenary, and people were making the argument that this should immediately change practice. If you compare that with the exercise trial I just discussed: A, think about the added toxicities; B, think about the added cost; and C, think about how feasible it is to implement. I think it's a no-brainer that we need to have health systems fund a structured exercise program for our patients with colon cancer.

Yes, the atezolizumab data and the ATOMIC trial data look very interesting, and this is one of the first advances in the treatment of adjuvant colon cancer in a long time. This is for patients with microsatellite instability-high status. We don't have overall survival results yet. Disease-free survival is a much more reliable predictor of overall survival in this particular setting. I believe that overall survival might be positive, but we also need to know what percentage of these patients got immunotherapy when they relapsed, because immunotherapy is already standard of care for these patients at relapse. The other point about this trial is, do they all actually need 1 year of atezolizumab? Probably not. As the discussant highlighted in her talk, in many settings we are now using neoadjuvant strategies. Using two or three cycles might be enough.

The broader point that I'm trying to make is contrasting these two studies and inviting you to think about how different they are, even in terms of magnitude of benefit. The exercise trial has overall survival, not just disease-free survival, at an 8-year time point.
When I asked Dr Booth about the cost of this intervention, he said that over the whole 3-year period, it might be around 3000 Canadian dollars. This trial was conducted mostly in Canada and Australia. Compare that with atezolizumab, where a single month of atezolizumab alone is going to cost $15,000. That's just a perspective I wanted to put forward.

One more thing I wanted to talk about today is the SERENA-6 trial, which was discussed at the plenary session. This is a trial for patients with estrogen receptor-positive, HER2-negative metastatic breast cancer who have been on a CDK4/6 inhibitor plus aromatase inhibitor for 6 months. They were then tested with ctDNA to detect ESR1 mutations early, and if a mutation was detected, they were randomized to either continue the same treatment, which is the control arm, or switch to the new drug. The primary endpoint here was progression-free survival.

This was debated often during the session. We have so many debates about progression-free and overall survival, but for this particular trial, progression-free survival makes no sense because this is just detecting relapse early. Detecting relapse early does not always mean that you need to intervene early. Of course, if you are intervening early, then you are going to prolong time to tumor progression. Progression-free survival in this sense is more like time on treatment with this drug rather than true progression-free survival. You're just changing treatment early, and the control arm patients are not getting that treatment when they progress. Measuring progression-free survival alone here felt similar to measuring CA-125, or whatever tumor markers we measure, then instituting treatment early and claiming that patients have a longer time on treatment, when in fact it's just lead time bias or intervening early without knowing that it's going to improve outcomes.

A final trial from the plenary session was the MATTERHORN trial.
I want to bring that up as well because this trial was investigating durvalumab plus perioperative FLOT in patients with esophageal cancers. This trial showed a significant improvement in event-free survival but has not yet shown an overall survival improvement. It may or may not translate into one. The discussant did not cover the limitations of this trial well, and that's why I wanted to bring it up.

There are several factors to consider here. There are other trials in similar settings where event-free or disease-free survival improved but overall survival did not. There is no point in getting super excited about this because it may not translate to overall survival, just like other immunotherapy trials in this space. The other thing is, we need to know what treatments patients are getting at the time of progression or relapse. Are they getting the right treatment? If they're not, then any survival difference can simply be a function of the control arm patients not getting the right treatment at the time of relapse. If we compare these results with results of other immunotherapy trials, I don't think they are substantially different. Yes, an event-free survival improvement is important, but especially in this setting, in this disease, we have seen other trials where disease-free or event-free survival improvements have not necessarily led to an overall survival improvement. We need to be asking ourselves, can we claim that this is already practice changing without having those results? I don't think that's the case.

Those are some of my thoughts from this year's plenary session at ASCO 2025. Thank you.


Forbes
2 hours ago
Are We Finally Ceding Control To The Machine? The Human Costs Of AI Transformation
AI robot controlling a puppet businessman.

Generative artificial intelligence has exploded into the mainstream. Since its introduction, it has transformed the way individuals work, create, and interact with technology. But is this adoption useful? While the technology is saving people considerable time and money, will its effects come with repercussions for human health and economic displacement?

Jing Hu isn't your typical AI commentator. Trained as a biochemist, she traded the lab bench for the wild west of tech, spending a decade building products before turning her sights on AI research and journalism. Hu's Substack publication, 2nd Order Thinkers, examines AI's impact on the individual and commercial world; as Hu states, it's about 'thinking for yourself amid the AI noise.' In a recent episode of Tech Uncensored, I spoke with Jing Hu about the cognitive impacts of increasing usage of chatbots built on LLMs. Chatbots like Gemini, Claude, and ChatGPT continue to herald significant progress but are still riddled with inaccurate, nonsensical, and misleading information — hallucinations. The content generated can be harmful, unsafe, and often misused. LLMs today are not fully trustworthy by the standards we should expect for full adoption of any software product.

Are Writing and Coding Occupations at Risk?

In her recent blog, Why Thinking Hurts After Using AI, Hu writes, 'Seduced by AI's convenience, I'd rush through tasks, sending unchecked emails and publishing unvetted content,' and surmises that 'frequent AI usage is actively reshaping our critical thinking patterns.' Hu references an OpenAI and UPenn study from 2023 that looked at the labor market impact of these LLMs. It found that tasks involving science and critical thinking would be safe, while those involving programming and writing would be at risk. Hu cautions, 'however, this study is two years old, and at the pace of AI, it needs updating.'
She explains, 'AI is very good at drafting articles, summarizing and formatting. However, we humans are irreplaceable when it comes to strategizing or discussing topics that are highly domain specific. Various research found that AI's knowledge is only surface level. This becomes especially apparent when it comes to originality.' Hu explains that when crafting marketing copy, 'we initially thought AI could handle all the writing. However, we noticed that AI tends to use repetitive phrases and predictable patterns, often constructing sentences like, "It's not about X, it's about Y," or overusing em-dashes. These patterns are easy to spot and can make the writing feel dull and uninspired.'

For companies like Duolingo, whose CEO promises an 'AI-first company,' replacing contract employees is perhaps a knee-jerk decision whose consequences have yet to fully play out. The employee memo clarified that 'headcount will only be given if a team cannot automate more of their work.' The company was willing to take 'small hits on quality than move slowly and miss the moment.' For companies like this, Hu argues that they will run into trouble very soon and begin rehiring just to fix AI-generated bugs or security issues.

Generative AI for coding can be inaccurate because the models were trained on GitHub or similar databases. She explains, 'Every database has its own quirks and query syntax, and many contain hidden data or schema errors. If you rely on AI-generated sample code to wire them into your system, you risk importing references to tables or drivers that don't exist, using unsafe or deprecated connection methods, and overlooking vital error-handling or transaction logic. These mismatches can cause subtle bugs, security gaps, and performance problems—making integration far more error-prone than it first appears.'

Another important consideration is cybersecurity, which must be approached holistically.
'If you focus on securing just one area, you might fix a vulnerability but miss the big picture,' she said. She points to a third issue: junior developers using tools like Copilot often become overly confident in the code these tools generate. And when asked to explain their code, many are unable to, because they don't truly understand what was produced.

Hu concedes that AI is good at producing code quickly; however, coding is only a part (25%-75%) of software development. 'People often ignore the parts that we do need: architecture, design, security. Humans are needed to configure the system properly for the system to run as a whole.' She explains that the parts of coding that will be replaced by AI are the routine and repetitive ones, so this is an opportune moment for developers to transition, advising, 'To thrive in the long term, how should we — as thinking beings — develop our capacity for complex, non-routine problem-solving? Specifically, how do we cultivate skills for ambiguous challenges that require analysis beyond pattern recognition (where AI excels)?'

The Contradiction of Legacy Education and the Competition for Knowledge Creation

In a recent article from the NY Times, 'Everyone Is Cheating Their Way Through College,' a student remarked, 'With ChatGPT, I can write an essay in two hours that normally takes 12.' Cheating is not new, but as one student exclaimed, 'the ceiling has been blown off.' A professor remarks, 'Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate.' For Hu, removing AI from the equation does not negate cheating. Those who genuinely want to learn will choose how to use the tools wisely. At a recent panel discussion at Greenwich University, Hu responded to a professor's question about whether to ban students from using AI: 'Banning AI in education misses the point.
AI can absolutely do good in education, but we need to find a way so students don't offload their thinking to AI and lose the purpose of learning itself. The goal should be fostering critical thinking, not just policing the latest shortcut.'

Another professor posed the question, 'If a student is not a native English speaker, but the exam requires them to write an essay in English, which approach is better?' Hu commented that not one professor on the panel could answer the question. The situation was unfathomable and far removed from situations covered by current policy and governance. She observes, 'There is already a significant impact on education, and many important decisions have yet to be made. It's difficult to make clear choices right now because so much depends on how technology will evolve and how fast the government and schools can adapt.'

For educational institutions that have traditionally been centers of knowledge creation, the rise of AI is a powerful force — one that often feels more like a competitor than a tool. As a result, it has left schools struggling to determine how AI should be integrated to support student learning. Meanwhile, schools face a dilemma: many have been using generative AI to develop lessons and curricula, and even to review students' performance, yet institutions remain uncertain and inconsistent in their overall approach to AI. On a broader scale, the incentive structures within education are evolving. The obsession with grades has 'prevented teachers from using assessments that would support meaningful learning.' A shift toward learning and critical thinking may be the hope students need to tackle an environment with pervasive AI.

MIT Study Cites Cognitive Decline With Increasing LLM Use

MIT Media Lab produced a recent study that monitored the brain activity of about 60 research subjects.
These participants were asked to write essays on given topics and were split into three groups: 1) use an LLM only, 2) use a traditional search engine only, 3) use only their brain and no other external aid. The conclusion: 'LLM users showed significantly weaker neural connectivity, indicating lower cognitive effort and engagement compared to others.' Brain connectivity scaled down with the amount of external support: the MIT brain scans showed writing with Google dimming brain connectivity by up to 48%, while ChatGPT pulled the plug, with 55% less neural connectivity.

Hu noticed that the term 'cognitive decline' was misleading, since the study was conducted over only a four-month period; we've yet to see the long-term effects. However, she acknowledges that one study about how humans develop amnesia suggests just this: either we use it or we lose it. She adds, 'While there are also biological factors involved such as changes in brain proteins, reduced brain activity is thought to increase the risk of diseases that affect memory.'

The MIT study found that the brain-only group showed much more active brain waves compared to the search-only and LLM-only groups. In the latter two groups, participants relied on external sources for information. The search-only group still needed some topic understanding to look up information, and, like using a calculator, you must understand its functions to get the right answer. In contrast, the LLM-only group simply had to remember the prompt used to generate the essay, with little to no actual cognitive processing involved. As Hu noted, 'there was little mechanism formulating when only AI was used in writing an essay. This ease of using AI, just by inputting natural language, is what makes it dangerous in the long run.'

AI Won't Replace Humans, but Humans Using AI Will — Is Bull S***!

Hu pointed to this phrase that has been circulating on the web: 'AI won't replace humans, but humans using AI will.'
She argues that this kind of pressure, engineered from a position of fear, will compel people to use AI, explaining, 'If we refer to those studies on AI and critical thinking released last year, it is less about whether we use AI and more about our mindset, which determines how we interact with AI and what consequences we encounter.' Hu pointed to a list of concepts she curated from various studies, which she calls AI's traits: ways AI could impact our behavior. Hu stresses that we need to be aware of these traits when we work with AI on a daily basis and be mindful that we maintain our own critical thinking. 'Have a clear vision of what you're trying to achieve and continue to interrogate output from AI,' she advises.

Shifting the Narrative So Humans Are AI-Ready

Humanity is caught in a tug of war between the provocation to adopt or be left behind and the warning to minimize dependence on a system that is far from trustworthy. When it comes to education, Hu, in her analysis of the MIT study, advocates for delaying AI integration. First, invest in independent, self-directed learning to build the capacity for critical thinking, memory retention, and cognitive engagement. Second, make concerted efforts to use AI as a supplement, not a substitute. Finally, teach students to be mindful of AI's cognitive costs and lingering consequences, encouraging them to engage critically and to know when to rely on AI and when to intervene with their own judgement. She realizes, 'In the education sector, there is a gap between the powerful tool and understanding how to properly leverage it. It's important to develop policy that sets boundaries for both students and faculty for responsible AI use.'

Hu insists that implementing AI in the workforce needs to be done with tolerance and compassion. She points to a recent manifesto by Shopify CEO Tobi Lütke that called for immediate and universal AI adoption within the company — a new, uncompromising standard for current and future employees.
The memo framed AI as the baseline for work: integrated into everyday tasks, improving productivity, and setting performance standards, all of which mandates a total acceptance of the technology. Hu worries that CEOs like Lütke are wielding AI to intimidate employees into working harder, or else. She alluded to a section that demanded employees demonstrate why a task could not be accomplished with AI before asking for more staff or budget, asserting, 'This manifesto is not about innovation at all. It feels threatening, and if I were an employee of Shopify, I would be in constant fear of losing my job. That kind of speech is unnecessary.' Hu emphasized that this would only discourage employees further, and that it would embolden CEOs to keep pushing the narrative that AI will inevitably drive layoffs.

She cautions CEOs to pursue an understanding of AI's limitations to ensure sustainable benefits for their organizations. She encourages CEOs to pursue a practical AI strategy that complements workforce adoption and considers current data gaps, systems, and cultural limitations, which will have more sustainable payoffs. Many CEOs today may be tempted by the message that with AI 'we can achieve anything,' but this deviates from reality. Instead, develop transparent communication in lockstep with each AI implementation that clarifies how AI will be leveraged to meet those goals and what this will mean for the organization.

Finally, for individuals, Hu advises, 'To excel in a more pervasive world of AI, you need to clearly understand your personal goals and commit your effort to the more challenging ones requiring sustained mental effort. This is a significant step to start building the discipline and skills needed to succeed.' There was no mention, this time, of 'AI' in Hu's counsel. And rightly so — humans should own their efforts and outcomes. AI is a mere sidekick.