Major reports about how climate change affects the US are removed from websites

WASHINGTON — Legally mandated U.S. national climate assessments seem to have disappeared from the federal websites built to display them, making it harder for state and local governments and the public to learn what to expect in their backyards from a warming world.
Scientists said the peer-reviewed authoritative reports save money and lives. Websites for the national assessments and the U.S. Global Change Research Program were down Monday and Tuesday with no links, notes or referrals elsewhere. The White House, which was responsible for the assessments, said the information will be housed within NASA to comply with the law, but gave no further details.
Searches for the assessments on NASA websites did not turn them up. NASA did not respond to requests for information. The National Oceanic and Atmospheric Administration, which coordinated the information in the assessments, did not respond to repeated inquiries.
'It's critical for decision makers across the country to know what the science in the National Climate Assessment is. That is the most reliable and well-reviewed source of information about climate that exists for the United States,' said University of Arizona climate scientist Kathy Jacobs, who coordinated the 2014 version of the report.
'It's a sad day for the United States if it is true that the National Climate Assessment is no longer available,' Jacobs said. 'This is evidence of serious tampering with the facts and with people's access to information, and it actually may increase the risk of people being harmed by climate-related impacts.'
Harvard climate scientist John Holdren, who was President Obama's science advisor and whose office directed the assessments, said after the 2014 edition he visited governors, mayors and other local officials who told him how useful the 841-page report was. It helped them decide whether to raise roads, build seawalls and even move hospital generators from basements to roofs, he said.
'This is a government resource paid for by the taxpayer to provide the information that really is the primary source of information for any city, state or federal agency who's trying to prepare for the impacts of a changing climate,' said Texas Tech climate scientist Katharine Hayhoe, who has been a volunteer author for several editions of the report.
Copies of past reports are still squirreled away in NOAA's library. NASA's open science data repository includes dead links to the assessment site.
The most recent report, issued in 2023, included an interactive atlas that zoomed down to the county level. It found that climate change is affecting people's security, health and livelihoods in every corner of the country in different ways, with minority and Native American communities often disproportionately at risk.
The 1990 Global Change Research Act requires a national climate assessment every four years and directs the president to establish an interagency United States Global Change Research Program. In the spring, the Trump administration told the volunteer authors of the next climate assessment that their services weren't needed and ended the contract with the private firm that helps coordinate the website and report.
Additionally, NOAA's main climate.gov website was recently forwarded to a different NOAA website. Social media and blogs at NOAA and NASA about climate impacts for the general public were cut or eliminated.
'It's part of a horrifying big picture,' Holdren said. 'It's just an appalling whole demolition of science infrastructure.'
The national assessments are more useful than international climate reports put out by the United Nations every seven or so years because they are more localized and more detailed, Hayhoe and Jacobs said.
The national reports are not only peer reviewed by other scientists, but examined for accuracy by the National Academy of Sciences, federal agencies, the staff and the public.
Hiding the reports would be censoring science, Jacobs said.
And it's dangerous for the country, Hayhoe said, comparing it to steering a car on a curving road by only looking through the rearview mirror: 'And now, more than ever, we need to be looking ahead to do everything it takes to make it around that curve safely. It's like our windshield's being painted over.'
___ Associated Press writer Will Weissert contributed to this report.
___
The Associated Press' climate and environmental coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.

Related Articles

I was the first Black man to walk in space. My career as an astronaut was hard for my family.

Business Insider · 41 minutes ago

This as-told-to essay is based on a conversation with Dr. Bernard Harris, author of "Embracing Infinite Possibilities: Letting Go Of Fear To Reach Your Highest Potential". It has been edited for length and clarity.

I was one of the original Star Trek enthusiasts. I was about 10 when the show debuted, and I would rush home each week to watch a new episode. My imagination ran wild with the idea of space and being in space. Then, when I was 13, NASA landed on the moon. When Neil Armstrong said those famous words — "one small step for man, one giant leap for mankind" — it really set my passion off. I studied medicine, always with the goal of becoming an astronaut. In 1990, when I was about 34, I was selected for the astronaut program. Over the next four years, I flew twice into space and became the first Black man to do a space walk.

My career as an astronaut was tough on my family

Many people don't realize that the space shuttle weighs 5 million pounds. To haul that into space, we light five engines that produce 7.5 million pounds of thrust. Once those are ignited, you're leaving the planet in a hurry. The first time I went to space, my daughter was about 8 months old. Her mom — my life partner — had the hardest job, raising our daughter and my three stepchildren. My work took away from family time. When it was time to launch, my family watched nervously, hoping everything went right. Inside the shuttle, I was laser-focused on making sure everything went right, so I didn't have time to worry.

Space was incredibly peaceful

Later, when the hatch opened for my space walk, I felt like I was falling, just for a second. My brain expected to feel gravity's pull, but after a moment, I adjusted to the floating sensation. I was tethered to a robotic arm 35 feet above the space shuttle. Below, I could see the shuttle and the Earth beyond that. Surrounding it all was the clearest view of the Milky Way galaxy that you could ever imagine. It was an incredible view for a human. I was struck by the silence. With no air to transmit sound, space is completely quiet. Floating there, I had a great sense of peace. It was even more wonderful than I expected.

I want to help others follow in my footsteps

The year after that walk, I left NASA, but continued to work around the space industry. I also practiced medicine and saw patients at community clinics. That's been an important way for me to give back. I've had experiences that very few people get, particularly people of color. Part of my legacy is to translate that experience and use it to create awareness here on Earth. That means sharing my story and also helping create opportunities for other people to follow in my footsteps. I founded The Harris Foundation to focus on that legacy. Our work is structured around three pillars of success: education, health, and wealth. Health is important to me, as a physician, and I want everyone to have access to quality healthcare. In order to participate equally in the American dream, we need to build wealth, not only individually but generationally. My family was poor when I was a child, but today I have built a great deal of wealth. I hope to help others do the same. Doing so starts with education. My father only had an 11th-grade education, but my mother's college degree gave her power. It changed not only her life, but mine. I may not have been so successful if she hadn't had her education. I've seen how education can change the trajectory of a person and their descendants.

A Piece Of Mars Is Going Up For Sale This Month—And Could Break Records

Forbes · an hour ago

A 54-pound meteorite from Mars, believed to be the largest piece of the planet currently on Earth, will be sold to the highest bidder later this month in a Sotheby's auction that is expected to rake in between $2 million and $4 million.

Called NWA 16788, the specimen was found in November 2023 in Niger's remote Agadez region, part of the Sahara Desert. The 'once-in-a-generation find' has a red hue and a glassy fusion crust that Sotheby's said suggests it was blasted from the surface of Mars by an asteroid impact so powerful it turned some of the meteorite's minerals into glass. There are roughly 77,000 officially recognized meteorites on Earth and, of those, only 400 are Martian, according to Sotheby's. The hunk of rock is expected to fetch between $2 million and $4 million when it is sold July 16, making it the most valuable meteorite ever offered at auction.

6.59%: that's the percentage of Martian material on Earth that this meteorite accounts for. The 400 recognized Martian meteorites have a combined total weight of roughly 825 pounds, meaning NWA 16788 makes up almost 7% of all Martian material ever found on our planet.

Surprising Fact: Only about 15 meteorites are discovered in North America per year, according to Sotheby's.

Tangent: Until NWA 16788 goes up for sale, the Fukang meteorite holds the title of the most expensive ever offered at auction. The specimen was found in 2000 in China and is classified as a pallasite — a type of stony-iron meteorite with olivine crystals. It's thought to be over 4.5 billion years old, possibly older than Earth, and weighs more than 2,200 pounds. In 2008, a 925-pound slice of the Fukang meteorite was valued at around $2 million and put up for auction by Bonhams in New York. It didn't sell.

Are We Finally Ceding Control To The Machine? The Human Costs Of AI Transformation

Forbes · 2 hours ago

Generative artificial intelligence has exploded into the mainstream. Since its introduction, it has transformed the ways individuals work, create, and interact with technology. But is this adoption useful? While the technology is saving people considerable time and money, will its effects have repercussions on human health and economic displacement?

Jing Hu isn't your typical AI commentator. Trained as a biochemist, she traded the lab bench for the wild west of tech, spending a decade building products before turning her sights on AI research and journalism. Hu's Substack publication, 2nd Order Thinkers, examines AI's impact on the individual and commercial world — as Hu states, 'thinking for yourself amid the AI noise.' In a recent episode of Tech Uncensored, I spoke with Jing Hu about the cognitive impacts of increasing usage of chatbots built on LLMs. Chatbots like Gemini, Claude, and ChatGPT continue to herald significant progress, but are still riddled with inaccurate, nonsensical and misleading information — hallucinations. The content generated can be harmful, unsafe, and often misused. LLMs today are not fully trustworthy by the standards we should expect for full adoption of any software product.

Are Writing and Coding Occupations at Risk?

In her recent blog post, 'Why Thinking Hurts After Using AI,' Hu writes, 'Seduced by AI's convenience, I'd rush through tasks, sending unchecked emails and publishing unvetted content,' and surmises that 'frequent AI usage is actively reshaping our critical thinking patterns.' Hu references a 2023 OpenAI and UPenn study on the labor market impact of these LLMs. It found that tasks involving science and critical thinking would be safe, while those involving programming and writing would be at risk. Hu cautions, 'however, this study is two years old, and at the pace of AI, it needs updating.'
She explains, 'AI is very good at drafting articles, summarizing and formatting. However, we humans are irreplaceable when it comes to strategizing or discussing topics that are highly domain specific. Various research found that AI's knowledge is only surface level. This becomes especially apparent when it comes to originality.' Hu adds that when crafting marketing copy, 'we initially thought AI could handle all the writing. However, we noticed that AI tends to use repetitive phrases and predictable patterns, often constructing sentences like, "It's not about X, it's about Y," or overusing em-dashes. These patterns are easy to spot and can make the writing feel dull and uninspired.'

For companies like Duolingo, whose CEO promises an 'AI-first company,' replacing contract employees is perhaps a knee-jerk decision whose consequences have yet to be brought to bear. The employee memo clarified that 'headcount will only be given if a team cannot automate more of their work,' and that the company was willing to take 'small hits on quality than move slowly and miss the moment.' Hu argues that companies like this will run into trouble very soon and begin rehiring just to fix AI-generated bugs or security issues.

Generative AI for coding can be inaccurate because the models were trained on GitHub or similar databases. She explains, 'Every database has its own quirks and query syntax, and many contain hidden data or schema errors. If you rely on AI-generated sample code to wire them into your system, you risk importing references to tables or drivers that don't exist, using unsafe or deprecated connection methods, and overlooking vital error-handling or transaction logic. These mismatches can cause subtle bugs, security gaps, and performance problems—making integration far more error-prone than it first appears.' Another important consideration is cybersecurity, which must be approached holistically.
'If you focus on securing just one area, you might fix a vulnerability but miss the big picture,' she said. She points to a third issue: junior developers using tools like Copilot often become overly confident in the code these tools generate, and when asked to explain their code, many are unable to do so because they don't truly understand what was produced. Hu concedes that AI is good at producing code quickly, but notes that coding is only a part (25-75%) of software development: 'People often ignore the parts that we do need: architecture, design, security. Humans are needed to configure the system properly for the system to run as a whole.' The parts of coding AI will replace are the routine and repetitive ones, she explains, so this is an opportune moment for developers to transition, advising: 'To thrive in the long term, how should we — as thinking beings — develop our capacity for complex, non-routine problem-solving? Specifically, how do we cultivate skills for ambiguous challenges that require analysis beyond pattern recognition (where AI excels)?'

The Contradiction of Legacy Education and the Competition for Knowledge Creation

In a recent New York Times article, 'Everyone Is Cheating Their Way Through College,' a student remarked, 'With ChatGPT, I can write an essay in two hours that normally takes 12.' Cheating is not new, but as one student exclaimed, 'the ceiling has been blown off.' A professor remarked, 'Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate.' For Hu, removing AI from the equation does not negate cheating; those who genuinely want to learn will choose how to use the tools wisely. At a recent panel discussion at Greenwich University, Hu responded to a professor's question about whether to ban students from using AI: 'Banning AI in education misses the point. AI can absolutely do good in education, but we need to find a way so students don't offload their thinking to AI and lose the purpose of learning itself. The goal should be fostering critical thinking, not just policing the latest shortcut.'

Another professor posed the question: 'If a student is not a native English speaker, but the exam requires them to write an essay in English, which approach is better?' Not one professor on the panel could answer, Hu said. The situation was unfathomable and far removed from anything covered by current policy and governance. She observes, 'There is already a significant impact on education and many important decisions have yet to be made. It's difficult to make clear choices right now because so much depends on how technology will evolve and how fast the government and schools can adapt.'

For educational institutions that have traditionally been centers of knowledge creation, the rise of AI is a powerful force — one that often feels more like a competitor than a tool. It has left schools struggling to determine how AI should be integrated to support student learning. Meanwhile, schools face a dilemma: many have been using generative AI to develop lessons and curricula, and even to review students' performance, yet institutions remain uncertain and inconsistent in their overall approach to AI. On a broader scale, the incentive structures within education are evolving. The obsession with grades has 'prevented teachers from using assessments that would support meaningful learning.' A shift toward learning and critical thinking may be the hope students need to tackle an environment with pervasive AI.

MIT Study Cites Cognitive Decline with Increasing LLM Use

The MIT Media Lab produced a recent study that monitored the brain activity of about 60 research subjects.
These participants were asked to write essays on given topics and were split into three groups: 1) use an LLM only; 2) use a traditional search engine only; 3) use only their brain and no other external aid. The conclusion: 'LLM users showed significantly weaker neural connectivity, indicating lower cognitive effort and engagement compared to others.' Brain connectivity scales down with the amount of external support: the MIT brain scans show that writing with Google dims your brain by up to 48%, while ChatGPT pulls the plug, with 55% less neural connectivity.

Among the other findings: Hu noted that the term 'cognitive decline' is misleading, since the study was conducted over only a four-month period; we've yet to see the long-term effects. However, she acknowledges that one study on how humans develop amnesia suggests just this: either we use it or we lose it. She adds, 'While there are also biological factors involved such as changes in brain proteins, reduced brain activity is thought to increase the risk of diseases that affect memory.'

The MIT study found that the brain-only group showed much more active brain waves than the search-only and LLM-only groups, whose participants relied on external sources for information. The search-only group still needed some topic understanding to look up information — like using a calculator, you must understand its functions to get the right answer. In contrast, the LLM-only group simply had to remember the prompt used to generate the essay, with little to no actual cognitive processing involved. As Hu noted, 'there was little mechanism formulating when only AI was used in writing an essay. This ease of using AI, just by inputting natural language, is what makes it dangerous in the long run.'

'AI Won't Replace Humans, but Humans Using AI Will' — Is Bull S***!

Hu pointed to this phrase that has been circulating on the web: 'AI won't replace humans, but humans using AI will.'
She argues that this kind of pressure, engineered from a position of fear, will compel people to use AI, explaining, 'If we refer to those studies on AI and critical thinking released last year, it is less about whether we use AI and more about our mindset, which determines how we interact with AI and what consequences we encounter.' Hu has curated, from various studies, a list of concepts she calls AI's traits — ways AI could impact our behavior. She stresses that we need to be aware of these traits when we work with AI on a daily basis and be mindful that we maintain our own critical thinking. 'Have a clear vision of what you're trying to achieve and continue to interrogate output from AI,' she advises.

Shifting the Narrative So Humans Are AI-Ready

Humanity is caught in a tug of war between the provocation to adopt or be left behind and the warning to minimize dependence on a system that is far from trustworthy. When it comes to education, Hu, in her analysis of the MIT study, advocates for delaying AI integration. First, invest in independent, self-directed learning to build the capacity for critical thinking, memory retention, and cognitive engagement. Second, make concerted efforts to use AI as a supplement, not a substitute. Finally, teach students to be mindful of AI's cognitive costs and lingering consequences, and encourage them to engage critically — knowing when to rely on AI and when to intervene with their own judgement. She notes, 'In the education sector, there is a gap between the powerful tool and understanding how to properly leverage it. It's important to develop policy that sets boundaries for both students and faculty for responsible AI use.'

Hu insists that implementing AI in the workforce needs to be done with tolerance and compassion. She points to a recent manifesto by Shopify CEO Tobi Lütke that called for immediate and universal AI adoption within the company — a new, uncompromising standard for current and future employees. The memo made AI the baseline for work: integrated into workflows, improving productivity, and setting performance standards, with total acceptance of the technology mandated. Hu worries that CEOs like Lütke are wielding AI to intimidate employees into working harder, or else. She alluded to one section demanding that employees demonstrate why a task could not be accomplished with AI before asking for more staff or budget, asserting, 'This manifesto is not about innovation at all. It feels threatening, and if I were an employee of Shopify, I would be in constant fear of losing my job. That kind of speech is unnecessary.' Hu emphasized that this would only discourage employees further, and would embolden CEOs to keep pushing the narrative that AI will inevitably drive layoffs.

She cautions CEOs to pursue an understanding of AI's limitations to ensure sustainable benefits for their organizations, and encourages them to adopt a practical AI strategy that complements workforce adoption and accounts for current data gaps, systems, and cultural limitations. Many CEOs today may be tempted to push the message that with AI 'we can achieve anything,' but this deviates from reality. Instead, develop transparent communication in lockstep with each AI implementation, clarifying how AI will be leveraged to meet those goals and what this will mean for the organization.

Finally, for individuals, Hu advises, 'To excel in a more pervasive world of AI, you need to clearly understand your personal goals and commit your effort to the more challenging ones requiring sustained mental effort. This is a significant step to start building the discipline and skills needed to succeed.' There was no mention, this time, of 'AI' in Hu's counsel. And rightly so — humans should own their efforts and outcomes. AI is a mere sidekick.
