What Is Up With These Tech Billionaires? This Astrophysicist Has Answers
In More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity, published this spring, astrophysicist Adam Becker subjects Silicon Valley's ideology to some much-needed critical scrutiny, poking holes in — and a decent amount of fun at — the outlandish ideas that so many tech billionaires take as gospel. In so doing, he champions reality while also exposing the dangers of letting the tech billionaires push us toward a future that could never actually exist. 'The title of the book is More Everything Forever,' says Becker. 'But the secret title of the book, like, in my heart is These Fucking People.'
Over several Zooms, Rolling Stone recently chatted with Becker about these fucking people, their magical thinking, and what the rest of us can do to fight for a reality that works for us.
A lot of people who move to Silicon Valley get swept up in its vibe. How did you avoid it?
I did sort of see the glittering temptation of Silicon Valley, but there's a toxic positivity to the culture. The startup ethos out here runs on positive emotion, and especially hype. It needs hype. It can't function without it. It's not enough that your startup could be widely adopted. It needs to change the world. It has to be something that's going to make everything better. So this ends up becoming an exercise in meaning-making, and then people start talking about these startups — their own or other people's — in semi-religious or explicitly religious terms. And it was just a shock to see all of these people talking this way. It all feels plastic and fake. I thought, Oh wow, this is awful. I want to watch these people and see what the hell they're up to. I want to understand what is happening here, because this is bad.
And what were they up to, as far as you could tell?
Underpinning a lot of that toxic positivity was this idea that if you just make more tech, eventually tech will improve itself and become super-intelligent and godlike. [The technocrats] subscribe to a kind of ideology of technological salvation — and I use that word 'salvation' very deliberately in the Christian sense. They believe that technology is going to bring about the end of this world and usher in a new perfect world, a kind of transhumanist, algorithmically guaranteed utopia, where every problem in the world gets reduced to a problem that can be solved with technology. And this will allow for perpetual growth, which allows for perpetual wealth creation and resource extraction.
These are deeply unoriginal ideas about the future. They're from science fiction, and I didn't know how seriously people were taking them. And then I started seeing people take them very, very seriously indeed. So, I was like, 'OK, let me go talk to actual experts in the areas these people are talking about.' I talked to the experts, and: Yeah, it's all nonsense.
What exactly is nonsensical about it?
It's a story that is based on a lot of ideas that have no evidence for them and a great deal of evidence against them. It's based on a lot of wrong ideas.
For example, I think the public perception of AI has been driven by narratives that have no foundation in reality. What does it mean to say a machine is as intelligent as a human? What does 'intelligence' mean? What does it mean to say that an intelligent machine could design an even more intelligent one? Intelligence is not this monolithic thing that is measured by IQ tests, and the history of humans trying to think about intelligence as a monolithic thing is a deeply troubling and problematic history that usually gets tied to eugenics and racism because that's what those tests were invented for. And so, unsurprisingly, there's a fair amount of eugenics and racism thrown around in these communities that discuss these ideas really seriously.
There's also no particular reason to believe that the kinds of machines that we are building now and calling 'AI' are sufficiently similar to the human brain to be able to do what humans do. Calling the systems that we have now 'AI' is a kind of marketing tool. You can see that if you think about the deflation in the term that's occurred just in the last 30 years. When I was a kid, calling something 'AI' meant Commander Data from Star Trek, something that can do what humans do. Now, AI is, like, really good autocomplete.
That's not to say that it would never be possible to build an artificial machine that does what humans do, but there's no reason to think that these can and a lot of reason to think that they can't. And the self-improvement thing is kind of silly, right? It's like saying, 'Oh, you can become an infinitely good brain surgeon by doing brain surgery on the brain surgery part of your brain.'
Can you explain the difference between the systems we have now, which we call 'AI,' and the systems that would qualify as AGI? How big is the gulf and what are the major impediments to bridging it?
So one of the problems here is that 'AGI' is ill-defined, and the vagueness is strategically useful for the people who talk about this stuff. But put that aside and just take a look at what a large language model like ChatGPT does. It's a text generation engine. I feel like that's a much better way of talking about it than calling it 'AI.' ChatGPT only cares about one thing: generating the next word based on the words that have already been generated in the conversation so far. And to do that, ChatGPT consumed roughly the entire internet. It was trained on all of that text to pull out statistical patterns in language usage. It's like this smeared-out average voice of the internet, and when you ask it a question, all it cares about is answering that question in that voice. It doesn't care about things like answering the question correctly. That only happens accidentally, as a result of trying to sound like the text it was trained on. And so when these machines, quote-unquote, 'hallucinate,' when they make things up and get things wrong, they're not doing anything differently than they're doing when they get the right answer, because they only know how to do one thing. They're constantly hallucinating. That's all they do.
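To make that 'one thing' concrete, here's a toy sketch in Python of the loop Becker is describing. It swaps in a simple bigram word-count table for the neural network a real system like ChatGPT uses, and the three-sentence corpus is invented for the example, but the generation loop has the same shape: pick a statistically plausible next word, append it, repeat. Note that nothing in the loop ever checks whether the output is true.

```python
# Toy autoregressive text generator: a bigram count table standing in
# for an LLM. Real models learn token probabilities with a neural
# network; this invented corpus just shows the shape of the loop --
# the only operation anywhere is "pick a plausible next word."
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count which word follows which: the crudest possible version of
# "statistical patterns in language usage."
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int = 10) -> str:
    """Repeatedly sample a likely next word. There is no notion of
    'correct' in this loop -- only 'statistically plausible.'"""
    words = [start]
    for _ in range(length):
        candidates = following[words[-1]]
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog chased"
```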
So what we're calling 'artificial intelligence' is really just kind of like an advanced version of spellcheck?
Yeah, in a way. I mean, this is not even the first time in the history of AI that people have been having conversations with these machines and thinking, 'Oh wow, there's actually something in there that's intelligent and helping me.' Back in the 1960s, there was this program called Eliza that basically acted like a very simple version of a therapist that just reflects everything that you say back to you. So you say, 'Hey Eliza, I had a really bad day today,' and Eliza says, 'Oh, I'm really sorry to hear that. Why did you have a really bad day today?' And then you say, 'I got in a fight with my partner,' and Eliza says, 'Oh, I'm really sorry to hear that. Why did you get in a fight with your partner?' I mean, it's a little bit more complicated than that, but not a lot more complicated than that. It just kind of fills in the blanks. These are stock responses — something that's very clearly not thinking. And people would say, 'Oh, Eliza really helped me. I feel like Eliza really understands who I am.'
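Joseph Weizenbaum's original Eliza ran on a richer script of rules, but the fill-in-the-blanks mechanism Becker describes fits in a few lines of Python. The two patterns and the pronoun swaps below are invented for illustration; they're just enough to reproduce the exchange above without anything resembling understanding.

```python
# A minimal Eliza-style responder: match a pattern, reflect the user's
# own words back as a question. These two rules are invented for this
# example; the 1966 original had many more, each working the same way.
import re

# Pronoun reflection, so "my partner" comes back as "your partner."
SWAPS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())

RULES = [
    (re.compile(r"\bI had (.+)", re.IGNORECASE),
     "Oh, I'm really sorry to hear that. Why did you have {}?"),
    (re.compile(r"\bI got in (.+)", re.IGNORECASE),
     "Oh, I'm really sorry to hear that. Why did you get in {}?"),
]

def eliza_reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!")))
    return "Tell me more."  # stock fallback: no understanding required

print(eliza_reply("Hey Eliza, I had a really bad day today."))
print(eliza_reply("I got in a fight with my partner."))
```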
The human impulse for connection is powerful.
Precisely. It's the human impulse for connection — and the impulse to attribute human-like characteristics to things that are not humans, which we do constantly. We do it with our pets. We do it with random patterns that we find in nature. We'll see an arrangement of rocks and think, 'Oh, that's a smiley face.' That's called 'pareidolia.' And that's what this is.
So current AI is not even close to being human, but these tech titans think it could be godlike?
Sam Altman gave a talk two or three years ago, and he was asked a question about global warming, and he said something like, 'Oh, global warming is a really serious problem, but if we have a super-intelligent AI, then we can ask it, "Hey, how do you build a lot of renewable energy? And hey, how do you build a lot of carbon capture systems? And hey, how do we build them at scale cheaply and quickly?" And then it would solve global warming.' What Sam Altman is saying is that his plan for solving global warming is to build a machine that nobody knows how to build and can't even define and then ask it for three wishes.
But they really believe that this is coming. Altman said earlier this year that he thinks that AGI is coming in the next four years. If a godlike AI is coming, then global warming doesn't matter. All that matters is making sure that the godlike AI is good and comes soon and is friendly and helpful to us. And so, suddenly, you have a way of solving all of the problems in the world with this one weird trick, and that one weird trick is the tech that these companies are building. It offers the possibility of control, it offers the possibility of transcendence of all boundaries, and it offers the possibility of tremendous amounts of money.
If you have an understanding of what the technology is doing right now — versus some magical idea of what it could be doing — it sounds like it would be hard to trust it with the future of humanity. Is it just complete delusion?
There's a lot of delusional thinking at work, and it's really, really easy to believe stuff that makes you rich. But there's also a lot of groupthink. If everybody around you believes this, then that makes it more likely that you're going to believe it, too. And then if all of the most powerful people and the wealthiest people and the most successful people and the most intelligent-seeming people around you all believe this, it's going to make it harder for you not to believe it.
And the arguments that they give sound pretty good at first blush. You have to really drill down to find what's wrong with them. If you were raised on a lot of science fiction, especially, these ideas are very familiar to you — and I say this as a huge science fiction fan. And so when you start looking at ideas like super-intelligent AI or going to space, these ideas carry a lot of cultural power. The point is, it's very easy for them to believe these things, because it goes along with this picture of the future that they already had, and it offers to make them a lot of money and give them a lot of power and control. It gives them the possibility of ignoring inconvenient problems, problems that often they themselves are contributing to through their work. And it also gives them a sense of moral absolution and meaning by providing this grand vision and project that they're working toward. They want to save humanity. [Elon] Musk talks about this all the time. [Jeff] Bezos talks about this. Altman talks about this. They all talk about this. And I think that's a pretty powerful drug. Then throw in, for the billionaires, the fact that when you're a billionaire, you get insulated from the world and from criticism because you're surrounded by sycophants who want your money, and it becomes very hard to change your mind about anything.
Your reality testing gets pretty messed up.
Yeah, exactly. Also, a lot of these ideas just sound ridiculous, and so there hasn't been as much trenchant criticism as there should have been over the past few decades. And now, suddenly, these guys have lots of money, and they're saying what the future is, and people are just believing that.
So what you're telling me is that I'm not gonna get to live on Mars.
Yeah, that's right. You're not going to. But you shouldn't be disappointed because Mars sucks. Mars fucking sucks. Just to name a few of the problems: gravity is too low, the radiation is too high, there's no air, and the dirt is made of poison.
Sounds fun.
Also, you're going to freeze even if you solve all of those problems. I mean, there are some spots where you wouldn't freeze if you really bundled up, but Elton John was right: Mars isn't the place to raise your kid. It's really terrifying to see the most powerful people in the world — and some of the loudest voices in the world — confuse these beliefs with reality.
You talk in the book about how this is a sort of messianic belief, but also about how technological utopia won't be available to everyone — which is a pretty common feature of apocalyptic narratives, right? There's a chosen group that will get to enjoy the utopia, but not everyone will.
Look, inequality is a fundamental feature of the world, and I think nobody knows that better than these billionaires. I don't mean 'fundamental' in the sense that it's unalterable. I just mean it's fundamental to how we've structured our society, and billionaires are beneficiaries of that. But I think that in the version of these utopias that are promoted by these tech billionaires, there are definitely unseen and unquestioned forms of inequality that would lead to some people having a lot more control and a lot more of that utopia than other people would get.
A lot of this is in the form of questions that, surprisingly, people don't tend to ask these tech billionaires. Jeff Bezos says that he wants humanity living in giant space stations that have millions of people, and he wants millions of these space stations, so there'll be one trillion people in space generations from now. And that leads to questions like, 'OK, buddy, who's gonna own that?' One of the nice things about living on Earth is that we have these shared natural resources. If you go out into space into an artificial environment that, say, Blue Origin is going to be building, doesn't that mean that Blue Origin or some successor company is going to own those space stations and all of the air and water and whatnot inside? And doesn't that mean that there's somebody who's going to be effectively king of the space station? And if everybody lives in these space stations, isn't that going to be not just a company town but a company civilization?
Musk talks about a city with a million people on Mars. The air won't even be free, right? You'll have to pay Musk just to stay alive. That's not my vision of utopia, and I think not many other people's either.
It seems pretty unlikely that these guys are going to get this utopia of which they dream, so how concerned should we even be about their delusions?
They have so much power and so much money that the choices that they make about how to exercise that power and spend that money unavoidably affect the rest of us. This is a real danger that we are seeing and experiencing right now. Musk thinks that his mission to go to Mars and beyond is the salvation of humanity — he has said as much in as many words — and he believes that, therefore, nothing should be allowed to stand in his way, not even law. So, therefore, he supported a lawless candidate for President of the United States, a literal felon, and said that it was important for the future of humanity that that felon win. This is a billionaire interfering with the democratic process and trying to erode the democratic fabric of this country — and succeeding — in order to pursue his own personal vision of utopia that will never happen. That's a fucking problem. And that makes it everybody's business.
I suppose it's also a question of who gets to decide which problems are humanity's biggest.
Which is what a lot of this comes down to, right? Part of the problem with trying to solve issues in the world through billionaire philanthropy is that it's fundamentally undemocratic who gets to make the decision: the billionaire gets to make the decision. Who elected the billionaire? Nobody. And so billionaire philanthropy is an exercise of power and deserves skepticism rather than gratitude.
But I think a lot of these billionaires see wealth as proof of someone's value and intelligence, and since they're the wealthiest people who have ever lived, that makes them the smartest people who have ever lived and so they are the ones who should be leading us into this new utopia. And if the rest of us can't see it or think that it doesn't work, well, that's because we're not as smart as they are. And if experts tell them that it can't work, well, then the experts are wrong, because, you know, if they're so smart, why are they so poor? It's like [these technocrats] are constantly high on a drug called billions of dollars, and the human brain was not built to deal with that. It insulates them from criticism and makes it harder for them to think critically.
What can we do about all this? Are we all just basically fucked?
Well, look, the billionaires have an enormous amount of power and money, but there's a lot more of us than there are of them. Also, we can think critically, and so I think there's a few different things that we can do. In the short term we need to organize. One of the things that these guys are completely terrified by — and it's one of the reasons they love AI — is the idea of labor organization. They don't want workers rising up. They don't want to have to deal with workers at all, and so I think labor organizing is really important. I think political organizing is really important. We need to build political power structures that can counterbalance the massively outsized power of this really very small community of individuals who just have massive amounts of wealth. And I know that that sounds kind of facile, but I really do think it's what we have to do, and historically it is how [people] have always combated the very wealthy and their fantasies of power.
We can also point out when they're wrong. Say 'The emperor has no clothes, we are not going to Mars, and that is ridiculous.' Public ridicule of these ideas — informed, factually accurate public ridicule — is part of what I'm trying to do, and I think it's a really important and powerful tool.
And then in the longer term — hopefully not that far away, if we get to a place where we have political power to balance these guys out — I think we've got to tax their wealth away. They did not earn that money alone. They needed the infrastructure and community that the rest of us provide and they also, frankly, needed a lot of government investment. They are the biggest welfare queens in existence, right? Silicon Valley got enormous amounts of government spending to benefit it over the years, both on infrastructure and in buying products and whatnot. The government built the internet. The government was the biggest client of Silicon Valley back when it was first starting up, through buying computer chips for the space program. The government built the space program without which you wouldn't be able to have something like SpaceX. So I think it's time to stop giving them handouts and start saying, 'We invested, and now the bill has come due.'