
Another Top Aide to Hegseth Leaves the Pentagon
The official, Justin Fulcher, joined the Trump administration as part of the Department of Government Efficiency, Elon Musk's government overhaul initiative, and later became an adviser to Mr. Hegseth.
In a statement, Mr. Fulcher said he had planned to work for the federal government for only six months.
Earlier this month, The Washington Post detailed a confrontation between Mr. Fulcher and other DOGE staff members assigned to the Pentagon. But officials downplayed that incident as a cause, insisting Mr. Fulcher's exit was friendly.
Sean Parnell, the chief Pentagon spokesman, said in a statement that 'the Department of Defense is grateful to Justin Fulcher for his work on behalf of President Trump and Secretary Hegseth.'
Under Mr. Hegseth, the office of the secretary of defense, the core group of advisers who help manage the Pentagon's sprawling bureaucracy, has undergone an unusual amount of turnover.
In April, Dan Caldwell and Darin Selnick, aides to Mr. Hegseth, were placed on leave from the Pentagon amid a leak investigation. Colin Carroll, the chief of staff to Deputy Defense Secretary Stephen A. Feinberg, was also removed from the Pentagon. After those actions, Joe Kasper, Mr. Hegseth's first chief of staff, was moved to a different position.
John Ullyot, a veteran spokesman, also left his position at the Pentagon in April, citing disarray and a sense of incompetence.
The purges among Mr. Hegseth's major aides fed a sense of chaos, with appointees accusing one another of disloyalty and tense shouting matches breaking out inside the building.
Mr. Fulcher tried to distance his departure from any sense of disorganization or dysfunction inside Mr. Hegseth's office.
'Working alongside the dedicated men and women of the Department of Defense has been incredibly inspiring,' he said in his statement. 'Revitalizing the warrior ethos, rebuilding the military, and re-establishing deterrence are just some of the historic accomplishments I'm proud to have witnessed.'
Related Articles


Boston Globe
19 minutes ago
What will it cost to renovate the ‘free' Air Force One? Don't ask.
Officially, and conveniently, the price tag has been classified. But even by Washington standards, where 'black budgets' are often used as an excuse to avoid revealing the cost of outdated spy satellites and lavish end-of-year parties, the techniques being used to hide the cost of Trump's pet project are inventive.

Which may explain why no one wants to discuss a mysterious $934 million transfer of funds from one of the Pentagon's most over-budget, out-of-control projects — the modernization of America's aging, ground-based nuclear missiles.

In recent weeks, congressional budget sleuths have come to think that amount, slipped into an obscure Pentagon document sent to Capitol Hill as a 'transfer' to an unnamed classified project, almost certainly includes the renovation of the new, gold-adorned Air Force One that Trump desperately wants in the air before his term is over. (It is not clear if the entire transfer will be devoted to stripping the new Air Force One back to its airframe, but Air Force officials privately acknowledge dipping into nuclear modernization funds for the complex project.)

Qatar's defense minister and Defense Secretary Pete Hegseth signed the final memorandum of understanding a few weeks ago, paving the way for the renovation to begin soon at a Texas facility known for secret technology projects. The document was reported earlier by The Washington Post.

Trump's plane probably won't fly for long: It will take a year or two to get the work done, and then the Qatari gift — improved with the latest communications and in-flight protective technology — will be transferred to the yet-to-be-created Trump presidential library after he leaves office in 2029, the president has said.
Concerns over the many apparent conflicts of interest involved in the transaction, given Trump's government dealings and business ties with the Qataris, have swirled since reports of the gift emerged this spring. But Trump said he was unconcerned, casting the decision as a no-brainer for taxpayers. 'I would never be one to turn down that kind of an offer,' the president said in May. 'I mean, I could be a stupid person and say, 'No, we don't want a free, very expensive airplane.''

It is free in the sense that a used car handed over by a neighbor looking to get it out of his driveway is free. In this case, among the many modifications will be hardened communications, antimissile systems, and engine capabilities to take the president quickly to safety, as one of the older Air Force Ones did on Sept. 11, 2001, when Al Qaeda attacked the United States. And there is the delicate matter of ridding the jet of any hidden electronic listening devices that US officials suspect may be embedded in the walls.

Then, of course, it has to be stuffed with the luxuries — and gold trim — with which the 47th president surrounds himself, whether he is in the Oval Office or in the air. The jet's upper deck has a lounge and a communications center, while the main bedroom can be converted into a flying sick bay in a medical emergency.

So it's no surprise that one of Washington's biggest guessing games these days is assessing just where the price tag will end up, on top of the $4 billion already being spent on the wildly behind-schedule presidential planes that Boeing was supposed to deliver last year. It was those delays that led Trump to look for a gift. Air Force officials privately concede that they are paying for renovations of the Qatari Air Force One with the transfer from another massively over-budget, behind-schedule program, called the Sentinel.
That is named for the missile at the heart of Washington's long-running effort to rebuild America's aging, leaky, ground-launched nuclear missile system. The project was first sold to Congress as a $77.7 billion program to replace all 400 Minuteman III missiles, complete with launch facilities and communications built to withstand both nuclear and cyber attack. By the time Trump came back into office, that figure had ballooned by 81 percent, to $140 billion and climbing, all to reconstruct what nuclear strategists agree is the most vulnerable, impossible-to-hide element of America's nuclear deterrent.

In testimony before Congress in June, Troy E. Meink, the Air Force secretary, said that he thought the cost of the Air Force One renovations would be manageable. 'I think there has been a number thrown around on the order of $1 billion,' he said. 'But a lot of those costs associated with that are costs that we'd have experienced anyway, we will just experience them early,' before Boeing delivers its two Air Force Ones. 'So it wouldn't be anywhere near that.'

'We believe the actual retrofit of that aircraft is probably less than $400 million,' he said. If so, that would be a bargain. But engineers and Air Force experts who have been through similar projects have their doubts that it can be accomplished for anything like that price. Members of Congress express concern that Trump will pressure the Air Force to do the work so fast that sufficient security measures are not built into the plane. When asked last week, the Air Force said it simply could not discuss the cost — or anything else about the plane — because it's classified.


Forbes
20 minutes ago
Federal Court Strikes Down California's Ammo Background Check Law
In a major victory for the Second Amendment, on Thursday the Ninth Circuit U.S. Court of Appeals struck down a first-of-its-kind law that required a background check before every purchase of ammunition in California. 'By subjecting Californians to background checks for all ammunition purchases,' Judge Sandra Ikuta wrote for the majority in Rhode v. Bonta, 'California's ammunition background check regime infringes on the fundamental right to keep and bear arms.'

California's regime dates back to 2016, when California voters approved Proposition 63 by a margin of almost 2:1. Under the proposition, residents would pass an initial background check and then receive a four-year permit to purchase ammunition. However, California lawmakers amended the law to allow ammunition purchases only in person and only after a background check each time. By requiring face-to-face transactions, California banned online sales and effectively prohibited Californians from buying ammunition out of state. Before California's regime took effect in July 2019, multiple plaintiffs, including Olympic gold medalist Kim Rhode and the California Rifle & Pistol Association, sued the state in 2018.

To determine whether California's law was constitutional under the Second Amendment, the Ninth Circuit relied on a two-step test set by the Supreme Court in its 2022 landmark ruling, New York State Rifle & Pistol Association v. Bruen. Under that decision's framework, 'when the Second Amendment's plain text covers an individual's conduct, the Constitution presumptively protects that conduct.' If so, the government must then show that 'the regulation is consistent with this nation's historical tradition of firearm regulation.'
In the California case, the Ninth Circuit determined that the Second Amendment protects 'operable' arms, and 'because arms are inoperable without ammunition, the right to keep and bear arms necessarily encompasses the right to have ammunition.' As a result, the court concluded that 'California's ammunition background check meaningfully constrains the right to keep operable arms.'

To survive the second step of the Bruen test, California attempted to compare its background check system to a wide range of historical analogues, including loyalty oaths and disarmament provisions from the American Revolution and Reconstruction. But the Ninth Circuit was left unconvinced. 'None of the historical analogues proffered by California is within the relevant time frame, or is relevantly similar to California's ammunition background check regime,' Ikuta found, and so, 'California's ammunition background check regime does not survive scrutiny under the two-step Bruen analysis.'

In a sharply worded dissent, Judge Jay Bybee blasted the majority's analysis as 'twice-flawed.' Noting that 'the vast majority of its checks cost one dollar and impose less than one minute of delay,' Judge Bybee asserted that California's background check system is 'not the kind of heavy-handed regulation that meaningfully constrains the right to keep and bear arms.' Notably, the California Department of Justice in 2024 received 191 reports of ammunition purchases attempted by 'armed and prohibited individuals' who were denied by background check.

In dueling statements, the California Rifle & Pistol Association praised Thursday's ruling against the state's background check law as a 'massive victory for gun owners in California,' while Gov. Gavin Newsom called the decision a 'slap in the face.'


Forbes
34 minutes ago
OpenAI: ChatGPT Wants Legal Rights. You Need The Right To Be Forgotten.
As systems like ChatGPT move toward achieving legal privilege, the boundaries between identity, memory, and control are being redefined, often without consent.

When OpenAI CEO Sam Altman recently stated that conversations with ChatGPT should one day enjoy legal privilege, similar to those between a patient and a doctor or a client and a lawyer, he wasn't just referring to privacy. He was pointing toward a redefinition of the relationship between people and machines.

Legal privilege protects the confidentiality of certain relationships. What's said between a patient and physician, or a client and attorney, is shielded from subpoenas, court disclosures, and adversarial scrutiny. Extending that same protection to AI interactions means treating the machine not as a tool, but as a participant in a privileged exchange. This is more than a policy suggestion. It's a legal and philosophical shift with consequences no one has fully reckoned with.

It also comes at a time when the legal system is already being tested. In The New York Times' lawsuit against OpenAI, the paper has asked courts to compel the company to preserve all user prompts, including those the company says are deleted after 30 days. That request is under appeal. Meanwhile, Altman's suggestion that AI chats deserve legal shielding raises the question: if they're protected like therapy sessions, what does that make the system listening on the other side?

People are already treating AI like a confidant. According to Common Sense Media, three in four teens have used an AI chatbot, and over half say they trust the advice they receive at least somewhat. Many describe a growing reliance on these systems to process everything from school to relationships. Altman himself has called this emotional over-reliance 'really bad and dangerous.'

But it's not just teens. AI is being integrated into therapeutic apps, career coaching tools, HR systems, and even spiritual guidance platforms.
In some healthcare environments, AI is being used to draft communications and interpret lab data before a doctor even sees it. These systems are present in decision-making loops, and their presence is being normalized.

This is how it begins. First, protect the conversation. Then, protect the system. What starts as a conversation about privacy quickly evolves into a framework centered on rights, autonomy, and standing.

We've seen this play out before. In U.S. law, corporations were gradually granted legal personhood, not because they were considered people, but because they acted as consistent legal entities that required protection and responsibility under the law. Over time, personhood became a useful legal fiction. Something similar may now be unfolding with AI—not because it is sentient, but because it interacts with humans in ways that mimic protected relationships. The law adapts to behavior, not just biology.

The Legal System Isn't Ready For What ChatGPT Is Proposing

There is no global consensus on how to regulate AI memory, consent, or interaction logs. The EU's AI Act introduces transparency mandates, but memory rights are still undefined. In the U.S., state-level data laws conflict, and no federal policy yet addresses what it means to interact with a memory-enabled AI. (See my recent Forbes piece on why AI regulation is effectively dead—and what businesses need to do instead.)

The physical location of a server is not just a technical detail. It's a legal trigger. A conversation stored on a server in California is subject to U.S. law. If it's routed through Frankfurt, it becomes subject to GDPR. When AI systems retain memory, context, and inferred consent, the server location effectively defines sovereignty over the interaction. That has implications for litigation, subpoenas, discovery, and privacy.

'I almost wish they'd go ahead and grant these AI systems legal personhood, as if they were therapists or clergy,' says technology attorney John Kheit.
'Because if they are, then all this passive data collection starts to look a lot like an illegal wiretap, which would thereby give humans privacy rights/protections when interacting with AI. It would also, then, require AI providers to disclose 'other parties to the conversation', i.e., that the provider is a mining party reading the data, and if advertisers are getting at the private conversations.'

Infrastructure choices are now geopolitical. They determine how AI systems behave under pressure and what recourse a user has when something goes wrong. And yet, underneath all of this is a deeper motive: monetization. But they won't be the only ones asking questions.

Every conversation becomes a four-party exchange: the user, the model, the platform's internal optimization engine, and the advertiser paying for access. It's entirely plausible for a prompt about the Pittsburgh Steelers to return a response that subtly inserts 'Buy Coke' mid-paragraph. Not because it's relevant—but because it's profitable.

Recent research shows users are significantly worse at detecting unlabeled advertising when it's embedded inside AI-generated content. Worse, these ads are initially rated as more trustworthy until users discover they are, in fact, ads. At that point, they're also rated as more manipulative.

'In experiential marketing, trust is everything,' says Jeff Boedges, founder of Soho Experiential. 'You can't fake a relationship, and you can't exploit it without consequence. If AI systems are going to remember us, recommend things to us, or even influence us, we'd better know exactly what they remember and why. Otherwise, it's not personalization. It's manipulation.'

Now consider what happens when advertisers gain access to psychographic modeling: 'Which users are most emotionally vulnerable to this type of message?' becomes a viable, queryable prompt. And AI systems don't need to hand over spreadsheets to be valuable.
With retrieval-augmented generation (RAG) and reinforcement learning from human feedback (RLHF), the model can shape language in real time based on prior sentiment, clickstream data, and fine-tuned advertiser objectives. This isn't hypothetical—it's how modern adtech already works.

At that point, the chatbot isn't a chatbot. It's a simulation environment for influence. It is trained to build trust, then designed to monetize it. Your behavioral patterns become the product. Your emotional response becomes the target for optimization. The business model is clear: black-boxed behavioral insight at scale, delivered through helpful design, hidden from oversight, and nearly impossible to detect.

We are entering a phase where machines will be granted protections without personhood, and influence without responsibility. If a user confesses to a crime during a legally privileged AI session, is the platform compelled to report it or remain silent? And who makes that decision? These are not edge cases. They are coming quickly. And they are coming at scale.

Why ChatGPT Must Remain A Model—and Why Humans Must Regain Consent

As generative AI systems evolve into persistent, adaptive participants in daily life, it becomes more important than ever to reassert a boundary: models must remain models. They cannot quietly assume the legal, ethical, or sovereign status of a person. And the humans generating the data that train these systems must retain explicit rights over their contributions.

What we need is a standardized, enforceable system of data contracting, one that allows individuals to knowingly, transparently, and voluntarily contribute data for a limited, mutually agreed-upon window of use. This contract must be clear on scope, duration, value exchange, and termination. And it must treat data ownership as immutable, even during active use.
That means: when a contract ends, or if a company violates its terms, the individual's data must, by law, be erased from the model, its training set, and any derivative products. 'Right to be forgotten' must mean what it says.

But to be credible, this system must work both ways. This isn't just about ethics. It's about enforceable, mutual accountability. The user experience must be seamless and scalable. The legal backend must be secure. And the result should be a new economic compact—where humans know when they're participating in AI development, and models are kept in their place.

ChatGPT Is Changing the Risk Surface. Here's How to Respond.

The shift toward AI systems as quasi-participants—not just tools—will reshape legal exposure, data governance, product liability, and customer trust, whether you're building AI, integrating it into your workflows, or using it to interface with customers.

ChatGPT May Get Privilege. You Should Get the Right to Be Forgotten.

This moment isn't just about what AI can do. It's about what your business is letting it do, what it remembers, and who gets access to that memory. Ignore that, and you're not just risking privacy violations, you're risking long-term brand trust and regulatory blowback.

At the very least, we need a legal framework that defines how AI memory is governed. Not as a priest, not as a doctor, and not as a partner, but perhaps as a witness. Something that stores information and can be examined when context demands it, with clear boundaries on access, deletion, and use.

The public conversation remains focused on privacy. But the fundamental shift is about control. And unless the legal and regulatory frameworks evolve rapidly, the terms of engagement will be set, not by policy or users, but by whoever owns the box. Which is why, in the age of AI, the right to be forgotten may become the most valuable human right we have.
Not just because your data could be used against you—but because your identity itself can now be captured, modeled, and monetized in ways that persist beyond your control. Your patterns, preferences, emotional triggers, and psychological fingerprints don't disappear when the session ends. They live on inside a system that never forgets, never sleeps, and never stops optimizing. Without the ability to revoke access to your data, you don't just lose privacy. You lose leverage. You lose the ability to opt out of prediction. You lose control over how you're remembered, represented, and replicated. The right to be forgotten isn't about hiding. It's about sovereignty. And in a world where AI systems like ChatGPT will increasingly shape our choices, our identities, and our outcomes, the ability to walk away may be the last form of freedom that still belongs to you.