Musk staffer 'mistakenly' given ability to edit Treasury Department payment system, legal filings say
Treasury Department officials said the "error" was quickly corrected, and a forensic investigation into the actions of Marko Elez -- who resigned from his position last week after The Wall Street Journal unearthed a series of racist social media posts -- remains ongoing.
"To the best of our knowledge, Mr. Elez never knew of the fact that he briefly had read/write permissions for the [Secure Payment System] database, and never took any action to exercise the 'write' privileges in order to modify anything within the SPS database -- indeed, he never logged in during the time that he had read/write privileges, other than during the virtual walk-through -- and forensic analysis is currently underway to confirm this," wrote Joseph Gioeli III, a deputy commissioner at Bureau of the Fiscal Service.
The high-profile mistake at BFS -- which effectively serves as the federal government's checkbook by disbursing more than $5 trillion annually -- comes as a federal judge in New York is weighing whether to continue to block individuals associated with Musk's Department of Government Efficiency from accessing Treasury Department records.
Lawyers with the Department of Justice initially insisted that Elez was given strictly "read-only" access to sensitive records, but the affidavits submitted by BFS employees on Tuesday noted that the 25-year-old was inadvertently given "read/write" access to the sensitive system that agencies use to send "large dollar amount transactions" to the Treasury Department.
According to Gioeli, Treasury Department officials also provided Elez with copies of the "source code" for multiple payment systems that he could edit in a digital "sandbox."
"Mr. Elez could review and make changes locally to copies of the source code in the cordoned-off code repository; however, he did not have the authority or capability to publish any code changes to the production system or underlying test environments," the filing said.
Elez resigned from his role on Feb. 6, and Gioeli claimed that the 25-year-old former SpaceX and X employee was the "only individual on the Treasury DOGE Team" who was given direct access to payment systems or source code. A "preliminary review" of his digital activity suggests that Elez stayed within the permitted bounds of his role when accessing the payment systems.
"While forensic analysis is still ongoing, Bureau personnel have conducted preliminary reviews of logs of his activity both on his laptop and within the systems and at this time have found no indication of any unauthorized use, of any use outside the scope that was directed by Treasury leadership, or that Mr. Elez used his BFS laptop to share any BFS payment systems data outside the U.S. Government," the filing said.
The filings also provided new insights into DOGE's ongoing mission at the Treasury Department, including identifying fraud, better understanding how payments are fulfilled, and enforcing Trump's day-one executive order that significantly cut foreign aid.
According to Thomas Krause -- a tech CEO and DOGE volunteer who is leading the cost-cutting effort at the Treasury Department -- DOGE is engaged in a 4-to-6-week assessment of the Treasury Department's payment systems. He was placed at Treasury not only to identify potential fraud but also to understand how to use the department's payment systems to potentially cut funding to other parts of the government, the filing said.
"BFS is well positioned to help agencies and the federal government holistically understand and take stock of the problems [Government Accountability Office] has reported on," Krause wrote.
This story originally appeared on abcnews.go.com