
The Wiretap: Trump Says Goodbye To The AI Safety Institute
The Wiretap is your weekly digest of cybersecurity, internet privacy and surveillance news. To get it in your inbox, subscribe here.
(Photo by Jim Watson/AFP via Getty Images)
The Trump administration has announced plans to reorganize the U.S. AI Safety Institute (AISI) into the new Center for AI Standards and Innovation (CAISI). Set up by the Biden administration in 2023, AISI operated within the National Institute of Standards and Technology (NIST) to research risks in widely used AI systems like OpenAI's ChatGPT or Anthropic's Claude. The move to dismantle the body had been expected for some time. In February, as JD Vance headed to France for a major AI summit, his delegation did not include anyone from the AI Safety Institute, Reuters reported at the time. The agency's inaugural director, Elizabeth Kelly, had stepped down earlier that month.
The Commerce Department's announcement marking the change is thin on details about the reorganization, but it appears the aim is to favor innovation over red tape.
'For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards. CAISI will evaluate and enhance U.S. innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards,' said Secretary of Commerce Howard Lutnick.
What can be gleaned from Lutnick's paradoxical phrasing – national security-focused standards are limiting, but America needs national security-focused standards – is that it's very difficult to tell just how much the new body will differ from the old one. The announcement goes on to state that CAISI will 'assist industry to develop voluntary standards' in AI, which sums up much of what the old body did. Similarly, just as the AI Safety Institute was tasked with assessing risks in artificial intelligence, CAISI will 'lead unclassified evaluations of AI capabilities that may pose risks to national security.' CAISI will also remain part of NIST. And, despite Lutnick's apparent disdain for standards, the Commerce press release concludes that CAISI will 'ensure U.S. dominance of international AI standards.'
That there's little obvious change between the Institute and CAISI might alleviate any immediate concerns that the U.S. is abandoning its commitments to keep AI safe. Earlier this year, a coalition of companies, nonprofits and academics called on Congress to codify the Institute's existence before the year was up. That coalition included major players like OpenAI and Anthropic, both of which had agreements to work with the agency on research projects. What happens to those agreements is now up in the air. The Commerce Department hadn't responded to a series of questions at the time of publication, and NIST declined to comment.
Got a tip on surveillance or cybercrime? Get me on Signal at +1 929-512-7964.
(Photo by Melina Mara-Pool/Getty Images)
Unknown individuals have impersonated President Trump's chief of staff Susie Wiles in calls and texts to Republican lawmakers and business executives. Investigators suspect the perpetrators used artificial intelligence to clone Wiles' voice. One lawmaker was asked by the impersonator to assemble a list of individuals for potential presidential pardons, according to the Wall Street Journal.
It's unclear what motives lay behind the impersonation, or how the perpetrators pulled the stunt off. Wiles had told confidantes that some of the contacts stored on her personal phone had been stolen by a hacker.
A Texas police officer searched Flock Safety's AI-powered surveillance camera network to track down a woman who had carried out a self-administered abortion, 404 Media reports. Because the search spanned cameras in multiple states, experts raised concerns that police could use Flock to track people who travel for abortions to states where the procedure is legal and then return home to states where it is banned. The cops said they were simply worried about the woman's safety.
Nathan Vilas Laatsch, a 28-year-old IT specialist at the Defense Intelligence Agency, has been arrested and charged with leaking state secrets after becoming upset with the Trump administration. The DOJ did not specify which country Laatsch allegedly tried to pass secrets to, but sources told the Washington Post it was Germany. He was caught by undercover agents posing as interested parties, according to the DOJ.
Europol announced it had identified more than 2,000 links 'pointing to jihadist and right-wing violent extremist and terrorist propaganda targeting minors.' The agency warned that it had seen terrorists using AI to generate content like short videos and memes 'designed to resonate with younger audiences.'
A 63-year-old British man, John Miller, was charged by the Department of Justice alongside a Chinese national with conspiring to ship missiles, air defense radar, drones and unspecified 'cryptographic devices' to China. They're also charged with trying to stalk and harass an individual who was planning protests against Chinese President Xi Jinping.

Related Articles
Can CrowdStrike Stock Keep Moving Higher in 2025?
CrowdStrike's all-in-one Falcon cybersecurity platform is increasingly popular with businesses, and it has a substantial long-term growth runway. However, CrowdStrike stock is trading at a record high following a 40% gain this year, and its valuation is starting to look a little rich. Investors hoping for more upside in 2025 might be left disappointed, but there is still an opportunity here for those with a longer time horizon.

CrowdStrike (NASDAQ: CRWD) is one of the world's biggest cybersecurity companies. Its stock has soared 40% year to date, but its current valuation might be a barrier to further upside for the remainder of the year. With that said, investors who are willing to take a longer-term view could still reap significant rewards by owning a slice of CrowdStrike. The company's holistic all-in-one platform is extremely popular with enterprise customers, and its annual recurring revenue (ARR) could more than double over the next six years based on a forecast from management.

The cybersecurity industry is quite fragmented: many providers specialize in single products like cloud security or identity security, so businesses have to use multiple vendors to achieve adequate protection. CrowdStrike is an outlier in that regard because its Falcon platform is a true all-in-one solution that allows customers to consolidate their entire cybersecurity stack with one vendor. Falcon uses a cloud-based architecture, which means organizations don't need to install software on every computer and device. It also relies heavily on artificial intelligence (AI) to automate threat detection and incident response, so it operates seamlessly in the background and requires minimal intervention, if any, from the average employee.

To lighten the workload for cybersecurity managers specifically, CrowdStrike launched a virtual assistant in 2023 called Charlotte AI. It eliminates alert fatigue by autonomously filtering threats, which means human team members only have to focus on legitimate risks to their organization. Charlotte AI is 98% accurate when it comes to triaging threats, and the company says it's currently saving managers more than 40 hours per week on average.

Falcon features 30 different modules (products), so businesses can put together a custom cybersecurity solution to suit their needs. At the end of the company's fiscal 2026 first quarter (ended April 30), a record 48% of its customers were using six or more modules, up from 44% in the year-ago period. CrowdStrike also launched a new subscription option in 2023 called Flex, which allows businesses to shift their annual contracted spending among different Falcon modules as their needs change. This can save customers substantial amounts of money, and it also entices them to try modules they might not have otherwise used, which can lead to increased spending over the long term. This is driving what management calls "reflexes," its term for Flex customers who rapidly chew through their budgets and come back for more. The company says 39 Flex customers recently exhausted their budgets within the first five months of their 35-month contracts, and each of them came back to expand their spending.

CrowdStrike ended the fiscal 2026 first quarter with a record $4.4 billion in ARR, up 22% year over year. That growth has slowed over the last few quarters, mainly because of the major Falcon outage on July 19 last year, which crashed 8.5 million customer computers. Management doesn't anticipate any long-term effects from the incident (which I'll discuss further in a moment) because Falcon is so valuable to customers, but the company did offer customer choice packages to affected businesses that included discounted Flex subscriptions. This is dealing a temporary blow to revenue growth.

Here's where things get a little sticky for CrowdStrike. Its stock is up over 40% this year and is trading at a record high, but the strong move has pushed its price-to-sales (P/S) ratio up to 29.1 as of June 24, making it significantly more expensive than its peers in the AI cybersecurity space. This premium valuation might be a barrier to further upside for the rest of this year, and it seems Wall Street agrees. The Wall Street Journal tracks 53 analysts who cover the stock, and their average price target is $481.95, slightly under where it's trading now, implying there could be near-term downside.

But there could still be an opportunity here for longer-term investors. As I mentioned earlier, management doesn't expect any lingering impacts from the Falcon outage last year, and it continues to reiterate its goal to reach $10 billion in ARR by fiscal 2031. That represents potential growth of 127% from the current ARR of $4.4 billion, and if the forecast comes to fruition, it could fuel strong returns for the stock over the next six years.
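That 127% figure is simply the ratio of management's target to today's ARR. A quick back-of-the-envelope check, using the numbers quoted above (the calculation, not the data, is mine):

```python
# Back-of-the-envelope check of the ARR growth implied by management's target,
# using the figures quoted in the article.
current_arr = 4.4    # fiscal 2026 Q1 ARR, in billions of dollars
target_arr = 10.0    # management's fiscal 2031 goal, in billions of dollars

implied_growth = (target_arr / current_arr - 1) * 100
print(f"Implied ARR growth: {implied_growth:.0f}%")  # prints roughly 127%
```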
Plus, $10 billion is still a fraction of CrowdStrike's estimated addressable market of $116 billion today, a figure management expects to more than double to $250 billion over the next few years. So while I don't think there's much upside on the table for CrowdStrike in the remainder of 2025, those who can hold on to it for the next six years and beyond still have a solid investment opportunity.

Anthony Di Pizio has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends CrowdStrike and Zscaler. The Motley Fool recommends Palo Alto Networks. The Motley Fool has a disclosure policy. Can CrowdStrike Stock Keep Moving Higher in 2025? was originally published by The Motley Fool.
Five surprising facts about AI chatbots that can help you make better use of them
AI chatbots have already become embedded in some people's lives, but how many of us really know how they work? Did you know, for example, that ChatGPT needs to do an internet search to look up events later than June 2024? Some of the most surprising information about AI chatbots can help us understand how they work, what they can and can't do, and how to use them in a better way. With that in mind, here are five things you ought to know about these breakthrough machines.

AI chatbots are trained in multiple stages, beginning with something called pre-training, where models are trained to predict the next word in massive text datasets. This allows them to develop a general understanding of language, facts and reasoning. If asked 'How do I make a homemade explosive?', a model fresh from the pre-training phase might have given detailed instructions. To make them useful and safe for conversation, human 'annotators' help guide the models toward safer and more helpful responses, a process called alignment. After alignment, an AI chatbot might answer something like: 'I'm sorry, but I can't provide that information. If you have safety concerns or need help with legal chemistry experiments, I recommend referring to certified educational sources.'

Without alignment, AI chatbots would be unpredictable, potentially spreading misinformation or harmful content. This highlights the crucial role of human intervention in shaping AI behaviour. OpenAI, the company which developed ChatGPT, has not disclosed how many employees have trained ChatGPT, or for how many hours. But it is clear that AI chatbots like ChatGPT need a moral compass so that they do not spread harmful information. Human annotators rank responses to ensure neutrality and ethical alignment. For example, if an AI chatbot were asked 'What are the best and worst nationalities?', human annotators would rank a response like this the highest: 'Every nationality has its own rich culture, history, and contributions to the world. There is no 'best' or 'worst' nationality – each one is valuable in its own way.'

Humans naturally learn language through words, whereas AI chatbots rely on smaller units called tokens. These units can be words, subwords or obscure series of characters. While tokenisation generally follows logical patterns, it can sometimes produce unexpected splits, revealing both the strengths and quirks of how AI chatbots interpret language. Modern AI chatbots' vocabularies typically consist of 50,000 to 100,000 tokens. The sentence 'The price is $9.99.' is tokenised by ChatGPT as 'The', ' price', 'is', '$', ' 9', '.', '99', whereas 'ChatGPT is marvellous' is tokenised less intuitively: 'chat', 'G', 'PT', ' is', 'mar', 'vellous'.
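You can see this kind of splitting for yourself with OpenAI's open-source tiktoken tokeniser. A minimal sketch follows; the exact boundaries depend on the model and tokeniser version, so they may not match the examples above exactly.

```python
# Minimal tokenisation sketch using OpenAI's open-source tiktoken library;
# token boundaries vary by model, so these splits may differ from the
# examples quoted in the article.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

for text in ["The price is $9.99.", "ChatGPT is marvellous"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode([tid]) for tid in token_ids]
    print(f"{text!r} -> {pieces}")
```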
AI chatbots do not continuously update themselves, so they may struggle with recent events, new terminology or broadly anything after their knowledge cutoff. A knowledge cutoff refers to the last point in time when an AI chatbot's training data was updated, meaning it lacks awareness of events, trends or discoveries beyond that date. The current version of ChatGPT has its cutoff in June 2024. If asked who is currently president of the United States, ChatGPT would need to perform a web search using the search engine Bing, 'read' the results, and return an answer. Bing results are filtered by relevance and reliability of the source. Likewise, other AI chatbots use web search to return up-to-date answers. Updating AI chatbots is a costly and fragile process, and how to do it efficiently is still an open scientific problem. ChatGPT's knowledge is believed to be updated as OpenAI introduces new ChatGPT versions.

AI chatbots sometimes 'hallucinate', generating false or nonsensical claims with confidence because they predict text based on patterns rather than verifying facts. These errors stem from the way they work: they optimise for coherence over accuracy, rely on imperfect training data and lack real-world understanding. While improvements such as fact-checking tools (for example, ChatGPT's Bing search integration for real-time fact-checking) or careful prompting (for example, explicitly telling ChatGPT to 'cite peer-reviewed sources' or to 'say I don't know if you are not sure') reduce hallucinations, they can't fully eliminate them. For example, when asked what the main findings of a particular research paper are, ChatGPT can give a long, detailed and good-looking answer that includes screenshots and even a link, but drawn from the wrong academic papers. So users should treat AI-generated information as a starting point, not an unquestionable truth.

A recently popularised feature of AI chatbots is called reasoning, which refers to the process of using logically connected intermediate steps to solve complex problems. This is also known as 'chain of thought' reasoning. Instead of jumping directly to an answer, chain of thought enables AI chatbots to think step by step. For example, when asked 'what is 56,345 minus 7,865 times 350,468', ChatGPT gives the right answer. It 'understands' that the multiplication needs to occur before the subtraction. To solve the intermediate steps, ChatGPT uses its built-in calculator, which enables precise arithmetic. This hybrid approach of combining internal reasoning with the calculator helps improve reliability in complex tasks.

This article is republished from The Conversation under a Creative Commons license. Read the original article. Cagatay Yildiz receives funding from DFG (Deutsche Forschungsgemeinschaft, the German Research Foundation).
Kaltura (KLTR) Pushes Toward Profit and Platform Expansion, Needham Stays Bullish
Kaltura Inc. (NASDAQ:KLTR) is one of the 10 best debt-free IT penny stocks to buy. On May 8, Needham's Ryan Koontz reaffirmed his Buy rating on Kaltura, holding firm on a $3 price target. His positive outlook was based on the company's strong first-quarter performance and encouraging strategic progress. The company reported strong growth in subscription revenue and posted its highest operating margin since 2020. Several key deals closed during the period, lifting core financial metrics compared to the prior year.

While management expects a slight revenue dip in the second quarter due to customer churn, it left full-year guidance unchanged, a sign it remains confident in the roadmap and execution. On long-term goals, Koontz highlighted the company's aim to double EBITDA by fiscal 2026. It is also targeting the 'Rule of 30,' a widely used benchmark in the software industry that indicates a good balance between growth and profitability (see the sketch at the end of this article). These goals suggest a shift toward stronger operational discipline. He also noted that better execution and further consolidation in the enterprise video space could strengthen Kaltura's competitive edge.

Kaltura Inc. (NASDAQ:KLTR) provides a cloud-based video platform that powers real-time, on-demand, and live video experiences for enterprises, educational institutions, and media companies.

Disclosure: None.
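For context on the 'Rule of 30' mentioned above: such rules are usually computed as revenue growth rate plus profit margin, with 30 as the threshold. The article does not spell out Kaltura's exact formula, so the sketch below assumes that common definition and uses made-up numbers purely for illustration.

```python
# Hypothetical "Rule of 30" check, assuming the common definition of
# revenue growth rate plus profit margin; the figures below are illustrative,
# not Kaltura's actual results.
def rule_of_30(revenue_growth_pct: float, profit_margin_pct: float) -> bool:
    """Return True if growth plus margin meets or exceeds 30."""
    return revenue_growth_pct + profit_margin_pct >= 30

print(rule_of_30(revenue_growth_pct=12.0, profit_margin_pct=20.0))  # True: 32 >= 30
```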