The Wiretap: Google AI Is At The Center Of An Iran-Israel Disinformation War

Forbes, 24-06-2025
The Wiretap is your weekly digest of cybersecurity, internet privacy and surveillance news. To get it in your inbox, subscribe here.
(Photo by Kobi Gideon/GPO via Getty Images)
The Iran-Israel war isn't only being fought on the battlefield–there's an online front as well. Today Forbes reports that the U.S. isn't well prepared for any potential destructive cyberattacks by Iran. On the flip side, that nation is so concerned about U.S. and Israeli cyber and online psychological warfare that it closed off its internet, making it largely unusable across the country.
In the disinformation space, AI has been a key weapon in amplifying false narratives. Google's Veo 3 model has been at the center of some campaigns, according to GetReal Security, which tracks faked or manipulated content online. Emmanuelle Saliba, chief investigative officer at GetReal, told Forbes that Veo 3 is behind 'a slew of fabricated hyper realistic fakes circulating claiming to depict scenes from the Israel-Iran conflict.'
Google hadn't responded to a request for comment at the time of publication.
'This is perhaps the first time we've seen generative AI be used at scale during a conflict,' Saliba said. 'It's also being used to replicate missile strikes, sometimes night ones, which are particularly challenging to verify using visual investigation tactics.
'When both countries deny an incident, how can we be sure of what we are seeing? Technology will be key.'
She noted that Veo 3 images include an invisible watermark designed to make it easy to detect AI-created content. She described it as 'pretty robust.'
That's not to say the model isn't open to abuse–in part because you only know the watermark is there with software that's looking for it. But fixing that isn't as easy as just adding a visible watermark. 'The perceptible watermarks are nice because everyone can see them. But they are also relatively easy to remove and/or mimic, making them less secure,' says Hany Farid, cofounder at GetReal. 'A benefit of the imperceptible watermark is that they are more difficult but not impossible to remove. The drawback is that we need customized software to scan content for their presence.'
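To make Farid's trade-off concrete, here is a deliberately simplified sketch of how an imperceptible watermark can work: a known bit signature is hidden in the least-significant bits of pixel values, invisible to the eye but detectable by software that knows what to scan for. This is a toy illustration only, not Google's actual watermarking scheme; the signature and detection threshold are invented for the example.

```python
# Toy imperceptible watermark (illustrative only, NOT Google's real scheme):
# hide a known 32-bit signature in the least-significant bits of pixels.
SIGNATURE = [1, 0, 1, 1, 0, 0, 1, 0] * 4  # hypothetical 32-bit signature

def embed(pixels):
    """Return a copy of `pixels` with SIGNATURE written into the LSBs.

    Each pixel value changes by at most 1, which is imperceptible."""
    out = list(pixels)
    for i, bit in enumerate(SIGNATURE):
        out[i] = (out[i] & ~1) | bit
    return out

def detect(pixels, threshold=0.95):
    """Scan the LSBs for the signature, tolerating a few flipped bits."""
    matches = sum((pixels[i] & 1) == bit for i, bit in enumerate(SIGNATURE))
    return matches / len(SIGNATURE) >= threshold

plain = [128] * 64      # "clean" image data: every LSB is 0
marked = embed(plain)   # visually identical, but carries the signature
print(detect(marked))   # True  -- detector software finds the mark
print(detect(plain))    # False -- invisible to anyone not scanning for it
```

The example also shows why such marks need dedicated scanning software, as Farid notes: without knowing the signature and where it lives, the marked and unmarked pixel arrays are effectively indistinguishable.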
Last week, the BBC reported it had found dozens of AI-generated videos attempting to prove the effectiveness of Iran's response to Israel's attacks. These included fake clips showing the aftermath of Iranian strikes, while another showed missiles raining down on Tel Aviv. On the other side, pro-Israel accounts have been posting old protest clips, falsely claiming they show current dissent against Iran's regime.
The efficacy of such disinformation campaigns is difficult to measure, even as these videos amass tens of millions of views. In a world where a president openly says both Iran and Israel 'don't know what the fuck they're doing,' the content with the most impact still appears to come from real people.
Got a tip on surveillance or cybercrime? Get me on Signal at +1 929-512-7964.

THE BIG STORY: LA Residents Are Foiling ICE Raids Using Amazon Ring's Neighborhood Watch
(Photo by Smith Collection/Gado/Getty Images)
As the protests against immigration raids across the L.A. area exploded earlier this month, residents took to a number of services to issue warnings about ICE agents in their area. One of the most popular turned out to be Amazon Ring's Neighbors app.
Forbes spoke with users about how they hoped posting on the Ring network would help protect immigrants or even save lives. 'It was very grassroots and it's become a tool being used by people just trying to help keep neighbors safe,' said one.

Stories You Have To Read Today
Insurance giant Aflac has been hacked and its customers' Social Security numbers may have been pilfered. The attack is believed to be part of a hacking spree perpetrated by a cybercrime group known as Scattered Spider.
An investigative report from Lighthouse Reports claims that millions of two-factor authentication codes for services run by tech giants like Amazon, Google and Meta were being routed using Fink Telecom Services, which allegedly has links to the spyware industry. CEO Andreas Fink told Bloomberg that it's out of that business.

Winner of the Week
The Electronic Frontier Foundation and the Freedom of the Press Foundation have developed a new journalism curriculum module to teach students how to protect themselves when crossing the border. The University of Texas at El Paso and San Diego State University have already been offering it to their students.

Loser of the Week
The cofounder of and accountant for a nonprofit organization that manages funds for people with special needs and disabilities have been accused of stealing as much as $100 million from clients. 'For over 15 years, the defendants conspired to use the funds of special needs clients as a personal piggy bank,' said Matthew Galeotti, head of the Justice Department's Criminal Division.

Related Articles

Copyrighted Books Are Fair Use For AI Training. Here's What To Know.

Forbes, 35 minutes ago

The sudden presence of generative AI systems in our daily lives has prompted many to question the legality of how AI systems are created and used. One question relevant to my practice: Does the ingestion of copyrighted works such as books, articles, photographs, and art to train an AI system render the system's creators liable for copyright infringement, or is that ingestion defensible as a 'fair use'?

Two recent court rulings answer this novel question, and the answer is: yes, the use of copyrighted works for AI training is a fair use, at least under the specific facts of those cases and the evidence presented by the parties. Because the judges in both cases were somewhat expansive in their dicta about how their decisions might have been different, they provide a helpful roadmap as to how other lawsuits might be decided, and how a future AI system might be designed so as not to infringe copyright. The rulings in the Anthropic and Meta cases deserve some attention. Let's take a closer look.

More than 30 lawsuits have been filed in the past year or two, in all parts of the nation, by authors, news publishers, artists, photographers, musicians, record companies and other creators against various AI systems, asserting that using the creators' respective copyrighted works for AI training purposes violates their copyrights. The systems' owners invariably assert fair use as a defense.

The Anthropic Case

The first decision, issued in June, involved a lawsuit by three book authors, who alleged that Anthropic PBC infringed the authors' copyrights by copying several of their books (among millions of others) to train its text generative AI system called Claude. Anthropic planned to create a central library of 'all the books in the world.'
Anthropic's defense was fair use. Judge Alsup, sitting in the Northern District of California, held that the use of the books for training purposes was a fair use, and that Anthropic's conversion of print books it had purchased into digital copies was also a fair use. However, Anthropic's use of pirated digital copies to create a central library of 'all the books in the world' for uses beyond training Claude was not a fair use. Whether Anthropic's copying of its central library copies for purposes other than AI training was itself infringing (there was apparently some evidence that this was going on, but on a poorly developed record) was left for another day.

It appears that Anthropic decided early in designing Claude that books were the most valuable training materials for a system meant to 'think' and write like a human. Books provide patterns of speech, prose and proper grammar, among other things. Anthropic chose to download millions of free digital copies of books from pirate sites. It also purchased millions of print copies of books from booksellers, converted them to digital copies and threw the print copies away, resulting in a massive central library of 'all the books in the world' that Anthropic planned to keep 'forever.' None of this activity was done with the authors' permission.

Significantly, Claude was designed so that it would not reproduce any of the plaintiffs' books as output. The plaintiffs made no assertion that it did, nor was there any evidence of it. The assertions of copyright infringement were, therefore, limited to Claude's ingestion of the books for training, to build the central library, and for the unidentified non-training purposes.

Users of Claude ask it questions and it returns text-based answers. Many users use it for free; certain corporate and other users pay to use it, generating over one billion dollars annually in revenue for Anthropic.
The Anthropic Ruling

Both decisions were from the federal district court in Northern California, the situs of Silicon Valley.

To summarize the legal analysis, Judge Alsup evaluated each 'use' of the books separately, as he must under the Supreme Court's 2023 Warhol v. Goldsmith fair use decision. Turning first to the use of the books as training data, Alsup found that the use of the books to train Claude was a 'quintessentially' transformative use which did not supplant the market for the plaintiffs' books, and as such qualified as fair use. He further found that the conversion of the purchased print books to digital files, where the print copies were thrown away, was also a transformative use, akin to the Supreme Court's 1984 Betamax decision, in which the court held that home recording of free TV programming for time-shifting purposes was a fair use. Here, Judge Alsup reasoned, Anthropic lawfully purchased the books and was merely format-shifting for space and search-capability purposes, and, since the original print copy was discarded, only one copy remained (unlike the now-defunct ReDigi platform of 2018). By contrast, the downloading of more than seven million pirated copies from pirate sites, which was illegal at the outset, for central library uses other than training could not be held to be a fair use as a matter of law, because the central library use was unjustified and the use of the pirated copies could supplant the market for the originals.

Anthropic Is Liable For Unfair Uses – The Cost of Doing Business?

The case will continue on the issue of damages for the pirated copies of the plaintiffs' books used for central library purposes and not for training purposes. The court noted that Anthropic's later purchase of copies of plaintiffs' books to replace the pirated copies will not absolve it of liability, but might affect the amount of statutory damages it has to pay.
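The scale of that exposure is simple arithmetic. As a back-of-the-envelope sketch (hypothetically applying the statutory range of $750 to $150,000 per infringed work across the roughly seven million pirated books at issue):

```python
# Hypothetical statutory-damages exposure, using the figures from the case:
# $750 minimum and $150,000 maximum per infringed work, ~7 million works.
PER_WORK_MIN = 750
PER_WORK_MAX = 150_000
WORKS = 7_000_000

low = PER_WORK_MIN * WORKS    # floor of the exposure
high = PER_WORK_MAX * WORKS   # ceiling of the exposure
print(f"${low:,} to ${high:,}")  # $5,250,000,000 to $1,050,000,000,000
```

Even the statutory floor exceeds five billion dollars, which is why class certification matters so much here.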
The statutory damages range is $750 per copy at a minimum and up to $150,000 per copy at a maximum. One is tempted to wonder about all those other millions of copyright owners beyond the three plaintiffs: might Anthropic have to pay statutory damages for seven million copies if the pending class action is certified? Given how lucrative Claude is, could that be just a cost of doing AI business?

The Meta Case

Meta's decision to use shadow libraries to source books was approved by CEO Mark Zuckerberg.

The second decision, issued two days after the Anthropic decision, on June 25, involves thirteen book authors, most of them famous non-fiction writers, who sued Meta, the creator of a generative AI model called Llama, for using the plaintiffs' books as training data. Llama, like Claude, is free to download but generates billions of dollars for Meta. Like Anthropic, Meta initially looked into licensing rights from book publishers, but eventually abandoned those efforts and instead downloaded the books it desired from pirate sites called 'shadow libraries,' which were not authorized by the copyright owners to store their works. Also like Claude, Llama was designed not to produce output that reproduced its source material in whole or substantial part, the record indicating that Llama could not be prompted to reproduce more than 50 words from the plaintiffs' books.

Judge Chhabria, also in the Northern District of California, held that Meta's use of plaintiffs' works to train Llama was a fair use, but he did so very reluctantly, chiding the plaintiffs' lawyers for making the 'wrong' arguments and failing to develop an adequate record. Chhabria's decision is riddled with his perceptions of the dangers of AI systems potentially flooding the market with substitutes for human authorship and destroying incentives to create.
The Meta Ruling

Based on the parties' arguments and the record before him, Judge Chhabria, like Judge Alsup, found that Meta's use of the books as training data for Llama was 'highly transformative,' noting that the purpose of the use (creating an AI system) was very different from the plaintiffs' purpose for the books, which was education and entertainment. Rejecting plaintiffs' argument that Llama could be used to imitate the style of plaintiffs' writing, Judge Chhabria noted that 'style is not copyrightable.' The fact that Meta sourced the books from shadow libraries rather than authorized copies didn't make a difference; Judge Chhabria (in my opinion rightly) reasoned that making fair use depend on whether the source copy was authorized begs the question of whether the secondary copying was lawful. Although plaintiffs tried to make the 'central library for purposes other than training' argument that was successful in the Anthropic case, Judge Chhabria concluded that the evidence simply didn't support that copies were used for purposes other than training, and noted that even if some copies were not used for training, 'fair use doesn't require that the secondary user make the lowest number of copies possible.' Since Llama couldn't generate exact or substantially similar versions of plaintiffs' books, he found there was no substitution harm, noting that plaintiffs' lost licensing revenue for AI training is not a cognizable harm.

Judge Chhabria's Market Dilution Prediction

Judge Chhabria warns that generative AI systems could dilute the market for lower-value mass-market publications.
In dicta, clearly expressing frustration with the outcome in Meta's favor, Judge Chhabria discussed in detail how he thought market harm could, and should, be shown in other cases, through the concept of 'market dilution': a system like Llama, while not producing direct substitutes for a plaintiff's work, could compete with and thus dilute the plaintiff's market. Some types of works may be more susceptible to this harm than award-winning fiction, he said, such as news articles or 'typical human-created romance or spy novels.' But because the plaintiffs before him didn't make those arguments or present a record to support them, he could not rule on them. That opportunity is left for another day.

AI System Roadmap For Non-Infringement

Based on these two court decisions, here are my takeaways for building a roadmap for a non-infringing generative AI system that uses books:

Year-old European startup Maisa named alongside Google and Amazon in elite list of leading AI agent vendors in top global US research reports by Gartner

Yahoo, an hour ago

- First time a Spanish startup has made the list, thanks to its industry-first hallucination-resistant 'digital workers'
- AI startup is one of two European AI agent vendors in Gartner's Hype Cycle for Artificial Intelligence and Hype Cycle for the Future of Work reports

SAN FRANCISCO & VALENCIA, Spain, July 02, 2025--(BUSINESS WIRE)--Maisa, a rising star of enterprise AI, has been named by leading global research and advisory firm Gartner in its list of leading vendors for developing reliable AI agents. Inclusion in Gartner's 2025 Hype Cycle for AI and Hype Cycle for the Future of Work marks the first time a Spanish startup has been mentioned in these influential reports. The company, which is barely a year old and made its first raise of $5m+ from leading US investors last year, now finds itself named alongside global giants Amazon Web Services, Google, Salesforce and LangChain.

The Gartner Hype Cycle for AI Agents provides an overview of emerging technologies in AI, helping organizations navigate the evolving landscape of autonomous software agents. The Hype Cycle for the Future of Work provides CIOs with a crucial human-first lens on the transformative AI advancements and disciplines required to ensure success at scale. Maisa is one of two European businesses included in its field in the prestigious report.

Its technology allows businesses to use agentic AI to create 'digital workers' that can undertake complex process-automation tasks such as regulatory compliance, supply chain control and financial management. It has global clients in banking, automotive and energy. Maisa says it is unique in the field because its technology is hallucination-resistant: its workings are traceable and there is a fully auditable trail, what Maisa calls its 'Chain of Work,' meaning businesses can confidently deploy it in critical functions, knowing they can pinpoint exactly how the AI is functioning.
Maisa's CEO and cofounder David Villalón said: 'We are delighted to be the first Spanish company included by Gartner in its reports and one of only two European companies in the category of AI agents. We are especially pleased to be listed alongside global tech titans such as Google and Amazon. Our vision and achievements in empowering companies with autonomous, trustworthy AI agents drive real business value and set new standards for intelligent automation.'

The Gartner analysis highlights AI agents as a rapidly maturing technology with a rare 'high benefit' rating, but points out that there is only 5%-20% market penetration to date, implying huge market growth potential. AI agents, defined as autonomous or semi-autonomous software entities capable of perceiving, deciding and acting to achieve goals, are set to revolutionise industries by automating complex tasks, enhancing decision-making and enabling new levels of workflow integration.

About Maisa: A Rising Star in Agentic AI

Maisa's platform allows enterprises to create and manage AI-powered digital workers capable of automating complex, knowledge-intensive business processes with full transparency, traceability and reliability. It is simple to operate, fast to work and trustworthy. Maisa is enabled by a method the company calls 'HALP' (human-augmented LLM processing), a fast, no-code and enterprise-ready way to train digital workers. Instead of relying on massive datasets or manual programming, HALP enables digital workers to learn directly from real work inside organisations.

Server sales surged in Q1, driven by GPU demand

Yahoo, an hour ago

This story was originally published on CIO Dive. To receive daily news and insights, subscribe to our free daily CIO Dive newsletter.

GPU demand drove a record spike in server sales during the first three months of the year, according to IDC research published Thursday. The market shot up 134% year over year to $95.2 billion in Q1, marking the largest quarterly increase the analyst firm has recorded in 25 years. IDC expects the market to surge past $360 billion in 2025, which would indicate 45% growth compared with last year. As AI adoption ramped up in 2024, server sales increased 73.5% to $244 billion, according to the firm's March market analysis. High-capacity GPU servers will make up roughly half the total market this year, according to IDC.

'The evolution from simple chatbots to reasoning models to agentic AI will require several orders of magnitude more processing capacity, especially for inferencing,' IDC Research VP Kuba Stolarski said in the report.

As software providers add agentic automation to the growing menu of AI-based productivity tools, demand for traditional and accelerated compute resources is reshaping data centers, from massive cloud facilities to on-premises enterprise estates. Multibillion-dollar hyperscale infrastructure investments flooded hardware manufacturers with orders during the first quarter of the year. The three largest cloud providers — AWS, Microsoft and Google Cloud — poured $24 billion, $21 billion and $17 billion, respectively, into capital expenditures, primarily to boost data center capacity. Oracle's quarterly CapEx more than doubled year over year to $21.2 billion during the three months ending May 31. 'When we all of a sudden have higher CapEx, it means we are filling out data centers and we are buying components to build our computers,' Oracle CEO Safra Catz said during a June earnings call.
Enterprise AI hardware orders rolled in, too, 'with good representation across key industry verticals, including web tech, financial services industry, manufacturing, media and entertainment, and education,' Dell Technologies Vice Chairman and COO Jeff Clarke said during a May earnings call. The company reported $6.3 billion in revenue for its server and networking segment, up 16% year over year for the three months ending May 2. Orders for AI servers surpassed $12 billion, eclipsing the entirety of shipments from the prior twelve months, Clarke said. Hewlett Packard Enterprise's server segment revenue rose 6% year over year to $4.1 billion during the three months ending April 30. IDC expects the server market to triple in size over the next three years, the report said.

Recommended Reading: Nvidia lures all 4 major cloud hyperscalers with Blackwell 'superchip'
