
Ukrainian Drone Strike Hits Russian Defense Plant In Izhevsk, Killing 3
A Ukrainian drone strike hit a facility overnight in Russia's industrial city of Izhevsk, more than 1,000 kilometers east of Moscow, killing three people and seriously injuring 35, according to Udmurtia's regional governor.
Aleksandr Brechalov said on July 1 that the attack targeted an industrial facility in the regional capital, but he did not specify which one.
He said the wounded were receiving medical care at a city hospital and that psychologists were working with the victims and their families.
Ukrainian sources claimed responsibility for the strike, stating that the Kupol Izhevsk Electromechanical Plant was the target.
The plant is a key player in Russia's defense industry, known for producing the Tor surface-to-air missile system and the Garpia-A1 combat drone, making it a high-value military-industrial asset in the region.
Videos circulating on social media and Telegram channels purportedly captured the moment of the drone's impact and a subsequent explosion.
In response to the incident, Rosaviatsiya, Russia's federal aviation agency, announced the temporary closure of Izhevsk airport, as well as airports in Saratov, Kazan, Ulyanovsk, and Nizhnekamsk.
Izhevsk is a significant hub for Russia's defense industry. It is home to enterprises such as the Kalashnikov Group weapons manufacturer and has been a critical node in Russia's military supply chain.
According to Russia's Defense Ministry, the attack in Izhevsk was part of a larger overnight Ukrainian drone offensive.
The ministry claimed to have shot down 60 drones across several regions, including Crimea, the Ukrainian peninsula annexed by Russia, as well as the Rostov, Kursk, Saratov, Belgorod, Voronezh, and Oryol regions and areas around the Sea of Azov and the Black Sea.
The ministry said the drones had been intercepted but provided no assessment of the damage.
Residents in the Saratov region, located near Kazakhstan, reported hearing explosions during the night. Saratov Governor Roman Busargin acknowledged the drone threat in the area.
The Saratov region is home to strategic infrastructure, including the Engels-2 air base, which hosts Russia's strategic bombers, the Tu-95MS and Tu-160.
One of Russia's largest oil refineries, Taneco, is located in the nearby city of Nizhnekamsk in Tatarstan.
Also in Tatarstan, there is the Alabuga Special Economic Zone in the town of Yelabuga, where Shahed-type drones used by Russia in Ukraine are being assembled.
These sites have been targeted in previous drone attacks, pointing to a sustained Ukrainian strategy of degrading Russia's military-industrial capabilities deep inside its own territory.
Related Articles


WIRED
A Pro-Russia Disinformation Campaign Is Using Free AI Tools to Fuel a 'Content Explosion'
Jul 1, 2025 3:27 PM

Consumer-grade AI tools have supercharged Russian-aligned disinformation as pictures, videos, QR codes, and fake websites have proliferated.

A pro-Russia disinformation campaign is leveraging consumer artificial intelligence tools to fuel a "content explosion" focused on exacerbating existing tensions around global elections, Ukraine, and immigration, among other controversial issues, according to new research published last week.

The campaign, known by many names including Operation Overload and Matryoshka (other researchers have also tied it to Storm-1679), has been operating since 2023 and has been linked to the Russian government by multiple groups, including Microsoft and the Institute for Strategic Dialogue. The campaign disseminates false narratives by impersonating media outlets with the apparent aim of sowing division in democratic countries. While the campaign targets audiences around the world, including in the US, its main target has been Ukraine. Hundreds of AI-manipulated videos from the campaign have tried to fuel pro-Russian narratives.

The report outlines how, between September 2024 and May 2025, the amount of content produced by those running the campaign increased dramatically, and how it is receiving millions of views around the world. The researchers identified 230 unique pieces of content promoted by the campaign between July 2023 and June 2024, including pictures, videos, QR codes, and fake websites. Over the last eight months, however, Operation Overload churned out 587 unique pieces of content, the majority of them created with the help of AI tools, the researchers said.

The researchers said the spike in content was driven by consumer-grade AI tools that are available for free online. This easy access fueled the campaign's tactic of "content amalgamation": using AI tools to produce multiple pieces of content pushing the same story.

"This marks a shift toward more scalable, multilingual, and increasingly sophisticated propaganda tactics," researchers from Reset Tech, a London-based nonprofit that tracks disinformation campaigns, and Check First, a Finnish software company, wrote in the report. "The campaign has substantially amped up the production of new content in the past eight months, signalling a shift toward faster, more scalable content creation methods."

The researchers were also stunned by the variety of tools and types of content the campaign was pursuing. "What came as a surprise to me was the diversity of the content, the different types of content that they started using," Aleksandra Atanasova, lead open-source intelligence researcher at Reset Tech, tells WIRED. "It's like they have diversified their palette to catch as many like different angles of those stories. They're layering up different types of content, one after another."

Atanasova added that the campaign did not appear to be using any custom AI tools, relying instead on AI-powered voice and image generators that are accessible to everyone. While it was difficult to identify all the tools the campaign's operatives were using, the researchers were able to narrow in on one in particular: Flux AI, a text-to-image generator developed by Black Forest Labs, a Germany-based company founded by former employees of Stability AI.
Using the image analysis tool SightEngine, the researchers found a 99 percent likelihood that a number of the fake images shared by the Overload campaign, some of which claimed to show Muslim migrants rioting and setting fires in Berlin and Paris, were created with image generation from Flux AI. The researchers were then able to generate images that closely replicate the aesthetic of the published images using prompts that included discriminatory language, such as "angry Muslim men."

This highlights "how AI text-to-image models can be abused to promote racism and fuel anti-Muslim stereotypes," the researchers wrote, adding that it raises "ethical concerns on how prompts work across different AI generation models."

"We build in multiple layers of safeguards to help prevent unlawful misuse, including provenance metadata that enables platforms to identify AI generated content, and we support partners in implementing additional moderation and provenance tools," a spokesperson for Black Forest Labs wrote in an email to WIRED. "Preventing misuse will depend on layers of mitigation as well as collaboration between developers, social media platforms, and authorities, and we remain committed to supporting these efforts." Atanasova tells WIRED that the images she and her colleagues reviewed did not contain any metadata.

Operation Overload also uses AI voice-cloning technology to manipulate videos so that prominent figures appear to say things they never did. The number of videos produced by the campaign jumped from 150 between June 2023 and July 2024 to 367 between September 2024 and May 2025. The researchers said the majority of the videos from the last eight months used AI technology to trick viewers.

In one instance, the campaign published a video in February on X featuring Isabelle Bourdon, a senior lecturer and researcher at France's University of Montpellier, seemingly encouraging German citizens to engage in mass riots and vote for the far-right Alternative for Germany (AfD) party in federal elections. The footage was fake: it was taken from a video on the school's official YouTube channel in which Bourdon discusses a social science prize she won. In the manipulated video, AI voice-cloning technology made it seem as if she was discussing the German elections instead.

The AI-generated content produced by Operation Overload is shared on over 600 Telegram channels, as well as by bot accounts on social media platforms like X and Bluesky. In recent weeks, the content has also been shared on TikTok for the first time. This was first spotted in May, and while the number of accounts was small, just 13, the videos they posted were seen 3 million times before the platform demoted the accounts.

"We are highly vigilant against actors who try to manipulate our platform and have already removed the accounts in this report," Anna Sopel, a TikTok spokesperson, tells WIRED. "We detect, disrupt and work to stay ahead of covert influence operations on an ongoing basis and report our progress transparently every month."

The researchers pointed out that while Bluesky had suspended 65 percent of the fake accounts, "X has taken minimal action despite numerous reports on the operation and growing evidence for coordination." X and Bluesky did not respond to requests for comment.
Once Operation Overload has created its fake, AI-generated content, the campaign does something unusual: it sends emails to hundreds of media and fact-checking organizations across the globe, with examples of its fake content on various platforms and requests that the fact-checkers investigate whether it is real. While it may seem counterintuitive for a disinformation campaign to alert those trying to tackle disinformation about its efforts, for the pro-Russia operatives, getting their content posted online by a real news outlet, even if it is covered with the word "FAKE," is the ultimate aim.

According to the researchers, up to 170,000 such emails have been sent to more than 240 recipients since September 2024. The messages typically contained multiple links to the AI-generated content, but the email text was not generated using AI, the researchers said.

Pro-Russia disinformation groups have long been experimenting with AI tools to supercharge their output. Last year a group dubbed CopyCop, likely linked to the Russian government, was shown to be using large language models, or LLMs, to create fake websites designed to look like legitimate media outlets. While these sites don't typically get much traffic, the accompanying social media promotion can attract attention, and in some cases the fake information can end up at the top of Google search results.

A recent report from the American Sunlight Project estimated that Russian disinformation networks were producing at least 3 million AI-generated articles each year, and that this content was poisoning the output of AI-powered chatbots like OpenAI's ChatGPT and Google's Gemini.

Researchers have repeatedly shown how disinformation operatives are embracing AI tools, and as it becomes increasingly difficult for people to tell real from AI-generated content, experts predict the surge in AI content fueling disinformation campaigns will continue. "They already have the recipe that works," Atanasova says. "They know what they're doing."


CNN
Social media video shows Ukrainian strike on Russian missile facility
Ukraine struck a missile factory deep inside Russia; authorities said the attack killed three people and injured at least 35 more.


New York Times
Macron and Putin Discuss Iran and Ukraine in Rare Call
In their first call in almost three years, President Emmanuel Macron of France and President Vladimir V. Putin of Russia appeared on Tuesday to find some common ground on Iran, but the leaders remained at loggerheads over the war in Ukraine.

The call, which lasted two hours, was initiated after the U.S. bombing of Iran's nuclear sites last month. The two leaders shared a concern, as members of the United Nations Security Council, with "preserving the global nuclear nonproliferation regime," a Kremlin statement said.

The call came after both leaders were left on the sidelines of the American decision to bomb Iran's nuclear sites. For Mr. Macron, it appeared to be a move to regain international relevance in the Middle East. For Mr. Putin, it was also an opportunity to emphasize Russia's stature as a player in global geopolitics despite the West's outrage over his invasion of Ukraine.

The call was a diplomatic risk for Mr. Macron, representing a new step in undoing the isolation of Moscow that Western leaders have tried to maintain since Russia's invasion began. It was Mr. Putin's first call with a major European Union leader since he spoke with Olaf Scholz, then the chancellor of Germany, in November last year. Mr. Putin has tried to use the Israel-Iran war and its aftermath as a way to break that isolation, casting Russia as well positioned to mediate because of its close ties with Iran and cordial relations with Israel.

Mr. Macron had visited Moscow three weeks before the Russian invasion of Ukraine in 2022 in the hope of using diplomacy to dissuade the attack. He took the opportunity of the call with Mr. Putin to press the Russian leader on the war.