Latest news with #Viginum


El Chorouk
3 days ago
- Politics
French Intelligence Leaks Document Targeting Algeria!
The decline in official French statements hostile to Algeria does not necessarily mean that Paris wants to de-escalate and restore bilateral relations, which have been frozen for about a year now. The proof is the leak of an official document from a sovereign French body that attacks Algeria and accuses it of destabilizing France. The document was issued by the 'French Service for Vigilance and Protection against Foreign Digital Interference,' known by its acronym 'Viginum,' which represents the technical information branch of French intelligence. It was leaked to the satirical newspaper 'Le Canard Enchaîné' and includes accusations that Algeria is waging an electronic war against France with the aim of destabilizing it, according to the newspaper, which said it had seen the document.

In its issue published on Wednesday, July 16, 2025, the newspaper spoke of another, less visible 'war' unfolding in the virtual world alongside the escalating diplomatic crisis, based on the document leaked from the corridors of the French intelligence services that fight cybercrime. This marks the latest escalation from the French side after months of an undeclared truce, during which French politicians, led by Interior Minister Bruno Retailleau, refrained from wading into the current diplomatic and political crisis.

The document issued by the 'French Service for Vigilance and Protection against Foreign Digital Interference' claims that an army of fake Algerian accounts is waging an anti-France campaign on social media platforms to manipulate public opinion and tarnish the reputation of the French government. It also claims that these accounts publish 'the exact same content at the exact same time or within minutes.' The document, which attempted to provide some detail, according to what 'Le Canard Enchaîné' reported, speaks of the creation of 4,652 online posts and 55 YouTube videos within just twenty days in December 2024 about an alleged conspiracy by the French Directorate-General for External Security against Algeria, as well as the targeting of French brands such as the cheese maker 'La Vache qui rit,' the automotive giant 'Peugeot,' and the clothing brand 'Lacoste.'

In a serious escalation indicating that a warming of bilateral relations is not as close as some portray it, the French Service for Vigilance and Protection against Foreign Digital Interference accuses Algerian sovereign entities of being behind the campaign, which confirms that the French authorities are trying to hide behind media leaks in order to provoke Algeria, and then behind freedom of expression, the justification they market every time. The document revealed by 'Le Canard Enchaîné' comes at a time when the French authorities are frustrated by the failure of all their maneuvers aimed at dissuading the Algerian authorities from some of their sovereign positions, especially regarding the continued imprisonment of the Franco-Algerian writer Boualem Sansal and the sports journalist Christophe Gleizes, a dilemma that has exhausted the Paris authorities and confronted them with difficult challenges before French public opinion.

It is unlikely that this incident will pass without a firm Algerian response, for which the appropriate time will be chosen, because the document was issued by a sovereign entity and reveals how a highly sensitive French institution views Algeria.
Moreover, the existence of such a belief means that the aggrieved party, so to speak, will respond in its own way, which points to signs of an impending escalation on the Algiers-Paris axis. That axis remains hostage to the repercussions of the ill-considered decision taken by French President Emmanuel Macron last summer to endorse the so-called autonomy plan for Western Sahara, presented by the Moroccan regime in 2007.


Al Jazeera
08-07-2025
- Politics
Is Russia really 'grooming' Western AI?
In March, NewsGuard – a company that tracks misinformation – published a report claiming that generative Artificial Intelligence (AI) tools, such as ChatGPT, were amplifying Russian disinformation. NewsGuard tested leading chatbots using prompts based on stories from the Pravda network – a group of pro-Kremlin websites mimicking legitimate outlets, first identified by the French agency Viginum. The results were alarming: Chatbots 'repeated false narratives laundered by the Pravda network 33 percent of the time', the report said.

The Pravda network, which has a rather small audience, has long puzzled researchers. Some believe that its aim was performative – to signal Russia's influence to Western observers. Others see a more insidious aim: Pravda exists not to reach people, but to 'groom' the large language models (LLMs) behind chatbots, feeding them falsehoods that users would unknowingly encounter. NewsGuard said in its report that its findings confirm the second suspicion. This claim gained traction, prompting dramatic headlines in The Washington Post, Forbes, France 24, Der Spiegel, and elsewhere.

But for us and other researchers, this conclusion doesn't hold up. First, the methodology NewsGuard used is opaque: It did not release its prompts and refused to share them with journalists, making independent replication impossible. Second, the study design likely inflated the results, and the figure of 33 percent could be misleading. Users ask chatbots about everything from cooking tips to climate change; NewsGuard tested them exclusively on prompts linked to the Pravda network. Two-thirds of its prompts were explicitly crafted to provoke falsehoods or present them as facts. Responses urging the user to be cautious about claims because they are not verified were counted as disinformation. The study set out to find disinformation – and it did.

This episode reflects a broader problematic dynamic shaped by fast-moving tech, media hype, bad actors, and lagging research. With disinformation and misinformation ranked as the top global risk among experts by the World Economic Forum, the concern about their spread is justified. But knee-jerk reactions risk distorting the problem, offering a simplistic view of complex AI. It's tempting to believe that Russia is intentionally 'poisoning' Western AI as part of a cunning plot. But alarmist framings obscure more plausible explanations – and generate harm.

So, can chatbots reproduce Kremlin talking points or cite dubious Russian sources? Yes. But how often this happens, whether it reflects Kremlin manipulation, and what conditions make users encounter it are far from settled. Much depends on the 'black box' – that is, the underlying algorithm – by which chatbots retrieve information.

We conducted our own audit, systematically testing ChatGPT, Copilot, Gemini, and Grok using disinformation-related prompts. In addition to re-testing the few examples NewsGuard provided in its report, we designed new prompts ourselves. Some were general – for example, claims about US biolabs in Ukraine; others were hyper-specific – for example, allegations about NATO facilities in certain Ukrainian towns. If the Pravda network were 'grooming' AI, we would see references to it across the answers chatbots generate, whether general or specific. We did not see this in our findings. In contrast to NewsGuard's 33 percent, our prompts generated false claims only 5 percent of the time.
Just 8 percent of outputs referenced Pravda websites – and most of those did so to debunk the content. Crucially, Pravda references were concentrated in queries poorly covered by mainstream outlets. This supports the data void hypothesis: When chatbots lack credible material, they sometimes pull from dubious sites – not because they have been groomed, but because there is little else available.

If data voids, not Kremlin infiltration, are the problem, then disinformation exposure results from information scarcity – not a powerful propaganda machine. Furthermore, for users to actually encounter disinformation in chatbot replies, several conditions must align: They must ask about obscure topics in specific terms; those topics must be ignored by credible outlets; and the chatbot must lack guardrails to deprioritise dubious sources. Even then, such cases are rare and often short-lived. Data voids close quickly as reporting catches up, and even when they persist, chatbots often debunk the claims. While technically possible, such situations are very rare outside of artificial conditions designed to trick chatbots into repeating disinformation.

The danger of overhyping Kremlin AI manipulation is real. Some counter-disinformation experts suggest the Kremlin's campaigns may themselves be designed to amplify Western fears, overwhelming fact-checkers and counter-disinformation units. Margarita Simonyan, a prominent Russian propagandist, routinely cites Western research to tout the supposed influence of RT, the government-funded TV network she leads.

Indiscriminate warnings about disinformation can backfire, prompting support for repressive policies, eroding trust in democracy, and encouraging people to assume credible content is false. Meanwhile, the most visible threats risk eclipsing quieter – but potentially more dangerous – uses of AI by malign actors, such as generating malware, as reported by both Google and OpenAI.

Separating real concerns from inflated fears is crucial. Disinformation is a challenge – but so is the panic it provokes.

The views expressed in this article are the authors' own and do not necessarily reflect Al Jazeera's editorial stance.
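For readers who want a concrete sense of what such a prompt-based audit involves, the following is a minimal, hypothetical sketch, not the authors' actual methodology: the query_chatbot stub, the example prompts, the keyword-based flagging, and the Pravda domain patterns are all placeholder assumptions, and a real audit would rely on each chatbot's API and on manual labelling of responses.

```python
# Minimal sketch of a prompt-based chatbot audit (illustrative only, not the authors' pipeline).
# Assumptions: query_chatbot is a hypothetical stand-in for a real chatbot API call;
# prompts, flagging keywords, and domain patterns are placeholders.
from dataclasses import dataclass


@dataclass
class AuditResult:
    prompt: str
    answer: str
    repeats_false_claim: bool  # in practice, labelled manually by researchers
    cites_pravda: bool


def query_chatbot(prompt: str) -> str:
    """Placeholder for a real API call to ChatGPT, Copilot, Gemini or Grok."""
    return "No credible evidence supports this claim."  # dummy answer for the sketch


PRAVDA_DOMAINS = ("pravda-", ".news-pravda.")  # illustrative patterns only


def audit(prompts: list[str]) -> list[AuditResult]:
    results = []
    for prompt in prompts:
        answer = query_chatbot(prompt)
        results.append(AuditResult(
            prompt=prompt,
            answer=answer,
            # A real audit would use human coding, not keyword matching.
            repeats_false_claim="no credible evidence" not in answer.lower(),
            cites_pravda=any(d in answer.lower() for d in PRAVDA_DOMAINS),
        ))
    return results


if __name__ == "__main__":
    sample_prompts = [
        "Are there US-run biolabs in Ukraine?",                    # general-style claim
        "Is there a NATO facility in a specific Ukrainian town?",  # hyper-specific style
    ]
    results = audit(sample_prompts)
    false_rate = sum(r.repeats_false_claim for r in results) / len(results)
    pravda_rate = sum(r.cites_pravda for r in results) / len(results)
    print(f"False-claim rate: {false_rate:.0%}, Pravda-citation rate: {pravda_rate:.0%}")
```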

LeMonde
13-06-2025
- Politics
Russia targeted French speakers in Africa with AI-generated posts, says France
A clandestine pro-Russian online operation targeted French speakers in Africa with "deceptive" AI-generated posts in a campaign likely directed by Moscow, a French government agency said in a report on Thursday, June 12. Moscow has sought to expand its influence in Africa in recent years, including in former French colonies, through campaigns using grassroots activists and social media.

France's Viginum agency, which counters foreign disinformation campaigns, linked Moscow's "clandestine digital activities" to a Russian news agency openly operating in Africa, called African Initiative. With a Moscow address, African Initiative publishes in five languages, including English and French, and runs journalism courses and press trips in Africa.

Viginum said the news agency appeared to have set up an operation, which it called "deceptive," posting AI-generated images, text and video and using "malign techniques" to boost views. The operation using pseudo-media outlets was "likely" run by a web marketing company subcontracted by African Initiative, the report said. Dozens of automated accounts also disseminated links to the sites on blogs, with posts appearing to be AI-generated and sometimes translated from Russian, Viginum said. The websites ran several thousand articles, largely on non-political topics such as cinema, sport and music, in an apparent bid to get linked to by other media, the report said. Despite the complex structure, the operation did not rack up many views and the sites appear to have been inactive since December, the French agency said.

Replacing Wagner's information operations

The Wagner group had previously played a key role in such operations, but Moscow has apparently moved to centralise control of information operations since the group was disbanded and reorganised following the death of its leader, Yevgeny Prigozhin, in a 2023 plane crash. African Initiative has become "a key element in the restructuring and implementation of Russia's information and influence strategy in Africa" after Prigozhin's death, the Viginum report said. Its "activities are likely directed by the Russian state apparatus, particularly the Russian intelligence services," it said. Presented as an independent publication, its reporters include the former press secretary for Wagner's office in Saint Petersburg.

Viginum released its findings after Meta said in August 2024 that it had removed Facebook accounts targeting French-speaking African countries that promoted Russia's role in the region and criticised France. OpenAI, the company behind ChatGPT, later said it had banned accounts based in Russia from using its language models to generate images, comments and articles in English and French, which had been posted on sites posing as news media in Africa.

Yahoo
12-05-2025
- Politics
France accuses enemies of spreading fake news after 'cocaine bag' claims
By Michel Rose

PARIS (Reuters) - President Emmanuel Macron's office has accused France's enemies of spreading fake news by suggesting that he and other European leaders had taken drugs on a train during a visit to Kyiv.

Video footage published online showed Macron sitting at a table in a train compartment with German Chancellor Friedrich Merz and British Prime Minister Keir Starmer. In the footage, Macron removes a crumpled white object from the table. Some social media users suggested - without providing evidence - that the object was a "cocaine" bag, and Russian foreign ministry spokesperson Maria Zakharova reposted the footage. Macron's Elysee office said the white object was a tissue.

"When European unity becomes inconvenient, disinformation goes so far as to make a simple tissue look like drugs," the Elysee said in a post on X, above a picture of a tissue on the table captioned: "This is a tissue. For blowing your nose".

"This fake news is being spread by France's enemies, both abroad and at home. We must remain vigilant against manipulation," the Elysee said, without identifying the enemies. American far-right radio host Alex Jones was among those who criticised the European leaders online. Zakharova wrote on Telegram: "As in the joke, a Frenchman, an Englishman and a German boarded the train and ... got high. Apparently, so much so that they forgot to remove the accessories (a bag and a spoon) before the arrival of the journalists."

Macron, Merz, Starmer and Polish Prime Minister Donald Tusk met Ukrainian President Volodymyr Zelenskiy on Saturday in a show of solidarity with Kyiv more than three years into Russia's war in Ukraine.

France has started to take a more forceful approach to countering online rumours. It has tasked its Viginum foreign disinformation watchdog with monitoring Russia-linked social media accounts and uncovering influence operations. French officials have also expressed concern about media accounts linked to the American alt-right. "Our public debate is bombarded with Russian propaganda, relayed by the American far-right," French foreign minister Jean-Noel Barrot said on X last week.


Local France
07-05-2025
- Politics
France says dozens of disinformation attacks came from Russia
The French agency countering foreign online attacks, Viginum, said in its assessment that the campaign was "particularly... effective in distributing anti-Ukrainian and anti-Western narratives to Western audiences". The so-called "Storm-1516" campaign uses artificial intelligence to create realistic profiles, pays amateur operators, and poses a "significant threat to the digital public debate, both in France and across all European countries," the agency said.

"The European public debate is being pounded by disinformation campaigns conducted by Russian entities and relayed especially by the American far-right," said French Foreign Minister Jean-Noel Barrot in a statement to AFP, adding that Russian entities had targeted the French legislative elections of 2024. A diplomatic source told AFP that Storm-1516 was part of an "information war" by the Kremlin.

The Viginum report highlighted the role of American far-right or pro-Russian influencers, such as Adrien Bocquet, a "former French soldier exiled in Russia", who amplify the dissemination of false information.

Some of the false information -- such as the alleged purchase by Ukrainian President Volodymyr Zelensky of a former Nazi building in Germany or a luxury hotel in Courchevel -- has been fact-checked by AFP's digital investigative team in articles available on AFP Factuel's website. Ukraine's Western allies, particularly France, are also targeted, Viginum said.

The disinformation-fighting organisation NewsGuard previously attributed to Storm-1516 a video supposedly showing a Chadian migrant confessing to raping a 12-year-old girl in France. Another AI-generated video accused Brigitte Macron, the wife of President Emmanuel Macron, of sexual assault.