
Latest news with #InternationalPanelontheInformationEnvironment

AI is starting to wear down democracy

Observer

09-07-2025

  • Politics
  • Observer

AI is starting to wear down democracy

Since the explosion of generative artificial intelligence over the last two years, the technology has demeaned or defamed opponents and, for the first time, officials and experts said, begun to have an impact on election results. Free and easy to use, AI tools have generated a flood of fake photos and videos of candidates or supporters saying things they did not or appearing in places they were not — all spread with the relative impunity of anonymity online.

The technology has amplified social and partisan divisions and bolstered anti-government sentiment, especially on the far right, which has surged in recent elections in Germany, Poland and Portugal. In Romania, a Russian influence operation using AI tainted the first round of last year's presidential election, according to government officials. A court there nullified that result, forcing a new vote last month and bringing a new wave of fabrications. It was the first major election in which AI played a decisive role in the outcome. It is unlikely to be the last.

As the technology improves, officials and experts warn, it is undermining faith in electoral integrity and eroding the political consensus necessary for democratic societies to function. Madalina Botan, a professor at the National University of Political Studies and Public Administration in Romania's capital, Bucharest, said there was no question that the technology was already 'being used for obviously malevolent purposes' to manipulate voters. 'These mechanics are so sophisticated that they truly managed to get a piece of content to go very viral in a very limited amount of time,' she said. 'What can compete with this?'

In the unusually concentrated wave of elections that took place in 2024, AI was used in more than 80 per cent of them, according to the International Panel on the Information Environment, an independent organisation of scientists based in Switzerland. It documented 215 instances of AI in elections that year, based on government statements, research and news reports.
Already this year, AI has played a role in at least nine more major elections, from Canada to Australia. Not all uses were nefarious. In 25 per cent of the cases the panel surveyed, candidates used AI for themselves, relying on it to translate speeches and platforms into local dialects and to identify blocs of voters to reach. In India, the practice of cloning candidates became commonplace — 'not only to reach voters, but also to motivate party workers,' according to a study by the Center for Media Engagement at the University of Texas at Austin.

At the same time, however, dozens of deepfakes — photographs or videos that re-create real people — used AI to clone voices of candidates or news broadcasts. According to the International Panel on the Information Environment's survey, AI was characterised as having a harmful role in 69 per cent of the cases. There were numerous malign examples in last year's US presidential election, prompting public warnings by officials at the Cybersecurity and Infrastructure Security Agency, the Office of the Director of National Intelligence and the FBI. Under Trump, the agencies have dismantled the teams that led those efforts.

The most intensive deceptive uses of AI have come from autocratic countries seeking to interfere in elections outside their borders, like Russia and China. The technology has allowed them to amplify support for candidates more pliant to their worldview — or simply to discredit the idea of democratic governance itself as an inferior political system. One Russian campaign tried to stoke anti-Ukraine sentiment before last month's presidential election in Poland, where many Ukrainian refugees have relocated. It created fake videos that suggested the Ukrainians were planning attacks to disrupt the voting.

In previous elections, foreign efforts were cumbersome and costly. They relied on workers in troll farms to generate accounts and content on social media, often using stilted language and cultural malapropisms.
With AI, these efforts can be done at a speed and on a scale that were unimaginable when broadcast media and newspapers were the main sources of political news. — The New York Times

By Stuart A. Thompson and Steven Lee Myers. The authors write on misinformation for The New York Times.

AI is starting to wear down democracy

Indian Express

29-06-2025

  • Politics
  • Indian Express

AI is starting to wear down democracy

Since the explosion of generative artificial intelligence over the last two years, the technology has demeaned or defamed opponents and — for the first time, officials and experts said — begun to have an impact on election results. Free and easy to use, AI tools have generated a flood of fake photos and videos of candidates or supporters saying things they did not or appearing in places they were not — all spread with the relative impunity of anonymity online.

The technology has amplified social and partisan divisions and bolstered anti-government sentiment, especially on the far right, which has surged in recent elections in Germany, Poland and Portugal. In Romania, a Russian influence operation using AI tainted the first round of last year's presidential election, according to government officials. A court there nullified that result, forcing a new vote last month and bringing a new wave of fabrications. It was the first major election in which AI played a decisive role in the outcome. It is unlikely to be the last.

As the technology improves, officials and experts warn, it is undermining faith in electoral integrity and eroding the political consensus necessary for democratic societies to function. Madalina Botan, a professor at the National University of Political Studies and Public Administration in Romania's capital, Bucharest, said there was no question that the technology was already 'being used for obviously malevolent purposes' to manipulate voters. 'These mechanics are so sophisticated that they truly managed to get a piece of content to go very viral in a very limited amount of time,' she said. 'What can compete with this?'

In the unusually concentrated wave of elections that took place in 2024, AI was used in more than 80% of them, according to the International Panel on the Information Environment, an independent organization of scientists based in Switzerland.
It documented 215 instances of AI in elections that year, based on government statements, research and news reports. Already this year, AI has played a role in at least nine more major elections, from Canada to Australia.

Not all uses were nefarious. In 25% of the cases the panel surveyed, candidates used AI for themselves, relying on it to translate speeches and platforms into local dialects and to identify blocs of voters to reach. In India, the practice of cloning candidates became commonplace — 'not only to reach voters but also to motivate party workers,' according to a study by the Center for Media Engagement at the University of Texas at Austin.

At the same time, however, dozens of deepfakes — photographs or videos that re-create real people — used AI to clone voices of candidates or news broadcasts. According to the International Panel on the Information Environment's survey, AI was characterized as having a harmful role in 69% of the cases. There were numerous malign examples in last year's U.S. presidential election, prompting public warnings by officials at the Cybersecurity and Infrastructure Security Agency, the Office of the Director of National Intelligence and the FBI. Under Trump, the agencies have dismantled the teams that led those efforts. 'In 2024, the potential benefits of these technologies were largely eclipsed by their harmful misuse,' said Inga Kristina Trauthig, a professor at Florida International University, who led the international panel's survey.

The most intensive deceptive uses of AI have come from autocratic countries seeking to interfere in elections outside their borders, like Russia, China and Iran. The technology has allowed them to amplify support for candidates more pliant to their worldview — or simply to discredit the idea of democratic governance itself as an inferior political system.
One Russian campaign tried to stoke anti-Ukraine sentiment before last month's presidential election in Poland, where many Ukrainian refugees have relocated. It created fake videos that suggested the Ukrainians were planning attacks to disrupt the voting.

In previous elections, foreign efforts were cumbersome and costly. They relied on workers in troll farms to generate accounts and content on social media, often using stilted language and cultural malapropisms. With AI, these efforts can be done at a speed and on a scale that were unimaginable when broadcast media and newspapers were the main sources of political news. Saman Nazari, a researcher with the Alliance 4 Europe, an organization that studies digital threats to democracies, said this year's elections in Germany and Poland showed for the first time how effective the technology had become for foreign campaigns as well as domestic political parties. 'AI will have a significant impact on democracy going forward,' he said.

Advances in commercially available tools like Midjourney's image maker and Google's new AI audio-video generator, Veo, have made it even harder to distinguish fabrications from reality — especially at a swiping glance. Grok, the AI chatbot and image generator developed by Elon Musk, will readily reproduce images of popular figures, including politicians. These tools have made it harder for governments, companies and researchers to identify and trace increasingly sophisticated campaigns. Before AI, 'you had to pick between scale or quality — quality coming from human troll farms, essentially, and scale coming from bots that could give you that but were low-quality,' said Isabelle Frances-Wright, director of technology and society with the Institute for Strategic Dialogue. 'Now you can have both, and that's really scary territory to be in.'
The major social media platforms, including Facebook, X, YouTube and TikTok, have policies governing the misuse of AI and have taken action in several cases that involved elections. At the same time, they are operated by companies with a vested interest in anything that keeps users scrolling, according to researchers who say the platforms should do more to restrict misleading or harmful content. In India's election, for example, little of the AI content on Meta's platform was marked with disclaimers, as required by the company, according to the study by the Center for Media Engagement. Meta did not respond to a request for comment.

It goes beyond just fake content. Researchers at the University of Notre Dame found last year that inauthentic accounts generated by AI tools could readily evade detection on eight major social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X and Meta's three platforms, Facebook, Instagram and Threads.

The companies leading the wave of generative AI products also have policies against manipulative uses. In 2024, OpenAI disrupted five influence operations aimed at voters in Rwanda, the United States, India, Ghana and the European Union during its parliamentary races, according to the company's reports. This month, the company disclosed that it had detected a Russian influence operation that used ChatGPT during Germany's election in February. In one instance, the operation created a bot account on X that amassed 27,000 followers and posted content in support of the far-right party, Alternative for Germany, or AfD. The party, once viewed as fringe, surged into second place, doubling the number of its seats in parliament. (The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied those claims.)

The most disruptive case occurred in Romania's presidential election late last year.
In the first round of voting in November, a little-known far-right candidate, Calin Georgescu, surged to the lead with the help of a covert Russian operation that, among other things, coordinated an inauthentic campaign on TikTok. Critics, including the American vice president, JD Vance, and Musk, denounced the court's subsequent nullification of the vote itself as undemocratic. 'If your democracy can be destroyed with a few hundred thousands of dollars of digital advertising from a foreign country,' Vance said in February, 'then it wasn't very strong to begin with.'

The court ordered a new election last month. Georgescu, facing a criminal investigation, was barred from running again, clearing the way for another nationalist candidate, George Simion. A similar torrent of manipulated content appeared, including the fake video that made Trump appear to criticize the country's current leaders, according to researchers from the Bulgarian-Romanian Observatory of Digital Media. Nicusor Dan, the centrist mayor of Bucharest, prevailed in a second round of voting May 18.

The European Union has opened an investigation into whether TikTok did enough to restrict the torrent of manipulative activity and disinformation on the platform. It is also investigating the platform's role in election campaigns in Ireland and Croatia. In statements, TikTok has claimed that it moved quickly to take down posts that violated its policies. In the two weeks before the second round of voting in Romania, it said, it removed more than 7,300 posts, including ones generated by AI but not identified as such. It declined to comment beyond those statements.

Lucas Hansen, a founder of CivAI, a nonprofit that studies the abilities and dangers of artificial intelligence, said he was concerned about more than just the potential for deepfakes to fool voters. AI, he warned, is so muddling the public debate that people are becoming disillusioned.
'The pollution of the information ecosystem is going to be one of the most difficult things to overcome,' he said. 'And I'm not really sure there's much of a way back from that.'

AI is starting to wear down democracy

Time of India

29-06-2025

  • Politics
  • Time of India

AI is starting to wear down democracy

Since the explosion of generative artificial intelligence over the last two years, the technology has demeaned or defamed opponents and — for the first time, officials and experts said — begun to have an impact on election results. Free and easy to use, AI tools have generated a flood of fake photos and videos of candidates or supporters saying things they did not or appearing in places they were not — all spread with the relative impunity of anonymity online.

The technology has amplified social and partisan divisions and bolstered anti-government sentiment, especially on the far right, which has surged in recent elections in Germany, Poland and Portugal. In Romania, a Russian influence operation using AI tainted the first round of last year's presidential election, according to government officials. A court there nullified that result, forcing a new vote last month and bringing a new wave of fabrications. It was the first major election in which AI played a decisive role in the outcome. It is unlikely to be the last.

As the technology improves, officials and experts warn, it is undermining faith in electoral integrity and eroding the political consensus necessary for democratic societies to function. Madalina Botan, a professor at the National University of Political Studies and Public Administration in Romania's capital, Bucharest, said there was no question that the technology was already "being used for obviously malevolent purposes" to manipulate voters. "These mechanics are so sophisticated that they truly managed to get a piece of content to go very viral in a very limited amount of time," she said. "What can compete with this?"

In the unusually concentrated wave of elections that took place in 2024, AI was used in more than 80% of them, according to the International Panel on the Information Environment, an independent organization of scientists based in Switzerland. It documented 215 instances of AI in elections that year, based on government statements, research and news reports. Already this year, AI has played a role in at least nine more major elections, from Canada to Australia.

Not all uses were nefarious. In 25% of the cases the panel surveyed, candidates used AI for themselves, relying on it to translate speeches and platforms into local dialects and to identify blocs of voters to reach. In India, the practice of cloning candidates became commonplace — "not only to reach voters but also to motivate party workers," according to a study by the Center for Media Engagement at the University of Texas at Austin.

At the same time, however, dozens of deepfakes — photographs or videos that re-create real people — used AI to clone voices of candidates or news broadcasts. According to the International Panel on the Information Environment's survey, AI was characterized as having a harmful role in 69% of the cases. There were numerous malign examples in last year's U.S. presidential election, prompting public warnings by officials at the Cybersecurity and Infrastructure Security Agency, the Office of the Director of National Intelligence and the FBI. Under Trump, the agencies have dismantled the teams that led those efforts. "In 2024, the potential benefits of these technologies were largely eclipsed by their harmful misuse," said Inga Kristina Trauthig, a professor at Florida International University, who led the international panel's survey.

The most intensive deceptive uses of AI have come from autocratic countries seeking to interfere in elections outside their borders, like Russia, China and Iran. The technology has allowed them to amplify support for candidates more pliant to their worldview — or simply to discredit the idea of democratic governance itself as an inferior political system. One Russian campaign tried to stoke anti-Ukraine sentiment before last month's presidential election in Poland, where many Ukrainian refugees have relocated. It created fake videos that suggested the Ukrainians were planning attacks to disrupt the voting.

In previous elections, foreign efforts were cumbersome and costly. They relied on workers in troll farms to generate accounts and content on social media, often using stilted language and cultural malapropisms. With AI, these efforts can be done at a speed and on a scale that were unimaginable when broadcast media and newspapers were the main sources of political news. Saman Nazari, a researcher with the Alliance 4 Europe, an organization that studies digital threats to democracies, said this year's elections in Germany and Poland showed for the first time how effective the technology had become for foreign campaigns as well as domestic political parties. "AI will have a significant impact on democracy going forward," he said.

Advances in commercially available tools like Midjourney's image maker and Google's new AI audio-video generator, Veo, have made it even harder to distinguish fabrications from reality — especially at a swiping glance. Grok, the AI chatbot and image generator developed by Elon Musk, will readily reproduce images of popular figures, including politicians. These tools have made it harder for governments, companies and researchers to identify and trace increasingly sophisticated campaigns. Before AI, "you had to pick between scale or quality — quality coming from human troll farms, essentially, and scale coming from bots that could give you that but were low-quality," said Isabelle Frances-Wright, director of technology and society with the Institute for Strategic Dialogue. "Now you can have both, and that's really scary territory to be in."

The major social media platforms, including Facebook, X, YouTube and TikTok, have policies governing the misuse of AI and have taken action in several cases that involved elections. At the same time, they are operated by companies with a vested interest in anything that keeps users scrolling, according to researchers who say the platforms should do more to restrict misleading or harmful content. In India's election, for example, little of the AI content on Meta's platform was marked with disclaimers, as required by the company, according to the study by the Center for Media Engagement. Meta did not respond to a request for comment.

It goes beyond just fake content. Researchers at the University of Notre Dame found last year that inauthentic accounts generated by AI tools could readily evade detection on eight major social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X and Meta's three platforms, Facebook, Instagram and Threads.

The companies leading the wave of generative AI products also have policies against manipulative uses. In 2024, OpenAI disrupted five influence operations aimed at voters in Rwanda, the United States, India, Ghana and the European Union during its parliamentary races, according to the company's reports. This month, the company disclosed that it had detected a Russian influence operation that used ChatGPT during Germany's election in February. In one instance, the operation created a bot account on X that amassed 27,000 followers and posted content in support of the far-right party, Alternative for Germany, or AfD. The party, once viewed as fringe, surged into second place, doubling the number of its seats in parliament. (The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied those claims.)

The most disruptive case occurred in Romania's presidential election late last year. In the first round of voting in November, a little-known far-right candidate, Calin Georgescu, surged to the lead with the help of a covert Russian operation that, among other things, coordinated an inauthentic campaign on TikTok. Critics, including the American vice president, JD Vance, and Musk, denounced the court's subsequent nullification of the vote itself as undemocratic. "If your democracy can be destroyed with a few hundred thousands of dollars of digital advertising from a foreign country," Vance said in February, "then it wasn't very strong to begin with."

The court ordered a new election last month. Georgescu, facing a criminal investigation, was barred from running again, clearing the way for another nationalist candidate, George Simion. A similar torrent of manipulated content appeared, including the fake video that made Trump appear to criticize the country's current leaders, according to researchers from the Bulgarian-Romanian Observatory of Digital Media. Nicusor Dan, the centrist mayor of Bucharest, prevailed in a second round of voting May 18.

The European Union has opened an investigation into whether TikTok did enough to restrict the torrent of manipulative activity and disinformation on the platform. It is also investigating the platform's role in election campaigns in Ireland and Croatia. In statements, TikTok has claimed that it moved quickly to take down posts that violated its policies. In the two weeks before the second round of voting in Romania, it said, it removed more than 7,300 posts, including ones generated by AI but not identified as such. It declined to comment beyond those statements.

Lucas Hansen, a founder of CivAI, a nonprofit that studies the abilities and dangers of artificial intelligence, said he was concerned about more than just the potential for deepfakes to fool voters. AI, he warned, is so muddling the public debate that people are becoming disillusioned. "The pollution of the information ecosystem is going to be one of the most difficult things to overcome," he said.
"And I'm not really sure there's much of a way back from that."

AI is starting to wear down democracy

Boston Globe

26-06-2025

  • Politics
  • Boston Globe

AI is starting to wear down democracy

In Romania, a Russian influence operation using AI tainted the first round of last year's presidential election, according to government officials. A court there nullified that result, forcing a new vote last month and bringing a new wave of fabrications. It was the first major election in which AI played a decisive role in the outcome. It is unlikely to be the last. As the technology improves, officials and experts warn, it is undermining faith in electoral integrity and eroding the political consensus necessary for democratic societies to function.

Madalina Botan, a professor at the National University of Political Studies and Public Administration in Romania's capital, Bucharest, said there was no question that the technology was already 'being used for obviously malevolent purposes' to manipulate voters. 'These mechanics are so sophisticated that they truly managed to get a piece of content to go very viral in a very limited amount of time,' she said. 'What can compete with this?'

In the unusually concentrated wave of elections that took place in 2024, AI was used in more than 80 percent of them, according to the International Panel on the Information Environment, an independent organization of scientists based in Switzerland. It documented 215 instances of AI in elections that year, based on government statements, research, and news reports. Already this year, AI has played a role in at least nine more major elections, from Canada to Australia.

Not all uses were nefarious. In 25 percent of the cases the panel surveyed, candidates used AI for themselves, relying on it to translate speeches and platforms into local dialects and to identify blocs of voters to reach. In India, the practice of cloning candidates became commonplace — 'not only to reach voters but also to motivate party workers,' according to a study by the Center for Media Engagement at the University of Texas at Austin.
At the same time, however, dozens of deepfakes — photographs or videos that recreate real people — used AI to clone voices of candidates or news broadcasts. According to the International Panel on the Information Environment's survey, AI was characterized as having a harmful role in 69 percent of the cases. There were numerous malign examples in last year's US presidential election, prompting public warnings by officials at the Cybersecurity and Infrastructure Security Agency, the Office of the Director of National Intelligence, and the FBI. Under Trump, the agencies have dismantled the teams that led those efforts. 'In 2024, the potential benefits of these technologies were largely eclipsed by their harmful misuse,' said Inga Kristina Trauthig, a professor at Florida International University, who led the international panel's survey.

The most intensive deceptive uses of AI have come from autocratic countries seeking to interfere in elections outside their borders, like Russia, China, and Iran. The technology has allowed them to amplify support for candidates more pliant to their worldview — or simply to discredit the idea of democratic governance itself as an inferior political system. One Russian campaign tried to stoke anti-Ukraine sentiment before last month's presidential election in Poland, where many Ukrainian refugees have relocated. It created fake videos that suggested the Ukrainians were planning attacks to disrupt the voting.

In previous elections, foreign efforts were cumbersome and costly. They relied on workers in troll farms to generate accounts and content on social media, often using stilted language and cultural malapropisms. With AI, these efforts can be done at a speed and on a scale that were unimaginable when broadcast media and newspapers were the main sources of political news.
Advances in commercially available tools like Midjourney's image maker and Google's new AI audio-video generator, Veo, have made it even harder to distinguish fabrications from reality — especially at a swiping glance. Grok, the AI chatbot and image generator developed by Elon Musk, will readily reproduce images of popular figures, including politicians. These tools have made it harder for governments, companies, and researchers to identify and trace increasingly sophisticated campaigns. Before AI, 'you had to pick between scale or quality — quality coming from human troll farms, essentially, and scale coming from bots that could give you that but were low-quality,' said Isabelle Frances-Wright, director of technology and society with the Institute for Strategic Dialogue. 'Now you can have both, and that's really scary territory to be in.'

The major social media platforms, including Facebook, X, YouTube, and TikTok, have policies governing the misuse of AI and have taken action in several cases that involved elections. At the same time, they are operated by companies with a vested interest in anything that keeps users scrolling, according to researchers who say the platforms should do more to restrict misleading or harmful content. In India's election, for example, little of the AI content on Meta's platform was marked with disclaimers, as required by the company, according to the study by the Center for Media Engagement. Meta did not respond to a request for comment.

It goes beyond just fake content. Researchers at the University of Notre Dame found last year that inauthentic accounts generated by AI tools could readily evade detection on eight major social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X, and Meta's three platforms, Facebook, Instagram, and Threads.

The companies leading the wave of generative AI products also have policies against manipulative uses.
In 2024, OpenAI disrupted five influence operations aimed at voters in Rwanda, the United States, India, Ghana, and the European Union during its parliamentary races, according to the company's reports. This month, the company disclosed that it had detected a Russian influence operation that used ChatGPT during Germany's election in February. In one instance, the operation created a bot account on X that amassed 27,000 followers and posted content in support of the far-right party, Alternative for Germany, or AfD. The party, once viewed as fringe, surged into second place, doubling the number of its seats in parliament. (The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied those claims.)

The most disruptive case occurred in Romania's presidential election late last year. In the first round of voting in November, a little-known far-right candidate, Calin Georgescu, surged to the lead with the help of a covert Russian operation that, among other things, coordinated an inauthentic campaign on TikTok. Critics, including the American vice president, JD Vance, denounced the court's subsequent nullification of the vote itself as undemocratic. 'If your democracy can be destroyed with a few hundred thousands of dollars of digital advertising from a foreign country,' Vance said in February, 'then it wasn't very strong to begin with.'

The court ordered a new election last month. Georgescu, facing a criminal investigation, was barred from running again, clearing the way for another nationalist candidate, George Simion. A similar torrent of manipulated content appeared, including the fake video that made Trump appear to criticize the country's current leaders, according to researchers from the Bulgarian-Romanian Observatory of Digital Media. Nicusor Dan, the centrist mayor of Bucharest, prevailed in a second round of voting May 18.
Lucas Hansen, a founder of CivAI, a nonprofit that studies the abilities and dangers of artificial intelligence, said he was concerned about more than just the potential for deepfakes to fool voters. AI, he warned, is so muddling the public debate that people are becoming disillusioned. 'The pollution of the information ecosystem is going to be one of the most difficult things to overcome,' he said. 'And I'm not really sure there's much of a way back from that.'

This article originally appeared in The New York Times.

Climate Change Deniers Are Switching Tactics

Forbes

24-06-2025

  • Politics
  • Forbes


Coal-fired power station with smoking chimneys.

Climate misinformation has shifted focus, moving away from denying that climate change is happening and instead working to cast doubt on proposed solutions. An analysis of thousands of academic papers on climate misinformation published over the last ten years by the International Panel on the Information Environment (IPIE) has revealed that outright denialism is on the wane. Misinformation is rife, however, when it comes to the effectiveness, costs or fairness of mitigation measures.

Fossil fuel companies, along with associated political groups and think tanks, are carrying out sophisticated campaigns that sow doubt about climate solutions, the research found. Key targets include political leaders, civil servants and regulatory agencies, in efforts to delay climate policy, with automated and coordinated bots playing a central role.

"Climate misinformation is being amplified by institutions with the power to shape narratives and suppress inconvenient truths," said Dr Ece Elbeyi, lead author of the report. "As long as these actors continue to manipulate the flow of information, the prospects for effective and equitable climate action will remain dangerously out of reach."

The fossil fuel industry, the report said, has long denied the reality of climate change and distorted scientific facts, while also casting doubt on proposed solutions. Other business sectors, meanwhile, have been shifting to the same tactics. American electric utility companies, for example, primarily denied or sowed doubt about climate change between 1990 and 2000; they are now obstructing and delaying solutions while trying to shift responsibility for climate change to other sectors of society. Researchers have documented extensive organized collaboration between fossil fuel companies, states and political actors.
In Europe, meanwhile, studies have found right-wing populist parties actively working against mitigation measures; the Swiss People's Party, for example, has tried to obstruct the transition to renewable energy, arguing that it imposes an excessive economic burden on the country. Individual politicians are working to discredit climate solutions too, the researchers said. "Based on a network analysis of 7.3 million tweets, one study identified U.S. president Donald Trump as the key influencer of the network, whose logical fallacies, unfounded claims, and cherry-picking of findings were heavily retweeted by other users," the researchers said.

A comparative analysis of the rhetoric of U.S. political parties showed that while Democrats presented scientific facts, Republicans tended to rely on anecdotes and storytelling, which are both more persuasive and more difficult to refute, they said. Skepticism may be gradually taking precedence over denial globally, with a variety of messages questioning the relevance, feasibility and effectiveness of potential solutions. Russia, for example, has labeled EU policies of transitioning to renewable energy sources as "hypocritical" and "politically motivated," and has even claimed that renewables are harmful to nature.

"We are dealing with an information environment that has been deliberately distorted," said Dr Klaus Bruhn Jensen, professor at the University of Copenhagen. "When corporations, governments, and media platforms obscure climate realities, the result is paralysis. Addressing the climate emergency therefore demands not only policy reform, but an unflinching reckoning with systems that spread and sustain falsehoods."
