
'A tragic completeness': Ukrainian novelist awarded Orwell Prize posthumously for unfinished final book
Two years after she was killed in a Russian missile strike, Victoria Amelina, a Ukrainian novelist who became a war crimes researcher after Russia's full-scale invasion of her country, was posthumously awarded the Orwell Prize for Political Writing for her unfinished work, Looking at Women, Looking at War: A War and Justice Diary from Ukraine.
The book, released by HarperCollins with a foreword by Margaret Atwood, was described by the prize judges as 'technically unfinished but with a tragic completeness.' Atwood, writing in that foreword, calls the war 'Russia's appalling and brutal campaign to annihilate Ukraine,' and reflects that 'in the middle of a war, there is little past or future … there is only the white heat of the moment.'
It is in this white heat that Amelina's final book lives: in bearing witness to the violence, in preserving fragments of memory, and in brief moments of calm and camaraderie.
Born in Lviv in 1986, Amelina trained as a computer scientist before turning to literature. Her debut novel The Fall Syndrome was published in 2014, and her follow-up, Dom's Dream Kingdom (2017), established her as one of Ukraine's leading young literary voices. She also wrote children's books, ran literary festivals, and was raising her young son when Russia launched its full-scale invasion in February 2022.
At the time, Amelina was at work on a novel. Within weeks, she had set it aside. 'The quest for justice has turned me from a novelist and mother to a war crimes reporter,' she would later write. She joined Truth Hounds, a Ukrainian human rights organisation, and began documenting war crimes: interviewing witnesses, photographing the ruins of cultural sites, and writing.
The book she eventually began, part memoir and part chronicle, traced the lives of Ukrainian women who confronted wartime brutality. Among them were Evgenia, a lawyer-turned-soldier; Oleksandra Matviichuk, who helped document war crimes and whose Center for Civil Liberties shared the 2022 Nobel Peace Prize; and Yulia, a librarian who helped expose the abduction and murder of a children's book author.
The manuscript Amelina left behind — roughly 60 percent complete — included essays, field notes, and fragments, some with no more than a title. The first chapter, titled The Shell Hole in the Fairy Tale, opens with the author preparing for a vacation to Egypt. Her newly purchased handgun looks out of place lying near colourful dresses and swimsuits. 'A full-scale Russian invasion has been postponed for the last eight years since 2014,' she writes, still half-believing that war might be avoided.
'Amelina is setting off for a holiday with her young son as the war comes chasing after her and everyone else in Ukraine,' the Orwell Foundation noted in its citation. 'She is finishing a funding application for a literary festival while standing in the airport security line, checking the news and thinking about her new gun.'
On the night of June 27, 2023, Amelina was dining with a group of international writers in Kramatorsk, a city in the embattled Donetsk region, when a Russian cruise missile struck the restaurant. She suffered critical head injuries and died four days later. She was 37.
Her husband, Alex Amelin, accepted the £3,000 award at a ceremony in London this week, held on George Orwell's birthday. The prize money will support the New York Literary Festival, which Amelina founded in the town of New York in the Donetsk region. The town, ironically named after the American city, now lies close to the front lines.
The Orwell Prize, awarded annually by the Orwell Foundation, honours work that exemplifies George Orwell's values of integrity, decency, and truth-telling in political writing. It seeks to fulfil Orwell's enduring ambition 'to make political writing into an art.'