Russia detains suspect in car bomb killing of general near Moscow


AsiaOne | 27-04-2025
MOSCOW - Russia's FSB security service said on Saturday (April 26) it had detained a suspect over the killing of a senior Russian military officer on Friday by a car bomb.
The Kremlin blamed Ukraine for the killing of 59-year-old Yaroslav Moskalik, deputy head of the Main Operations Directorate of the General Staff of the Russian Armed Forces.
There was no official comment from Kyiv on Moskalik's death.
The FSB named the suspect as Ignat Kuzin, saying he was "an agent of the Ukrainian special services".
Moskalik was killed in the town of Balashikha, east of Moscow, hours before US President Donald Trump's envoy Steve Witkoff met President Vladimir Putin in Moscow.

Related Articles

EU urges Ukraine to uphold independent anti-corruption bodies; Zelenskiy signals swift action
Straits Times | an hour ago

FILE PHOTO: European Commission President Ursula von der Leyen speaks with Ukraine's President Volodymyr Zelenskiy prior to a bilateral meeting in Rome, Italy, April 26, 2025. Andrew Medichini/Pool via REUTERS/File Photo

BRUSSELS - European Commission President Ursula von der Leyen called on Sunday for President Volodymyr Zelenskiy to uphold independent anti-corruption bodies, with the Ukrainian leader signalling that supporting legislation could be adopted within days.

"Ukraine has already achieved a lot on its European path. It must build on these solid foundations and preserve independent anti-corruption bodies, which are cornerstones of Ukraine's rule of law," von der Leyen said in a post on X after a call with Zelenskiy.

After a rare outburst of public criticism, Zelenskiy on Thursday submitted draft legislation to restore the independence of Ukraine's anti-corruption agencies, reversing course from an earlier bill aimed at stripping their autonomy.

"I thanked the European Commission for the provided expertise," Zelenskiy said in a post on X after his Sunday call with von der Leyen. "We share the same vision: it is important that the bill is adopted without delay, as early as next week."

Von der Leyen also promised continued support for Ukraine on its path to EU membership. "Ukraine can count on our support to deliver progress on its European path," she added. REUTERS

Truth in the age of AI
Straits Times | 2 hours ago

AI is causing seismic changes in how we understand what is true and what is not. It can have serious implications for important events such as elections.

In today's world, artificial intelligence (AI) has transformed the way we live, work and play. Algorithms power our social media feeds, and bots can make our work more efficient. AI is the ability of machines to think and act like humans by learning, solving problems and making decisions. With its ability to process and analyse vast amounts of data in seconds, AI has become a powerful tool in sectors like healthcare, finance and banking, manufacturing and supply chains.

But as AI proliferates, it is also silently causing seismic changes in how we understand what is true and what is not. The digital world is seeing an explosion of synthetic content that muddies the line between truth and fiction, which can have serious implications for important events such as elections.

Deepfakes – hyper-realistic videos created using deep learning – are perhaps the most high-profile example of this. A 2022 deepfake video of Ukrainian President Volodymyr Zelensky urging his troops to surrender during the Russia-Ukraine war was widely circulated before being debunked. The minute-long video briefly sowed confusion and panic.

In 2024, during India's general election, political parties 'resurrected' deceased leaders and used deepfake avatars to influence voters. For instance, former Tamil Nadu chief minister M. Karunanidhi, who died in 2018, appeared in AI-generated videos endorsing his son's political run.

In Britain, more than 100 deepfake videos featuring then British Prime Minister Rishi Sunak ran as ads on Facebook before the 2024 election. The ads appeared to have been viewed by 400,000 people in a month, and payments for the ads originated overseas.

When voters see such manipulated videos making controversial or false statements, it can damage reputations or sway opinions – even after the deepfake is debunked. The threat is not just about altering individual votes – it is about eroding trust in the electoral process altogether. When voters begin to doubt everything they see or hear, apathy and cynicism can take hold, weakening democratic institutions.

With its ability to blur the distinction between what is real and what is not, AI's impact on truth is more insidious than simply muddling black and white, fact and fiction.

NewsGuard, a media literacy tool that rates the reliability of online sources, found that by May 2025, more than 1,200 AI-generated news and information sites were operating with little to no human oversight, a number that had increased by more than 20 times in two years. Many of these websites even appeared to be credible.
Reliable media organisations have also come under fire for using AI-generated news summaries that are sometimes inaccurate. Apple faced calls earlier in 2025 to remove its AI-generated news alerts on iPhones that were in some instances completely false and 'hallucinated'.

In its Global Risks Report 2024, the World Economic Forum said: "Emerging as the most severe global risk anticipated over the next two years, foreign and domestic actors alike will leverage misinformation and disinformation to further widen societal and political divides."

AI will serve only to amplify those divides through its widespread use by bad actors to spread misinformation that appears to be credible, using algorithms that emphasise engagement, even to those adept at navigating news sites.

He heard what sounded like his son crying and fell for the scam

Beyond elections and political influence, AI is also being used by scammers to target individuals. Voice cloning technology is increasingly being deployed by fraudsters in impersonation scams. With just a short sample of someone's voice – easily sourced from a TikTok video, a podcast clip, or even a voicemail – AI tools can convincingly replicate it.

In India, Mr Himanshu Shekhar Singh fell prey to an elaborate scheme after receiving a phone call from a purported police officer, who claimed that his 18-year-old son had been caught with a gang of rapists and needed 30,000 rupees (S$444) before his name could be cleared. He heard what sounded like his son crying over the phone, and made an initial payment of 10,000 rupees, only to find out later that his son was unharmed and he had been duped.

In Hong Kong, police said an unnamed multinational company was scammed out of HK$200 million (S$32.6 million) after an employee attended a video conference call with deepfake recreations of the company's Britain-based chief financial officer and other employees. The employee was duped into making the transfers following instructions from the scammers.

Scammers are also using generative AI to produce phishing e-mails and scam messages that are far more convincing than traditional spam, which is more likely to contain incorrect grammar and suspicious-looking links. Cyber-security firm Barracuda, together with researchers from Columbia University and the University of Chicago, found in a study published on June 18 that 51 per cent of malicious and spam e-mails are now generated using AI tools. The research team examined a dataset of spam e-mails flagged by Barracuda between February 2022 and April 2025. Using trained detection tools, they assessed whether each malicious or unwanted message had been produced by AI. Their analysis revealed a consistent increase in the share of AI-generated spam e-mails starting from November 2022 and continuing until early 2024. Notably, November 2022 marked the public release of ChatGPT.

Can AI be a force for good?

But just as AI is being used to deceive, it is also being used to defend the truth. Newsrooms around the world are increasingly turning to AI to enhance their fact-checking capabilities and stay ahead of misinformation. Reuters, for example, has developed News Tracer, a tool powered by machine learning and natural language processing that monitors social media platforms like X to detect and assess the credibility of breaking news stories in real time. It assigns credibility scores to emerging narratives, helping journalists filter out false leads quickly.
Meanwhile, major news organisations like the BBC and The New York Times have collaborated with partners like Microsoft and Media City Bergen under an initiative called Project Origin to use AI to track the provenance of digital content and verify its authenticity.

Tech companies are also contributing to efforts to combat the rise of misinformation. Google's Jigsaw unit has developed tools such as 'About this image', which helps users trace an image's origin and detect whether it was AI-generated or manipulated. Microsoft has also contributed to the fight against deception with its Video Authenticator tool, which detects deepfakes by identifying giveaway signs, invisible to the human eye, that an image has been artificially generated. For example, in a video where someone's face has been mapped onto another person's body, these include subtle fading or greyscale pixels at the boundary where the images have been merged.

Social media companies are slowly stepping up too. Meta has introduced labels for AI-generated political ads, and YouTube has rolled out a new tool that requires creators to disclose to viewers when realistic content is made with altered or synthetic media.

The rise of AI has undeniably made it harder to distinguish fact from fiction, but it has also opened new frontiers for safeguarding the truth.

Legislation can establish protective guard rails

Whether AI becomes a conduit for clarity or confusion will also be shaped by the guard rails and regulations that governments and societies put in place. To that end, the European Union is a front runner in AI regulation. The EU Artificial Intelligence Act was first proposed in 2021 and approved in August 2024. The legislation classifies AI by risk and places strict rules on systems that affect public rights and democracy. AI deemed to pose unacceptable risk, such as social scoring systems and manipulative AI, is prohibited. High-risk systems include, for example, those that profile individuals to assess their work performance or economic situation. High-risk AI providers need to establish a risk management system and conduct data governance to ensure that testing data sets are relevant and as free of errors as possible. This helps to address the risks that AI poses to truth, especially around misinformation and algorithmic manipulation.

Countries such as Singapore, Canada and Britain have also published governance frameworks or set up regulatory sandboxes to guide ethical AI use.

Societies must also be equipped to navigate the AI era. Public education on how deepfakes, bot-generated content and algorithms can skew perception is essential. When citizens understand how AI-generated misinformation works, they are less likely to be misled. In the EU, media literacy is a core pillar of the Digital Services Act, which requires major online platforms to support educational campaigns that help users recognise disinformation and manipulative content. Finland has integrated AI literacy into its 2025 school curriculum, from early childhood to vocational training. The aim is to prepare students for a future where AI is increasingly prevalent, to help them build critical thinking skills and to expose them to ethical considerations around AI.

But mitigating the impact of AI is not just the job of governments and tech companies – individuals can also take steps to protect themselves from deception. Take care to verify the source of information, especially when it comes through social media.
Be wary of sensational photos or videos, and consider the likelihood that the content could have been manipulated. When in doubt, consult trusted news sources or channels. Individuals can also play their part by using AI responsibly – for instance, by not sharing unverified content generated by chatbots or image tools. By staying cautious and curious, people can push back against AI-powered misinformation and create a safer digital space.

How Singapore tackles AI risks

Singapore was among the first few countries to introduce a national AI strategy, in 2019, with projects in areas like border clearance operations and chronic disease prediction. But with the rapid development of generative AI that saw the public roll-out of large language models like ChatGPT, the nation updated its strategy in 2023. The National AI Strategy 2.0 focuses on nurturing talent, promoting a thriving AI industry and sustaining it with world-leading infrastructure and research that ensures AI serves the public good. To nurture talent here, Singapore aims to triple its number of AI practitioners to 15,000 by training locals and hiring from overseas.

While the nation is eager to harness the benefits of AI to boost its digital economy, it is also wary of the manipulation, misinformation and ethical risks involved with the technology. To mitigate such risks, the country launched the first edition of the Model AI Governance Framework in January 2019. The voluntary framework is a guide for private sector organisations to address key ethical and governance issues when deploying traditional AI. It explains how AI systems work, how to build good data accountability practices, and how to create open and transparent communication.

The framework was updated in 2020 and again in May 2024, when the Model AI Governance Framework for Generative AI was rolled out, building on the initial frameworks to take into account new risks posed by generative AI. These include hallucinations, where an AI model generates information that is incorrect or not based in reality, and concerns around copyright infringement. To combat such challenges, the framework encourages industry players to offer transparency around the safety and hygiene measures taken when developing an AI tool, such as bias correction techniques. It also touches on the need for transparency around how AI-generated content is created, to enable users to consume content in an informed manner, and on how companies and communities should come together on digital literacy initiatives.

In the country's general election held in May 2025, a new law banning fake or digitally altered online material that misrepresents candidates during the election period was in place for the first time. In passing the Elections (Integrity of Online Advertising) (Amendment) Bill in October 2024, Minister for Digital Development and Information Josephine Teo said that it does not matter whether the content is favourable or unfavourable to any candidate. The publication of misinformation generated using AI during the election, and the boosting, sharing and reposting of such content, was made an offence. While the law was not invoked during the election, it provides a lever to ensure electoral integrity in Singapore.

Overall, Singapore is eager to use AI as a driver of growth. In regulating the technology, it prefers an incremental approach, developing and updating voluntary governance frameworks and drawing up sector-specific guidelines rather than imposing an overall mandate. But where there is a risk of AI being used to misinform and manipulate the public, it will not hesitate to pass laws against this, as it did ahead of the 2025 General Election. Singapore's governance approach combines strong ethical foundations, industry collaboration and global engagement to ensure AI is used safely and fairly.

EU's von der Leyen: 15% the 'best we could get'
Straits Times | 3 hours ago

European Commission President Ursula von der Leyen sits with U.S. President Donald Trump, after the announcement of a trade deal between the U.S. and EU, in Turnberry, Scotland, Britain, July 27, 2025. REUTERS/Evelyn Hockstein

PRESTWICK, Scotland - European Commission President Ursula von der Leyen defended the trade deal clinched with the United States on Sunday as "the best we could get" and not to be underestimated, given the looming threat of 30% tariffs that had been hanging over the EU.

A baseline tariff rate of 15% on EU goods imported into the United States would apply to most goods, including cars, semiconductors and pharmaceutical goods, von der Leyen said. Meanwhile, a zero-for-zero tariff rate had been agreed for certain strategic products, including aircraft and aircraft parts, certain chemicals and certain generic drugs. No decision had been taken on a rate for wine and spirits, she added.

Asked if she considered 15% a good deal for European carmakers, von der Leyen told reporters: "15% is not to be underestimated, but it is the best we could get."

The European Union committed to purchasing $750 billion worth of U.S. LNG and nuclear fuel over three years. "We still have too much Russian LNG that is coming through the back door," she said. The European Commission has proposed phasing out all Russian gas imports by Jan 1, 2028.

"Today's deal creates certainty in uncertain times, delivers stability and predictability," von der Leyen told reporters before leaving Scotland. REUTERS
