Truth in the age of AI
In today's world, artificial intelligence (AI) has transformed the way we live, work and play. Algorithms power our social media feeds, and bots can make our work more efficient.
AI refers to the ability of machines to perform tasks that typically require human intelligence, such as learning, solving problems and making decisions.
With its ability to process and analyse vast amounts of data in seconds, AI has become a powerful tool in sectors like healthcare, finance and banking, manufacturing and supply chains.
But as AI proliferates, it is also silently causing seismic changes in how we understand what is true and what is not.
The digital world is seeing an explosion of synthetic content that muddies the line between truth and fiction, which can have serious implications for important events such as elections.
Deepfakes – hyper-realistic videos created using deep learning – are perhaps the most high-profile example of this.
A 2022 deepfake video of Ukrainian President Volodymyr Zelensky urging his troops to surrender during the Russia-Ukraine war was widely circulated before being debunked. The minute-long video briefly sowed confusion and panic.
In 2024, during India's general election, political parties 'resurrected' deceased leaders and used deepfake avatars to influence voters. For instance, the former Tamil Nadu chief minister M. Karunanidhi, who died in 2018, appeared in AI-generated videos endorsing his son's political run.
In Britain, more than 100 deepfake videos featuring then Prime Minister Rishi Sunak ran as ads on Facebook before the 2024 election. The ads appeared to have been viewed by 400,000 people in a month, and payments for them originated overseas.
When voters see such manipulated videos making controversial or false statements, it can damage reputations or sway opinions – even after the deepfake is debunked.
The threat is not just about altering individual votes – it is about eroding trust in the electoral process altogether. When voters begin to doubt everything they see or hear, apathy and cynicism can take hold, weakening democratic institutions.
By blurring the distinction between what is real and what is not, AI's impact on truth is more insidious than a simple matter of telling black from white, fact from fiction.
NewsGuard, a media literacy tool that rates the reliability of online sources, found that by May 2025, more than 1,200 AI-generated news and information sites were operating with little to no human oversight, a number that had increased by more than 20 times in two years. Many of these websites even appeared to be credible.
Reliable media organisations have also come under fire for using AI-generated news summaries that are sometimes inaccurate. Apple faced calls earlier in 2025 to remove its AI-generated news alerts on iPhones that were in some instances completely false and 'hallucinated'.
In its Global Risks Report 2024, the World Economic Forum said: 'Emerging as the most severe global risk anticipated over the next two years, foreign and domestic actors alike will leverage misinformation and disinformation to further widen societal and political divides.'
AI will only amplify those divides as bad actors use it to spread misinformation that appears credible, is pushed by algorithms that prioritise engagement, and can mislead even those adept at navigating the news.
He heard what sounded like his son crying and fell for the scam
Beyond elections and political influence, AI is also being used by scammers to target individuals.
Voice cloning technology is increasingly being deployed by fraudsters in impersonation scams. With just a short sample of someone's voice – easily sourced from a TikTok video, a podcast clip, or even a voicemail – AI tools can convincingly replicate it.
In India, Mr Himanshu Shekhar Singh fell prey to an elaborate scheme after receiving a phone call from a purported police officer, who claimed that his 18-year-old son had been caught with a gang of rapists and needed 30,000 rupees (S$444) before his name could be cleared.
He heard what sounded like his son crying over the phone, and made an initial payment of 10,000 rupees, only to find out that his son was unharmed, and he had been duped.
In Hong Kong, the police said that an unnamed multinational company was scammed of HK$200 million (S$32.6 million) after an employee attended a video conference call with deepfake recreations of the company's Britain-based chief financial officer and other employees. The employee was duped into making the transfers following instructions from the scammers.
Scammers are also using generative AI to produce phishing e-mails and scam messages that are far more convincing than traditional spam, which is more likely to contain incorrect grammar and suspicious-looking links.
Cyber-security firm Barracuda, together with researchers from Columbia University and the University of Chicago, found in a study published on June 18 that 51 per cent of malicious and spam e-mails are now generated using AI tools.
The research team examined a dataset of spam e-mails flagged by Barracuda between February 2022 and April 2025. Using trained detection tools, they assessed whether each malicious or unwanted message had been produced by AI.
Their analysis revealed a consistent increase in the share of AI-generated spam e-mails starting from November 2022 and continuing until early 2024. Notably, November 2022 marked the public release of ChatGPT.
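To illustrate the general idea behind such detection tools, the sketch below trains a toy text classifier to score how likely an e-mail is to be machine-generated. It is a minimal illustration using invented data, not the detection pipeline used in the Barracuda study; a real detector would rely on far larger corpora and stronger models.

```python
# Toy illustration only: NOT the Barracuda/Columbia/Chicago detection pipeline.
# The example e-mails and labels below are invented purely to show the idea of
# training a classifier to flag machine-generated messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = suspected AI-generated, 0 = human-written
emails = [
    "Dear valued customer, we kindly request that you verify your account details promptly.",
    "hey can u resend the invoice, i lost it lol",
    "We are pleased to inform you that your parcel requires a small customs fee to proceed.",
    "Meeting moved to 3pm, same room. Bring the slides.",
]
labels = [1, 0, 1, 0]

# A common, simple baseline: TF-IDF word features fed into logistic regression
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(emails, labels)

# Score a new message: probability that it resembles the machine-generated class
new_msg = ["Kindly be advised that your account will be suspended unless you act promptly."]
print(detector.predict_proba(new_msg)[0][1])
```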
Can AI be a force for good?
But just as AI is being used to deceive, it is also being used to defend the truth.
For example, newsrooms around the world are increasingly turning to AI to enhance their fact-checking capabilities and stay ahead of misinformation.
Reuters, for example, has developed News Tracer, a tool powered by machine learning and natural language processing that monitors social media platforms like X to detect and assess the credibility of breaking news stories in real time. It assigns credibility scores to emerging narratives, helping journalists filter out false leads quickly.
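As a rough illustration of what a 'credibility score' can mean, the sketch below combines a few invented signals about a social media post into a single number. It is a hypothetical toy, not Reuters' News Tracer; the signals and weights are assumptions made purely for illustration.

```python
# Hypothetical toy credibility scorer: NOT Reuters' News Tracer.
# The signals and weights are invented to illustrate how several cues about a
# post and its source can be combined into one score for journalists to triage.
from dataclasses import dataclass

@dataclass
class PostSignals:
    source_verified: bool      # is the posting account verified?
    account_age_days: int      # older accounts weighted as slightly more reliable
    independent_reports: int   # unrelated accounts reporting the same event
    contains_media: bool       # original photo or video attached

def credibility_score(s: PostSignals) -> float:
    """Return a score between 0 and 1; higher suggests the lead is more credible."""
    score = 0.0
    score += 0.3 if s.source_verified else 0.0
    score += min(s.account_age_days / 3650, 1.0) * 0.2   # capped at roughly 10 years
    score += min(s.independent_reports / 5, 1.0) * 0.4   # corroboration weighted most
    score += 0.1 if s.contains_media else 0.0
    return round(score, 2)

# A week-old, unverified account with no corroboration scores near zero;
# a long-standing verified account corroborated by six others scores around 0.9.
print(credibility_score(PostSignals(False, 7, 0, False)))
print(credibility_score(PostSignals(True, 2000, 6, True)))
```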
Meanwhile, major news organisations like the BBC and The New York Times have collaborated with partners like Microsoft and Media City Bergen under an initiative called Project Origin to use AI to track the provenance of digital content and verify its authenticity.
Tech companies are also contributing to efforts to combat the rise of misinformation.
Google's Jigsaw unit has developed tools such as 'About this image', which helps users trace an image's origin and detect whether it was AI-generated or manipulated.
Microsoft has also contributed to the fight against deception with its Video Authenticator tool, which detects deepfakes by identifying giveaway signs invisible to the human eye that an image has been artificially generated.
For example, in a video where someone's face has been mapped onto another person's body, these signs can include subtle fading or greyscale pixels at the boundary where the images have been merged.
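As a very rough illustration of that kind of check, the sketch below compares colour saturation along a suspected blend seam with the rest of a frame, on the logic that blended regions can look unusually grey. It is a toy example, not Microsoft's Video Authenticator; the input files and the interpretation of the ratio are assumptions.

```python
# Toy illustration only: NOT Microsoft's Video Authenticator.
# Inspired by the idea that compositing can leave desaturated, greyish pixels
# along the seam where two images were merged. Assumes OpenCV and NumPy are
# installed; "frame.jpg" and "seam_mask.png" are hypothetical input files.
import cv2
import numpy as np

def seam_greyness(frame_bgr: np.ndarray, seam_mask: np.ndarray) -> float:
    """Ratio of colour saturation on the suspected seam to the rest of the frame.
    Values well below 1.0 mean the seam is unusually grey: one weak hint of blending."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1].astype(float)
    seam_sat = saturation[seam_mask > 0].mean()
    rest_sat = saturation[seam_mask == 0].mean()
    return seam_sat / (rest_sat + 1e-6)

frame = cv2.imread("frame.jpg")                           # hypothetical video frame
mask = cv2.imread("seam_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical seam mask
if frame is not None and mask is not None:
    ratio = seam_greyness(frame, mask)
    print(f"Seam saturation ratio: {ratio:.2f} (low values are suspicious)")
```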
Social media companies are slowly stepping up too.
Meta has introduced labels for AI-generated political ads, and YouTube has rolled out a new tool that requires creators to disclose to viewers when realistic content is made with altered or synthetic media.
The rise of AI has undeniably made it harder to distinguish fact from fiction, but it has also opened new frontiers for safeguarding the truth.
Legislation can establish protective guard rails
Whether AI becomes a conduit for clarity or confusion will also be shaped by the guard rails and regulations that governments and societies put in place.
To that end, the European Union is a front runner in AI regulation. The EU Artificial Intelligence Act was first proposed in 2021 and entered into force in August 2024.
The legislation classifies AI by risk and places strict rules on systems that affect public rights and democracy.
For example, AI deemed to pose unacceptable risk, such as social scoring systems and manipulative AI, is prohibited outright. High-risk systems include those that profile individuals to assess their work performance or economic situation.
Providers of high-risk AI must establish a risk management system and put data governance in place to ensure that testing data sets are relevant and as free of errors as possible.
This helps to address risks that AI poses to truth, especially around misinformation and algorithmic manipulation.
Countries such as Singapore, Canada, and Britain have also published governance frameworks or set up regulatory sandboxes to guide ethical AI use.
Societies must be equipped to navigate the AI era.
Public education on how deepfakes, bot-generated content and algorithms can skew perception is essential. When citizens understand how AI-generated misinformation works, they are less likely to be misled.
In the EU, media literacy is a core pillar of the Digital Services Act, which requires major online platforms to support educational campaigns that help users recognise disinformation and manipulative content.
Finland has integrated AI literacy into its 2025 school curriculum from early childhood to vocational training. The aim is to prepare students for a future where AI is increasingly prevalent, build their critical thinking skills, and expose them to ethical considerations around AI.
But mitigating the impact of AI is not just the job of governments and tech companies – individuals can also take steps to protect themselves from deception.
Take care to verify the source of information, especially when it comes through social media. Be wary of sensational photos or videos, and consider the likelihood that the content has been manipulated. When in doubt, consult trusted news sources or channels.
Individuals can also play their part by using AI responsibly, such as by not sharing unverified content generated by chatbots or image tools.
By staying cautious and curious, people can push back against AI-powered misinformation and create a safer digital space.
How Singapore tackles AI risks
Singapore was among the first few countries to introduce a national AI strategy in 2019, with projects in areas like border clearance operations and chronic disease prediction. But with the rapid development of generative AI that saw the public roll-out of large language models like ChatGPT, the nation updated its strategy in 2023.
The National AI Strategy 2.0 focuses on nurturing talent, promoting a thriving AI industry and sustaining it with world-leading infrastructure and research that ensures AI serves the public good.
To nurture talent here, Singapore aims to triple its number of AI practitioners to 15,000 by training locals and hiring from overseas.
While the nation is eager to harness the benefits of AI to boost its digital economy, it is also wary of the manipulation, misinformation, and ethical risks involved with the technology.
To mitigate such risks, the country launched the first edition of the Model AI Governance Framework in January 2019. The voluntary framework is a guide for private sector organisations to address key ethical and governance issues when deploying traditional AI.
The framework explains how AI systems work, how to build good data accountability practices, and how to create open and transparent communication.
The framework was updated in 2020 and then again in May 2024, when the Model AI Governance Framework for Generative AI was rolled out, building on the earlier frameworks to take into account new risks posed by generative AI. These include hallucinations, where an AI model generates information that is incorrect or not grounded in reality, as well as concerns around copyright infringement.
To combat such challenges, the framework encourages industry players to offer transparency around the safety and hygiene measures taken when developing the AI tool. This can include bias correction techniques, for instance.
The framework also touches on the need for transparency around how AI-generated content is created to enable users to consume content in an informed manner, and how companies and communities should come together on digital literacy initiatives.
In the country's recent general election held in May 2025, a new law banning fake or digitally altered online material that misrepresents candidates during the election period was put in place for the first time.
In passing the Elections (Integrity of Online Advertising) (Amendment) Bill in October 2024, Minister for Digital Development and Information Josephine Teo said that it does not matter if the content is favourable or unfavourable to any candidate.
The law made it an offence to publish AI-generated misinformation during the election, as well as to boost, share or repost such content. While it was not invoked during the recent general election, the legal instrument gives the authorities a lever to ensure electoral integrity in Singapore.
Overall, Singapore is eager to use AI as a driver of growth. In regulating the technology, it prefers an incremental approach: developing and updating voluntary governance frameworks and drawing up sector-specific guidelines rather than imposing an overarching mandate.
But where there is a risk of AI being used to misinform or manipulate the public, it will not hesitate to legislate against such use, as it did ahead of the 2025 General Election.
Singapore's governance approach combines strong ethical foundations, industry collaboration, and global engagement to ensure AI is used safely and fairly.
Hawkers who were on Michelin's Bib Gourmand list say the accolade does not guarantee sustained business or future survival. Although it may momentarily bring more business, some say they also cannot cope with the sudden crowds. It is also still hard to pass the business on. Over 70 per cent of this year's 89 recipients are food hawkers. Caitlin Ng with more.