Three in four in Singapore not able to identify deepfake content: Cyber Security Agency survey

The Star | 7 days ago
SINGAPORE: Only one in four people here are able to distinguish between deepfake and legitimate videos, even though a majority said they are confident in identifying deepfake content.
This is one of the key findings of a survey released on July 2 by the Cyber Security Agency (CSA) of Singapore.
Questions related to deepfakes are new in the Cybersecurity Awareness Survey 2024, given the prevalence of generative artificial intelligence tools that make it easier to create fake content to scam unsuspecting victims.
Overall, 1,050 respondents aged 15 and above were polled in October 2024 on their attitudes towards issues such as cyber incidents and mobile security, and their adoption of cyber hygiene practices.
Nearly 80 per cent said they are confident in identifying deepfakes, citing telltale signs such as suspicious content and unsynchronised lip movements. However, only a quarter of them could correctly distinguish between deepfake and legitimate videos when they were put to the test.
'With cyber criminals constantly devising new scam tactics, we need to be vigilant, and make it harder for them to scam us,' said CSA's chief executive David Koh.
'Always stop and check with trusted sources before taking any action, so that we can protect what is precious to us.'
Compared with an earlier survey conducted in 2022, more people know what phishing is.
But when tested on their ability to distinguish between phishing and legitimate content, only 13 per cent of the respondents were able to correctly identify them, a drop from 24 per cent in 2022.
There has been an increase in the installation of cybersecurity apps and adoption of two-factor authentication (2FA) over the years.
More respondents have installed security apps in 2024, with 63 per cent having at least one app installed, up from 50 per cent in 2022.
The adoption of 2FA across all online accounts and apps also increased from 35 per cent in 2022 to 41 per cent in 2024.
Though 36 per cent of respondents in 2024 accepted their mobile devices' updates immediately, 32 per cent preferred to continue using their devices and update later. Those who chose not to update their devices at all remained low at three per cent, down from four per cent in 2022.
Around one quarter of respondents in the 2024 survey said they had been hit by at least one cyber incident, a slight drop from 30 per cent in 2022.
There was also a drop in the percentage of respondents who perceived that their devices were likely to be compromised by a virus or malware, from 60 per cent in 2022 to 57 per cent in 2024.
Nearly 40 per cent of people perceived themselves as being at risk of falling for online scams, down from 43 per cent in 2022. - The Straits Times/ANN

Related Articles

Trapped in echo chambers

The Star | 3 days ago
Tharman giving his opening address at the International Conference on Cohesive Societies on June 24. — The Straits Times/Asia News Network

NO one can escape the reality that much of what is seen on social media has been shaped by algorithms designed by tech giants – and these algorithms, more often than not, deepen divisions rather than bridge them, says Singapore president Tharman Shanmugaratnam.

In his speech at the International Conference on Cohesive Societies 2025, Tharman said citizens in many societies no longer have a shared reality, a shared framework of facts upon which they form different views.

'We no longer live in that world because increasingly, a more divided media space and social media algorithms are leading to a more divided public and more divisive politics.

'Studies have shown that if people have regular exposure to a feed of stories that accord with their ideological preferences or views, it strengthens their preferences.

'It makes them more partisan and it polarises society. In other words, it's not like just another consumer good that is meeting people's preferences. Here, it is accentuating preference and it is a polarising force.'

He also points out that the advertising-based social media business model has an incentive to maximise attention.

'Studies here too have shown that to maximise attention, you propagate negative messages. And wait for what is coming. Advances in artificial intelligence (AI) are setting in motion further changes.

'AI-driven search interfaces and chatbots may very well create a flood of synthetic media of dubious provenance. It's not yet prevalent, but it's coming.'

He says both government and civil society have to actively work together – with the tech companies that run the largest social media platforms – to make democracy safer and more sustainable.

Tharman cites the European Union's new Digital Services Act as a good example, saying it holds social media platforms accountable for content and requires the quick removal of hate speech.

'We do essentially the same in Singapore and Australia, and a few other countries. The EU has also gone further to address the systemic risks posed by social media algorithms.

'They require the larger platforms to dial back the risks of algorithmic amplification of disinformation. It's not easy, because a lot of the onus is on the platforms themselves, but the laws are in place and it's an important start.'

Tharman says he is aware that some may describe the measure as 'over-regulation'.

'It is more regulation than the big tech players are used to. One can debate the specific mechanisms, but an unregulated media landscape will only see democracy gradually unravel.

'There is no easy step off from this race between the leading tech companies and the platforms that they run. They have an incentive to keep people and traffic within their own platforms. They have the incentive to maximise attention through negative news.

'So this is a market-driven algorithmic treadmill, and there's no easy stepping off. It can only be addressed through regulation of the market – regulation set by the public sector, but with significant engagement of both civil society and the tech companies. In fact, in the case of the European Union's Digital Services Act, civil society was very actively involved in working with public sector officials in formulating the plans.'

He also says established news media will have to respond to the challenge of a fragmenting landscape.
'If they can show that they have a brand of journalism that is built on accuracy and transparency; if they can show that they are reporting the world as it is, and separating news from opinion; and if they can show that when they publish opinions, they're providing different perspectives for people to assess – that will help restore trust and the value of their brands.'

Think you can spot a deepfake? Most Singaporeans can't, survey reveals

Malay Mail | 7 days ago
SINGAPORE, July 2 — Just one in four people here could correctly identify deepfake videos, despite nearly 80 per cent expressing confidence in spotting them. This finding comes from the Cybersecurity Awareness Survey 2024, released on July 2 by the Cyber Security Agency (CSA) of Singapore, The Straits Times reported.

The survey introduced deepfake-related questions for the first time, in response to the growing use of generative artificial intelligence to create convincing fake content. Conducted in October 2024, the survey polled 1,050 respondents aged 15 and above on topics including cyber hygiene, cyber incidents, and mobile security.

Most respondents cited visual clues like unsynchronised lip movements and suspicious content as ways to detect deepfakes, but only a quarter passed a practical test. 'With cyber criminals constantly devising new scam tactics, we need to be vigilant, and make it harder for them to scam us,' CSA chief executive David Koh was quoted as saying.

While awareness of phishing increased compared to 2022, the ability to accurately identify phishing messages fell to 13 per cent from 24 per cent. Cybersecurity habits have improved, with 63 per cent using security apps and 41 per cent enabling two-factor authentication, but perceived risks of scams and malware have declined slightly since 2022.
