AI action figure trend: Kaspersky warns about data privacy

Arab News, 23-04-2025

As the latest social media trend sees users jumping on the AI action figure craze by uploading personal information and photos to generate hyper-realistic AI dolls or action figures of themselves, Kaspersky urges individuals to be cautious about the personal information they share online.
The trend, while entertaining, raises concerns about data privacy and digital safety. Uploading images and personal information such as nicknames, workplaces, hobbies and family details to AI platforms may seem harmless, but it can inadvertently expose users to cyberthreats such as identity theft, phishing attacks, and unauthorized use of biometric data.
A Kaspersky study highlighted the paradox in users' approach to digital privacy. While 45 percent of respondents in the Kingdom cover their webcams to maintain privacy, and 44 percent rely on incognito mode for secure browsing, a significant number still engage in risky online behaviors. Notably, 47 percent of respondents admitted to sharing personal details with unverified sources to access online games and quizzes. This is often done without considering the potential security implications.
'Participating in viral trends such as AI action figure or anime-style images inspired by Studio Ghibli can be fun, but it is essential to understand the potential risks involved,' said Brandon Muller, technical expert for the MEA region at Kaspersky. 'It's important to keep in mind that this data could be accessed by cyberattackers. By sharing detailed personal information and images, users may unknowingly provide scammers with the data needed to compromise their digital identities or create social engineering messages.'
Kaspersky offers the following recommendations to safeguard personal data:
• Review privacy policies: Before using AI-powered tools, read and understand their privacy terms to know how your data will be used and stored and whether it may be shared with third parties.
• Limit personal information sharing: Avoid uploading sensitive photos or details that could be exploited, such as addresses or financial information.
• Use generic images: If possible, use generic images or landscape photos instead of high-resolution close-ups of your face, as facial data can be used for biometric profiling.
• Be cautious with permissions: Only grant necessary permissions to apps and platforms and be wary of those requesting excessive access, such as access to your contacts or location.
• Use trusted security solutions: Protect your devices with reliable cybersecurity software, such as Kaspersky Premium, to detect and prevent potential threats.


Related Articles

What it means to build local AI

Arab News, 3 hours ago

Following OpenAI's public launch of ChatGPT in November 2022, the underpinnings of artificial intelligence large language models seemed firmly 'WIRED' — Western, industrialized, rich, educated, and democratic. Everyone assumed that if large language models spoke a particular language and reflected a particular worldview, it would be a Western one. OpenAI even acknowledged ChatGPT's skew toward Western views and the English language.

But even before OpenAI's US competitors (Google and Anthropic) released their own large language models the following year, Southeast Asian developers had recognized the need for AI tools that would speak to their own region in its many languages — no small task, given that it has more than 1,200 of them.

Moreover, in a region where distant civilizational memories often collide with contemporary, postcolonial histories, language is profoundly political. Even seemingly monolingual countries belie marked diversity: Cambodians speak nearly 30 languages; Thais, roughly 70; and Vietnamese, more than 100. This is also a region where communities mix languages seamlessly, where nonverbal cues speak volumes, and where oral traditions are sometimes more prevalent than textual means of capturing the deep cultural and historical nuances that have been encoded in language.

Not surprisingly, those trying to build truly local AI models for a region with so many underrepresented languages have faced many obstacles, from a paucity of high-quality, high-quantity annotated data to a lack of access to the computing power needed to build and train models from scratch. In some cases, the challenges are even more basic, reflecting a shortage of native speakers and standardized orthography, or frequent electricity supply disruptions. Given these constraints, many of the region's AI developers have settled for fine-tuning established models built by foreign incumbents.
This involves taking a pretrained model that has been fed large quantities of data and training it on a smaller dataset for a specific skill or task. Between 2020 and 2023, Southeast Asian language models such as PhoBERT (Vietnamese), IndoBERT (Indonesian) and Typhoon (Thai) were derived from much larger models such as Google's BERT, Meta's RoBERTa (later LLaMA) and France's Mistral. Even the early versions of SeaLLM, a suite of models optimized for regional languages and released by Alibaba's DAMO Academy, were built on architectures from Meta, Mistral, and Google.

But in 2024, Alibaba Cloud's Qwen disrupted this Western dominance, offering Southeast Asia a wider set of options. A Carnegie Endowment for International Peace study found that five of the 21 regional models launched that year were built on Qwen. Still, just as Southeast Asian developers previously had to account for a latent Western bias in the available foundation models, now they must be mindful of the ideologically filtered perspectives embedded in pretrained Chinese models. Ironically, efforts to localize AI and ensure greater agency for Southeast Asian communities could deepen developers' dependence on much larger players, at least in the initial stages.

Nonetheless, Southeast Asian developers have begun to address this problem, too. Multiple models, including SEA-LION (covering 11 official regional languages), PhoGPT (Vietnamese) and MaLLaM (Malay), have been pre-trained from scratch on a large, generic dataset of each particular language. This key step in the machine-learning process will allow these models to be further fine-tuned for specific tasks.
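The fine-tuning process described above can be sketched in miniature. The toy example below is an illustration only — it does not use any of the models named in the article. A "pretrained" feature extractor stands in for a large foundation model and stays frozen, while a small task-specific head is trained on a new, smaller dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" stage: a frozen feature extractor, standing in for a
# foundation model whose weights were already learned on generic data.
W_base = rng.normal(size=(8, 4))  # frozen; never updated below

def features(x):
    # Fixed representation produced by the base model.
    return np.tanh(x @ W_base)

# Small task-specific dataset (the "smaller dataset" of fine-tuning).
X = rng.normal(size=(64, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy binary label

# Fine-tuning: train only a lightweight head on top of frozen features.
w_head = np.zeros(4)
b_head = 0.0
lr = 0.5

def predict(x):
    return 1.0 / (1.0 + np.exp(-(features(x) @ w_head + b_head)))

losses = []
for _ in range(200):
    p = predict(X)
    losses.append(float(np.mean(
        -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))))
    grad = p - y  # gradient of cross-entropy w.r.t. the logits
    w_head -= lr * features(X).T @ grad / len(X)
    b_head -= lr * grad.mean()

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because only the small head is updated, the loss falls on the new task while the expensive base weights are reused as-is — the same economy that makes fine-tuning attractive when data and compute are scarce.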
Although SEA-LION continues to rely on Google's architecture for its pre-training, its use of a regional language dataset has facilitated the development of homegrown models such as Sahabat-AI, which communicates in Indonesian, Sundanese, Javanese, Balinese, and Bataknese. Sahabat-AI proudly describes itself as 'a testament to Indonesia's commitment to AI sovereignty.'

But representing native perspectives also requires a strong base of local knowledge. We cannot faithfully present Southeast Asian perspectives and values without understanding the politics of language, traditional sense-making and historical dynamics. For example, time and space — widely understood in the modern context to be linear, divisible and measurable for the purposes of maximizing productivity — are perceived differently in many indigenous communities. Balinese historical writings that defy conventional patterns of chronology might be viewed as myths or legends in Western terms, but they continue to shape how these communities make sense of the world.

Historians of the region have cautioned that applying a Western lens to local texts heightens the risk of misinterpreting indigenous perspectives. In the 18th and 19th centuries, Indonesia's colonial administrators frequently read their own understanding of Javanese chronicles into translated reproductions. As a result, many biased British and European observations of Southeast Asians have come to be treated as valid historical accounts, and ethnic categorizations and stereotypes from official documents have been internalized. If AI is trained on this data, the biases could end up further entrenched.

Data is not knowledge. Since language is inherently social and political — reflecting the relational experiences of those who use it — asserting agency in the age of AI must go beyond the technical sufficiency of models that communicate in local languages. It requires consciously filtering legacy biases, questioning assumptions about our identity and rediscovering indigenous knowledge repositories in our languages. We cannot project our cultures faithfully through technology if we barely understand them in the first place.

DeepSeek faces expulsion from app stores in Germany

Al Arabiya, a day ago

Germany has taken steps towards blocking Chinese AI startup DeepSeek from the Apple and Google app stores over data protection concerns, data protection commissioner Meike Kamp said in a statement on Friday. DeepSeek has been reported to the two US tech giants as illegal content, Kamp said, and the companies must now review the concerns and decide whether to block the app in Germany. 'DeepSeek has not been able to provide my agency with convincing evidence that German users' data is protected in China to a level equivalent to that in the European Union,' she said. 'Chinese authorities have far-reaching access rights to personal data within the sphere of influence of Chinese companies,' she added. The move comes after Reuters exclusively reported this week that DeepSeek is aiding China's military and intelligence operations. DeepSeek, which shook the technology world in January with claims that it had developed an AI model rivaling those from US firms such as ChatGPT creator OpenAI at a much lower cost, says it stores user data, such as AI queries and uploaded files, on servers in China.

