
Poll finds public turning to AI bots for news updates
People are increasingly turning to generative artificial intelligence (AI) chatbots like ChatGPT to follow day-to-day news, a respected media report published on Tuesday found.
The yearly survey from the Reuters Institute for the Study of Journalism found 'for the first time' that significant numbers of people were using chatbots to get headlines and updates, director Mitali Mukherjee wrote.
The Reuters Institute, attached to Britain's Oxford University, publishes an annual report that is seen as unmissable for people following the evolution of the media.
Just seven per cent of people report using AI to find news, according to the poll of 97,000 people in 48 countries, carried out by YouGov.
But the proportion is higher among the young, at 12 per cent of under-35s and 15 per cent of under-25s.
The biggest-name chatbot - OpenAI's ChatGPT - is the most widely used, followed by Google's Gemini and Meta's Llama.
Respondents appreciated relevant, personalised news from chatbots.
Many more used AI to summarise (27 per cent), translate (24 per cent) or recommend (21 per cent) articles, while almost one in five asked questions about current events.
Distrust remains, with those polled on balance saying AI risked making the news less transparent, less accurate and less trustworthy.
Rather than being programmed, today's powerful AI 'large language models' (LLMs) are 'trained' on vast quantities of data from the web and other sources - including news media content such as articles and video reports.
Once trained, they are able to generate text and images in response to users' natural-language queries.
But they present problems, including 'hallucinations' - the term used when an AI invents information that fits patterns in its training data but is not true.
Scenting a chance at revenue in a long-squeezed market, some news organisations have struck deals to share their content with developers of AI models.
Agence France-Presse (AFP) allows the platform of French AI firm Mistral to access its archive of news stories going back decades.
Other media have launched copyright cases against AI makers over alleged illegal use of their content, for example the New York Times against ChatGPT developer OpenAI.
The Reuters Institute report also pointed to traditional media - TV, radio, newspapers and news sites - losing ground to social networks and video-sharing platforms.
Almost half of 18- to 24-year-olds report that social media platforms like TikTok are their main source of news, especially in emerging markets such as India, Brazil, Indonesia and Thailand.
The institute found that many are still using Elon Musk-owned social media platform X for news, despite a rightward shift since the world's richest man took it over.
'Many more right-leaning people, notably young men, have flocked to the network, while some progressive audiences have left or are using it less frequently,' the authors wrote.
Some 23 per cent of people in the United States reported using X for news, up eight percentage points on 2024's survey, with usage also rising in countries like Australia and Poland.
By contrast, 'rival networks like Threads, Bluesky and Mastodon are making little impact globally, with reach of two percent or less for news', the Reuters Institute found.
FACSIMILES OF THE DEAD:
Christopher Pelkey was shot and killed in a road rage incident in 2021.
On May 8, 2025, at the sentencing hearing for his killer, an AI video reconstruction of Pelkey delivered a victim impact statement.
The trial judge reported being deeply moved by this performance and issued the maximum sentence for manslaughter.
As part of the ceremonies to mark Israel's 77th year of independence on April 30, 2025, officials had planned to host a concert featuring four iconic Israeli singers.
All four had died years earlier. The plan was to conjure them using AI-generated sound and video.
The dead performers were supposed to sing alongside Yardena Arazi, a famous and still very much alive artist. In the end Arazi pulled out, citing the political atmosphere, and the event didn't happen.
In April, the BBC created a deepfake version of the famous mystery writer Agatha Christie to teach a 'maestro course on writing.'
Fake Agatha would instruct aspiring murder mystery authors and 'inspire' their 'writing journey.'
The use of artificial intelligence to 'reanimate' the dead for a variety of purposes is quickly gaining traction.
For the past few years, researchers at the Center for Applied Ethics at the University of Massachusetts, Boston, have been studying the moral implications of AI, and they have found these AI reanimations to be morally problematic.
The first moral quandary the technology raises has to do with consent: Would the deceased have agreed to do what their likeness is doing? Would the dead Israeli singers have wanted to sing at an Independence ceremony organized by the nation's current government?
Agencies

Related Articles


Arabian Post
Danes to Gain Copyright Control Over Voice and Likeness
Copenhagen's Parliament has approved legislation granting individuals automatic copyright over their face, body and voice, empowering them to demand takedowns and compensation for unauthorised AI-generated deepfakes. Culture Minister Jakob Engel-Schmidt underlined the urgency of the measure, warning that 'Human beings can be run through the digital copy machine and be misused for all sorts of purposes and I'm not willing to accept that.'

Legislators across the political spectrum endorsed the amendment, regarded as Europe's first to recognise personal likeness as copyrighted content. It aims to cover hyper-realistic representations, whether photos, videos or voice clones, created without consent. Offending platforms could face significant fines, while individuals and artists may pursue damages. The bill will exclude content clearly marked as parody or satire.

With Denmark assuming the EU Council presidency on 1 July 2025, the government plans to submit the draft for consultation by late summer and push for parliamentary passage in the autumn. Officials indicated this will provide time for broader European discussion of similar laws.

While non-consensual deepfakes have appeared around the world, affecting public figures such as Taylor Swift and Pope Francis and fuelling disinformation campaigns, the bulk of coercive content remains exploitative in nature. A 2019 report from Sensity AI estimated that 95% of online deepfakes are non-consensual pornography, with 90% featuring women.

Under the proposed law, Danes would have the right to request swift removal of any infringing content. Platforms that fail to act may be fined, and affected individuals, including performers whose voice or image is replicated, could seek financial redress. Parody and satire are shielded under an exemption, though how enforcement will distinguish legitimate content from misuse remains to be clarified.

Experts warn that the law may face legal scrutiny on grounds of compatibility with freedom of expression and existing EU regulations such as the GDPR and the Digital Services Act. Critics have flagged the possibility of overreach, especially concerning public discourse and artistic expression. Officials maintain that the protections are specifically targeted at unconsented deepfakes, not legitimate creative or critical content.

The initiative places Denmark at the vanguard of deepfake regulation. Its focus on granting individuals proprietary control over their personal attributes, directly enshrined in copyright law, marks a novel strategy. It contrasts with initiatives in the United States, such as Tennessee's ELVIS Act and the federal Take It Down Act, which focus primarily on sexual exploitation and do not offer such sweeping rights over likeness.

Tech platforms are anticipated to face challenges adapting to this framework, as they may need to integrate consent verification systems and proactive takedown processes. Generative AI firms may have to overhaul internal policies to ensure content featuring Danish citizens is handled lawfully. The planned fines and potential legal exposure are expected to incentivise rapid compliance.

Consumer advocates welcome the measure, asserting it reinforces personal autonomy in the digital age. However, some legal scholars caution that policing deepfakes globally and drawing the boundary between misuse and satire will require detailed guidelines and pragmatic enforcement mechanisms. Denmark's move is likely to inspire parallel efforts across the EU.
With its presidency platform, the country intends to encourage member states to replicate its approach. Key to this will be the harmonisation of legal standards across jurisdictions and clarity on enforcement tools under both copyright and broader EU law.


Arabian Post
Singapore AI-Chip Fraud Trial Paused Until August
A Singapore court has postponed until 22 August the trial of three men accused of illegally redirecting Nvidia AI chips to China, after prosecutors stressed the need for more time to analyse fresh documents and obtain international cooperation. The adjournment allows police to deepen their review of the evidence and seek responses from overseas authorities.

The defendants, Singaporeans Aaron Woon Guo Jie, 41, and Alan Wei Zhaolun, 49, and Chinese national Li Ming, 51, are charged with fraud, accused of falsifying end-user information to secure servers purchased in 2023 and 2024. Those servers, allegedly equipped with high-end Nvidia chips, were then shipped via Singapore to Malaysia before possibly continuing to China.

Political pressure surrounds the case, as the United States banned exports of leading-edge chips to China in 2022 over military and intelligence concerns. A senior U.S. official has asserted that DeepSeek, the Chinese AI firm implicated, supports military and intelligence operations.

Home Affairs Minister K. Shanmugam confirmed that Singapore authorities pursued the investigation independently after an anonymous tip-off, and preliminary findings indicate the servers may indeed contain Nvidia's chips. The equipment, originally sourced from Dell Technologies and Super Micro Computer via Singapore-based firms, was rerouted to Malaysia, though the final destination remains uncertain.

This case forms part of a broader probe involving 22 individuals and companies alleged to have falsified end-user data in order to bypass export restrictions. Singapore's position as a regional invoicing hub, recording 18% of Nvidia's fiscal-year revenues despite accounting for less than 2% of its physical shipments, underscores its vulnerability as a transit point in such schemes.

Observers note that policing such complex supply chains is increasingly difficult, especially when high-performance AI hardware carries dual-use potential, with applications in advanced military or surveillance systems. Singapore's legal actions and multilateral engagements will be closely watched as the court reconvenes in late August.


Zawya
OPSWAT and SentinelOne enter OEM partnership to further strengthen multi-layered malware detection with AI
Dubai, United Arab Emirates – OPSWAT, a global leader in critical infrastructure protection, and SentinelOne® (NYSE: S) today announced an OEM partnership that integrates SentinelOne's industry-leading AI-powered detection capabilities into OPSWAT's Metascan™ Multiscanning technology. This collaboration elevates malware detection across platforms, empowering enterprises to combat modern cyber threats with greater precision and speed.

With SentinelOne's AI/ML detection capabilities now part of OPSWAT's Metascan Multiscanning, joint customers benefit from:
- Enhanced detection accuracy through industry-leading AI capabilities
- Cross-platform functionality, supporting both Windows and Linux deployments
- Stronger ransomware and zero-day threat defense with autonomous, cloud-independent operation

"OPSWAT's mission is to ensure the secure and compliant flow of data across the world's critical infrastructure," said Tom Mullen, Senior Vice President, Business Development, OPSWAT. "Integrating SentinelOne's AI detections strengthens Metascan's multilayered defense, giving our customers faster, smarter protection against today's most sophisticated threats."

Metascan Multiscanning scans files simultaneously with more than 30 leading anti-malware engines, using signature, heuristic, and machine learning techniques to achieve over 99% detection accuracy. The addition of SentinelOne's AI/ML detections further amplifies this capability by identifying threats that bypass traditional defenses, such as polymorphic malware.

'Our collaboration with OPSWAT reflects a shared commitment to strengthening cybersecurity through innovation,' said Melissa K. Smith, Vice President, Strategic Technology Partnerships and Initiatives, SentinelOne. 'By integrating our AI/ML detections with Metascan Multiscanning, we're delivering joint value that helps organizations elevate their threat detection strategies and better protect critical infrastructure across complex environments.'

The integration is available immediately as part of the latest Metascan Multiscanning release and supports key OPSWAT products both on-premises and in the cloud, including MetaDefender Core, MetaDefender ICAP Server, and MetaDefender Kiosk.

About OPSWAT
For the last 20 years OPSWAT, a global leader in IT, OT, and ICS critical infrastructure cybersecurity, has continuously evolved an end-to-end solutions platform that gives public and private sector organizations and enterprises the critical advantage needed to protect their complex networks and ensure compliance. Empowered by a 'Trust no file. Trust no device.™' philosophy, OPSWAT solves customers' challenges around the world with solutions and patented technologies across every level of their infrastructure, securing their networks, data, and devices, and preventing known and unknown threats, zero-day attacks, and malware. Discover how OPSWAT protects the world's critical infrastructure and helps secure our way of life; visit

About SentinelOne
SentinelOne is a leading AI-powered cybersecurity platform. Built on the first unified Data Lake, SentinelOne empowers the world to run securely by creating intelligent, data-driven systems that think for themselves, stay ahead of complexity and risk, and evolve on their own. Leading organizations, including Fortune 10, Fortune 500, and Global 2000 companies, as well as prominent governments, trust SentinelOne to Secure Tomorrow™. Learn more at