Real or fake? Study finds that X's Grok has trouble sorting fact from fiction amid misinformation

NZ Herald · 6 days ago

Elon Musk's AI chatbot Grok produced inaccurate and contradictory responses when users sought to fact-check the Israel-Iran conflict, a study said today, raising fresh doubts about its reliability as a debunking tool.
With tech platforms reducing their reliance on human fact-checkers, users are increasingly turning to AI-powered chatbots…

Related Articles

Cloudflare blocks AI crawlers to support content creators

Techday NZ · an hour ago

Cloudflare has implemented a new default setting to block AI crawlers from accessing website content without explicit permission or compensation, making it the first internet infrastructure provider to do so. With this change, every new customer and domain on Cloudflare's platform will start with a setting that blocks AI crawlers by default, shifting the responsibility to AI companies to request access and clarify the crawler's intended purpose, such as training, inference, or search. This new approach replaces the previous opt-out system with an opt-in model, giving more power to content creators and publishers over the use of their work. Cloudflare is also developing a feature called "Pay Per Crawl," which would allow content creators to request payment from AI companies seeking to use their content, thereby creating potential new revenue streams. This move addresses concerns about AI companies scraping web content without consent or compensation, a practice that many publishers and stakeholders argue threatens the future economic sustainability of the internet.

Shifting value in online content

The longstanding model of the internet has been based on a cycle in which search engines index web content, drive traffic to original websites, and provide revenue to creators through advertising. However, according to Cloudflare, the growing use of AI crawlers that extract information for large language models and other generative applications has disrupted this cycle by delivering answers without redirecting users to the original source. This means creators may lose both the financial benefits and the audience engagement their work has historically generated.

Matthew Prince, Cloudflare's Co-founder and CEO, commented, "If the Internet is going to survive the age of AI, we need to give publishers the control they deserve and build a new economic model that works for everyone – creators, consumers, tomorrow's AI founders, and the future of the web itself. Original content is what makes the Internet one of the greatest inventions in the last century, and it's essential that creators continue making it. AI crawlers have been scraping content without limits. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate. This is about safeguarding the future of a free and vibrant Internet with a new model that works for everyone."

This sentiment has been echoed by several publishers and content platforms. Roger Lynch, CEO of Condé Nast, stated, "Cloudflare's innovative approach to block AI crawlers is a game-changer for publishers and sets a new standard for how content is respected online. When AI companies can no longer take anything they want for free, it opens the door to sustainable innovation built on permission and partnership. This is a critical step toward creating a fair value exchange on the Internet that protects creators, supports quality journalism and holds AI companies accountable."

Neil Vogel, CEO of Dotdash Meredith, remarked, "We have long said that AI platforms must fairly compensate publishers and creators to use our content. We can now limit access to our content to those AI partners willing to engage in fair arrangements. We're proud to support Cloudflare and look forward to using their tools to protect our content and the open web."
Renn Turiano, Chief Consumer and Product Officer at Gannett Media, noted, "As the largest publisher in the country, comprised of USA TODAY and over 200 local publications throughout the USA TODAY Network, blocking unauthorised scraping and the use of our original content without fair compensation is critically important. As our industry faces these challenges, we are optimistic the Cloudflare technology will help combat the theft of valuable IP."

Other technology companies have also spoken in support of the new permission-based system. Bill Ready, CEO of Pinterest, stated, "Creators and publishers around the world leverage Pinterest to expand their businesses, reach new audiences and directly measure their success. As AI continues to reshape the digital landscape, we are committed to building a healthy Internet infrastructure where content is used for its intended purpose, so creators and publishers can thrive."

Steve Huffman, Reddit's Co-founder and CEO, pointed out, "AI companies, search engines, researchers, and anyone else crawling sites have to be who they say they are. And any platform on the web should have a say in who is taking their content for what. The whole ecosystem of creators, platforms, web users and crawlers will be better when crawling is more transparent and controlled, and Cloudflare's efforts are a step in the right direction for everyone."

Vivek Shah, CEO of Ziff Davis, added, "We applaud Cloudflare for advocating for a sustainable digital ecosystem that benefits all stakeholders – the consumers who rely on credible information, the publishers who invest in its creation, and the advertisers who support its dissemination."

Default enforcement

Cloudflare has offered a one-click option to block AI crawlers since mid-2024, and over a million customers have enabled it. With the latest move, every new website signing up to Cloudflare will be prompted to decide whether to allow or deny AI crawler access, streamlining the decision process and ensuring the default favours content owner control.

Industry support

More than 30 publishers, media, and technology companies have voiced their support for the new permission-based crawling model, including ADWEEK, The Associated Press, TIME, The Atlantic, Reddit, Pinterest, Quora, Sky News Group, and Universal Music Group, among others.

Will Lee, CEO of ADWEEK, stated, "As the front page and homepage for marketing, advertising and media industry leaders, ADWEEK's position has been clear that we must be compensated for our investment grade journalism and information. I am thrilled Cloudflare has created a marketplace and mechanism that will enable us to properly participate in the promise LLMs have for our industry."

Paul Edmondson, CEO of The Arena Group, said, "We think of our writers and content creators as entrepreneurs. Their work deserves protection. By blocking unauthorized AI crawlers, Cloudflare is not just defending content – it's defending the future of creators and storytellers. This is a vital move toward a digital economy built on trust, permission and fair value."

Additional key supporters include BuzzFeed, PMC, Quora, Stack Overflow, News/Media Alliance, and Webflow, all of whom have commented on the significance of the move for the digital economy and the rights of creators.
Technical implementation

Cloudflare has also put forward new ways for AI bots to authenticate themselves and for webmasters to identify them, including participating in the development of industry protocols for bot identification and authentication. These mechanisms aim to increase transparency in the operation of AI crawlers, allowing website owners to make more informed choices about access to their content. AI companies are now required to obtain clear, explicit permission from websites prior to scraping content for AI training or generation purposes. Existing and future customers of Cloudflare can review and modify their crawler settings as required.
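
To make the block-by-default, opt-in model concrete, here is a minimal sketch in Python of how a site-side filter along these lines could work: requests whose user agent matches a known AI crawler are refused unless the site owner has explicitly allowed that crawler. The crawler names below are commonly published user-agent strings, but the allow-list, the should_block helper, and the check itself are illustrative assumptions, not Cloudflare's actual implementation, which the article notes also relies on dedicated bot identification and authentication protocols.

# Illustrative sketch only: deny known AI crawler user agents unless the site
# owner has explicitly opted in. The allow-list and helper are hypothetical;
# Cloudflare's real enforcement is not described at this level of detail.

KNOWN_AI_CRAWLERS = {"GPTBot", "ClaudeBot", "CCBot", "PerplexityBot", "Bytespider"}
ALLOWED_AI_CRAWLERS: set[str] = set()  # empty by default: block first, opt in later

def should_block(user_agent: str) -> bool:
    """Return True when the request comes from a known AI crawler that the
    site owner has not explicitly allowed."""
    ua = user_agent.lower()
    for bot in KNOWN_AI_CRAWLERS:
        if bot.lower() in ua:
            return bot not in ALLOWED_AI_CRAWLERS
    return False  # ordinary browsers and search crawlers pass through

# Example: a self-identified GPTBot request is rejected under the default setting.
print(should_block("Mozilla/5.0 (compatible; GPTBot/1.1)"))                     # True
print(should_block("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/126.0"))  # False

A real deployment would also need behavioural and cryptographic signals, since user-agent strings are trivially spoofed, which is why the authentication protocols mentioned above matter.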

AI drives 80 percent of phishing with USD $112 million lost in India

Techday NZ · 8 hours ago

Artificial intelligence has become the predominant tool in cybercrime, according to recent research and data from law enforcement and the cybersecurity sector.

AI's growing influence

A June 2025 report revealed that AI is now utilised in 80 percent of all phishing campaigns analysed this year. This marks a shift from traditional, manually created scams to attacks fuelled by machine-generated deception. Concurrently, Indian police recorded that criminals stole the equivalent of USD $112 million in a single state between January and May 2025, attributing the sharp rise in financial losses to AI-assisted fraudulent operations. These findings are reflected in the daily experiences of security professionals, who observe an increasing use of automation in social engineering, malware development, and reconnaissance. The pace at which cyber attackers are operating is a significant challenge for current defensive strategies.

Methods of attack

Large language models are now being deployed to analyse public-facing employee data and construct highly personalised phishing messages. These emails replicate a victim's communication style, job role and business context. Additionally, deepfake technology has enabled attackers to create convincing audio and video content. Notably, an incident in Hong Kong this year saw a finance officer send HK$200 million after participating in a deepfake video call bearing the likeness of their chief executive. Generative AI is also powering the development of malware capable of altering its own code and behaviour within hours. This constant mutation enables it to bypass traditional defences like endpoint detection and sandboxing solutions. Another tactic, platform impersonation, was highlighted by Check Point, which identified fake online ads for a popular AI image generator. These ads redirected users to malicious software disguised as legitimate installers, merging advanced loader techniques with sophisticated social engineering. The overall result is a landscape where AI lowers the barriers to entry for cyber criminals while amplifying the reach and accuracy of their attacks.

Regulatory landscape

Regulators are under pressure to keep pace with the changing threat environment. The European Union's AI Act, described as the first horizontal regulation of its kind, became effective last year. However, significant obligations affecting general-purpose AI systems will begin from August 2025. Industry groups in Brussels have requested a delay on compliance deadlines due to uncertainty over some of the rules, but firms developing or deploying AI will soon be subject to financial penalties for not adhering to the regulations. Guidance issued under the Act directly links the risks posed by advanced AI models to cybersecurity, including the creation of adaptive malware and the automation of phishing. This has created an expectation that security and responsible AI management are now interrelated priorities for organisations. Company boards are expected to treat the risks associated with generative models with the same seriousness as data protection or financial governance risks.

Defensive measures

A number of strategies have been recommended in response to the evolving threat environment. Top of the list is the deployment of behaviour-based detection systems that use machine learning in conjunction with threat intelligence, as traditional signature-based tools struggle against ever-changing AI-generated malware.
Regular vulnerability assessments and penetration testing, ideally by CREST-accredited experts, are also regarded as essential to expose weaknesses overlooked by both automated and manual processes. Verification protocols for audio and video content are another priority: using additional communication channels or biometric checks can help prevent fraudulent transactions initiated by synthetic media. Adopting zero-trust architectures, which strictly limit user privileges and segment networks, is advised to contain potential breaches. Teams managing AI-related projects should map inputs and outputs, track possible abuse cases, and retain detailed logs in order to meet audit obligations under the forthcoming EU regulations (a minimal sketch of such logging appears after this article). Staff training programmes are also shifting focus. Employees are being taught to recognise subtle cues and nuanced context, rather than relying on spotting poor grammar or spelling mistakes as indicators of phishing attempts. Training simulations must evolve alongside the sophistication of modern cyber attacks.

The human factor

Despite advancements in technology, experts reiterate that people remain a core part of the defence against AI-driven cybercrime. Attackers are leveraging speed and scale, but defenders can rely on creativity, expertise, and interdisciplinary collaboration.

"Technology alone will not solve AI-enabled cybercrime. Attackers rely on speed and scale, but defenders can leverage creativity, domain expertise and cross-disciplinary thinking. Pair seasoned red-teamers with automated fuzzers; combine SOC analysts' intuition with real-time ML insights; empower finance and HR staff to challenge 'urgent' requests no matter how realistic the voice on the call," said Himali Dhande, Cybersecurity Operations Lead at Borderless CS.

The path ahead

There is a consensus among experts that the landscape has been permanently altered by the widespread adoption of AI. It is increasingly seen as necessary for organisations to shift from responding to known threats to anticipating future methods of attack. Proactive security, embedded into every project and process, is viewed as essential not only for compliance but also for continued protection. Borderless CS stated that it "continues to track AI-driven attack vectors and integrate them into our penetration-testing methodology, ensuring our clients stay ahead of a rapidly accelerating adversary. Let's shift from reacting to yesterday's exploits to pre-empting tomorrow's."
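
The logging recommendation in the defensive measures above is the most directly implementable of these steps, so here is a minimal sketch of what it could look like in Python: each model interaction is written out as a timestamped, machine-readable record that can be retained for later audit. The field names, the log destination, and the log_ai_interaction helper are assumptions for illustration only; actual audit requirements will depend on the deployment and on the final guidance issued under the EU AI Act.

# Minimal, illustrative audit log for AI interactions (hypothetical field names).
# One JSON record per model call, appended to a local file for later review.
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("ai_audit")
audit_logger.setLevel(logging.INFO)
audit_logger.addHandler(logging.FileHandler("ai_audit.log"))

def log_ai_interaction(user_id: str, prompt: str, model_output: str, use_case: str) -> None:
    """Append one timestamped, machine-readable record per model call."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "use_case": use_case,  # e.g. "finance-assistant" or "customer-support-draft"
        "prompt": prompt,
        "output": model_output,
    }
    audit_logger.info(json.dumps(record))

# Example call, made after a model response has been generated.
log_ai_interaction("u-1042", "Summarise this invoice dispute", "Short summary text", "finance-assistant")

In practice such records would also need access controls and retention policies, since prompts and outputs can themselves contain sensitive data.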
