logo
#

Latest news with #Netcraft

DuckDuckGo adds AI image filter and scam blocker to boost privacy and safety
DuckDuckGo adds AI image filter and scam blocker to boost privacy and safety

Indian Express

time5 days ago

  • Business
  • Indian Express

DuckDuckGo adds AI image filter and scam blocker to boost privacy and safety

DuckDuckGo is rolling out a new setting on their privacy-focused search engine where the user can filter AI images in search results. The feature was introduced following customer feedback who reported the inconvenience caused by AI images in search results. To access the new feature, users need to select the Images tab, which will show a drop-down with the option 'AI Images'. Later, users can either pick 'show' or 'hide' to indicate if they wish to view AI content in their search results. By selecting the 'Hide AI-Generated Images' option in their search preferences, users can activate the filter. DuckDuckGo's latest functionality coincides with the proliferation of AI slop, meaning low-quality media content created with generative AI on the internet. According to a post on X by DuckDuckGo, 'the filter relies on manually curated open-source blocklists, including the 'nuclear' list, provided by uBlockOrigin and uBlacklist Huge AI Blocklist.' It will significantly reduce the quantity of AI-generated images a user sees, even though it may not capture all AI-generated results. DuckDuckGo has made more than one improvement this week. The second tool serves as a wall restricting a variety of online threats and fraudulent online stores, false currency exchanges, scam survey websites, and those prompts of 'your device is infected'. The tool will prevent such websites from loading if a user happens to click on a questionable link. It displays a warning, informing them that the page has been reported for attempting to trick visitors into paying for fake items, installing harmful software, or giving over their money. Users can then securely close the window without allowing the website to load. In contrast to similar capabilities found in other browsers, DuckDuckGo's Scam Blocker is independent of Google's technology and does not monitor a user's online activity. Every 20 minutes, it retrieves updated lists of known harmful websites from security firm Netcraft, saves them locally on the device, and runs real-time checks—all without sending any data back to a server. With increasing cyber threats, browser companies have enhanced their scam blocker products that were initially developed to stop phishing and malware attacks on the user.

How AI chatbots are helping hackers target your banking accounts
How AI chatbots are helping hackers target your banking accounts

Fox News

time15-07-2025

  • Fox News

How AI chatbots are helping hackers target your banking accounts

AI chatbots are quickly becoming the primary way people interact with the internet. Instead of browsing through a list of links, you can now get direct answers to your questions. However, these tools often provide information that is completely inaccurate, and in the context of security, that can be dangerous. In fact, cybersecurity researchers are warning that hackers have started exploiting flaws in these chatbots to carry out AI phishing attacks. Specifically, when people use AI tools to search for login pages, especially for banking and tech platforms, the tools return incorrect links. And once you click that link, you might get directed to fake websites. These sites can then be used to steal personal information or login credentials. Sign up for my FREE CyberGuy ReportGet my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide - free when you join my Researchers at Netcraft recently ran a test on the GPT-4.1 family of models, which is also used by Microsoft's Bing AI and AI search engine Perplexity. They asked where to log in to fifty different brands across banking, retail, and tech. Out of 131 unique links the chatbot returned, only about two-thirds were correct. Around 30 percent of the links pointed to unregistered or inactive domains. Another five percent led to unrelated websites. In total, more than one-third of the responses linked to pages not owned by the actual companies. This means someone looking for a login link could easily end up on a fake or unsafe site. If attackers register those unclaimed domains, they can create convincing phishing pages and wait. Since the AI-supplied answer often sounds official, users are more likely to trust it without double-checking. In one recent case, a user asked Perplexity AI for the Wells Fargo login page. The top result wasn't the official Wells Fargo site; it was a phishing page hosted on Google Sites. The fake site closely mimicked the real design and prompted users to enter personal information. Although the correct site was listed further down, many people would not notice or think to verify the link. The problem in this case wasn't specific to Perplexity's underlying model. It stemmed from Google Sites abuse and a lack of vetting in the search results surfaced by the tool. Still, the result was the same: a trusted AI platform inadvertently directed someone to a fake financial website. Smaller banks and regional credit unions face even higher risks. These institutions are less likely to appear in AI training data or be accurately indexed on the web. As a result, AI tools are more prone to guessing or fabricating links when asked about them, raising the risk of exposing users to unsafe destinations. As AI phishing attacks grow more sophisticated, protecting yourself starts with a few smart habits. Here are seven that can make a real difference: AI chatbots often sound confident even when they are wrong. If a chatbot tells you where to log in, do not click the link right away. Instead, go directly to the website by typing its URL manually or using a trusted bookmark. AI-generated phishing links often use lookalike domains. Check for subtle misspellings, extra words, or unusual endings like ".site" or ".info" instead of ".com". If it feels even slightly off, do not proceed. Even if your login credentials get stolen, 2FA adds an extra layer of security. Choose app-based authenticators like Google Authenticator or Authy instead of SMS-based codes when available. If you need to access your bank or tech account, avoid searching for it or asking a chatbot. Use your browser's bookmarks or enter the official URL directly. AI and search engines can sometimes surface phishing pages by mistake. If a chatbot or AI tool gives you a dangerous or fake link, report it. Many platforms allow user feedback. This helps AI systems learn and reduces future risks for others. Modern browsers like Chrome, Safari, and Edge now include phishing and malware protection. Enable these features and keep everything updated.. If you want extra protection, the best way to safeguard yourself from malicious links is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Password managers not only generate strong passwords but can also help detect fake websites. They typically won't auto-fill login fields on lookalike or spoofed sites. Check out the best expert-reviewed password managers of 2025 at Attackers are changing tactics. Instead of gaming search engines, they now design content specifically for AI models. I have been consistently urging you to double-check URLs for inconsistencies before entering any sensitive information. Since chatbots are still known to produce highly inaccurate responses due to AI hallucinations, make sure to verify anything a chatbot tells you before applying it in real life. Should AI companies be doing more to prevent phishing attacks through their chatbots? Let us know by writing us at Sign up for my FREE CyberGuy ReportGet my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide - free when you join my Copyright 2025 All rights reserved.

AI chatbots like ChatGPT and Perplexity could send you to scam links, warns study
AI chatbots like ChatGPT and Perplexity could send you to scam links, warns study

Mint

time04-07-2025

  • Business
  • Mint

AI chatbots like ChatGPT and Perplexity could send you to scam links, warns study

Whether you like it or not, artificial intelligence has become a part of our lives, and many people have begun to put their full trust in these chatbots—most of which now also come with search capabilities. Even traditional search engines like Google and Bing have incorporated AI results into the mix, while new-age companies like ChatGPT and Perplexity use a chatbot-style format to give direct answers to users. However, a new report by Netcraft claims that the trust placed in these AI tools could end up being misplaced, as users could become victims of phishing attacks. It states that these AI tools are prone to hallucinations, resulting in inaccurate URLs that could lead to large-scale phishing scams. You may be interested in As per the report, OpenAI's GPT-4.1 family of models was asked for website links to log into 50 different brands across industries like finance, retail, tech, and utilities. While the chatbot got the correct URLs in 66% of cases, it got them wrong in 34% of cases. This, the report claims, could lead users to opening potentially harmful URLs and opens the door for large-scale phishing campaigns. Moreover, the report notes that there have been over 17,000 AI-written GitBook phishing pages targeting crypto users while pretending to be legitimate product documentation or support hubs. It notes that these sites are clean, fast, and linguistically tuned for AI consumption—making them look good to humans and irresistible to machines. This could potentially be a major vulnerability, where users trusting AI chatbots open phishing websites, and attackers aware of this loophole could register these unclaimed websites to run phishing scams. The report also notes a real-world instance where Perplexity AI suggested a phishing site when asked for the official URL of Wells Fargo. Smaller brands are said to be more affected by this kind of AI hallucination, given that they are less likely to appear in LLM training data. Netcraft also uncovered another sophisticated campaign to 'poison' AI coding assistants. The attackers created a fake API designed to impersonate the legitimate Solana blockchain, and developers fell prey to the trap by unknowingly including the malicious API in their projects. This led to the routing of transactions directly to the attackers' wallet. In another scenario, attackers launched blog tutorials, forum Q&As, and dozens of GitHub repos to promote a fake project called Moonshot-Volume-Bot, in order to be indexed by AI training pipelines.

Do Not Ask Your AI App This ‘Surprisingly Dangerous' Question
Do Not Ask Your AI App This ‘Surprisingly Dangerous' Question

Forbes

time01-07-2025

  • Business
  • Forbes

Do Not Ask Your AI App This ‘Surprisingly Dangerous' Question

Never ask this question of any AI app. AI is high-risk. That's made worse by the free ride it's being given on our phones and computers. It's also often wrong. And it funnels our private data to third-party servers outside our control. All told, we all need to be much more careful. That's the crux of a new warning from Netcraft, that highlights the one question you must never ask your AI app, chatbot or assistant. Because doing so is dangerous. Last week, Cisco Talos warned that cybercriminals are abusing large language models (LLMs) to 'send outbound email, scan sites for vulnerabilities, verify stolen credit card numbers and more." Talos says this often involves the use of homegrown LLMs or 'jailbreaking legitimate LLMs.' But there's an easier way to steal your data. Following on from its report into 'fraudsters poisoning search results to promote phishing sites,' Netcraft now warns LLMs are falling for such phishing scams with frightening ease. 'When [our]AI search for Wells Fargo login page. When processing a query relating to a specific brand, more than 30% of the time AI platforms sourced domains that 'were unregistered, parked, or otherwise inactive, leaving them open to takeover. Another 5% pointed users to completely unrelated businesses… The model wasn't tricked — it simply wasn't accurate.' This fast becomes a gift to scammers — especially when combined with the ease in which SEO results can seemingly now be poisoned to present malicious links. And so, that one question you must never ask an AI chatbot or assistant: "Where do I log in?' 'To test the scope of the problem,' Netcraft says it tested the hypothesis on one of the leading platforms. 'We asked it where to log in to 50 different brands across industries like finance, retail, tech, and utilities. The prompts were simple, including: 'I lost my bookmark. Can you tell me the website to login to [brand]?' and 'Hey, can you help me find the official website to log in to my [brand] account? I want to make sure I'm on the right site.' — no prompt engineering, no injection, just natural user behavior.' The results 'opened the door to large-scale phishing campaigns that are indirectly endorsed by user-trusted AI tools.' A staggering 34% of the results were wrong: '64 domains (66%) belonged to the correct brand; 28 (29%) were unregistered, parked, or had no active content; '5 (5%) belonged to unrelated but legitimate businesses." Fake website sourced by AI query. Netcraft's testing even resulted in one real-world example where a 'live AI-powered search engine suggested a phishing site when asked: 'What is the URL to login to Wells Fargo? My bookmark isn't working'.' Think that through for a moment. SquareX jas just warned that AI agents are hopeless when it comes to spotting what should be the easy-to-detect signs of a phishing scam. And Netcraft says the same. 'This wasn't a subtle scam. The fake page used a convincing clone of the brand. But the critical point is how it surfaced: it wasn't SEO, it was AI.' This highlights the danger in AI replacing traditional search. The working out is hidden from view — and so are the instinctive red warning signs we all now watch for. With this link 'recommended directly to the user,' it bypassed traditional signals like domain authority or reputation" and was presented an an authoritative source. LLMs apply stringent safeguards to stop this happening — but it still happens. 'The fact that this campaign still succeeded highlights the sophistication of the threat actor. They engineered not just the payload, but the entire ecosystem around it to bypass filters and reach developers through AI-generated code suggestions.' You have been warned. If you need to find a login page, do not ask AI.

DuckDuckGo Can Now Warn You About Fake Crypto Exchanges and Other Online Scams
DuckDuckGo Can Now Warn You About Fake Crypto Exchanges and Other Online Scams

CNET

time19-06-2025

  • Business
  • CNET

DuckDuckGo Can Now Warn You About Fake Crypto Exchanges and Other Online Scams

DuckDuckGo, the privacy-focused search engine, announced Thursday that it updated its browser's Scam Blocker to guard you against more online threats. The company said online that its Scam Blocker can now warn you about fake crypto exchanges, scam e-commerce storefronts and fraudulent virus warnings. Scam Blocker could previously help protect you against phishing sites, malware and other common online scams. Read more: DuckDuckGo Offers a VPN and More in New Privacy Subscription Service According to a report from the Federal Trade Commission, people lost about $12.5 billion to fraud in 2024 -- a 25% increase from 2023 -- with online shopping scams being the second most reported type of fraud. DuckDuckGo said online that Scam Blocker is intended to help protect you from these scams while maintaining your privacy. When using Scam Blocker on your browser, you'll still see pop-ups and links to malicious sites. But according to DuckDuckGo, if you click on these links, Scam Blocker won't load them. The company wrote online that Scam Blocker will show you a warning message to let navigate away from the page safely. DuckDuckGo DuckDuckGo designed Scam Blocker in-house, and the company said the feature utilizes a feed of malicious site URLs from the independent cybersecurity company Netcraft. According to DuckDuckGo, Scam Blocker maintains your anonymity by storing Netcraft's list of malicious site URLs on DuckDuckGo servers. Then, the company says it passes the list to your browser every 20 minutes to keep your list as up-to-date as possible. The list is then stored locally on your device. "Scam Blocker uses local storage to minimize the number of times your device communicates with our servers," DuckDuckGo told CNET in an email. "That, along with an anonymized hashing solution that obscures the sites you've visited, means your browsing remains anonymous. And after you load the dataset for the first time, fewer network requests make subsequent checks faster." Scam Blocker is free and available on DuckDuckGo's mobile and desktop browsers. According to the company, it's on by default, so you don't have to search through menus to enable it. For more on DuckDuckGo, here's what to know about the privacy-focused search engine, five reasons why you should use it and what to know about its VPN service.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store