
Turn AI To Your Advantage In The New Age Of Fraud
Is Artificial Intelligence a friend or foe when it comes to fraud? The answer, right now, is both. The battle lines have been drawn, and the pace of innovation is going to determine the ultimate winner.
We are undeniably living in a new era of fraud, driven by deepfake and generative AI technologies that enable sophisticated synthetic IDs and even the wholesale bypassing of verification systems. AI is rapidly displacing the more rudimentary fraud methods of decades past.
According to a recent Prosper Insights & Analytics survey, 58% of consumers are extremely/very concerned about their personal information being used for fraudulent activities. That figure continues to climb alongside awareness of how AI tools such as ChatGPT can be turned to nefarious ends.
[Chart: Prosper Insights & Analytics survey: How concerned are you about your privacy being violated by AI using your data?]
In fact, one in four consumers believe that identity verification systems are not strong enough to protect their data. This erosion of trust is prompting 47% of consumers to say they would abandon a brand entirely if they fell victim to fraud through that platform.
This year alone, reports show generative AI has been used extensively to create lifelike fake passports capable of bypassing digital verification systems once considered top-of-the-line.
To understand this evolving problem and how businesses can stay protected, I spoke with Jimmy Roussel, CEO of identity verification company IDScan.net. With the company verifying an average of 18 million IDs per month, Roussel has a front-row seat to the rapidly expanding battleground that is digital fraud.
The new face of identity fraud
Businesses around the world are looking at AI as a tool to improve efficiency, but as the rate of adoption increases, so too does the number of cybercriminals seeking to weaponize it to commit fraud. 'We are seeing everything from AI-generated IDs to deepfakes used during onboarding and verification processes,' Roussel said. 'This isn't a hypothetical threat – it's already here.'
One category Roussel believes is scaling faster than most is synthetic ID fraud. Unlike traditional identity fraud, where someone steals a real individual's information, synthetic fraud involves creating an entirely new identity from a mix of real and fake data. These identities are then reinforced with AI-generated documents, such as passports or driver's licenses, which make detection difficult for traditional Know Your Customer (KYC) systems.
'Some of these fake passports are so good, even experienced border officials would have a hard time spotting them,' Roussel added.
A failing legacy system
A 2024 report from McKinsey highlighted that global cybercrime costs could reach $10.5 trillion annually by 2025, with identity fraud among the fastest-growing contributors. Many of today's verification tools simply weren't designed to deal with AI-generated deception, and so far the fraudsters have been adapting and innovating at a faster pace.
Systems that rely solely on image scans, basic metadata, or selfie-video checks are hopelessly outmatched by fraudsters who are constantly testing, tweaking, and improving their fake assets. Companies are left playing catch-up.
According to Roussel, most businesses are relying on outdated verification tools. 'They're using 2015-era KYC systems to take on a 2025-era fraud problem. These systems weren't built with deepfakes or generative AI in mind, so they're easy to get around.'
This security gap is dangerous across many sectors, including financial services, travel, and e-commerce.
Building a smarter defense
All is not lost. While many scammers have access to advanced tools, so do businesses, though putting them to use may mean replacing legacy products that once met acceptable standards. One technology Roussel believes will become increasingly critical is Near Field Communication (NFC) scanning.
Unlike a printed document, an NFC-readable ID isn't something you can fake in Photoshop. NFC scanning verifies the data stored in the document's embedded chip itself, making it one of the most reliable defenses against ID fraud.
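To illustrate the principle, here is a minimal sketch of chip-based verification. The ChipData structure and verify_id helper are hypothetical simplifications invented for this example; real ePassport and eID verification follows the ICAO Doc 9303 standard, with chip access controls and cryptographic passive authentication handled by specialized libraries.

```python
# Illustrative sketch only: why chip data is hard to forge.
# Hypothetical helpers; real verification follows ICAO Doc 9303.
from dataclasses import dataclass

@dataclass
class ChipData:
    mrz: str               # machine-readable zone as stored in the chip
    signature_valid: bool  # did the issuer's digital signature verify?

def verify_id(chip: ChipData, printed_mrz: str) -> bool:
    """Accept only if the chip's signed data verifies and matches the print."""
    if not chip.signature_valid:
        return False  # chip contents were altered or the chip is counterfeit
    # A Photoshop-style forgery can change the printed text,
    # but cannot re-sign the data stored inside the chip.
    return chip.mrz == printed_mrz

# Example: a forged print over a genuine chip produces a detectable mismatch.
chip = ChipData(mrz="P<UTODOE<<JANE<<<<", signature_valid=True)
print(verify_id(chip, printed_mrz="P<UTODOE<<JANE<<<<"))   # True
print(verify_id(chip, printed_mrz="P<UTOSMITH<<JOHN<<"))   # False
```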
A recent report from IDScan.net found that 84% of AI-generated fake IDs were identified through image and symbology analysis alone. When those checks were combined with third-party database lookups, IDScan.net caught 99.6% of the fraudulent IDs.
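That layering is the key point: each check catches what the others miss. The sketch below is a hypothetical illustration of the idea, not IDScan.net's actual pipeline; the check functions, fields, and thresholds are invented for the example.

```python
# A minimal sketch of layered ID verification. Stacking independent
# signals catches more fraud than any single check alone.

def image_analysis_check(doc: dict) -> bool:
    # e.g. font consistency, hologram presence, photo-tampering artifacts
    return doc.get("image_score", 0.0) >= 0.9

def symbology_check(doc: dict) -> bool:
    # e.g. PDF417 barcode structure matching the issuing state's template
    return doc.get("barcode_valid", False)

def third_party_db_check(doc: dict) -> bool:
    # e.g. does the identity exist in issuer or credit-header records?
    return doc.get("db_match", False)

def verify_document(doc: dict) -> bool:
    """Pass the document only if every independent layer agrees."""
    layers = (image_analysis_check, symbology_check, third_party_db_check)
    return all(layer(doc) for layer in layers)

# Example: a synthetic ID may render a convincing image and barcode but
# fail the database layer, because the fabricated identity has no history.
fake = {"image_score": 0.95, "barcode_valid": True, "db_match": False}
print(verify_document(fake))  # False
```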
Other innovations include behavioral biometrics, which measure unique user behavior such as typing patterns and mouse movements, and liveness detection tools that distinguish real people from deepfakes.
Alongside the technological risk, there is a considerable financial incentive to act now. A recent Gartner survey revealed that 53% of consumers would prefer a brand that incorporates stronger AI-based ID verification, even if it added a few seconds to their user journey. Businesses, in other words, can protect both their bottom line and the user experience by implementing smarter tools.
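Returning to behavioral biometrics for a moment: the toy sketch below shows the basic idea of matching a typing cadence against an enrolled profile. The features, tolerance, and sample values are assumptions invented for illustration; production systems use far richer signals and models.

```python
# Toy behavioral-biometrics sketch: compare typing cadence to an enrolled
# profile. Feature choice and tolerance are illustrative assumptions only.
from statistics import mean, stdev

def keystroke_features(timestamps: list[float]) -> tuple[float, float]:
    """Summarize a typing sample by the mean and spread of inter-key gaps."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return mean(gaps), stdev(gaps)

def matches_profile(sample: list[float], profile: tuple[float, float],
                    tolerance: float = 0.35) -> bool:
    """Flag a session whose cadence deviates too far from the enrolled user."""
    m, s = keystroke_features(sample)
    pm, ps = profile
    return abs(m - pm) <= tolerance * pm and abs(s - ps) <= tolerance * ps

# Enrolled user averages ~120 ms between keys; a bot pasting text or a
# different person will usually show a very different rhythm.
enrolled = (0.12, 0.02)                          # (mean gap, std dev), seconds
session = [0.00, 0.11, 0.25, 0.36, 0.50, 0.61]   # key-press timestamps
print(matches_profile(session, enrolled))        # True: consistent cadence
```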
Failing to update fraud prevention protocols can also result in regulatory breaches, reputational damage, and broken customer trust, particularly in industries where consumer data security is paramount.
Roussel warns that too many businesses are stuck in a reactive mode. 'What we're seeing is companies only getting serious about upgrading their fraud prevention after a major incident. But by then, the damage is already done. What they need is a proactive mindset. One that treats identity verification as a strategic priority, not a box to tick.'
Bridging the knowledge gap
Education is one of the most important tools organizations have for combating fraud. Many decision makers are unaware of how powerful AI has become, particularly in the wrong hands, and of how easily cyber scammers can access personal information.
In the last year alone, IDScan.net reported a 34% year-over-year increase in fraudulent activity in the financial and banking sectors. March 2024 fraud volumes ran 37% above annual averages, with April close behind at 21.4%.
A recent Nationwide survey shows a staggering 86% of consumers do not feel informed enough to protect themselves from AI-driven identity theft. This gap in consumer knowledge should serve as a wake-up call for businesses to double down on education and transparency as part of their fraud prevention strategies.
The accessibility of these tools and of personal information is what makes today's fraud so dangerous. It is no longer the preserve of sophisticated criminal groups. Businesses need to understand that almost anyone can be a potential threat, and they must stay ahead of the game to keep their data safe.
Securing the future with digital trust
Looking ahead, Roussel believes the verification space will split into two distinct camps: those who modernize and those who remain vulnerable. Organizations that integrate advanced verification methods into their user journeys, without compromising the customer experience, will be better equipped to compete, innovate, and grow securely.
'Digital trust is everything now,' he concluded. 'Whether you're onboarding new customers, verifying employees, or securing access to sensitive systems, your ability to confirm someone's identity accurately and instantly is foundational. Businesses that get this right will thrive.'
In a world where artificial intelligence is being used both to build up and to break down systems, the stakes for identity verification have never been higher. The businesses that put these powerful tools to work, and invest in them properly, will be the ones that not only survive but thrive.

OpenAI has removed a feature that made shared ChatGPT conversations appear in search results. The "short-lived experiment" was based on the chatbot's link creation option. After complaints, OpenAI's chief information security officer, Dane Stuckey, said the company is working to remove the chats from search engines. The public outrage stems from a Fast Company article from earlier this week (via Ars Technica ). Fast Company said it found thousands of ChatGPT conversations in Google search results. The indexed chats didn't explicitly include identifying information. But in some cases, their contents reportedly contained specific details that could point to the source. To be clear, this wasn't a hack or leak. It was tied to a box users could tick when creating a shareable URL directing to a chat. In the pop-up for creating a public link, the option to "Make this chat discoverable" appeared. The more direct explanation ("allows it to be shown in web searches") appeared in a smaller, grayer font below. Users had to tick that box to make the chat indexed. You may wonder why people creating a public link to a chat would have a problem with its contents being public. But Fast Company noted that people could have made the URLs to share in messaging apps or as an easy way to revisit the chats later. Regardless, the public discoverability option is gone now. In Fast Company 's report, Stuckey defended the feature's labeling as "sufficiently clear." But after the outcry grew, OpenAI relented. "Ultimately, we think this feature introduced too many opportunities for folks to accidentally share things they didn't intend to, so we're removing the option," Stuckey announced on Thursday.