Latest news with #RealityDefender


Indian Express
20-06-2025
- Entertainment
- Indian Express
Can you trust what you see? How AI videos are taking over your social media
A few days ago, a video that claimed to show a lion approaching a man asleep on the streets of Gujarat, sniffing him and walking away, took social media by storm. It looked like CCTV footage. The clip was dramatic, surreal, and completely fake. It was made using Artificial Intelligence (AI), but that didn't stop it from going viral. The video was even picked up by some news outlets and reported as if it were a real incident, without any verification. It originated from a YouTube channel, 'The world of beasts', which inconspicuously mentioned 'AI-assisted designs' in its bio.

In another viral clip, a kangaroo, allegedly an emotional support animal, was seen attempting to board a flight with its human. Again, viewers were fascinated, many believing the clip to be real. The video first appeared on the Instagram account 'Infinite Unreality,' which openly brands itself as 'Your daily dose of unreality.'

The line between fiction and reality, now more than ever, isn't always obvious to casual users. From giant anacondas swimming freely through rivers to a cheetah saving a woman from danger, AI-generated videos are flooding platforms, often blurring the boundary between the unbelievable and the impossible. With AI tools becoming more advanced and accessible, these creations are growing in number and becoming more sophisticated.

To understand just how widespread the problem of AI-generated videos is, and why it matters, The Indian Express spoke to experts working at the intersection of technology, media, and misinformation. 'Not just the last year, not just the last month, even in the last couple of weeks, I've seen the volume of such videos increase,' said Ben Colman, CEO of deepfake detection firm Reality Defender. He gave a recent example: a 30-second commercial by betting platform Kalshi that aired a couple of weeks ago, during Game 3 of the 2025 NBA Finals. The video was made using Google's new AI video tool, Veo 3. 'It's blown past the uncanny valley, meaning it's infinitely more believable, and more videos like this are being posted to social platforms today compared to the day prior and so on,' Colman said.

Sam Gregory, executive director of WITNESS, a non-profit that trains activists to use technology for human rights, said, 'The quantity and quality of synthetic audio have rapidly increased over the past year, and now video is catching up. New tools like Veo generate photorealistic content that follows physical laws, matches visual styles like interviews or news broadcasts, and syncs with controllable audio prompts.'

The reason platforms like Instagram, Facebook, TikTok, and YouTube push AI-generated videos, beyond technical novelty, is not very complex: such videos grab user attention, something all platforms are desperate for. Colman said, 'These videos make the user do a double-take. Negative reactions on social media beget more engagement and longer time on site, which translates to more ads consumed.' 'Improvements in fidelity, motion, and audio have made it easier to create realistic memetic content. People are participating in meme culture using AI like never before,' said Gregory. According to Ami Kumar, founder of Social & Media Matters, 'The amplification is extremely high. Unfortunately, platform algorithms prioritise quantity over quality, promoting videos that generate engagement regardless of their accuracy or authenticity.' Gregory, however, said that demand plays a role. 'Once you start watching AI content, your algorithm feeds you more.
"AI slop" is heavily monetised,' he said. 'Our own PhDs have failed to distinguish real photos or videos from deepfakes in internal tests,' Colman admitted.

Are the big platforms prepared to put labels and checks on AI-generated content? Not yet. Colman said most services rely on 'less-than-bare-minimum provenance watermark checks,' which many generators ignore or can spoof. Gregory warned that 'research increasingly shows the average person cannot distinguish between synthetic and real audio, and now, the same is becoming true for video.' On detection, Gregory pointed to an emerging open standard, C2PA (Coalition for Content Provenance and Authenticity), which could track the origins of images, audio and video, but it is 'not yet adopted across all platforms.' Meta, he noted, has already shifted from policing the use of AI to policing only content deemed 'deceptive and harmful.' On AI-generated video detection, Kumar said, 'The gap is widening. Low-quality fakes are still detectable, but the high-end ones are nearly impossible to catch without advanced AI systems like the one we're building at Contrails.' However, he is cautiously optimistic that the regulatory tide, especially in Europe and the US, will force platforms to label AI output. 'I see the scenario improving in the next couple of years, but sadly loads of damage will be done by then,' he said.

A good question to ask is, 'Who is making all these clips?' And the answer is, 'Everyone.' 'My kids know how to create AI-generated videos, and the same tools are used by hobbyists, agencies, and state actors,' Colman said. Gregory agreed. 'We are all creators now,' he said. 'AI influencers, too, are a thing. Every new model spawns fresh personalities,' he added, noting a growing trend of commercial actors producing AI slop: cheap, fantastical content designed to monetise attention. Kumar estimated that while 90 per cent of such content is made for fun, the remaining 10 per cent is causing real-world harm through financial, medical, or political misinformation. A case in point is United Kingdom-based activist Tommy Robinson's viral migrant-landing video, which turned out to be falsified footage.

Colman described AI as a creative aid, not a replacement, and insisted that intentional deception should be clearly separated from artistic expression. 'It becomes manipulation when people's emotions or beliefs are deliberately exploited,' he said. Gregory pointed out one of the challenges: satire and parody can easily be misinterpreted when stripped of context. Kumar had a pragmatic stance: 'Intent and impact matter most. If either is negative, malicious, or criminal, it's manipulation.'

The stakes leap when synthetic videos enter conflict zones and elections. Gregory recounted how AI clips have misrepresented confrontations between protesters and US troops in Los Angeles. 'One fake National Guard video racked up hundreds of thousands of views,' he said. Kumar said deepfakes have become routine in wars from Ukraine to Gaza and in election cycles from India to the US. Colman called for forward-looking laws: 'We need proactive legislation mandating detection or prevention of AI content at the point of upload. Otherwise, we're only penalising yesterday's problems while today's spiral out of control.' Gregory advocated for tools that reveal a clip's full 'recipe' across platforms, while warning of a 'detection-equity problem'.
Current tools often fail to catch AI content in non-English languages or compressed formats. Kumar demanded 'strict laws and heavy penalties for platforms and individuals distributing AI-generated misinformation.' 'If we lose confidence in the evidence of our eyes and ears, we will distrust everything,' Gregory warned. 'Real, critical content will become just another drop in a flood of AI slop. And this scepticism can be weaponised to discredit real journalism, real documentation, and real harm.' Synthetic content is, clearly, here to stay. Whether it becomes a tool for creativity or a weapon of mass deception will depend on the speed at which platforms, lawmakers and technologists can build, and adopt, defences that keep the signal from being drowned by the deepfake noise.
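The C2PA standard Gregory points to already has open-source tooling behind it. Below is a minimal sketch of what a provenance check could look like in practice, assuming the C2PA project's c2patool CLI is installed and on PATH (in recent releases it prints a file's manifest store as JSON by default); the file name is a placeholder:

```python
import json
import subprocess

def read_provenance(path: str) -> dict | None:
    """Return the file's C2PA manifest store, or None if it carries no
    Content Credentials. Relies on the open-source `c2patool` CLI, which
    prints the manifest store as JSON for a supported file."""
    result = subprocess.run(
        ["c2patool", path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:  # no manifest found, or unsupported/corrupt file
        return None
    return json.loads(result.stdout)

if __name__ == "__main__":
    manifest = read_provenance("clip.mp4")  # placeholder file name
    if manifest is None:
        print("No Content Credentials found.")
    else:
        # Manifests typically record the tool that produced the asset,
        # so an AI generator that attaches one identifies itself here.
        print(json.dumps(manifest, indent=2))
```

The limits of such a check are exactly the adoption gap described above: most generators and platforms do not attach or preserve manifests, so the absence of Content Credentials says nothing about whether a clip is synthetic.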


New York Post
05-06-2025
- Business
- New York Post
Inside NYNext's groundbreaking AI event at New York Tech Week
On Tuesday night, as part of New York Tech Week, NYNext joined forces with Tech:NYC and PensarAI to host our first-ever event. The night celebrated the key players, from scrappy startups to giants such as Google and IBM, that are making big moves in artificial intelligence. Nearly 150 people took part in NY AI Demo Night. Founders and venture capitalists snacked on figs and tuna tartare and sipped rosé, as well as our new favorite non-alcoholic beverage, Töst, at the Domino Sugar Factory in Williamsburg. The factory has gotten a major facelift and now houses a number of startups, as well as a sweeping view of Manhattan. Eight AI companies presented their newest ideas to the audience with the goal of getting people to download their apps and invest in their companies.

'One of the most unique aspects of the NY tech scene is the ability to bring together and showcase tech heavyweights implementing AI at scale alongside startups in deep builder mode,' Caroline McKechnie, Director of Platform at Tech:NYC, told me. 'We saw a real need for an event that gives founders and engineers a window into what's being built across the city's AI landscape — all against the iconic skyline. The energy of having established players and emerging talent demo side by side is something you can only capture in a city like New York.'

Reality Defender, which detects deepfakes, showed just how effective it is at finding AI-generated images among a slew of photos. Founder Ben Colman told me it would have rendered the plot of HBO's 'Mountainhead' (a film based on the premise that deepfakes are destroying the world) completely moot.

PromptLayer, which aims to empower lay people to create their own apps with AI, demonstrated how seamless it is for anyone to prompt AI to build a product. Founder Jared Zoneraich said, 'The best AI builders, the best prompt engineers are not machine learning engineers … they're subject matter experts.'

Representatives from IBM presented their newest insights into AI. The company also made headlines this week with its newly unveiled watsonx AI Labs in NYC. 'This isn't your typical corporate lab. watsonx AI Labs is where the best AI developers gain access to world-class engineers and resources and build new businesses and applications that will reshape AI for the enterprise,' Ritika Gunnar, General Manager, Data & AI, IBM, told me. 'By anchoring this mission in New York City, we are investing in a diverse, world-class talent pool and a vibrant community whose innovations have long shaped the tech landscape.'

Other presenters included Flora, an AI tool for creatives; a podcast and newsletter network powered by AI; Superblocks, an AI platform building software; Run Loop AI, which helps companies scale coding; and Google's DeepMind. This story is part of NYNext, an indispensable insider insight into the innovations, moonshots and political chess moves that matter most to NYC's power players (and those who aspire to be). The event was just one piece of what has become a sprawling and celebratory week for anyone in technology.
The idea for a tech week came from Andreessen Horowitz (a16z). The firm launched the first Tech Week in Los Angeles in 2022 and expanded to San Francisco and New York City in 2023. Since the first New York Tech Week in 2023, the seven-day conference has ballooned to more than 1,000 events with 60,000 RSVPs. This year, over half of the events focused on AI.

'The energy that is in this room, the startups that we're going to hear from, these are the ideas that are going to propel New York's economy for generations to come,' Tech:NYC CEO Julie Samuels told me. 'These are the ideas that are gonna change the way we all live, we all work, we all do business, we communicate. We are on the cusp of such an exciting time for New York, and tonight is just a little bit of a flavor of that.'

Send NYNext a tip: nynextlydia@
Yahoo
02-05-2025
- Business
- Yahoo
I scammed my bank. All it took was an AI voice generator and a phone call.
I may be a tech reporter, but I am not tech savvy. Something breaks, I turn it off and back on, and then I give up. But even I was able to deepfake my own bank with relative ease.

Generative AI has made it way easier to impersonate people's voices. For years, there have been deepfakes of politicians, celebrities, and the late pope made to sow disinformation on social media. Lately, hackers have been able to deepfake people like you and me. All they need is a few seconds of your voice, which they might find in video posts on Instagram or TikTok, and maybe some information like your phone or debit card number, which they might be able to find in data leaks on the dark web.

In my case (for the purposes of this story) I downloaded the audio of a radio interview I sat for a few weeks ago, trained a voice generator on it after subscribing to a service for a few dollars, and then used a text-to-voice function to chat with my bank in a voice that sounded a bit robotic but eerily similar to my own. Over the course of a five-minute call, first with the automated system and then a human representative, my deepfake seemingly triggered little to no suspicion.

It's a tactic scammers are increasingly adopting. They take advantage of cheap, widely available generative-AI tools to deepfake people and gain access to their bank accounts, or even open accounts in someone else's name. These deepfakes are not only getting easier to make but also harder to detect. Last year, a financial worker in Hong Kong mistakenly paid out $25 million to scammers after they deepfaked the company's chief financial officer and other staff members in a video call. That's one major oopsie, but huge paydays aren't necessarily the goal. The tech allows criminal organizations to imitate people at scale, automating deepfake voice calls they use to scam smaller amounts from tons of people. A report from Deloitte predicts that fraud losses in the US could reach $40 billion by 2027 as generative AI bolsters fraudsters, a jump from $12.3 billion in 2023. In a recent Accenture survey of 600 cybersecurity executives at banks, 80% of respondents said they believed gen AI was ramping up hackers' abilities faster than banks could respond.

These scammers can take gen-AI tools and target accounts at a massive scale. "They're the best engineers, the best product managers, the best researchers," says Ben Colman, the CEO of Reality Defender, a company that makes software for governments, financial institutions, and other businesses to detect the likelihood that content was generated by AI in real time. "If they can automate fraud, they will use every single tool." In addition to stealing your voice or image, they can use gen AI to falsify documents, either to steal an identity or make an entirely new, fake one to open accounts for funneling money.

The scammers are playing a numbers game. Even when a financial institution blocks them, they can try another account or another service. By automating the attempts, "the attackers don't have to be right very often to do well," Colman says. And they don't care about going after only the richest people; scamming lots of people out of small amounts of money can be even more lucrative over time.
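Colman's description of Reality Defender, software that scores the likelihood content is AI-generated in real time, maps onto a simple integration pattern. The sketch below is illustrative only: the endpoint, field names, and threshold are hypothetical, not Reality Defender's actual API, which this article does not document.

```python
import requests  # third-party HTTP library

# Hypothetical endpoint and credentials, invented purely for illustration;
# the article does not document any vendor's real API.
DETECTION_URL = "https://detector.example.com/v1/score-audio"
API_KEY = "YOUR_API_KEY"

def flag_for_review(wav_bytes: bytes, threshold: float = 0.5) -> bool:
    """Score a snippet of call audio and decide whether to escalate.

    Returns True when the (hypothetical) service judges the audio
    more likely synthetic than not."""
    resp = requests.post(
        DETECTION_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"audio": ("call.wav", wav_bytes, "audio/wav")},
        timeout=10,
    )
    resp.raise_for_status()
    score = resp.json()["synthetic_probability"]  # hypothetical field, 0.0 to 1.0
    return score >= threshold

# Example: screen a recorded snippet before trusting the caller.
# with open("call.wav", "rb") as f:
#     if flag_for_review(f.read()):
#         print("Escalate to manual verification.")
```

The threshold is a policy choice rather than a standard: set it low and real customers get flagged, set it high and fakes slip through, which is why detection tends to be one layer among several rather than a single gate.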
According to the FBI's Internet Crime Complaint Center, the average online scam in 2024 came out to just under $20,000 across more than 250,000 complaints the FBI received from people of all ages (those over 60 filed the most complaints and saw the biggest losses, but even people under 20 lost a combined $22.5 million). "Everybody is equally a target," he says.

Colman says some banks have tried to get ahead of the deepfake problem in the past few years, while others didn't see it as a pressing issue. Now, more and more are using software to protect their clients. A 2024 survey of business executives (who worked across industries, not just in banking) found that more than 10% had faced an attempted or successful deepfake fraud. More than half said that their employees had not been trained to identify or address such attacks.

I reached out to several of the largest banks in the US, asking them what they're doing to detect and shut down deepfake fraud. Several did not respond. Citi declined to share any details of its fraud detection methods and technology. Darius Kingsley, the head of consumer banking practices at JPMorgan Chase, told me the bank sees "the challenges posed by rapidly evolving technologies that can be exploited by bad actors" and is "committed to staying ahead by continuously advancing our security protocols and investing in cutting-edge solutions to protect our customers."

Spotting deepfakes is tricky work. Even OpenAI discontinued its AI-writing detector shortly after launching it in 2023, reasoning that its accuracy was too low to reliably detect even whether something was generated by its own ChatGPT. Image, video, and audio generation have all been rapidly improving over the past two years as tools become more sophisticated: If you remember how horrifying and unrealistic AI Will Smith eating spaghetti looked just two years ago, you'll be shocked to see what OpenAI's text-to-video generator, Sora, can do now. Generative AI has gotten leaps and bounds better at covering its tracks, which is great news for scammers.

On my deepfake's call with my bank, I had fake me read off information like my debit card number and the last four digits of my Social Security number. Obviously, this was info I had on hand, but it's disturbingly easy these days for criminals to buy this kind of personal data on the dark web, as it may have been involved in a data leak. I generated friendly phrases that asked my bank to update my email address, please, or change my PIN. Fake me repeatedly begged the automated system to connect me to a representative, and then gave a cheery, "I'm doing well today, how are you?" greeting to the person on the other end of the line. I had deepfake me ask for more time to dig up confirmation codes sent to my phone and then thank the representative for their help.

Authorities are starting to sound the alarm on how easy and widespread deepfakes are becoming. In November, the Financial Crimes Enforcement Network put out an alert to financial institutions about gen AI, deepfakes, and the risk of identity fraud. Speaking at the Federal Reserve Bank of New York in April, Michael Barr, a governor of the Federal Reserve, said that the tech "has the potential to supercharge identity fraud" and that deepfake attacks had increased twentyfold in the past three years. Barr said that we'll need new policies that raise the cost for the attacker and lower the burden on banks.
Right now, it's relatively low risk and low cost for scammer organizations to carry out a massive number of attacks, and impossible for banks to catch each and every one. It's not just banks getting odd calls; scammers will also use deepfakes to call people and impersonate someone they know or a service they use. There are steps we can take if suspicious requests come our way.

"These scams are a new flavor of an old-school method that relies on unexpected contact and a false sense of urgency to trick people into parting with their money," Ashwin Raghu, the head of scam policy and innovation at Citi, tells me in an email. Raghu says people should be suspicious of urgent requests and unexpected calls, even if they're coming from someone who sounds like a friend or family member. Try to take time to verify the caller or contact the person in a different way. If the call seems to be from your bank, you may want to hang up and call the bank back using the phone number on your card to confirm it. For all the data on you that scammers can dig up using AI, there will be things that only two people can ever know. This past summer, an executive at Ferrari caught a scammer deepfaking the company CEO's voice by asking the caller what book the CEO had recommended just days earlier.

Limiting what you share on social media, and with whom, is one way to cut down on the likelihood you'll become a target, as are tools like two-factor authentication and password managers that store complex and varied passwords. But there's no foolproof way to avoid becoming a target of the scams. Barr's policy ideas included creating more consistency in cybercrime laws internationally and more coordination among law enforcement agencies, which would make it more difficult for criminal rings to operate undetected. He also called for increasing penalties on those who attempt to use generative AI for fraud. But those won't be the quickest of fixes to keep up with how rapidly the tech has changed.

Even though this tech is readily available, sometimes in free apps and sometimes for just a few dollars, the problem is less a proliferation of lone-wolf hackers than of organized crime, says Jason Ioannides, the vice president of global fintech and sponsor banking at Alloy, a fraud prevention platform. These attacks are often carried out by big, organized crime rings that move in large numbers and are bolstered by automation to carry out thousands of attempts. If they try 1,000 times to get through and make it once, they'll then focus their efforts on chipping away at that same institution until the bank notices a trend and comes up with fixes to stop it. "They look for a weakness, and then they attack it," Ioannides says. He says banks should "stay nimble" and have "layered approaches" to detect quickly evolving fraud. "You're never going to stop 100% of fraud," he says. Banks will never be perfect, but their defense lies in making themselves "less attractive to a bad actor" than other institutions.

Ultimately, I wasn't able to totally hack my bank. I tried to change my debit card PIN and my email address during the phone calls, but I was told I had to do the first at an ATM and the second online. I was able to hear my account balance, and with a bit more prep and expertise, I may have been able to move some money. Each bank has different systems and rules in place, and some might allow people to change personal information, like an email address, over the phone, which could give a scammer much easier access to the account.
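As a concrete illustration of the two-factor authentication recommended above, here is a minimal sketch of the time-based one-time-password scheme (TOTP, RFC 6238) that most authenticator apps implement, using the third-party pyotp library; the account names are placeholders:

```python
import pyotp  # third-party implementation of RFC 6238 / RFC 4226

# Enrollment: the institution generates a shared secret once; the user
# stores it in an authenticator app, usually by scanning a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:",
      totp.provisioning_uri(name="customer@example.com",
                            issuer_name="ExampleBank"))

# Login: the user reads the current 6-digit code off their device.
code = totp.now()  # stands in for the code the user would type
print("Code accepted?", totp.verify(code))  # True inside the 30-second window
```

Because each code is derived from a secret that never travels over the phone line and expires within seconds, a cloned voice alone is not enough to pass this check.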
Whether my bank caught on to my use of a generated voice, I'm not sure, but I do sleep a little bit better knowing there are some protections in place.

Amanda Hoover is a senior correspondent at Business Insider covering the tech industry. She writes about the biggest tech companies and trends.

Yahoo
17-04-2025
- Yahoo
How to avoid deepfake scams
Imagine getting a call from a loved one: terrified, desperate, begging for help. But what if that voice wasn't real? Scammers now use powerful AI voice-cloning apps to steal voices and mimic someone you trust to pull off convincing scams. Consumer Reports investigates the rise of deepfakes, revealing how these high-tech scams work and what you can do right now to protect yourself and your family.

Deepfake technology is becoming more convincing every day. Ben Colman, co-founder and CEO of Reality Defender, a deepfake detection company, says it's the number one digital risk people should be worried about. He says, 'Over the last few years, there's been an explosion of calls claiming that we have your daughter. She's in trouble, send money, or else. Well, what's happened recently is the call comes in and says, we are your daughter; hi, I'm your daughter. I'm in trouble, send money right now.'

So, what exactly is a deepfake, and how does it work? A deepfake takes anyone's likeness, whether it's their face, a single image from LinkedIn or elsewhere online, or a few seconds of audio, and uses a pre-trained model to replicate that likeness and make them say or do anything you want. Deepfakes are now so advanced that even experts find it hard to tell the difference. And what's worse, there are no federal laws to stop someone from cloning your voice without your permission. Consumer Reports reviewed six popular voice-cloning apps and uncovered a troubling trend: four of the six had no meaningful way to ensure that the user had the original speaker's consent to clone their voice. The other two apps had more safeguards, but testers found ways around them.

While it's practically impossible to erase your digital footprint, CR says there are some steps you can take to protect yourself. The first is knowing that deepfake scams like this exist. The second is using two-factor authentication on all of your financial accounts: an extra security feature that requires you to input a security code or respond to an email when trying to access your bank accounts. Third, be wary of calls, texts, or emails that ask for your personal financial information or data. And finally, do a gut check. Does what you're hearing or seeing make sense? By default, you should not believe anything you see online. You should always apply standard common sense.