
AI Deepfakes Are Stealing Millions Every Year — Who's Going to Stop Them?
Your CFO is on the video call asking you to transfer $25 million. He gives you all the bank info. Pretty routine. You got it.
But, what the — ? It wasn't the CFO? How can that be? You saw him with your own eyes and heard that undeniable voice you always half-listen for. Even the other colleagues on the screen weren't really them. And yes, you already made the transaction.
Ring a bell? That's because it actually happened to an employee at the global engineering firm Arup last year, costing the company $25 million. In other incidents, people were scammed when "Elon Musk" and "Goldman Sachs executives" took to social media enthusing about great investment opportunities. And an agency leader at WPP, then the largest advertising company in the world, was nearly tricked into handing over money during a Teams meeting featuring a deepfake of CEO Mark Read.
Experts have warned for years that deepfake AI technology was evolving to a dangerous point, and now it's happening. Used maliciously, these clones are infesting the culture from Hollywood to the White House. And although most businesses keep mum about deepfake attacks to avoid spooking clients, insiders say the attacks are occurring with alarming frequency. Deloitte predicts that fraud losses from such incidents will hit $40 billion in the United States by 2027.
Related: The Advancement Of Artificial Intelligence Is Inevitable. Here's How We Should Get Ready For It.
Obviously, we have a problem — and entrepreneurs love nothing more than finding something to solve. But this is no ordinary problem. You can't sit and study it, because it moves as fast as you can, or even faster, always showing up in a new configuration in unexpected places.
The U.S. government has started to pass regulations on deepfakes, and the AI community is developing its own guardrails, including digital signatures and watermarks to identify their content. But scammers are not exactly known to stop at such roadblocks.
That's why many people have pinned their hopes on "deepfake detection" — an emerging field that holds great promise. Ideally, these tools can suss out if something in the digital world (a voice, video, image, or piece of text) was generated by AI, and give everyone the power to protect themselves. But there is a hitch: In some ways, the tools just accelerate the problem. That's because every time a new detector comes out, bad actors can potentially learn from it — using the detector to train their own nefarious tools, and making deepfakes even harder to spot.
So now the question becomes: Who is up for this challenge? This endless cat-and-mouse game, with impossibly high stakes? If anyone can lead the way, startups may have an advantage — because compared to big firms, they can focus exclusively on the problem and iterate faster, says Ankita Mittal, senior consultant of research at The Insight Partners, which has released a report on this new market and predicts explosive growth.
Here's how a few of these founders are trying to stay ahead — and building an industry from the ground up to keep us all safe.
Related: 'We Were Sucked In': How to Protect Yourself from Deepfake Phone Scams.
Image Credit: Terovesalainen
If deepfakes had an origin story, it might sound like this: Until the 1830s, information was physical. You could either tell someone something in person, or write it down on paper and send it, but that was it. Then the commercial telegraph arrived — and for the first time in human history, information could be zapped over long distances instantly. This revolutionized the world. But wire transfer fraud and other scams soon followed, often sent by fake versions of real people.
Western Union was one of the first telegraph companies — so it is perhaps appropriate, or at least ironic, that on the 18th floor of the old Western Union Building in lower Manhattan, you can find one of the earliest startups combating deepfakes. It's called Reality Defender, and the guys who founded it, including a former Goldman Sachs cybersecurity nut named Ben Colman, launched in early 2021, even before ChatGPT entered the scene. (The company originally set out to detect AI avatars, which he admits is "not as sexy.")
Colman, who is CEO, feels confident that this battle can be won. He claims that his platform is 99% accurate in detecting real-time voice and video deepfakes. Most clients are banks and government agencies, though he won't name any (cybersecurity types are tight-lipped like that). He initially targeted those industries because, he says, deepfakes pose a particularly acute risk to them — so they're "willing to do things before they're fully proven." Reality Defender also works with firms like Accenture, IBM Ventures, and Booz Allen Ventures — "all partners, customers, or investors, and we power some of their own forensics tools."
So that's one kind of entrepreneur involved in this race. On Zoom, a few days after visiting Colman, I meet another: He is Hany Farid, a professor at the University of California, Berkeley, and cofounder of a detection startup called GetReal Security. Its client list, according to the CEO, includes John Deere and Visa. Farid is considered an OG of digital image forensics (he was part of a team that developed PhotoDNA to help fight online child sexual abuse material, for example). And to give me the full-on sense of the risk involved, he pulls an eerie sleight-of-tech: As he talks to me on Zoom, he is replaced by a new person — an Asian punk who looks 40 years younger, but who continues to speak with Farid's voice. It's a deepfake in real time.
Related: Machines Are Surpassing Humans in Intelligence. What We Do Next Will Define the Future of Humanity, Says This Legendary Tech Leader.
Truth be told, Farid wasn't originally sure if deepfake detection was a good business. "I was a little nervous that we wouldn't be able to build something that actually worked," he says. The thing is, deepfakes aren't just one thing. They are produced in myriad ways, and their creators are always evolving and learning. One method, for example, involves using what's called a "generative adversarial network" — in short, someone builds a deepfake generator, as well as a deepfake detector, and the two systems compete against each other so that the generator becomes smarter. A newer method makes better deepfakes by training a model to start with something called "noise" (imagine the visual version of static) and then sculpt the pixels into an image according to a text prompt.
Because deepfakes are so sophisticated, neither Reality Defender nor GetReal can ever definitively say that something is "real" or "fake." Instead, they come up with probabilities and descriptions like strong, medium, weak, high, low, and most likely — which critics say can be confusing, but supporters argue can put clients on alert to ask more security questions.
To keep up with the scammers, both companies run at an insanely fast pace — putting out updates every few weeks. Colman spends a lot of energy recruiting engineers and researchers, who make up 80% of his team. Lately, he's been pulling hires straight out of Ph.D. programs. He also has them do ongoing research to keep the company one step ahead.
Both Reality Defender and GetReal maintain pipelines coursing with tech that's deployed, in development, and ready to sunset. To do that, they're organized around different teams that go back and forth to continually test their models. Farid, for example, has a "red team" that attacks and a "blue team" that defends. Describing working with his head of research on a new product, he says, "We have this very rapid cycle where she breaks, I fix, she breaks — and then you see the fragility of the system. You do that not once, but you do it 20 times. And now you're onto something."
Additionally, they layer in non-AI sleuthing techniques to make their tools more accurate and harder to dodge. GetReal, for example, uses AI to search images and videos for what are known as "artifacts" — telltale flaws that indicate they were made by generative AI — as well as other digital forensic methods to analyze inconsistent lighting and image compression, check whether speech is properly synched to someone's moving lips, and look for the kind of details that are hard to fake (like, say, whether video of a CEO contains the acoustic reverberations that are specific to his office).
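To make the "layered checks" idea concrete, here's a minimal sketch of how several independent forensic signals might be fused into one hedged verdict. Everything here — the check names, the weights, the thresholds — is an illustrative assumption, not GetReal's or Reality Defender's actual method.

```python
# Hypothetical sketch: fusing independent forensic checks into one hedged
# verdict, in the spirit of the layered analysis described above.
# All names, weights, and thresholds are illustrative assumptions.

def fuse_signals(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-check 'fake likelihood' scores, each in [0, 1]."""
    total_weight = sum(weights[name] for name in signals)
    return sum(signals[name] * weights[name] for name in signals) / total_weight

def label(score: float) -> str:
    """Map a fused score to the kind of hedged label the article describes."""
    if score >= 0.8:
        return "likely fake (high confidence)"
    if score >= 0.5:
        return "possibly fake (medium confidence)"
    return "no strong evidence of manipulation"

# Example: three independent checks, each returning a 0-1 suspicion score.
signals = {
    "lighting_inconsistency": 0.9,  # shadows disagree across the frame
    "lip_sync_mismatch": 0.7,       # audio lags the mouth movements
    "compression_anomaly": 0.4,     # double-compression traces are weak
}
weights = {
    "lighting_inconsistency": 2.0,  # hard for a forger to fix globally
    "lip_sync_mismatch": 1.5,
    "compression_anomaly": 1.0,
}

score = fuse_signals(signals, weights)
print(round(score, 3), "->", label(score))
```

The design point is the one Farid makes: a forger who beats one check still has to beat the others, and the fused score degrades gracefully rather than flipping on a single defeated signal.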
"The endgame of my world is not elimination of threats; it's mitigation of threats," Farid says. "I can defeat almost all of our systems. But it's not easy. The average knucklehead on the internet, they're going to have trouble removing an artifact even if I tell 'em it's there. A sophisticated actor, sure. They'll figure it out. But to remove all 20 of the artifacts? At least I'm gonna slow you down."
Related: Deepfake Fraud Is Becoming a Business Risk You Can't Ignore. Here's the Surprising Solution That Puts You Ahead of Threats.
All of these strategies will fail if they don't have one thing: the right data. AI, as they say, is only as good as the data it's trained on. And that's a huge hurdle for detection startups. Not only do you have to find fakes made by all the different models and customized by various AI companies (detecting one won't necessarily work on another), but you also have to compare them against images, videos, and audio of real people, places, and things. Sure, reality is all around us, but so is AI, including in our phone cameras. "Historically, detectors don't work very well once you go to real world data," says Phil Swatton at The Alan Turing Institute, the United Kingdom's national institute for AI and data science. And high-quality, labeled datasets for deepfake detection remain scarce, notes Mittal, the senior consultant from The Insight Partners.
Colman has tackled this problem, in part, by using older datasets to capture the "real" side — say from 2018, before generative AI. For the fake data, he mostly generates it in house. He has also focused on developing partnerships with the companies whose tools are used to make deepfakes — because, of course, not all of them are meant to be harmful. So far, his partners include ElevenLabs (which, for example, translates popular podcaster and neuroscientist Andrew Huberman's voice into Hindi and Spanish, so that he can reach wider audiences) along with PlayAI and Respeecher. These companies have mountains of real-world data — and they like sharing it, because they look good by showing that they're building guardrails and allowing Reality Defender to detect their tools. In addition, this grants Reality Defender early access to the partners' new models, which gives it a jump start in updating its platform.
Colman's team has also gotten creative. At one point, to gather fresh voice data, they partnered with a rideshare company — offering their drivers extra income by recording 60 seconds of audio when they weren't busy. "It didn't work," Colman admits. "A ridesharing car is not a good place to record crystal-clear audio. But it gave us an understanding of artificial sounds that don't indicate fraud. It also helped us develop some novel approaches to remove background noise, because one trick that a fraudster will do is use an AI-generated voice, but then try to create all kinds of noise, so that maybe it won't be as detectable."
Startups like this must also grapple with another real-world problem: How do they keep their software from getting out into the public, where deepfakers can learn from it? To start, Reality Defender's clients set a high bar for who within their organizations can access the software. But the company has also started to create some novel hardware.
To show me, Colman holds up a laptop. "We're now able to run all of our magic locally, without any connection to the cloud on this," he says. The loaded laptop, only available to high-touch clients, "helps protect our IP, so people don't use it to try to prove they can bypass it."
Related: Nearly Half of Americans Think They Could Be Duped By AI. Here's What They're Worried About.
Some founders are taking a completely different path: Instead of trying to detect fake people, they're working to authenticate real ones.
That's Joshua McKenty's plan. He's a serial entrepreneur who cofounded OpenStack and worked at NASA as Chief Cloud Architect, and this March launched a company called Polyguard. "We said, 'Look, we're not going to focus on detection, because it's only accelerating the arms race. We're going to focus on authenticity,'" he explains. "I can't say if something is fake, but I can tell you if it's real."
To execute that, McKenty built a platform to conduct a literal reality check on the person you're talking to by phone or video. Here's how it works: A company can use Polyguard's mobile app, or integrate it into their own app and call center. When they want to create a secure call or meeting, they use that system. To join, participants must prove their identities via the app on their mobile phone (where they're verified with documents like a Real ID or e-passport, plus a face scan). Polyguard says this is ideal for remote interviews, board meetings, or any other sensitive communication where identity is critical.
In some cases, McKenty's solution can be used with tools like Reality Defender. "Companies might say 'We're so big, we need both,'" he explains. His team is only five or six people at this point (whereas Reality Defender and GetReal both have about 50 employees), but he says his clients already include recruiters, who are interviewing candidates remotely only to discover that they're deepfakes, law firms wanting to protect attorney-client privilege, and wealth managers. He's also making the platform available to the public for people to establish secure lines with their attorney, accountant, or kid's teacher.
This line of thinking is appealing — and gaining approval from people who watch the industry. "I like the authentication approach; it's much more straightforward," says The Alan Turing Institute's Swatton. "It's focused not on detecting something going wrong, but certifying that it's going right." After all, even when detection probabilities sound good, any margin of error can be scary: A detector that catches 95% of fakes will still let 1 in 20 slip through.
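The back-of-envelope math behind that "1 in 20" figure is worth spelling out, because it scales linearly with attack volume — this is plain arithmetic, not any vendor's model:

```python
# A detector's miss rate is simply 1 minus its catch rate, and the
# expected number of fakes that get through scales with attack volume.

def expected_misses(catch_rate: float, attempts: int) -> float:
    """Expected number of fakes that slip past the detector."""
    return (1.0 - catch_rate) * attempts

print(expected_misses(0.95, 20))      # a 95% detector misses ~1 of every 20
print(expected_misses(0.95, 10_000))  # at scale, that's ~500 fakes through
```

Which is why, at the volumes banks and call centers face, even a very accurate detector guarantees some successful scams — and why certifying the real thing looks attractive as a complement.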
That error rate is what alarmed Christian Perry, another entrepreneur who's entered the deepfake race. He saw it in the early detectors for text, where students and workers were being accused of using AI when they weren't. Authorship deceit doesn't pose the level of threat that deepfakes do, but text detectors are considered part of the scam-fighting family.
Perry and his cofounder Devan Leos launched a startup called Undetectable in 2023, which now has over 19 million users and a team of 76. It began by building a sophisticated text detector, but then pivoted into image detection, and is now close to launching audio and video detectors as well. "You can use a lot of the same kind of methodology and skill sets that you pick up in text detection," says Perry. "But deepfake detection is a much more complicated problem."
Related: Despite How the Media Portrays It, AI Is Not Really Intelligent. Here's Why.
Finally, instead of trying to prevent deepfakes, some entrepreneurs are seeing the opportunity in cleaning up their mess.
Luke and Rebekah Arrigoni stumbled upon this niche accidentally, by trying to solve a different terrible problem — revenge porn. It started one night a few years ago, when the married couple were watching HBO's Euphoria. In the show, a character's nonconsensual intimate image was shared online. "I guess out of hubris," Luke says, "our immediate response was like, We could fix this."
At the time, the Arrigonis were both working on facial recognition technologies. So as a side project in 2022, they put together a system specifically designed to scour the web for revenge porn — then found some victims to test it with. They'd locate the images or videos, then send takedown notices to the websites' hosts. It worked. But valuable as this was, they could see it wasn't a viable business. Clients were just too hard to find.
Then, in 2023, another path appeared. As the actors' and writers' strikes broke out, with AI being a central issue, Luke checked in with former colleagues at major talent agencies. He'd previously worked at Creative Artists Agency as a data scientist, and he was now wondering if his revenge-porn tool might be useful for their clients — though in a different way. It could also be used to identify celebrity deepfakes — to find, for example, when an actor or singer is being cloned to promote someone else's product. Along with feeling out other talent reps like William Morris Endeavor, he went to law and entertainment management firms. They were interested. So in 2023, Luke quit consulting to work with Rebekah and a third cofounder, Hirak Chhatbar, on building out their side hustle, Loti.
"We saw the desire for a product that fit this little spot, and then we listened to key industry partners early on to build all of the features that people really wanted, like impersonation," Luke says. "Now it's one of our most preferred features. Even if they deliberately typo the celebrity's name or put a fake blue checkbox on the profile photo, we can detect all of those things."
Using Loti is simple. A new client submits three real images and eight seconds of their voice; musicians also provide 15 seconds of singing a cappella. The Loti team puts that data into their system, and then scans the internet for that same face and voice. Some celebs, like Scarlett Johansson, Taylor Swift, and Brad Pitt, have been publicly targeted by deepfakes, and Loti is ready to handle that. But Luke says most of the need right now involves low-tech stuff like impersonation and false endorsements. A recently passed law called the Take It Down Act — which criminalizes the publication of nonconsensual intimate images (including deepfakes) and requires online platforms to remove them when reported — helps this process along: Now, it's much easier to get the unauthorized content off the web.
Loti doesn't have to deal with probabilities. It doesn't have to constantly iterate or get huge datasets. It doesn't have to say "real" or "fake" (although it can). It just has to ask, "Is this you?"
"The thesis was that the deepfake problem would be solved with deepfake detectors. And our thesis is that it will be solved with face recognition," says Luke, who now has a team of around 50 and a consumer product coming out. "It's this idea of, How do I show up on the internet? What things are said of me, or how am I being portrayed? I think that's its own business, and I'm really excited to be at it."
Related: Why AI is Your New Best Friend... and Worst Enemy in the Battle Against Phishing Scams
Will it all pay off?
All tech aside, do these anti-deepfake solutions make for strong businesses? Many of the startups in this space are early-stage and venture-backed, so it's not yet clear how sustainable or profitable they can be. They're also "heavily investing in research and development to stay ahead of rapidly evolving generative AI threats," says The Insight Partners' Mittal. That makes you wonder about the economics of running a business that will likely always have to do that.
Then again, the market for these startups' services is just beginning. Deepfakes will impact more than just banks, government intelligence, and celebrities — and as more industries awaken to that, they may want solutions fast. The question will be: Do these startups have first-mover advantage, or will they have just laid the expensive groundwork for newer competitors to run with?
Mittal, for her part, is optimistic. She sees significant untapped opportunities for growth that go beyond preventing scams — like, for example, helping professors flag AI-generated student essays, impersonated class attendance, or manipulated academic records. Many of the current anti-deepfake companies, she predicts, will get acquired by big tech and cybersecurity firms.
Whether or not that's Reality Defender's future, Colman believes that platforms like his will become integral to a larger guardrail ecosystem. He compares it to antivirus software: Decades ago, you had to buy an antivirus program and manually scan your files. Now, these scans are just built into your email platforms, running automatically. "We're following the exact same growth story," he says. "The only problem is the problem is moving even quicker."
No doubt, the need will become glaring at some point soon. Farid at GetReal imagines a nightmare like someone creating a fake earnings call for a Fortune 500 company that goes viral.
If GetReal's CEO, Matthew Moynahan, is right, then 2026 will be the year that gets the flywheel spinning for all these deepfake-fighting businesses. "There's two things that drive sales in a really aggressive way: a clear and present danger, and compliance and regulation," he says. "The market doesn't have either right now. Everybody's interested, but not everybody's troubled." That will likely change with increased regulations that push adoption, and with deepfakes popping up in places they shouldn't be.
"Executives will connect the dots," Moynahan predicts. "And they'll start saying, 'This isn't funny anymore.'"
Related: AI Cloning Hoax Can Copy Your Voice in 3 Seconds—and It's Emptying Bank Accounts. Here's How to Protect Yourself.