
Latest news with #Arup

AI Deepfakes Are Stealing Millions Every Year — Who's Going to Stop Them?

Entrepreneur

6 days ago

This story appears in the July 2025 issue of Entrepreneur. Your CFO is on the video call asking you to transfer $25 million. He gives you all the bank info. Pretty routine. You got it. But, What the — ? It wasn't the CFO? How can that be? You saw him with your own eyes and heard that undeniable voice you always half-listen for. Even the other colleagues on the screen weren't really them. And yes, you already made the transaction.

Ring a bell? That's because it actually happened to an employee at the global engineering firm Arup last year, which lost $25 million to criminals. In other incidents, folks were scammed when "Elon Musk" and "Goldman Sachs executives" took to social media enthusing about great investment opportunities. And an agency leader at WPP, the largest advertising company in the world at the time, was almost tricked into giving money during a Teams meeting with a deepfake they thought was the CEO Mark Read.

Experts have been warning for years about deepfake AI technology evolving to a dangerous point, and now it's happening. Used maliciously, these clones are infesting the culture from Hollywood to the White House. And although most businesses keep mum about deepfake attacks to prevent client concern, insiders say they're occurring with alarming frequency. Deloitte predicts that fraud losses from such incidents will hit $40 billion in the United States by 2027.

Related: The Advancement Of Artificial Intelligence Is Inevitable. Here's How We Should Get Ready For It.

Obviously, we have a problem — and entrepreneurs love nothing more than finding something to solve. But this is no ordinary problem. You can't sit and study it, because it moves as fast as you can, or even faster, always showing up in a new configuration in unexpected places. The U.S. government has started to pass regulations on deepfakes, and the AI community is developing its own guardrails, including digital signatures and watermarks to identify their content. But scammers are not exactly known to stop at such roadblocks.

That's why many people have pinned their hopes on "deepfake detection" — an emerging field that holds great promise. Ideally, these tools can suss out if something in the digital world (a voice, video, image, or piece of text) was generated by AI, and give everyone the power to protect themselves. But there is a hitch: In some ways, the tools just accelerate the problem. That's because every time a new detector comes out, bad actors can potentially learn from it — using the detector to train their own nefarious tools, and making deepfakes even harder to spot.

So now the question becomes: Who is up for this challenge? This endless cat-and-mouse game, with impossibly high stakes? If anyone can lead the way, startups may have an advantage — because compared to big firms, they can focus exclusively on the problem and iterate faster, says Ankita Mittal, senior consultant of research at The Insight Partners, which has released a report on this new market and predicts explosive growth. Here's how a few of these founders are trying to stay ahead — and building an industry from the ground up to keep us all safe.

Related: 'We Were Sucked In': How to Protect Yourself from Deepfake Phone Scams.

If deepfakes had an origin story, it might sound like this: Until the 1830s, information was physical. You could either tell someone something in person, or write it down on paper and send it, but that was it.
Then the commercial telegraph arrived — and for the first time in human history, information could be zapped over long distances instantly. This revolutionized the world. But wire transfer fraud and other scams soon followed, often sent by fake versions of real people. Western Union was one of the first telegraph companies — so it is perhaps appropriate, or at least ironic, that on the 18th floor of the old Western Union Building in lower Manhattan, you can find one of the earliest startups combatting deepfakes.

It's called Reality Defender, and the guys who founded it, including a former Goldman Sachs cybersecurity nut named Ben Colman, launched in early 2021, even before ChatGPT entered the scene. (The company originally set out to detect AI avatars, which he admits is "not as sexy.") Colman, who is CEO, feels confident that this battle can be won. He claims that his platform is 99% accurate in detecting real-time voice and video deepfakes. Most clients are banks and government agencies, though he won't name any (cybersecurity types are tight-lipped like that). He initially targeted those industries because, he says, deepfakes pose a particularly acute risk to them — so they're "willing to do things before they're fully proven." Reality Defender also works with firms like Accenture, IBM Ventures, and Booz Allen Ventures — "all partners, customers, or investors, and we power some of their own forensics tools."

So that's one kind of entrepreneur involved in this race. On Zoom, a few days after visiting Colman, I meet another: He is Hany Farid, a professor at the University of California, Berkeley, and cofounder of a detection startup called GetReal Security. Its client list, according to the CEO, includes John Deere and Visa. Farid is considered an OG of digital image forensics (he was part of a team that developed PhotoDNA to help fight online child sexual abuse material, for example). And to give me the full-on sense of the risk involved, he pulls an eerie sleight-of-tech: As he talks to me on Zoom, he is replaced by a new person — an Asian punk who looks 40 years younger, but who continues to speak with Farid's voice. It's a deepfake in real time.

Related: Machines Are Surpassing Humans in Intelligence. What We Do Next Will Define the Future of Humanity, Says This Legendary Tech Leader.

Truth be told, Farid wasn't originally sure if deepfake detection was a good business. "I was a little nervous that we wouldn't be able to build something that actually worked," he says. The thing is, deepfakes aren't just one thing. They are produced in myriad ways, and their creators are always evolving and learning. One method, for example, involves using what's called a "generative adversarial network" — in short, someone builds a deepfake generator, as well as a deepfake detector, and the two systems compete against each other so that the generator becomes smarter. A newer method makes better deepfakes by training a model to start with something called "noise" (imagine the visual version of static) and then sculpt the pixels into an image according to a text prompt.

Because deepfakes are so sophisticated, neither Reality Defender nor GetReal can ever definitively say that something is "real" or "fake." Instead, they come up with probabilities and descriptions like strong, medium, weak, high, low, and most likely — which critics say can be confusing, but supporters argue can put clients on alert to ask more security questions.
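None of these companies publish their scoring logic, so purely as a hypothetical sketch of the idea above: several weak forensic signals can be fused into a single score and then reported as a coarse "strong/medium/weak" band rather than a hard real-or-fake verdict. The signal names, weights, and thresholds here are invented for illustration.

```python
# Hypothetical sketch: fusing several per-check scores into a confidence band,
# in the spirit of the "strong/medium/weak" descriptions mentioned above.
# Signal names, weights, and thresholds are invented, not any vendor's method.

def fuse_scores(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-signal scores, each in [0, 1] where 1 = likely fake."""
    total_weight = sum(weights[name] for name in signals)
    return sum(signals[name] * weights[name] for name in signals) / total_weight

def confidence_label(score: float) -> str:
    """Map a fused score to a coarse, human-readable band instead of a hard verdict."""
    if score >= 0.85:
        return "strong evidence of manipulation"
    if score >= 0.60:
        return "medium evidence of manipulation"
    if score >= 0.40:
        return "weak / inconclusive"
    return "most likely authentic"

if __name__ == "__main__":
    # Example: three hypothetical per-check scores for one video frame plus audio.
    signals = {"lip_sync_mismatch": 0.7, "compression_artifacts": 0.9, "voice_model_score": 0.8}
    weights = {"lip_sync_mismatch": 1.0, "compression_artifacts": 0.5, "voice_model_score": 1.5}
    fused = fuse_scores(signals, weights)
    print(f"fused score = {fused:.2f} -> {confidence_label(fused)}")
```

The point of the banded output is the one supporters make above: it nudges a client to ask more security questions instead of treating a single number as the final word.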
To keep up with the scammers, both companies run at an insanely fast pace — putting out updates every few weeks. Colman spends a lot of energy recruiting engineers and researchers, who make up 80% of his team. Lately, he's been pulling hires straight out of Ph.D. programs. He also has them do ongoing research to keep the company one step ahead. Both Reality Defender and GetReal maintain pipelines coursing with tech that's deployed, in development, and ready to sunset. To do that, they're organized around different teams that go back and forth to continually test their models. Farid, for example, has a "red team" that attacks and a "blue team" that defends. Describing working with his head of research on a new product, he says, "We have this very rapid cycle where she breaks, I fix, she breaks — and then you see the fragility of the system. You do that not once, but you do it 20 times. And now you're onto something."

Additionally, they layer in non-AI sleuthing techniques to make their tools more accurate and harder to dodge. GetReal, for example, uses AI to search images and videos for what are known as "artifacts" — telltale flaws indicating they were made by generative AI — as well as other digital forensic methods to analyze inconsistent lighting, image compression, whether speech is properly synched to someone's moving lips, and the kind of details that are hard to fake (like, say, if video of a CEO contains the acoustic reverberations that are specific to his office).

"The endgame of my world is not elimination of threats; it's mitigation of threats," Farid says. "I can defeat almost all of our systems. But it's not easy. The average knucklehead on the internet, they're going to have trouble removing an artifact even if I tell 'em it's there. A sophisticated actor, sure. They'll figure it out. But to remove all 20 of the artifacts? At least I'm gonna slow you down."

Related: Deepfake Fraud Is Becoming a Business Risk You Can't Ignore. Here's the Surprising Solution That Puts You Ahead of Threats.

All of these strategies will fail if they don't have one thing: the right data. AI, as they say, is only as good as the data it's trained on. And that's a huge hurdle for detection startups. Not only do you have to find fakes made by all the different models and customized by various AI companies (detecting one won't necessarily work on another), but you also have to compare them against images, videos, and audio of real people, places, and things. Sure, reality is all around us, but so is AI, including in our phone cameras. "Historically, detectors don't work very well once you go to real world data," says Phil Swatton at The Alan Turing Institute, the United Kingdom's national institute for AI and data science. And high-quality, labeled datasets for deepfake detection remain scarce, notes Mittal, the senior consultant from The Insight Partners.

Colman has tackled this problem, in part, by using older datasets to capture the "real" side — say from 2018, before generative AI. For the fake data, he mostly generates it in house. He has also focused on developing partnerships with the companies whose tools are used to make deepfakes — because, of course, not all of them are meant to be harmful. So far, his partners include ElevenLabs (which, for example, translates popular podcaster and neuroscientist Andrew Huberman's voice into Hindi and Spanish, so that he can reach wider audiences) along with PlayAI and Respeecher.
These companies have mountains of real-world data — and they like sharing it, because they look good by showing that they're building guardrails and allowing Reality Defender to detect their tools. In addition, this grants Reality Defender early access to the partners' new models, which gives it a jump start in updating its platform.

Colman's team has also gotten creative. At one point, to gather fresh voice data, they partnered with a rideshare company — offering their drivers extra income by recording 60 seconds of audio when they weren't busy. "It didn't work," Colman admits. "A ridesharing car is not a good place to record crystal-clear audio. But it gave us an understanding of artificial sounds that don't indicate fraud. It also helped us develop some novel approaches to remove background noise, because one trick that a fraudster will do is use an AI-generated voice, but then try to create all kinds of noise, so that maybe it won't be as detectable."

Startups like this must also grapple with another real-world problem: How do they keep their software from getting out into the public, where deepfakers can learn from it? To start, Reality Defender's clients set a high bar for who within their organizations can access the software. But the company has also started to create some novel hardware. To show me, Colman holds up a laptop. "We're now able to run all of our magic locally, without any connection to the cloud on this," he says. The loaded laptop, only available to high-touch clients, "helps protect our IP, so people don't use it to try to prove they can bypass it."

Related: Nearly Half of Americans Think They Could Be Duped By AI. Here's What They're Worried About.

Some founders are taking a completely different path: Instead of trying to detect fake people, they're working to authenticate real ones. That's Joshua McKenty's plan. He's a serial entrepreneur who cofounded OpenStack and worked at NASA as Chief Cloud Architect, and this March launched a company called Polyguard. "We said, 'Look, we're not going to focus on detection, because it's only accelerating the arms race. We're going to focus on authenticity,'" he explains. "I can't say if something is fake, but I can tell you if it's real."

To execute that, McKenty built a platform to conduct a literal reality check on the person you're talking to by phone or video. Here's how it works: A company can use Polyguard's mobile app, or integrate it into their own app and call center. When they want to create a secure call or meeting, they use that system. To join, participants must prove their identities via the app on their mobile phone (where they're verified using documents like Real ID, e-passports, and face scanning). Polyguard says this is ideal for remote interviews, board meetings, or any other sensitive communication where identity is critical. In some cases, McKenty's solution can be used with tools like Reality Defender. "Companies might say 'We're so big, we need both,'" he explains.

His team is only five or six people at this point (whereas Reality Defender and GetReal both have about 50 employees), but he says his clients already include recruiters, who are interviewing candidates remotely only to discover that they're deepfakes, law firms wanting to protect attorney-client privilege, and wealth managers. He's also making the platform available to the public for people to establish secure lines with their attorney, accountant, or kid's teacher.
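Polyguard has not published its protocol, so purely as a hypothetical sketch of the authenticate-first idea: a meeting gate can admit only participants who hold a token issued after an out-of-band identity check, rather than trying to judge whether the video feed is fake. The function names and token scheme below are invented for illustration.

```python
# Hypothetical sketch of "authenticate the person, don't detect the fake."
# This is NOT Polyguard's actual protocol or API; it only illustrates admitting
# participants who hold a token issued after an identity check (e.g., document
# plus face verification on a trusted phone app).

import hmac, hashlib, secrets

SERVER_KEY = secrets.token_bytes(32)  # held by the verification service

def issue_token(user_id: str, id_check_passed: bool) -> str | None:
    """Issued only after the user completes an identity check on a trusted device."""
    if not id_check_passed:
        return None
    return hmac.new(SERVER_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def admit_to_meeting(user_id: str, token: str) -> bool:
    """The meeting gate verifies the token instead of guessing whether the video is real."""
    expected = hmac.new(SERVER_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

if __name__ == "__main__":
    token = issue_token("cfo@example.com", id_check_passed=True)
    print(admit_to_meeting("cfo@example.com", token))       # True: verified caller
    print(admit_to_meeting("attacker@example.com", token))  # False: token doesn't match
```

The design point is that the check happens on a separate, already-verified channel, so no one has to decide in the moment whether the pixels on screen are genuine.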
This line of thinking is appealing — and gaining approval from people who watch the industry. "I like the authentication approach; it's much more straightforward," says The Alan Turing Institute's Swatton. "It's focused not on detecting something going wrong, but certifying that it's going right." After all, even when detection probabilities sound good, any margin of error can be scary: A detector that catches 95% of fakes will still allow for a scam 1 out of 20 times. That error rate is what alarmed Christian Perry, another entrepreneur who's entered the deepfake race. He saw it in the early detectors for text, where students and workers were being accused of using AI when they weren't. Authorship deceit doesn't pose the level of threat that deepfakes do, but text detectors are considered part of the scam-fighting family. Perry and his cofounder Devan Leos launched a startup called Undetectable in 2023, which now has over 19 million users and a team of 76. It began by building a sophisticated text detector, but then pivoted into image detection, and is now close to launching audio and video detectors as well. "You can use a lot of the same kind of methodology and skill sets that you pick up in text detection," says Perry. "But deepfake detection is a much more complicated problem." Related: Despite How the Media Portrays It, AI Is Not Really Intelligent. Here's Why. Finally, instead of trying to prevent deepfakes, some entrepreneurs are seeing the opportunity in cleaning up their mess. Luke and Rebekah Arrigoni stumbled upon this niche accidentally, by trying to solve a different terrible problem — revenge porn. It started one night a few years ago, when the married couple were watching HBO's Euphoria. In the show, a character's nonconsensual intimate image was shared online. "I guess out of hubris," Luke says, "our immediate response was like, We could fix this." At the time, the Arrigonis were both working on facial recognition technologies. So as a side project in 2022, they put together a system specifically designed to scour the web for revenge porn — then found some victims to test it with. They'd locate the images or videos, then send takedown notices to the websites' hosts. It worked. But valuable as this was, they could see it wasn't a viable business. Clients were just too hard to find. Then, in 2023, another path appeared. As the actors' and writers' strikes broke out, with AI being a central issue, Luke checked in with former colleagues at major talent agencies. He'd previously worked at Creative Artists Agency as a data scientist, and he was now wondering if his revenge-porn tool might be useful for their clients — though in a different way. It could also be used to identify celebrity deepfakes — to find, for example, when an actor or singer is being cloned to promote someone else's product. Along with feeling out other talent reps like William Morris Endeavor, he went to law and entertainment management firms. They were interested. So in 2023, Luke quit consulting to work with Rebekah and a third cofounder, Hirak Chhatbar, on building out their side hustle, Loti. "We saw the desire for a product that fit this little spot, and then we listened to key industry partners early on to build all of the features that people really wanted, like impersonation," Luke says. "Now it's one of our most preferred features. Even if they deliberately typo the celebrity's name or put a fake blue checkbox on the profile photo, we can detect all of those things." Using Loti is simple. 
A new client submits three real images and eight seconds of their voice; musicians also provide 15 seconds of singing a cappella. The Loti team puts that data into their system, and then scans the internet for that same face and voice. Some celebs, like Scarlett Johansson, Taylor Swift, and Brad Pitt, have been publicly targeted by deepfakes, and Loti is ready to handle that. But Luke says most of the need right now involves the low-tech stuff like impersonation and false endorsements. A recently-passed law called the Take It Down Act — which criminalizes the publication of nonconsensual intimate images (including deepfakes) and requires online platforms to remove them when reported — helps this process along: Now, it's much easier to get the unauthorized content off the web. Loti doesn't have to deal with probabilities. It doesn't have to constantly iterate or get huge datasets. It doesn't have to say "real" or "fake" (although it can). It just has to ask, "Is this you?" "The thesis was that the deepfake problem would be solved with deepfake detectors. And our thesis is that it will be solved with face recognition," says Luke, who now has a team of around 50 and a consumer product coming out. "It's this idea of, How do I show up on the internet? What things are said of me, or how am I being portrayed? I think that's its own business, and I'm really excited to be at it." Related: Why AI is Your New Best Friend... and Worst Enemy in the Battle Against Phishing Scams Will it all pay off? All tech aside, do these anti-deepfake solutions make for strong businesses? Many of the startups in this space are early-stage and venture-backed, so it's not yet clear how sustainable or profitable they can be. They're also "heavily investing in research and development to stay ahead of rapidly evolving generative AI threats," says The Insight Partners' Mittal. That makes you wonder about the economics of running a business that will likely always have to do that. Then again, the market for these startups' services is just beginning. Deepfakes will impact more than just banks, government intelligence, and celebrities — and as more industries awaken to that, they may want solutions fast. The question will be: Do these startups have first-mover advantage, or will they have just laid the expensive groundwork for newer competitors to run with? Mittal, for her part, is optimistic. She sees significant untapped opportunities for growth that go beyond preventing scams — like, for example, helping professors flag AI-generated student essays, impersonated class attendance, or manipulated academic records. Many of the current anti-deepfake companies, she predicts, will get acquired by big tech and cybersecurity firms. Whether or not that's Reality Defender's future, Colman believes that platforms like his will become integral to a larger guardrail ecosystem. He compares it to antivirus software: Decades ago, you had to buy an antivirus program and manually scan your files. Now, these scans are just built into your email platforms, running automatically. "We're following the exact same growth story," he says. "The only problem is the problem is moving even quicker." No doubt, the need will become glaring at some point soon. Farid at GetReal imagines a nightmare like someone creating a fake earnings call for a Fortune 500 company that goes viral. If GetReal's CEO, Matthew Moynahan, is right, then 2026 will be the year that gets the flywheel spinning for all these deepfake-fighting businesses. 
"There's two things that drive sales in a really aggressive way: a clear and present danger, and compliance and regulation," he says. "The market doesn't have either right now. Everybody's interested, but not everybody's troubled." That will likely change with increased regulations that push adoption, and with deepfakes popping up in places they shouldn't be. "Executives will connect the dots," Moynahan predicts. "And they'll start saying, 'This isn't funny anymore.'" Related: AI Cloning Hoax Can Copy Your Voice in 3 Seconds—and It's Emptying Bank Accounts. Here's How to Protect Yourself.

How Generative AI's 'Deepfake Economy' Is Hobbling Small Businesses

Yahoo

16-07-2025

Over the past few years, the potential uses of generative AI, both positive and negative, have been talked to death. However, there's one application of the technology that small business owners are saying is often overlooked: the deepfake economy. Several small business owners told Business Insider that since ChatGPT's debut three years ago, the deepfake economy has blown up. Now, scammers are using these deepfakes to pose as employees of a company, running cons that are wreaking havoc on the brands' reputations and bottom lines.

An unnamed finance clerk at engineering firm Arup told the outlet about a time he joined a video call with AI versions of his colleagues. One of these "colleagues," supposedly the company's chief financial officer, asked him to approve a series of overseas transfers worth more than $25 million. Believing that the request came from his boss, the finance clerk approved the transactions. Only after the money had been sent did he learn that the colleagues were actually deepfake recreations of his real coworkers.

The finance clerk isn't the only one being deceived by these impressionists. According to data from Chainabuse, TRM Labs' open-source fraud reporting platform, generative AI-enabled scams rose by 456% between May 2024 and April 2025, compared with the same period the year before. Another survey, from Nationwide Insurance, released in September found that 12% of small business owners had faced at least one deepfake scam within the previous year. Small businesses, the survey said, are more likely to fall victim to these types of scams because they lack the cybersecurity infrastructure of larger companies.

Rob Duncan, vice president of strategy at Netcraft, told Business Insider that he isn't surprised at the increase in highly personalized attacks against small businesses. Generative AI has made it much easier for inexperienced scammers to pose as brands and launch these scams. As AI continues to improve, "attackers can more easily spoof employees, fool customers, or impersonate partners across multiple channels," he said.

Many of the platforms used by small businesses, like Teams and Zoom, are getting better at detecting AI and weeding out accounts that don't have real people behind them. However, many experts worry that improved detection tools are making the AI problem worse. Beyond Identity CEO Jasson Casey told Business Insider that the data collected by platforms like Zoom and Teams is not only used to suss out deepfakes but to train sophisticated AI models. This creates a vicious cycle that becomes "an arms race defenders cannot win."

Casey and Robin Pugh, the executive director of non-profit Intelligence for Good, say that small businesses can best protect themselves from deepfake scams by focusing on confirming identities rather than disproving AI use. They also warn that these generative AI-based scams will not be going away anytime soon. Nina Etemadi, cofounder of a Philadelphia-based small business named Cake Life Bake Shop, agrees, telling Business Insider, "Doing business online gets more necessary and high risk every year. AI is just part of that."

Why Outsmarting AI-Powered Threats Means Upskilling Your Team

Forbes

16-07-2025

Vishaal "V8" Hariprasad, CEO and cofounder of Resilience, a leading cyber risk solution company.

Global investment in artificial intelligence-based cybersecurity solutions is estimated to top a whopping $135 billion by 2030. But as AI accelerates innovation, it's also dramatically reshaping cybersecurity. Security teams are now fighting on two critical fronts: fending off a wave of AI-powered attacks, while simultaneously trying to navigate and secure the AI systems their own organizations increasingly rely on.

One striking example occurred last year when an employee at U.K. engineering firm Arup joined a video call with what appeared to be the company's CFO and other executives. The conversation ended with the employee wiring $25 million to those executives. Except none of the people on the call were real. They were AI-generated deepfakes created to convincingly mimic the voices and faces of trusted team members.

Security teams are no longer just fending off human-led intrusions. They're facing AI-enhanced adversaries capable of launching scalable phishing campaigns, crafting flawless social engineering lures and tampering with the AI systems embedded in business workflows. For CISOs and security leaders, the stakes are clear: Either your teams evolve with the technology or they fall behind attackers who already have. So what does real readiness look like in this new environment? It's not just about adding AI tools to the stack, but rewiring how security teams think, train and respond.

1. Don't Assume AI Defends Itself

One of the biggest mistakes you can make is believing AI-enabled tools are turnkey, "set it and forget it" solutions. That's because these attitudes create a false sense of security, leaving exploitable blind spots for attackers with better AI fluency. While AI can be a powerful force multiplier in threat detection, these tools are ultimately only as effective as the humans behind them. For security teams, that means going beyond basic implementation and developing the skills to interrogate model behavior, understand edge-case vulnerabilities and assess risk across the full AI life cycle. Monitoring, tuning and testing are essential, but so is having the talent in place to know when and how to intervene. Upskilling in this context looks less like learning to code and more like building cross-functional fluency and a working understanding of how AI systems are built, where they're brittle and how they might be misused in the wild.

2. Start With The Basics, Then Build

AI may be the newest threat vector, but attackers haven't abandoned the old playbook. Tactics like phishing, credential theft and lateral movement still work—AI just makes them faster, more scalable and harder to detect. That's why core defenses like threat modeling, input validation and incident response remain essential. What's changed is the need to apply them with greater scrutiny, especially around how AI systems are built, deployed and potentially exploited. Encourage your team to study novel risks like deepfake-driven social engineering or LLM manipulation. New frameworks like MAESTRO can also offer an updated lens for understanding AI-specific threat models. And don't keep the conversation siloed in security. Loop in product, engineering and data science teams to surface potential vulnerabilities in AI applications across the business.

3. Be Hands-On

Reading about and staying current on the latest AI security trends is important, but the best defense stems from actively engaging with these threats. Create environments where defenders can safely simulate real-world scenarios, experiment with offensive and defensive AI techniques, and apply what they've learned. Whether it's sandbox labs, red-team exercises or AI-specific capture-the-flag competitions, practical immersion beats theoretical instruction every time. Partnering with ethical hackers and AI researchers can also uncover risks your internal team might miss. This kind of immersion builds critical muscle memory and helps defenders better understand how adversaries think.

4. Test Regularly Against Metrics That Actually Matter

You can't improve what you don't measure, but it's also true that not all metrics are created equal. As AI becomes more deeply integrated into your security stack, it's crucial to evaluate whether it's genuinely enhancing your team's effectiveness. This means going beyond traditional KPIs and basic compliance checklists. Think of it this way: Adopting AI tools should amplify your existing security posture, not replace it. While the methods may evolve, the core objective of an efficient and effective defense remains the same. Continue to rigorously track indicators like time to detect and respond to threats (whether AI-powered or traditional), the effectiveness of AI-in-the-loop tools, and how well teams perform during simulated incidents. These are the real-world signals that reveal whether your security team is truly evolving and adapting, or simply treading water. Don't let the allure of new AI capabilities overshadow the fundamental need to measure your team's overall response and effectiveness against all forms of cyber risk.

Upskilling For The Future

AI is fundamentally reshaping both how businesses operate and how they are targeted. The strongest defenders of the future will be the ones who understand how LLMs function, as well as how they fail. They'll be able to detect strange behaviors in high-volume systems and know how to adapt static playbooks into living, learning systems of defense. Another important piece of the puzzle is ensuring defense doesn't happen in a silo. Defending against AI-powered threats demands integrated, agile teams that span traditional departmental structures. It's not just about individual titles, but about fostering collaboration across key roles and functions. Security teams should be working hand in hand with data science and engineering teams, alongside those responsible for product development and IT infrastructure. These diverse skill sets and perspectives must operate in lockstep in order to foster a culture of continuous learning and collaboration.

By taking a proactive, integrated approach to upskilling for the AI era, your organization becomes far more adaptable and resilient against the ever-evolving threat landscape. This enables systems that can not only repel new AI-powered attacks but also continue to operate and recover swiftly even when incidents occur. Organizations that prioritize this foundational investment in their people will be the ones best prepared to meet tomorrow's sophisticated AI threats head-on and emerge stronger.

Forbes Business Council is the foremost growth and networking organization for business owners and leaders. Do I qualify?
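As a rough illustration of the response-time tracking that point 4 above argues for, here is a minimal sketch. The incident records are hypothetical and not drawn from any particular security product; a real program would pull them from a ticketing or SIEM system.

```python
# Minimal sketch of tracking mean time to detect (MTTD) and mean time to
# respond (MTTR) across incidents, as point 4 above suggests. The incident
# timestamps below are invented for illustration.

from datetime import datetime
from statistics import mean

incidents = [
    # (occurred, detected, contained)
    (datetime(2025, 6, 1, 9, 0),  datetime(2025, 6, 1, 10, 30), datetime(2025, 6, 1, 14, 0)),
    (datetime(2025, 6, 8, 22, 15), datetime(2025, 6, 9, 1, 0),  datetime(2025, 6, 9, 6, 45)),
    (datetime(2025, 6, 20, 13, 5), datetime(2025, 6, 20, 13, 20), datetime(2025, 6, 20, 15, 0)),
]

def hours(delta) -> float:
    return delta.total_seconds() / 3600

mttd = mean(hours(detected - occurred) for occurred, detected, _ in incidents)
mttr = mean(hours(contained - detected) for _, detected, contained in incidents)

print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")
```

Tracked over time and split by incident type (AI-powered versus traditional), these two numbers give a simple signal of whether a team is actually getting faster or merely adding tools.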

Tech giants scramble to meet AI's looming energy crisis

France 24

15-07-2025

AI depends entirely on data centers, which could consume three percent of the world's electricity by 2030, according to the International Energy Agency. That's double what they use today. Experts at McKinsey, a US consulting firm, describe a race to build enough data centers to keep up with AI's rapid growth, while warning that the world is heading toward an electricity shortage.

"There are several ways of solving the problem," explained Mosharaf Chowdhury, a University of Michigan professor of computer science. Companies can either build more energy supply -- which takes time and the AI giants are already scouring the globe to do -- or figure out how to consume less energy for the same computing power. Chowdhury believes the challenge can be met with "clever" solutions at every level, from the physical hardware to the AI software itself. For example, his lab has developed algorithms that calculate exactly how much electricity each AI chip needs, reducing energy use by 20-30 percent.

'Clever' solutions

Twenty years ago, operating a data center -- encompassing cooling systems and other infrastructure -- required as much energy as running the servers themselves. Today, operations use just 10 percent of what the servers consume, says Gareth Williams from consulting firm Arup. This is largely thanks to the focus on energy efficiency. Many data centers now use AI-powered sensors to control temperature in specific zones rather than cooling entire buildings uniformly. This allows them to optimize water and electricity use in real time, according to McKinsey's Pankaj Sachdeva.

For many, the game-changer will be liquid cooling, which replaces the roar of energy-hungry air conditioners with a coolant that circulates directly through the servers. "All the big players are looking at it," Williams said. This matters because modern AI chips from companies like Nvidia consume 100 times more power than servers did two decades ago. Amazon's world-leading cloud computing business, AWS, last week said it had developed its own liquid cooling method for the Nvidia GPUs in its servers -- avoiding having to rebuild existing data centers. "There simply wouldn't be enough liquid-cooling capacity to support our scale," Dave Brown, vice president of compute and machine learning services at AWS, said in a YouTube video.

US vs China

For McKinsey's Sachdeva, a reassuring factor is that each new generation of computer chips is more energy-efficient than the last. Research by Purdue University's Yi Ding has shown that AI chips can last longer without losing performance. "But it's hard to convince semiconductor companies to make less money" by encouraging customers to keep using the same equipment longer, Ding added. Yet even if more efficiency in chips and energy consumption is likely to make AI cheaper, it won't reduce total energy consumption. "Energy consumption will keep rising," Ding predicted, despite all efforts to limit it. "But maybe not as quickly."

In the United States, energy is now seen as key to keeping the country's competitive edge over China in AI. In January, Chinese startup DeepSeek unveiled an AI model that performed as well as top US systems despite using less powerful chips -- and by extension, less energy. DeepSeek's engineers achieved this by programming their GPUs more precisely and skipping an energy-intensive training step that was previously considered essential. China is also feared to be leagues ahead of the US in available energy sources, including from renewables and nuclear.
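The article doesn't detail how Chowdhury's lab calculates each chip's electricity needs, so the following is only a toy sketch of one version of the per-chip power-budgeting idea: try progressively lower GPU power caps and keep the lowest one that still meets a latency target. The latency model and numbers are invented; a real system would profile actual workloads and apply caps through vendor tooling (for example, nvidia-smi's power-limit setting).

```python
# Toy sketch of per-chip power budgeting: pick the lowest GPU power cap that
# still meets a latency target. measure_latency_ms() is a stand-in for real
# profiling; the numbers are invented for illustration.

def measure_latency_ms(power_cap_watts: int) -> float:
    # Stand-in model: latency rises as the cap drops (invented relationship).
    return 40.0 + 12000.0 / power_cap_watts

def choose_power_cap(caps: list[int], latency_target_ms: float) -> int:
    """Return the lowest cap whose measured latency still meets the target."""
    for cap in sorted(caps):              # try the most frugal cap first
        if measure_latency_ms(cap) <= latency_target_ms:
            return cap
    return max(caps)                      # fall back to full power

if __name__ == "__main__":
    caps = [200, 250, 300, 350, 400]      # watts
    best = choose_power_cap(caps, latency_target_ms=90.0)
    print(f"chosen power cap: {best} W, latency ~ {measure_latency_ms(best):.0f} ms")
```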

IWG-Arup: Hybrid work may slash businesses' real estate costs by up to 55%

Independent Singapore

14-07-2025

SINGAPORE: As much as hybrid work could improve employee productivity and reduce employee turnover, it could also cut businesses' real estate costs by up to 55%, Singapore Business Review reported, citing a US-modelled survey by the International Workplace Group (IWG) and global engineering consultancy Arup. HR Asia reported this could save firms about US$58 billion (S$74 billion) each year by 2030 and up to US$122 billion by 2045.

The IWG Hybrid Working Productivity Report also found that hybrid work could add US$219 billion in Gross Value Added (GVA) each year by 2030 and up to US$566 billion by 2045, thanks to increased productivity, lower staff turnover and replacement costs, and portfolio savings. The report said productivity could rise by 11% as employees benefit from shorter commutes, fewer distractions, and more time focusing on their tasks. It also found that up to 40% of the time workers saved on commuting was spent doing more work — adding about 170 extra productive hours per employee each year.

In fact, employees in flexible workspaces were 67% more likely to rate their productivity as 'excellent' than those working from home. At the same time, flexible setups could lower voluntary turnover by as much as 20%, leading to potential yearly savings of US$22 billion in recruitment and training costs by 2030 and up to US$45 billion by 2045. Notably, employees are three times more likely to stay in jobs that offer flexible work options.

Singapore Business Review reported, citing a Knight Frank survey, that 30% of business leaders in the city-state now consider flexible working a key part of their real estate decisions. Dr Issac Lim, a social scientist and founder of Anthro Insights, said Singapore is 'particularly progressive' when it comes to hybrid work. However, he noted that 'to unlock true productivity, businesses must be intentional—designing flexible structures around the nature of work, technology, and outcomes.'

In the city-state, Gen Z workers reported higher job satisfaction (77%) and better work-life balance (34%) through hybrid work. In September, a study by IWG and consultancy Development Economics also found that 76% of workers saved money each month by working closer to home. For example, a 27-year-old office worker in Singapore's Central Business District could save around S$3,900 a year by working closer to home just two days a week. /TISG

Read also: Singapore businesses to receive up to S$100,000 grant in October as they face a new tariff environment; SMEs to get 'more generous' support
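As a back-of-envelope check on the 170-hour figure cited above: if roughly 40% of the commute time saved is reinvested in work, the implied commute being avoided is about 425 hours a year. The working-days figure below is an assumption for illustration, not a number from the report.

```python
# Back-of-envelope check of the report's figures quoted above.
extra_hours = 170          # productive hours reclaimed per employee per year (from the report)
reinvested_share = 0.40    # share of saved commute time spent doing more work (from the report)
working_days = 245         # assumed working days per year (illustrative assumption)

implied_commute_hours = extra_hours / reinvested_share
per_day_minutes = implied_commute_hours / working_days * 60
print(f"implied commute avoided: {implied_commute_hours:.0f} h/year ~ {per_day_minutes:.0f} min/day")
```

That works out to roughly an hour and three quarters of commuting per working day, which is in the range of a typical round trip for a city-centre office worker, so the report's arithmetic is at least internally consistent.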
