Latest news tagged #GenAI-enabled
Yahoo
15 hours ago
- Business
- Yahoo
Generative AI is making running an online business a nightmare
Sometime last year, Ian Lamont's inbox began piling up with inquiries about a job listing. The Boston-based owner of a how-to guide company hadn't opened any new positions, but when he logged onto LinkedIn, he found one for a "Data Entry Clerk" linked to his business's name and logo. Lamont soon realized his brand was being scammed, which he confirmed when he came across the profile of someone purporting to be his company's "manager." The account had fewer than a dozen connections and an AI-generated face. He spent the next few days warning visitors to his company's site about the scam and convincing LinkedIn to take down the fake profile and listing. By then, more than twenty people had reached out to him directly about the job, and he suspects many more had applied.

Generative AI's potential to bolster business is staggering. According to one 2023 estimate from McKinsey, in the coming years it's expected to add more value to the global economy annually than the entire GDP of the United Kingdom. At the same time, GenAI's ability to almost instantaneously produce authentic-seeming content at mass scale has created an equally staggering potential to harm businesses. Since ChatGPT's debut in 2022, online businesses have had to navigate a rapidly expanding deepfake economy, where it's increasingly difficult to discern whether any text, call, or email is real or a scam. In the past year alone, GenAI-enabled scams have quadrupled, according to the scam-reporting platform Chainabuse. In a survey of small-business owners conducted last fall by the insurer Nationwide, a quarter reported having faced at least one AI scam in the past year. Microsoft says it now shuts down nearly 1.6 million bot-based signup attempts every hour. Renée DiResta, who researches online adversarial abuse at Georgetown University, tells me she calls the GenAI boom the "industrial revolution for scams": it automates fraud, lowers barriers to entry, reduces costs, and increases access to targets.

The consequences of falling for an AI-manipulated scam can be devastating. Last year, a finance clerk at the engineering firm Arup joined a video call with people he believed were his colleagues. It turned out that each of the attendees was a deepfake recreation of a real coworker, including the organization's chief financial officer. The fraudsters asked the clerk to approve overseas transfers amounting to more than $25 million, and, assuming the request had come from the CFO, he green-lit the transaction.

Business Insider spoke with professionals in several industries — including recruitment, graphic design, publishing, and healthcare — who are scrambling to keep themselves and their customers safe from AI's ever-evolving threats. Many feel like they're playing an endless game of whack-a-mole, and the moles are only multiplying and getting more cunning.

Last year, fraudsters used AI to build a French-language replica of Oishya, an online Japanese knife store, and sent automated scam offers to the company's 10,000-plus followers on Instagram. The fake company told customers of the real one that they had won a free knife and needed only to pay a small shipping fee to claim it — and nearly 100 people fell for it. Kamila Hankiewicz, who has run Oishya for nine years, learned about the scam only after several victims contacted her asking how long they needed to wait for the parcel to arrive. It was a rude awakening for Hankiewicz.
She's since ramped up the company's cybersecurity and now runs campaigns teaching customers how to spot fake communications. Though many of her customers were upset about being defrauded, Hankiewicz helped them file reports with their financial institutions for refunds. Rattling as the experience was, "the incident actually strengthened our relationship with many customers who appreciated our proactive approach," she says.

Rob Duncan, the VP of strategy at the cybersecurity firm Netcraft, isn't surprised by the surge in personalized phishing attacks against small businesses like Oishya. GenAI tools now allow even a novice lone wolf with little technical know-how to clone a brand's image and write flawless, convincing scam messages within minutes, he says. With cheap tools, "attackers can more easily spoof employees, fool customers, or impersonate partners across multiple channels," Duncan says.

Though mainstream AI tools like ChatGPT have precautions in place when you ask them to infringe copyright, there are now plenty of free or inexpensive online services that let users replicate a business's website from simple text prompts. Using a tool called Llama Press, I was able to produce a near-exact clone of Hankiewicz's store and personalize it with a few words of instruction. (Kody Kendall, Llama Press's founder, says cloning a store like Oishya's doesn't trigger a safety block because there can be legitimate reasons to do so, such as a business owner migrating their website to a new hosting platform. He adds that Llama Press relies on Anthropic's and OpenAI's built-in safety checks to weed out bad-faith requests.)

Text is just one front in the war businesses are fighting against malicious uses of AI. With the latest tools, it takes a solo adversary — again, with no technical expertise — as little as an hour to create a convincing fake job candidate for a video interview. Tatiana Becker, a tech recruiter based in New York, tells me deepfake job candidates have become an "epidemic." Over the past couple of years, she has had to frequently reject scam applicants who use deepfake avatars to cheat on interviews. At this point, she can discern some of their telltale signs of fakery, including glitchy video quality and a candidate's refusal to change any element of their appearance during the call, such as taking off their headphones. Now, at the start of every interview, she asks for the candidate's ID and poses more open-ended questions, like what they like to do in their free time, to suss out whether they're human. Ironically, she's made herself more robotic at the outset of interviews to sniff out the robots.

Nicole Yelland, a PR executive, says she found herself on the opposite end of deepfakery earlier this year. A scammer impersonating a startup recruiter approached her over email, saying he was looking for a head of comms, with an offer package that included generous pay and benefits. The purported recruiter even shared an exhaustive slide deck, decorated with AI-generated visuals, outlining the role's responsibilities and benefits. Enticed, she scheduled an interview. During the video meeting, however, the "hiring manager" refused to speak and instead asked Yelland to type her responses to written questions in the Microsoft Teams chat.
Her alarm bells really went off once the interviewer started asking her to share a series of private documents, including her driver's license. Yelland now runs a background check with tools like Spokeo before engaging with any stranger online. "It's annoying and takes more time, but engaging with a spammer is more annoying and time-consuming, so this is where we are," she says.

While videoconferencing platforms like Teams and Zoom are getting better at detecting AI-generated accounts, some experts say the detection itself risks creating a vicious cycle. The data these platforms collect on what's fake is ultimately used to train more sophisticated GenAI models, which helps those models get better at evading fakery detectors and fuels "an arms race defenders cannot win," says Jasson Casey, the CEO of Beyond Identity, a cybersecurity firm that specializes in identity theft. Casey and his company believe the focus should instead be on authenticating a person's identity. Beyond Identity sells tools that plug into Zoom and verify meeting participants through their device's biometrics and location data. If it detects a discrepancy, the tools label the participant's video feed as "unverified." Florian Tramèr, a computer science professor at ETH Zurich, agrees that authenticating identity will likely become more essential to ensure that you're always talking to a legitimate colleague.
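Casey's alternative is easier to picture with a concrete sketch. What follows is a hypothetical illustration of device-bound identity verification in general, not Beyond Identity's actual product or API; the attestation format, field names, and helper functions are all invented for the example.

```python
# Hypothetical sketch of device-bound participant verification: a fresh
# challenge must be signed by a key enrolled on the user's device, and a
# coarse location check catches joins from unexpected places. All names,
# fields, and the attestation format are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Attestation:
    user_id: str              # who the participant claims to be
    device_id: str            # fingerprint of the device presenting the feed
    signed_challenge: bytes   # fresh server challenge signed by the device key
    country: str              # coarse location reported at join time

def label_feed(
    att: Attestation,
    enrolled_devices: dict[str, set[str]],   # user_id -> enrolled device ids
    usual_country: dict[str, str],           # user_id -> expected location
    signature_ok: Callable[[str, str, bytes], bool],
) -> str:
    """Return the label to overlay on a participant's video feed."""
    if att.device_id not in enrolled_devices.get(att.user_id, set()):
        return "unverified"  # device was never enrolled for this user
    if not signature_ok(att.user_id, att.device_id, att.signed_challenge):
        return "unverified"  # challenge not signed by the enrolled key
    if usual_country.get(att.user_id) != att.country:
        return "unverified"  # location discrepancy: flag rather than block
    return "verified"

# Example: an impostor joining from an unenrolled laptop gets flagged,
# no matter how convincing the video looks.
print(label_feed(
    Attestation("cfo", "laptop-9f2", b"...", "GB"),
    enrolled_devices={"cfo": {"laptop-1a7"}},
    usual_country={"cfo": "GB"},
    signature_ok=lambda user, device, sig: True,  # stub verifier
))  # -> unverified
```

The design rests on a simple observation: a deepfake can imitate a face and a voice on camera, but it cannot produce a valid signature from a key enrolled on the real employee's device.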
It's not just fake job candidates that entrepreneurs now have to contend with; it's also fake versions of themselves. In late 2024, scammers ran ads on Facebook for a video featuring Jonathan Shaw, the deputy director of the Baker Heart and Diabetes Institute in Melbourne. Although the person in it looked and sounded exactly like Dr. Shaw, the voice had been deepfaked and edited to say that metformin — a first-line treatment for type 2 diabetes — is "dangerous" and that patients should instead switch to an unproven dietary supplement. The fake ad was accompanied by a fake written news interview with Shaw. Several of his clinic's patients, believing the video was genuine, reached out asking how to get hold of the supplement. "One of my longstanding patients asked me how come I continued to prescribe metformin to him, when 'I' had said on the video that it was a poor drug," Shaw tells me. Eventually he was able to get Facebook to take down the video.

Then there's the equally vexing issue of AI slop — an inundation of low-quality, mass-produced images and text that is flooding the internet and making it ever more difficult for the average person to tell what's real and what's fake. In her research, DiResta found instances where social platforms' recommendation engines promoted malicious slop: scammers would put up images of nonexistent rental properties, appliances, and other items, and users frequently fell for them and gave away their payment details. On Pinterest, AI-generated "inspo" posts have so plagued people's mood boards that Philadelphia-based Cake Life Shop now often receives orders from customers asking it to recreate what are actually AI-generated cakes. In one image shared with Business Insider, the cake resembles a moss-filled rainforest and features a functional waterfall. Thankfully for cofounder Nima Etemadi, most customers are "receptive to hearing about what is possible with real cake after we burst their AI bubble," he says.

Similarly, AI-generated books have swarmed Amazon and are now hurting publishers' sales. Pauline Frommer, the president of the travel guide publisher Frommer Media, says AI-generated guidebooks have managed to reach the top of bestseller lists with the help of fake reviews. An AI publisher buys a few Prime memberships, sets a guidebook's ebook price to zero, and then leaves seemingly "verified" reviews by downloading its own copies for free. These practices, she says, "will make it virtually impossible for a new, legitimate brand of guidebook to enter the business right now." Ian Lamont says he received an AI-generated guidebook as a gift last year: a text-only guide to Taiwan, with no pictures or maps.

While the FTC now considers it illegal to publish fake, AI-generated product reviews, official policies haven't yet caught up with AI-generated content itself. Platforms like Pinterest and Google have started to watermark and label AI-generated posts, but since the labeling isn't yet error-free, some worry these measures may do more harm than good. DiResta fears that one unintended consequence of ubiquitous AI labels would be "label fatigue," where people blindly assume that unlabeled content is always "real." "It's a potentially dangerous assumption if a sophisticated manipulator, like a state actor's intelligence service, manages to get disinformation content past a labeler," she says.

For now, small-business owners should stay vigilant, says Robin Pugh, the executive director of Intelligence for Good, a nonprofit that helps victims of internet-enabled crimes. They should always validate that they're dealing with an actual human and that the money they're sending is actually going where they intend it to go. Etemadi of Cake Life Shop recognizes that as much as GenAI can help his business become more efficient, scam artists will use the same tools to become just as efficient. "Doing business online gets more necessary and high-risk every year," he says. "AI is just part of that."

Shubham Agarwal is a freelance technology journalist from Ahmedabad, India, whose work has appeared in Wired, The Verge, Fast Company, and more. Read the original article on Business Insider


Los Angeles Times
22-06-2025
- Business
- Los Angeles Times
Early-Stage Technology Disruptions and Trends Set to Define the Future of Business Systems
Researchers point to technologies addressing GenAI-enabled code architecture, disinformation security and surface asset management as the most likely to be widely adopted by businesses by 2030, while leaders find themselves reassessing cloud usage without overlooking the ever-growing need for effective cybersecurity solutions.

Through its research and surveys of C-suite executives, Gartner, Inc. has identified the emerging technology disruptions most likely to impact businesses and define the future of business systems. Technology leaders are clearly prioritizing these over the next five years, as they present competitive opportunities in the near term and will eventually become standard throughout businesses. 'Technology leaders must take action now to gain a first-mover advantage with these technologies,' said Bill Ray, distinguished VP analyst at Gartner. 'Innovative advancements like generative AI-enabled code architecture, disinformation security and Earth intelligence will provide the differentiation needed to help enterprises pull ahead of the pack in terms of data and product offerings.'

Each disruptor is significant in its own right, but in combination they start to define broader emerging solutions to new business practices. For example, advancing GenAI technologies will spawn new solutions around Earth intelligence and business simulation, spur the expansive growth of domain-specific language models and lead to higher-functioning tools. GenAI solutions using free-form text and multimedia inputs and outputs will displace the conventional form-oriented, sequential UI in established enterprise applications and enable new user scenarios. 'To remain competitive, traditional enterprise application software vendors will need to refactor applications to serve composable GenAI solutions that are invoked on demand via textual and multimodal prompts,' said Ray Valdes, VP analyst at Gartner. Because of this, Gartner predicts that by 2029, more than 50% of user interactions linked to enterprise business processes will leverage large language models to bypass the UI layer in traditional enterprise applications, up from less than 5% today.
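The UI-bypass pattern Valdes describes can be illustrated with a short sketch. Everything here is a hypothetical toy, not Gartner's or any vendor's design: parse_intent is a keyword stub standing in for an LLM call (in practice, a function-calling request to a model), and the action names, vendors, and amounts are invented.

```python
# Hypothetical sketch: a free-form prompt is routed to a registered business
# function, bypassing the form-based UI. parse_intent() is a stub standing in
# for an LLM; all action names and arguments are invented examples.

def create_purchase_order(vendor: str, amount: float) -> str:
    return f"PO created for {vendor}: ${amount:,.2f}"

def file_expense(category: str, amount: float) -> str:
    return f"Expense filed under {category}: ${amount:,.2f}"

ACTIONS = {
    "create_purchase_order": create_purchase_order,
    "file_expense": file_expense,
}

def parse_intent(prompt: str) -> tuple[str, dict]:
    """Map a prompt to an action name and arguments. A real system would
    delegate this step to an LLM rather than keyword matching."""
    if "purchase order" in prompt.lower():
        return "create_purchase_order", {"vendor": "Acme Corp", "amount": 1200.0}
    return "file_expense", {"category": "travel", "amount": 89.50}

def handle(prompt: str) -> str:
    action, args = parse_intent(prompt)
    return ACTIONS[action](**args)  # invoke the business function directly

print(handle("Raise a purchase order for Acme Corp for $1,200"))
```

The design point is that the enterprise functions stay stable while the LLM replaces the form layer: adding a capability means registering another function, not building another screen.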
Disinformation security is an emerging discipline focused on threats from outside the corporate-controlled network. It includes a suite of technologies, such as deepfake detection, impersonation prevention and reputation protection, which can address disinformation to help enterprises discern trust, protect their brand and secure their online presence. Gartner predicts that by 2030, at least half of enterprises will have adopted products or services to address disinformation security, up from less than 5% in 2024. 'Disinformation attacks use external infrastructure, like social media, and originate from areas with limited legal oversight,' said Alfredo Ramirez IV, senior director analyst at Gartner. 'Tech leaders must add disinformation-proofing to products by using AI/machine learning for content verification and data provenance tracking to help users discern the truth.'

Gartner also predicts that by 2028, 80% of major Earth surface assets globally will be monitored by active satellites. Earth intelligence tech uses AI to analyze satellite, aerial and ground data to monitor Earth's assets and activities, providing insights for decision-making. 'That doesn't mean maps and charts. Earth intelligence is delivering numbers on global nickel production, theme park revenue and the health of wheat crops, to name just a few,' said Ray.

Given the breadth of applications, Earth intelligence is applicable to all industries and enterprises. Defense has been the first adopter, but improvements in the quality of data and analysis techniques have rapidly expanded the use cases. The Earth intelligence market is now divided between those who capture the data, those who interpret and analyze it, and those who generate industry-specific insights. 'Earth intelligence applies to every business,' said Ray. 'Enterprises can gain an early advantage by creatively and strategically applying Earth intelligence to significantly enhance specific functionalities of existing systems or to compete via net-new capabilities.'

In a separate research report, Gartner has also announced the top trends shaping the future of cloud adoption over the next four years: cloud dissatisfaction, AI/machine learning (ML), multicloud, sustainability, digital sovereignty and industry solutions. Joe Rogus, director, advisory at Gartner, said, 'These trends are accelerating the shift in how cloud is transforming from a technology enabler to a business disruptor and necessity for most organizations. Over the next few years, cloud will continue to unlock new business models, competitive advantages and ways of achieving business missions.'

Cloud adoption continues to grow, but not all implementations succeed. Gartner predicts 25% of organizations will have experienced significant dissatisfaction with their cloud adoption by 2028, due to unrealistic expectations, suboptimal implementation and/or uncontrolled costs. To remain competitive, enterprises need a clear cloud strategy and effective execution; Gartner research indicates that organizations that establish that strategic focus up front will see their cloud dissatisfaction decrease by 2029.

Gartner also predicts 50% of cloud computing resources will be devoted to AI workloads by 2029, up from less than 10% today. 'This all points to a fivefold increase in AI-related cloud workloads by 2029,' said Rogus. 'Now is the time for organizations to assess whether their data centers and cloud strategies are ready to handle this surge in AI and ML demand. In many cases, they might need to bring AI to where the data is to support this growth.'

Many organizations that have adopted multicloud architecture find connecting to and between providers a challenge. This lack of interoperability between environments can slow cloud adoption, with Gartner predicting more than 50% of organizations will not get the expected results from their multicloud implementations by 2029. Gartner recommends identifying specific use cases and planning for distributed apps and data that could benefit from a cross-cloud deployment model, which enables workloads to operate collaboratively across different cloud platforms as well as across on-premises and colocation facilities.

AI adoption, tightening privacy regulations and geopolitical tensions are also driving demand for sovereign cloud services. Organizations will be increasingly required to protect data, infrastructure and critical workloads from control by external jurisdictions and foreign government access. Gartner predicts over 50% of multinational organizations will have digital sovereignty strategies by 2029, up from less than 10% today. 'As organizations proactively align their cloud strategies to address digital sovereignty requirements, there are already a wide range of offerings that will support them,' said Rogus.
'However, it's important they understand exactly what their requirements are, so they can select the right mix of solutions to safeguard their data and operational integrity.'

With all the talk of businesses onboarding GenAI and other emerging technology solutions, it's easy to overlook cybersecurity tools. CEOs, however, are not forgetting to lock the proverbial door: Gartner's research shows that a whopping 85% of CEOs surveyed say cybersecurity is critical for business growth. In a survey of 456 CEOs and other senior business executives, 61% said they are concerned about cybersecurity threats, driven in large part by AI's growing role in commercial activity and the political debates about the sourcing and use of advanced technologies. As risk thresholds shift, they view cybersecurity as a key driver of growth.

'Cybersecurity is no longer just about protection; it's a critical driver for business growth,' said David Furlonger, distinguished vice president analyst and Gartner fellow. 'With 85% of CEOs recognizing its importance, security leaders have a unique opportunity to demonstrate the value of cybersecurity investments not only in safeguarding assets but also in enabling strategic business objectives.'

'Effective communication is key,' said Furlonger. 'CEOs should highlight the role of security leaders in both protecting the business and enhancing cybersecurity to drive growth. This involves, for example, assessing risks in foreign markets and intellectual property protection. Security leaders are positioned to significantly influence value generation, and they should communicate how cybersecurity aids enterprise growth.' With regulatory changes and cybersecurity threats challenging competitiveness, CEOs said they see a direct link between cybersecurity capabilities and enterprise growth.

Meanwhile, the C-suite's comfort level with AI is far from established. Only 44% of CIOs are deemed by their CEOs to be 'AI-savvy,' according to the Gartner data. The survey revealed that 77% of CEOs believe AI is ushering in a new business era, yet they feel their organization's leading technology experts lack the knowledge and capabilities to support, drive or accelerate business outcomes in this evolving landscape. 'We have never seen such a disproportionate gap in CEOs' impressions about technological disruption,' said Furlonger. 'AI is not just an incremental change from digital business. AI is a step change in how business and society work. A significant implication is that if savviness across the C-suite is not rapidly improved, competitiveness will suffer and corporate survival will be at stake.'

CEOs perceive even the CIO, chief information security officer (CISO) and chief data officer (CDO) as lacking AI savviness. CEOs highlighted that the top two factors limiting AI's deployment and use are the inability to hire adequate numbers of skilled people and an inability to calculate value or outcomes. 'CEOs have shifted their view of AI from just a tool to a transformative way of working,' said Jennifer Carter, principal analyst at Gartner. 'This change has highlighted the importance of upskilling. As leaders recognize AI's potential and its impact on their organizations, they understand that success isn't just about hiring new talent. Instead, it's about equipping their current employees with the skills needed to seamlessly incorporate AI into everyday tasks.'


Business Wire
19-06-2025
- Business
- Business Wire
Hanshow Showcases Retail Media and Store Intelligence at 2025 CGF Global Summit
AMSTERDAM--(BUSINESS WIRE)--Hanshow, a global leader in digital retail solutions, underscored its leadership in AI, IoT, and Retail Media at the 2025 Global Summit of the Consumer Goods Forum (CGF), held from June 11 to 13 at RAI Amsterdam. From an immersive 'Future Store' showcase in the I-Zone to moderating one of the Summit's most anticipated panel discussions, Hanshow demonstrated how its integrated platform is driving the next wave of store transformation.

Inside the Future Store: Hanshow's Real-Time Innovation at I-Zone

At Booth No. 8 in the I-Zone, Hanshow presented an interactive showcase themed 'Powering Your Future Store,' highlighting four core pillars of the future store: real-time pricing and shelf operations powered by ESLs; smart carts that enhance the shopping experience with autonomous navigation and seamless checkout; GenAI-enabled journeys offering contextual promotions and personalized assistance; and green technologies such as solar storage charging systems and in-store energy-saving solutions. Visitors explored how these technologies interoperate to elevate operational efficiency, enhance the shopper experience, and advance sustainability. The showcase drew strong interest from retailers, tech leaders, and media, offering a vivid glimpse into how Hanshow is bridging digital and physical retail to deliver a unified, scalable transformation.

AI, IoT & RMNs: Driving the Next Wave of Retail Ecosystem Transformation

On June 12 in the Plenary Hall, a full house of industry attendees was eager to hear how AI, IoT, and Retail Media are reshaping the retail ecosystem. Philippe Brochard, Tech and Innovation Advisory Board Member at Hanshow and a former retail CEO, moderated the plenary panel 'Redesigning Retail: AI, IoT and RMN – Empowering Tech-Driven Experiences and Sustainable Growth', joined by:

- Klaus Smets, Vice President, Hanshow Europe
- Bas Komen, Director of Sales & Marketing for Retail Media, Albert Heijn
- Bart Zoetmulder, Head of Market, Havas Media Netherlands

Philippe opened with four pivotal shifts: the rise of Retail Media Networks, the need for AI readiness, stores as digital platforms, and leadership in sustainable transformation. Klaus Smets framed Hanshow as a key enabler connecting retailers, brands, and media agencies. He emphasized that the next frontier in Retail Media is in-store, where shelves, carts, and screens form a unified media network. 'By turning physical stores into data-driven, omnichannel touchpoints, we can synchronize availability, pricing, and messaging right where decisions are made,' he noted. Klaus added that Hanshow's all-in-one platform, combining ESLs, smart carts, and digital signage, enables intelligent, media-capable stores powered by AI, backed by strong local service, and anchored in ESG principles. Bas Komen detailed how Albert Heijn's Retail Media Services aim to build brands and drive conversion for CPG brands, built on insights and strategic partnerships. Bart Zoetmulder emphasized Havas's shift to a tech-first agency model, leveraging real-time shopper data and 'Havas Forecast' to scale personalization.

The panel concluded that retail transformation requires collaboration across media, retail, and technology, anchored in the store. Hanshow's mission is to bridge these domains and deliver actionable, data-driven in-store experiences.
Shaping the Future: Smarter, Greener, and Closer to Clients

Hanshow will continue investing in IoT, AI, and communication technologies across its product lines: ESLs, digital signage, smart carts, and robots. The company is expanding local service capabilities across Europe, the Americas, and APAC to ensure faster deployment and agile response. Committed to ESG-aligned innovation, Hanshow aims to reduce environmental impact and create long-term value through deeper collaboration with global tech and regional partners. 'We're not just building technology, we're co-designing the digital roadmap for the next era of retail with our clients,' said Shiguo Hou, CEO of Hanshow. 'Innovation must empower, not isolate.'

Business Standard
01-06-2025
- Business
- Business Standard
Indian companies struggle to offer clarity, guidance on GenAI use: Report
Enterprises across business verticals in India are struggling to provide the structure, access, and clarity needed to support the use of generative AI (GenAI) at workplaces, according to a report by recruitment firm Michael Page India. The report, titled 'Talent Trends India 2025', surveyed about 3,000 professionals across various experience levels in the country and pointed out that despite growing access to GenAI tools, many professionals remain unsure how these technologies will shape their careers.

'The disconnect between GenAI rollout and employee confidence has broader implications. When individuals cannot see how emerging technologies support their future, hesitation grows, and engagement can decline. In a GenAI-enabled workplace, clarity isn't just a support function – it's essential to building trust and retaining talent in times of rapid change,' the report said.

Forty-two per cent of professionals in India view GenAI as a threat to job security as deeper concerns surface regarding its use and implications. The figure rises to 44 per cent among middle-level management, while top management, at 30 per cent, feels the least threatened. Sixty per cent of those surveyed believe GenAI will affect their long-term career path, the report found. This uncertainty points to a broader readiness gap, one not just about technical skills but about trust, guidance, and future alignment. Many employees may not be resistant to GenAI, but without clear direction, they feel under-equipped to make the most of it.

According to the report, employee sentiment on GenAI preparedness is mixed even though 80 per cent of professionals have access to employer-provided GenAI tools: 31 per cent say their employer is preparing them very well, 22 per cent feel fairly well supported, and 16 per cent each describe the support as average or say they feel unprepared.

Besides clarity on the use of GenAI tools, employees are also asking about salary and career expectations, work arrangement policy, transparency of company culture, and approach to inclusivity. 'Candidates are becoming increasingly focused on transparency and alignment with their personal and professional goals. They are seeking employers who offer clarity – not only on salary and flexibility but also on culture, values, and the responsible use of emerging technologies,' Nilay Khandelwal, senior managing director at Michael Page India and Singapore, said in a statement.

Workplace arrangement policy, a topic that has gained importance since the pandemic, shows signs of stabilisation as most companies adopt a hybrid policy, with 54 per cent saying they were working more days in the office compared to a year earlier. Remote work has also remained steady (21 per cent vs 23 per cent last year), and the proportion of professionals experiencing no change in their work setup has nudged up slightly from 21 per cent to 22 per cent.

India leads the region in workplace trust, with 61 per cent of professionals expressing high or complete trust in their leadership, well above the APAC (57 per cent) and global (49 per cent) averages. Transparency is also a standout strength, with 65 per cent of employees rating their organisations as open and communicative, the report stated.