
Generative AI's Benefit Extends Well Beyond The Office
By Maarten Rikken, SAP
There are bold predictions about the impact of generative AI: that it's set to affect almost 40% of global jobs and increase global GDP by $7 trillion. The predictions tend to focus on knowledge workers, so it can be easy to forget that generative AI's impact is being felt by operational and frontline workers, too.
This isn't to say generative AI's benefits for knowledge workers aren't compelling. It's already eliminating repetitive tasks and increasing efficiency. The thing is, it's also providing significant benefits for workers in the field, especially for organizations like SA Power Networks, the sole electricity distributor in South Australia.
South Australia is a large state, and SA Power Networks' service technicians must maintain electricity poles and infrastructure across a vast space, including in rural and metropolitan areas. Field technicians often need information, such as the pole's installation date, known hazards, voltage, and circuit diagrams, to do their jobs.
"When our crews are out there looking at a pole in a storm, for example, they need information about what has happened in the past," says Jason Anthony, Head of Enterprise Applications, SA Power Networks. "The difficulty is bringing that information to a person who's in a raised work platform 10 meters off the ground."
To get that information into the hands of their crews, SA Power Networks built a custom generative AI application that lets them search through relevant information with natural language.
Ways generative AI benefits the field
A few qualities of generative AI stand out for SA Power Networks' field technicians. Gen AI is accessible, and it excels at quickly sifting through vast amounts of data to surface relevant information. That accelerates research and gets service technicians the answers they need to do their jobs. It's this combination of analytical power and accessibility that makes Gen AI so attractive to people in the field.
"Gen AI has allowed us to put 50 years of information into the hands of our field technicians with a simple query, either through natural language or presenting it based on contextual information," says Matt Pritchard, Head of Architecture and Data, SA Power Networks.
Natural language interfaces (NLIs) let people interact in a way that feels natural, through spoken or written language, without learning specific commands or navigating complex systems. When used in customer service, they often lead to more self-service and fewer support interactions, which improves both the customer and employee experience.
SA Power Networks' need to analyze 50 years of data also speaks to generative AI's scalability and customization. Building a generative AI app for your business can offer big benefits. Once you know what you want the app to do—analyze data or improve customer service, for example—you'll need to select the model you want to train and gather the data to train it. Then, it's a case of building an interface for users to interact with.
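The article doesn't describe SA Power Networks' implementation in detail, so the sketch below is purely illustrative: a minimal keyword-ranking retrieval step over hypothetical asset records, of the kind a retrieval-augmented app would run before handing the top matches to an LLM. The record fields, sample data, and scoring approach are all assumptions.

```python
# Illustrative sketch only: rank historical asset records against a
# natural-language query. Record data and scoring are hypothetical,
# not SA Power Networks' actual system.

def search_records(records, query):
    """Return records ordered by how many query terms their text contains."""
    terms = set(query.lower().split())
    scored = []
    for rec in records:
        text = rec["text"].lower()
        score = sum(1 for term in terms if term in text)
        if score:  # keep only records matching at least one term
            scored.append((score, rec))
    scored.sort(key=lambda pair: -pair[0])  # best match first
    return [rec for _, rec in scored]

# Hypothetical asset-history records a field crew might query.
records = [
    {"id": "pole-1042", "text": "Installed 1987, stobie pole, 11kV feeder, known corrosion hazard"},
    {"id": "pole-2210", "text": "Installed 2003, timber pole, low voltage, no recorded hazards"},
]

top = search_records(records, "11kV hazard pole")
# In a production app, the top records would be supplied to an LLM as
# context so the crew's question can be answered in natural language.
```

A real system would swap the keyword scoring for embedding-based retrieval and add the generation step, but the shape is the same: query in, ranked context out, natural-language answer back.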
Custom applications let you offer a personalized user experience. Because the model is trained on a specific dataset relevant to your business processes, it will generally offer significantly better performance and accuracy. Additionally, as your business needs evolve, you can easily update and retrain your model to adapt to new data and challenges.
"Gen AI has been a game changer for us, in terms of serving information in a fast and meaningful manner," says Pritchard. "By building custom applications that use Gen AI, we can put information in the hands of our field crews, who really just want to do their jobs as quickly and as safely as possible."
Getting the most out of Gen AI
Organizations like SA Power Networks show that the impact of Gen AI is far-reaching when applied correctly, especially through custom applications. Tailored AI applications let organizations generate highly accurate, contextual results built with the audience in mind. For SA Power Networks, that meant making historical information accessible to field technicians through natural language.
Find out how to build custom AI applications with AI for SAP Business Technology Platform (BTP)
Related Articles
Yahoo
2 hours ago
Why enterprises need to balance security and AI spending
The appeal of AI is clear: the technology promises to spur revolutionary innovations that can drive greater business efficiency, productivity and new revenues. But not without security. Like any other enterprise asset, AI systems and applications require protection from threat actors eager to breach models for profit. Securing AI models from advancing threats that could compromise the integrity of data output is a daunting challenge that too few organisations have a handle on today. Add threat actors harnessing AI for their own nefarious purposes to the mix, and the situation becomes that much more daunting for the enterprise.
Accenture surveyed 2,286 executives, 80% of whom are Chief Information Security Officers (CISOs), and uncovered a perilous scenario in which enterprises are largely unready to protect their assets. Just 20% of those surveyed said they were ready to defend their generative AI models from cyber threats, and Accenture reported that only 34% have a mature cybersecurity strategy.
One of the issues enterprises are running into with their security postures in general is that prioritising AI development and deployment over other IT investments often means security falls by the wayside. Between 2023 and 2024, Accenture reports, investments in GenAI projects were 1.6 times higher than security spending. If this trend continues, AI systems built on insecure foundations will remain vulnerable to cyberthreats. Only 28% of the executives surveyed said they are integrating security capabilities into all transformative projects from the earliest development phases, and only 42% said they are mapping security spending to AI development.
The news is not all bleak. Organisations that prioritise cybersecurity investments and focus on infrastructure resilience as they carry out transformational projects create elevated security postures that mitigate serious risks.
Enterprises that achieve what Accenture terms a "Reinvention-Ready Zone" classification face a 69% lower risk of the kind of sophisticated cyber attacks that leverage advanced techniques, including AI, to cripple operations. The investment in security brings other dividends: Accenture found that organisations that prioritise security spending carry 1.7 times lower technical debt, due in large part to the overall efficiency and resilience of their infrastructure. The clear takeaway is that enterprises need to balance their AI infrastructure investments with their security spending to ensure the most protected, consistent, and high-performing environment possible. "Why enterprises need to balance security and AI spending" was originally created and published by Verdict, a GlobalData owned brand.
Yahoo
2 hours ago
Innodata vs. C3.ai: Which AI-Focused Enterprise Stock is a Good Buy?
Innodata (INOD) and C3.ai (AI) are well-known artificial intelligence (AI)-focused stocks that cater to the needs of enterprises. While Innodata offers AI data engineering and model-training services, C3.ai provides an AI-powered software platform that offers data integration and analytics solutions. The striking similarity between these two companies is that both treat data as the fundamental entity for AI-led digital transformation. Enterprises are rapidly deploying AI, and the advent of generative AI (Gen AI) has further accelerated usage. Per Gartner, global Gen AI spending is expected to hit $644 billion in 2025, indicating 76.4% growth over 2024. Services are expected to grow a massive 162.6% year over year to $27.76 billion, while software is anticipated to jump 93.9% to $37.12 billion. Per IDC, global spending on AI, including AI-enabled applications, infrastructure, and related IT and business services, will more than double by 2028 to hit $632 billion, seeing a CAGR of 29% between 2024 and 2028. Both Innodata and C3.ai benefit from this massive opportunity. So which stock is a better buy right now? Let's find out. Innodata benefits from massive investment promises made by the "Magnificent 7," including Microsoft's $80 billion and Meta Platforms' $64-$72 billion. The company is expanding relationships with key customers, including a second master statement of work with its largest client, tapping a separate, significantly larger budget. INOD secured approximately $8 million in new engagements from four of its other Big Tech customers. Erstwhile small accounts are showing material expansion opportunities into multi-million-dollar relationships. INOD is onboarding several major clients, including top global firms in enterprise tech, cloud software, digital commerce and healthcare technology, each with significant growth potential. The company expects 2025 revenues to jump 40% year over year to $238.6 million, driven by an expanding clientele.
Innodata serves the Gen AI IT services market, which is expected to be worth $200 billion by 2029, offering significant growth prospects. The company is building the capability to collect and create Gen AI training data as large language models (LLMs) become more complex and advanced. INOD continues to invest in expanding languages like Arabic and French within domains like math and chemistry, for which the company is creating LLM training data and performing reinforcement learning. INOD recently launched its Generative AI Test & Evaluation Platform, a new suite designed to help enterprises assess the safety and reliability of LLMs. Built on NVIDIA's NIM microservices, the platform supports hallucination detection, adversarial prompt testing and domain-specific risk benchmarking across text, image, audio and video inputs, helping organizations build more trustworthy AI. C3.ai's AI-powered platform operates more than 130 turnkey enterprise AI applications that address issues like predictive maintenance, supply chain optimization, supply network risk, demand forecasting, fraud detection, and drug discovery. In fiscal 2025, C3 Generative AI revenues jumped 100%, and C3.ai closed 66 C3 Generative AI initial production deployments across 16 industries. C3.ai benefits from a rich partner base that includes Microsoft, Amazon Web Services, Google Cloud, Booz Allen and Baker Hughes. In fourth-quarter fiscal 2025, a notable 73% of all agreements were signed in collaboration with major cloud providers. Partner-driven bookings soared 419% year over year in the reported quarter, fueled by 59 deals closed via strategic alliances. C3.ai secured 193 deals through these partnerships over fiscal 2025, a 68% jump year over year. The company is gaining strong traction in the federal sector, with the United States government emerging as a key client. In fourth-quarter fiscal 2025, the company secured a $450-million contract ceiling from the U.S. Air Force for its PANDA predictive maintenance platform.
C3.ai's AI-driven platforms are now embedded across the Air Force, Navy, Marine Corps and Missile Defense Agency, which is a key catalyst for future prospects. An increasingly diversified business model, as C3.ai continues to expand its footprint across manufacturing, life sciences, and state and local government, boosts prospects. In fiscal 2025, non-oil and gas revenue surged 48% year over year, reflecting successful expansion into 19 different industries. In the year-to-date period, Innodata shares have surged 29.6%, outperforming C3.ai shares, which have dropped 28.6%. Valuation-wise, both C3.ai and Innodata shares are currently overvalued, as suggested by a Value Score of F. In terms of forward 12-month Price/Sales, C3.ai shares are trading at 6.81X, higher than Innodata's 6.03X. The Zacks Consensus Estimate for C3.ai's fiscal 2026 loss is pegged at 37 cents per share, which has narrowed from a loss of 46 cents over the past 60 days. The company reported a loss of 41 cents per share in the year-ago quarter. The consensus mark for Innodata's 2025 earnings is pegged at 69 cents per share, which has fallen 6.8% over the past 60 days. The figure indicates a 22.47% decrease year over year. Despite Innodata's solid growth prospects thanks to massive spending by Big Tech, we believe C3.ai's rich partner base, innovative Gen AI-powered platform, and increasingly diversified business model make it the stronger pick. C3.ai currently carries a Zacks Rank #2 (Buy), compared with Innodata's Zacks Rank #3 (Hold). This article was originally published on Zacks Investment Research.
Yahoo
4 hours ago
Generative AI is making running an online business a nightmare
Sometime last year, Ian Lamont's inbox began piling up with inquiries about a job listing. The Boston-based owner of a how-to guide company hadn't opened any new positions, but when he logged onto LinkedIn, he found one for a "Data Entry Clerk" linked to his business's name and logo. Lamont soon realized his brand was being scammed, which he confirmed when he came across the profile of someone purporting to be his company's "manager." The account had fewer than a dozen connections and an AI-generated face. He spent the next few days warning visitors to his company's site about the scam and convincing LinkedIn to take down the fake profile and listing. By then, more than twenty people reached out to him directly about the job, and he suspects many more had applied. Generative AI's potential to bolster business is staggering. According to one 2023 estimate from McKinsey, in the coming years it's expected to add more value to the global economy annually than the entire GDP of the United Kingdom. At the same time, GenAI's ability to almost instantaneously produce authentic-seeming content at mass scale has created the equally staggering potential to harm businesses. Since ChatGPT's debut in 2022, online businesses have had to navigate a rapidly expanding deepfake economy, where it's increasingly difficult to discern whether any text, call, or email is real or a scam. In the past year alone, GenAI-enabled scams have quadrupled, according to the scam reporting platform Chainabuse. In a Nationwide insurance survey of small business owners last fall, a quarter reported having faced at least one AI scam in the past year. Microsoft says it now shuts down nearly 1.6 million bot-based signup attempts every hour. Renée DiResta, who researches online adversarial abuse at Georgetown University, tells me she calls the GenAI boom the "industrial revolution for scams" — as it automates frauds, lowers barriers to entry, reduces costs, and increases access to targets. 
The consequences of falling for an AI-manipulated scam can be devastating. Last year, a finance clerk at the engineering firm Arup joined a video call with people he believed were his colleagues. It turned out that each of the attendees was a deepfake recreation of a real coworker, including the organization's chief financial officer. The fraudsters asked the clerk to approve overseas transfers amounting to more than $25 million, and, assuming the request came from the CFO, he green-lit the transaction. Business Insider spoke with professionals in several industries, including recruitment, graphic design, publishing, and healthcare, who are scrambling to keep themselves and their customers safe against AI's ever-evolving threats. Many feel like they're playing an endless game of whack-a-mole, and the moles are only multiplying and getting more cunning. Last year, fraudsters used AI to build a French-language replica of the online Japanese knife store Oishya and sent automated scam offers to the company's 10,000-plus followers on Instagram. The fake company told customers of the real one that they had won a free knife and only had to pay a small shipping fee to claim it; nearly 100 people fell for it. Kamila Hankiewicz, who has run Oishya for nine years, learned about the scam only after several victims contacted her asking how long they needed to wait for the parcel to arrive. It was a rude awakening for Hankiewicz. She's since ramped up the company's cybersecurity and now runs campaigns to teach customers how to spot fake communications. Though many of her customers were upset about getting defrauded, Hankiewicz helped them file reports with their financial institutions for refunds. Rattling as the experience was, "the incident actually strengthened our relationship with many customers who appreciated our proactive approach," she says.
Rob Duncan, the VP of strategy at the cybersecurity firm Netcraft, isn't surprised at the surge in personalized phishing attacks against small businesses like Oishya. GenAI tools now allow even a novice lone wolf with little technical know-how to clone a brand's image and write flawless, convincing scam messages within minutes, he says. With cheap tools, "attackers can more easily spoof employees, fool customers, or impersonate partners across multiple channels," Duncan says. Though mainstream AI tools like ChatGPT have guardrails against requests to infringe copyright, there are now plenty of free or inexpensive online services that let users replicate a business's website with simple text prompts. Using a tool called Llama Press, I was able to produce a near-exact clone of Hankiewicz's store and personalize it from a few words of instructions. (Kody Kendall, Llama Press's founder, says cloning a store like Oishya's doesn't trigger a safety block because there can be legitimate reasons to do so, such as a business owner migrating their website to a new hosting platform. He adds that Llama Press relies on Anthropic's and OpenAI's built-in safety checks to weed out bad-faith requests.) Text is just one front of the war businesses are fighting against malicious uses of AI. With the latest tools, it takes a solo adversary, again with no technical expertise, as little as an hour to create a convincing fake job candidate for a video interview. Tatiana Becker, a tech recruiter based in New York, tells me deepfake job candidates have become an "epidemic." Over the past couple of years, she has had to frequently reject scam applicants who use deepfake avatars to cheat on interviews. At this point she can discern some telltale signs of fakery, including glitchy video quality and a candidate's refusal to change any element of their appearance during the call, such as taking off their headphones.
Now, at the start of every interview she asks for the candidate's ID and poses more open-ended questions, like what they like to do in their free time, to suss out whether they're human. Ironically, she's made herself more robotic at the outset of interviews to sniff out the robots. Nicole Yelland, a PR executive, says she found herself on the opposite end of deepfakery earlier this year. A scammer impersonating a startup recruiter approached her over email saying he was looking for a head of comms, with an offer package that included generous pay and benefits. The purported recruiter even shared an exhaustive slide deck, decorated with AI-generated visuals, outlining the role's responsibilities and benefits. Enticed, she scheduled an interview. During the video meeting, however, the "hiring manager" refused to speak, instead asking Yelland to type her responses to written questions in the Microsoft Teams chat. Her alarm bells really went off once the interviewer started asking her to share a series of private documents, including her driver's license. Yelland now runs a background check with tools like Spokeo before engaging with any stranger online. "It's annoying and takes more time, but engaging with a spammer is more annoying and time-consuming; so this is where we are," she says. While videoconferencing platforms like Teams and Zoom are getting better at detecting AI-generated accounts, some experts say the detection itself risks creating a vicious cycle. The data these platforms collect on what's fake is ultimately used to train more sophisticated GenAI models, which will help them get better at escaping fakery detectors and fuel "an arms race defenders cannot win," says Jasson Casey, the CEO of Beyond Identity, a cybersecurity firm that specializes in identity theft. Casey and his company believe the focus should instead be on authenticating a person's identity.
Beyond Identity sells tools that can be plugged into Zoom to verify meeting participants through their device's biometrics and location data. If it detects a discrepancy, the tools label the participant's video feed as "unverified." Florian Tramèr, a computer science professor at ETH Zurich, agrees that authenticating identity will likely become more essential to ensure you're always talking to a legitimate colleague. It's not just fake job candidates entrepreneurs now have to contend with; it's also fake versions of themselves. In late 2024, scammers ran ads on Facebook for a video featuring Jonathan Shaw, the deputy director of the Baker Heart and Diabetes Institute in Melbourne. Although the person in it looked and sounded exactly like Dr. Shaw, the voice had been deepfaked and edited to say that metformin, a first-line treatment for type 2 diabetes, is "dangerous," and that patients should instead switch to an unproven dietary supplement. The fake ad was accompanied by a fake written news interview with Shaw. Several of his clinic's patients, believing the video was genuine, reached out asking how to get hold of the supplement. "One of my longstanding patients asked me how come I continued to prescribe metformin to him, when 'I' had said on the video that it was a poor drug," Shaw tells me. Eventually he was able to get Facebook to take down the video. Then there's the equally vexing and annoying issue of AI slop: an inundation of low-quality, mass-produced images and text that is flooding the internet and making it ever more difficult for the average person to tell what's real and what's fake. In her research, DiResta found instances where social platforms' recommendation engines promoted malicious slop: scammers would put up images of items like nonexistent rental properties and appliances, and users frequently fell for them, giving away their payment details.
On Pinterest, AI-generated "inspo" posts have plagued people's mood boards — so much so that Philadelphia-based Cake Life Shop now often receives orders from customers asking them to recreate what are actually AI-generated cakes. In one shared with Business Insider, the cake resembles a moss-filled rainforest, and features a functional waterfall. Thankfully for cofounder Nima Etemadi, most customers are "receptive to hearing about what is possible with real cake after we burst their AI bubble," he says. Similarly, AI-generated books have swarmed Amazon and are now hurting publisher sales. Pauline Frommer, the president of the travel guide publisher Frommer Media, says that AI-generated guidebooks have managed to reach the top of lists with the help of fake reviews. An AI publisher buys a few Prime memberships, sets the guidebook's ebook price to zero, and then leaves seemingly "verified reviews" by downloading its copies for free. These practices, she says, "will make it virtually impossible for a new, legitimate brand of guidebook to enter the business right now." Ian Lamont says he received an AI-generated guidebook as a gift last year: a text-only guide to Taiwan, with no pictures or maps. While the FTC now considers it illegal to publish fake, AI-generated product reviews, official policies haven't yet caught up with AI-generated content itself. Platforms like Pinterest and Google have started to watermark and label AI-generated posts, but since it's not error-free yet, some worry these measures may do more harm than good. DiResta fears that a potential unintended consequence of ubiquitous AI labels would be people experiencing "label fatigue," where they blindly assume that unlabeled content is therefore always "real." "It's a potentially dangerous assumption if a sophisticated manipulator, like a state actor's intelligence service, manages to get disinformation content past a labeler," she says. 
For now, small business owners should stay vigilant, says Robin Pugh, the executive director of Intelligence for Good, a non-profit that helps victims of internet-enabled crimes. They should always validate they're dealing with an actual human and that the money they're sending is actually going where they intend it to go. Etemadi of Cake Life Shop recognizes that for as much as GenAI can help his business become more efficient, scam artists will ultimately use the same tools to become just as efficient. "Doing business online gets more necessary and high risk every year," he says. "AI is just part of that." Shubham Agarwal is a freelance technology journalist from Ahmedabad, India, whose work has appeared in Wired, The Verge, Fast Company, and more. Read the original article on Business Insider