
Latest news with #Workday

New Zealand Finds Itself At A Turning Point For AI: Will We Ride The Wave Or Get Taken Under?

Scoop

3 hours ago



The rise and integration of AI into our workflows presents a sometimes compelling, sometimes daunting prospect. If New Zealand is to keep pace with these technological changes, we must bridge the gap between leadership and employees, understand how we can support existing teams with AI agents, build in privacy and security, and advocate for ethical AI guidelines from an organisational and governmental standpoint. If we are willing to do our due diligence and invest in these quickly advancing technologies, our teams and businesses stand to benefit greatly.

Employees retain a positive outlook on AI, but lingering apprehension remains

As AI continues to be a compelling focus, new research paints a picture of great potential and a positive employee perspective, as well as lingering hesitation and a lack of education that may hinder ongoing innovation. Recent data from Workday's global research report, 'Elevating Human Potential: The AI Skills Revolution', highlights that 65% of New Zealand departments are already utilising AI, outpacing the global average of 59%. Furthermore, an overwhelming 98% report confidence in using AI for complex tasks, surpassing the global average of 91%. Workday's research also showcases the positive perspective of Kiwi workers, finding 100% of employees agree that AI allows them to focus on higher-level responsibilities, compared to 93% globally. This optimism is grounded in the view that AI can enhance human creativity. Overall, 96% of respondents believe AI will generate new forms of economic value, compared to 83% globally.

KPMG's new study, 'Trust, attitudes and use of artificial intelligence: A global study 2025', reveals potential difficulties when it comes to the technology. On AI literacy, the study finds only 24% of Kiwis have undertaken AI-related training or education, compared to 39% globally. Only 36% believe they have the skills to use AI appropriately, significantly lower than the global figure of 60%.
In addition, 81% of New Zealanders believe greater regulation of AI is required, with 89% wanting laws and action to combat AI-generated misinformation and only 23% believing current safeguards are adequate. Interestingly, the study found the top perceived risk of AI (cited by 59%) is the potential loss of human interaction and connection. While this is an emotive response, such perceptions can inhibit adoption in the workplace, with decision makers focusing on possible adverse impacts rather than on how the technology can be implemented intelligently for the benefit of teams. Echoing the Workday research, the KPMG study also found 66% of New Zealanders expect AI to deliver a range of benefits, and 54% have personally experienced or observed benefits from AI use. The top benefit cited by respondents (69%) is improved efficiency, with reduced time spent on mundane or repetitive tasks. More than 43% of respondents already report increased efficiency, quality of work and innovation with AI, and 31% report increased revenue-generating activity.

The power of taking a methodical approach, and closing the trust gap

Intelligent AI integration that both allays fears and builds innovation must bridge the gap between leadership and the workforce, predominantly by putting people and ethical use at the centre. Workday's Chief Technology Officer, Jim Stratton, put it best when he wrote: 'The scale of addressing this challenge [of implementing AI] may seem daunting, but our experience has taught us that we can take measured steps. Effective organisational frameworks for responsible AI should consider four fundamental pillars: principles, practices, people, and policy. Companies need to also ensure transparent communication about their approach to each of those areas.' To close the trust gap, a methodical approach built on shared perspectives is crucial.
As Workday's researchers found, both leaders and employees believe in and hope for a transformation scenario; they agree AI implementation should prioritise human control, and that regulation and frameworks are important for trustworthy AI. The differences between leaders and employees arise in how each group thinks AI development should be approached, and in fears that people and data privacy won't be prioritised. Leaders should be transparent and open with employees about their ethical AI approach, be it the way it's built, the way it's used, or the way the business advocates for regulation. When it comes to closing the trust gap for customers, businesses should advocate for and adopt a risk-based responsible AI governance approach, leveraging best practices such as those described in the NIST AI Risk Management Framework, to promote responsible AI innovation while preserving the benefits that innovation promises.

Emphasising the vision of the human skills revolution and embracing this transformative opportunity

AI is elevating workforce potential by streamlining processes, automating complex and repetitive tasks, and improving efficiency. This boost in productivity frees teams from high-volume, time-consuming work, allowing individuals to focus on uniquely human skills such as connection and relationship building, emotional intelligence and empathy, and conflict resolution. Leaders can close the trust gap by emphasising this purpose and function of AI: that it can and will amplify existing teams, not replace them. Leaders have the chance to emphasise that AI is a catalyst for a revolution in which human skills become even more important. As time goes on, effective AI agent management and governance must also be carefully considered. Forward-thinking technology companies are now offering supportive solutions, such as centralised platforms that provide comprehensive oversight of an entire fleet of AI agents.
For instance, Workday has leveraged its extensive experience helping more than 10,500 companies globally manage their people to provide a unified platform that manages an entire workforce of both people and AI agents. In this new world of work, leaders are called upon not only to drive the AI conversation but to understand this transformative opportunity: to strategically manage a workforce that intelligently blends human capabilities with the power of AI agents. With the right approach, this future of work can unlock greater levels of productivity, foster creativity, empower leadership, and ultimately enable us to achieve more.

Workday (WDAY) Fell Due to a Reduction in 2025 Revenue Guidance

Yahoo

13 hours ago



Hotchkis & Wiley, an investment management company, released its 'Hotchkis & Wiley Global Value Fund' first-quarter 2025 investor letter. A copy of the letter can be downloaded here. In Q1 2025, the MSCI World Index decreased by 1.8%, driven by the decline of mega-cap growth stocks. The Magnificent Seven represented over 22% of the MSCI World Index in the quarter, collectively experiencing a decline of 14%. The Hotchkis & Wiley Global Value Fund returned 5.96% in the quarter, outperforming the MSCI World Value Index's 4.81% return. For more information on the fund's best picks in 2025, please check its top five holdings.

In its first-quarter 2025 investor letter, Hotchkis & Wiley Global Value Fund highlighted stocks such as Workday, Inc. (NASDAQ:WDAY), a company that offers enterprise cloud applications. The one-month return of Workday, Inc. (NASDAQ:WDAY) was -4.73%, and its shares gained 5.50% of their value over the last 52 weeks. On July 1, 2025, Workday, Inc. (NASDAQ:WDAY) stock closed at $239.23 per share, with a market capitalization of $63.778 billion.

Hotchkis & Wiley Global Value Fund stated the following regarding Workday, Inc. (NASDAQ:WDAY) in its Q1 2025 investor letter: "Workday, Inc. (NASDAQ:WDAY) is a leader in cloud application software for back-office business functions including human capital management, financials management, and ERP (enterprise resource planning). Stock price was negatively impacted by a reduction in 2025 revenue guidance. Management noted the pressure on current year sales is macro-related. We believe Workday has a formidable competitive advantage that trades at an attractive valuation for a company with premier franchise potential."

Workday, Inc. (NASDAQ:WDAY) is not on our list of 30 Most Popular Stocks Among Hedge Funds. As per our database, 85 hedge fund portfolios held Workday, Inc. (NASDAQ:WDAY) at the end of the first quarter, down from 89 in the previous quarter. Workday, Inc. (NASDAQ:WDAY) reported revenue of $2.24 billion in the fiscal first quarter of 2026, representing an increase of 13% year-over-year.

While we acknowledge the potential of Workday, Inc. (NASDAQ:WDAY) as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock. In another article, we covered Workday, Inc. (NASDAQ:WDAY) and shared the list of stocks Jim Cramer put under the microscope recently. Parnassus Core Equity Fund added Workday, Inc. (NASDAQ:WDAY) to its portfolio in Q1 2025. In addition, please check out our hedge fund investor letters Q1 2025 page for more investor letters from hedge funds and other leading investors.

READ NEXT: The Best and Worst Dow Stocks for the Next 12 Months and 10 Unstoppable Stocks That Could Double Your Money. Disclosure: None. This article was originally published at Insider Monkey.

Top 5 ways to mitigate liability risks when AI is used improperly or goes wrong

Business Journals

a day ago



As the integration of artificial intelligence into corporate operations accelerates, the rapid deployment of AI tools, often driven by executive and investor pressure, has outpaced the establishment of robust governance, compliance, and cybersecurity frameworks. Meanwhile, government agencies are increasingly scrutinizing AI technologies and conducting investigations, often using existing consumer protection laws designed to target unfair, fraudulent, and deceptive practices. For instance, the U.S. Federal Trade Commission has sanctioned several companies for misleading marketing campaigns that inaccurately described the capabilities of their AI tools, including the 'world's first robot lawyer' that could not provide legal advice and an AI content detector that 'did no better than a coin toss.'

Numerous class action lawsuits have been filed challenging health insurance companies' use of AI predictive algorithms on the grounds that they have high error rates and systematically deny coverage without input from a medical professional. Similarly, a class action has been brought against Workday's AI-powered applicant screening platform alleging that it discriminated based on age. Most troubling, AI technologies have rapidly expanded the cyberthreat landscape, enabling criminals to commit fraud on a massive scale: phishing email campaigns, deepfake schemes, new variants of malware, and even the poisoning and sabotage of AI systems.

It is therefore incumbent upon business leaders and in-house counsel to ensure that the deployment of AI technologies effectively manages operational risks, mitigates the likelihood of civil and criminal liability and government enforcement actions, and incorporates appropriate oversight mechanisms to safeguard an organization's infrastructure and reputation. Here are five ways to mitigate your organization's risks.
1. Incorporate Cybersecurity Standards: As companies race to deploy AI tools, often under pressure from the C-suite, investors, and shareholders, they need to ensure that they do not cut corners that create cybersecurity risks and increase the likelihood of a data breach. At a minimum, AI systems should incorporate the same cybersecurity standards (e.g., access controls, data encryption requirements, intrusion detection and monitoring) as any other tool in an organization's network, because they expand the potential entry points for malicious actors and increase vulnerability risks. Organizations must therefore continuously monitor their AI applications and infrastructure to detect irregularities and potential security breaches such as data poisoning, data manipulation, leakage of personal or confidential information, and misuse.

2. Adopt AI Governance Controls: Block users on your network from using risky generative AI (GenAI) tools such as DeepSeek's R1 AI model, released in January 2025, which contained extensive security flaws and critical vulnerabilities. When China's DeepSeek shocked the world in late January 2025 by announcing a model comparable to ChatGPT, millions of people rushed to download the app and experiment with it, even though its privacy policy indicated that user data would be stored on servers located in China, raising significant privacy and security concerns. Such activity could cause serious harm to your organization through the leakage of confidential and proprietary data. Organizations should clearly set out rules and boundaries in their AI Acceptable Use policy on which specific AI tools are permitted and which are prohibited. The policy should also ensure employees know that any use of AI tools must comply with all applicable laws and regulations, and that violations of the policy will result in disciplinary action.
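As a loose illustration of the permitted/prohibited rules such an Acceptable Use policy might translate into, here is a minimal sketch. Every tool name, the default-deny choice, and the helper functions below are hypothetical examples for this article, not features of any real product:

```python
# Minimal sketch of enforcing an AI Acceptable Use policy with explicit
# allow/deny lists. All tool names and the policy structure here are
# hypothetical illustrations, not recommendations of specific products.

PERMITTED_AI_TOOLS = {"approved-internal-assistant", "vetted-llm-gateway"}
PROHIBITED_AI_TOOLS = {"unvetted-consumer-chatbot", "deepseek-r1-public-app"}

def is_tool_permitted(tool_name: str) -> bool:
    """Deny anything explicitly prohibited; allow only what is vetted.

    Unknown tools are denied by default, mirroring the advice that a
    policy spell out both what is permitted and what is prohibited.
    """
    name = tool_name.lower()
    if name in PROHIBITED_AI_TOOLS:
        return False
    return name in PERMITTED_AI_TOOLS

def audit_request(user: str, tool_name: str) -> str:
    """Return an audit-log line; a real system would also alert on denials."""
    verdict = "ALLOW" if is_tool_permitted(tool_name) else "DENY"
    return f"{verdict} user={user} tool={tool_name}"

print(audit_request("jdoe", "vetted-llm-gateway"))        # ALLOW user=jdoe tool=vetted-llm-gateway
print(audit_request("jdoe", "unvetted-consumer-chatbot")) # DENY user=jdoe tool=unvetted-consumer-chatbot
```

The default-deny posture is the point: a policy that only lists banned tools silently permits every new tool that appears, which is exactly the gap the DeepSeek episode exposed.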
Organizations should further monitor user behavior with regard to how AI tools are being used and what information is being entered into any publicly available AI tool.

3. Reduce the Likelihood of Government Enforcement Actions and False Claims Act (FCA) Liability: Government agencies have begun closely scrutinizing the use of AI tools and cracking down on misleading, deceptive, or unfair trade practices in connection with AI technology. Numerous enforcement actions have been brought by the U.S. Federal Trade Commission and the Securities and Exchange Commission against companies for issuing false and misleading statements about the capabilities of their AI systems, a practice referred to as 'AI-washing.' The Department of Justice (DOJ) is likely to target AI-powered healthcare billing and coding systems in its push to prosecute health care fraud, which it recently announced is the top white-collar fraud priority under Attorney General Bondi. Errors in automated coding and claim submissions to the government can result in liability under the FCA, which carries treble damages. Similarly, predictive diagnostic AI tools may influence medical practitioners, resulting in upcoding and overbilling practices. To reduce liability risk, organizations should be able to demonstrate to government agencies that they acted responsibly in deploying and overseeing AI systems; established governance controls to ensure the technology is used only for its intended purpose and works reliably, ethically, and in compliance with applicable law; conducted robust risk assessments; regularly audited and monitored AI tools; and promptly investigated, corrected, and remediated any identified discrepancies or errors.
Indeed, the DOJ's September 2024 update to its guidance on the 'Evaluation of Corporate Compliance Programs' emphasizes the importance of assessing and minimizing evolving risks, including the potential for misuse by company insiders, when using AI tools within an organization's enterprise risk management strategy.

4. Take Steps to Address the Rise in Lawsuits Involving AI Tools: AI tools can go wrong, make mistakes, and cause harm. Since the launch of GenAI, there has been a steady increase in the number of lawsuits filed over the misuse of AI tools. As noted above, class action lawsuits have been brought against health insurance companies for wrongful denials of coverage based on AI predictive tools. Failure to implement guardrails in an AI system and monitor AI outputs can prove catastrophic and provide the basis for a product liability claim. On May 21, 2025, U.S. District Judge Anne C. Conway of the Middle District of Florida denied a motion to dismiss, allowing a lawsuit to move forward that accuses Google and an AI chatbot developer of causing a 14-year-old's suicide after he became addicted to an AI chatbot, and finding 'the alleged design defects' actionable. On June 4, 2025, Reddit sued AI startup Anthropic in California state court for unlawfully using Reddit's data for commercial purposes without paying for it, in violation of Reddit's user data policy. It is only a matter of time before we see legal malpractice claims against lawyers for filing pleadings with hallucinated legal citations. Once a problem with an AI tool is detected, steps should promptly be taken to investigate the issue, preserve the evidence, consider making a voluntary self-disclosure, make any required disclosures to state and federal agencies, and fully remediate the situation.

5. Get Ready for the Challenges of Agentic AI: AI agents powered by large language models are not only generating new content in response to prompts but autonomously making and executing decisions.
Agentic AI has the potential to transform business operations. AI agents, however, could also increase liability risks while making organizations more susceptible to cyberattacks. AI agents are authenticated users on a network that operate using corporate credentials and rapidly execute decisions, and they can be tricked and manipulated by prompt injection or adversarial action. It is therefore crucial to adopt clear policies, safeguards, oversight frameworks, and auditing procedures, and to conduct AI red-teaming exercises.

Discover how Hinckley Allen's cross-disciplinary Artificial Intelligence Group helps clients navigate the legal, regulatory, and business challenges of emerging AI technologies. From risk management to strategic deployment, our attorneys provide tailored counsel to help you innovate with confidence. Learn how our insights can support your business's AI journey. Hinckley Allen is a full-service business law firm dedicated to delivering exceptional results for its clients. With more than 170 attorneys across offices in Connecticut, Florida, Illinois, Massachusetts, New Hampshire, New York, and Rhode Island, the firm represents leading regional, national, and global businesses in their most critical legal and business matters. Since 1906, Hinckley Allen has played a vital role in shaping the landscape of law, business, government, and community engagement.

B. Stephanie Siegmann is a litigation partner at Hinckley Allen. She specializes in handling high-stakes criminal and civil litigation matters, sensitive internal investigations, government enforcement proceedings, and cyber-related incidents of all kinds. Stephanie serves as Chair of the International Trade & National Security group, and Co-Chair of the Cybersecurity, Privacy & Data Protection, and Artificial Intelligence practice groups.

Jim Cramer on Workday Stock: 'I'm Worried'

Yahoo

4 days ago



Workday, Inc. (NASDAQ:WDAY) is one of the 11 stocks Jim Cramer put under the microscope recently. During the lightning round, a caller inquired about the company, and Cramer replied: 'I'm worried. There's a lot of companies coming for Workday, and I don't like that. I think that what happens is we begin to see what's happening to Salesforce right now, where people just don't want to own Salesforce. So I want to stay away from Workday. I got enough pain right now with Salesforce.'

Workday (NASDAQ:WDAY) provides cloud-based enterprise software designed to support financial management, human resources, spend management, planning, and supply chain operations. The platform includes features for analytics, reporting, and custom application development. Parnassus Investments stated the following regarding Workday, Inc. (NASDAQ:WDAY) in its Q4 2024 investor letter: 'We also added several new positions, including two in Information Technology: Workday, Inc. (NASDAQ:WDAY), a category leader for enterprise cloud applications for finance and human resources. We believe Workday's product stickiness and key initiatives such as its partnership with other service providers position the company well for incremental growth over the next few years.'

While we acknowledge the potential of WDAY as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock. READ NEXT: The Best and Worst Dow Stocks for the Next 12 Months and 10 Unstoppable Stocks That Could Double Your Money. Disclosure: None.

‘AI rollup' investors think services firms can trade more like software companies. Here's what they get wrong

Yahoo

5 days ago



Nathan Benaich is the founder of Air Street Capital and author of the State of AI Report. Nikola Mrkšić is the CEO of PolyAI.

Across the technology investing world, investors are scaling their bets on a seductive thesis: Generative AI will transform low-margin service businesses into high-margin software companies. Several well-known platform venture firms have committed billions to this strategy and have begun to make their bets. Here's how the thesis goes. First, acquire traditional business process outsourcing (BPO) companies, such as call centers and accounting firms, at modest valuations of 1x revenue. These businesses typically operate at 10-15% EBITDA (earnings before interest, taxes, depreciation, and amortization) margins, weighed down by armies of human workers performing repetitive tasks, precisely where automation faces the greatest structural resistance. Second, deploy generative AI to automate core workflows, cut headcount, and expand EBITDA margins to 40% or more. What once required hundreds of accountants or call center agents can now be done by a handful of people managing AI systems. Third, exit the newly minted AI-enabled services company at software multiples, because buyers and public markets recognize you've transformed a human-heavy service business into a scalable AI business. Where traditional BPOs trade at 6x EBITDA, software companies command 20x or more.

On paper, it's brilliant arbitrage. In practice, it's a mirage. It rests on a fundamental category error: confusing operational improvement with business model transformation. Yes, AI can make workflows more efficient. No, that doesn't turn a services company into a software company. Indeed, five years ago, a now notable AI company ran this exact experiment and walked away. Its findings should serve as a warning to today's believers. Let's dig in.

The most damning evidence against the AI rollup thesis hides in plain sight on public markets.
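The three-step arithmetic of the thesis can be made concrete with a quick back-of-the-envelope sketch. The percentages and multiples below are the illustrative figures quoted in the thesis itself (1x revenue purchase, 10-15% margins expanding to 40%, 6x versus 20x EBITDA); the $100M revenue base is an arbitrary assumption chosen only for scale:

```python
# Back-of-the-envelope sketch of the "AI rollup" arbitrage described above.
# Percentages and multiples mirror the thesis's illustrative numbers; the
# $100M revenue base is an arbitrary assumption, not any real company's data.

revenue = 100.0                 # acquired BPO revenue, in $M (assumed)
purchase_price = 1.0 * revenue  # step 1: bought at ~1x revenue -> $100M

# As acquired: typical BPO economics
ebitda_before = revenue * 0.12                   # ~10-15% EBITDA margin -> $12M
value_at_services_multiple = 6 * ebitda_before   # ~6x EBITDA -> $72M

# Step 2: if the AI transformation works as hoped
ebitda_after = revenue * 0.40                    # margins expanded to 40% -> $40M
# Step 3: exit at a software multiple
value_at_software_multiple = 20 * ebitda_after   # ~20x EBITDA -> $800M

print(f"paid ${purchase_price:.0f}M; "
      f"services-multiple exit ${value_at_services_multiple:.0f}M; "
      f"software-multiple exit ${value_at_software_multiple:.0f}M "
      f"({value_at_software_multiple / purchase_price:.0f}x the purchase price)")
```

The entire spread between the $72M services-multiple exit and the $800M software-multiple exit depends on the market granting the re-rated multiple; the authors' point is that public markets have so far declined to grant it.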
Today's 'AI-transformed' BPO firms that have invested heavily in automation—among them Concentrix, Genpact, and Infosys—trade at 5-23x EV/EBITDA (enterprise value to EBITDA). Their pure software counterparts, such as Salesforce, ServiceNow, and Workday, command valuations of 22-92x EV/EBITDA. That's not a gap that can be bridged with press releases about OpenAI, Anthropic, or Gemini partnerships. It's a fundamental difference in how markets value human-dependent businesses versus true software platforms.

Consider Concentrix, often cited as a BPO transformation success story. Despite a major push in launching its gen-AI products in 2024, with deployments now at over 1,000 customers, the company's EV/EBITDA multiple remains stuck in the low single digits, and its EBITDA margin is still hovering around 10%. The market's message is clear: automating workflows doesn't change your fundamental business model.

In 2019, PolyAI, the leading conversational AI company, spent six months exploring whether to acquire incumbent human-driven contact centers to accelerate its growth. After analyzing the opportunity by visiting over 10 contact centers, building relationships with three major BPOs, and hiring industry advisors, the answer was a clear no. 'Business Process Outsourcing firms are not trusted to innovate, not rewarded for innovating, and not allowed to innovate,' read its board deck. The structural barriers it identified remain unchanged today:

The illusion of control: Buying a BPO doesn't mean owning the business you're supporting. You're simply renting the right to supply labor on the client's terms. Tech stacks, processes, and approvals remain firmly in the client's hands. AI deployments still require the client's permission, integration, and oversight. You're not in control; you're a replaceable vendor.

The pricing trap: Most service businesses bill by the hour.
Efficiency improvements that reduce billable hours directly cannibalize revenue. As PolyAI discovered, BPOs promise innovation to win contracts, then revert to maximizing billable hours to protect margins. It's a business model fundamentally at odds with automation.

Zero switching costs: Where 10-year service contracts were once the norm, it's now increasingly common to see three-year terms or less. This reduces the ability to recoup up-front AI investments, particularly when there's little client lock-in, no network effects, and no moat.

PolyAI chose to remain a software company, partnering with BPOs rather than acquiring them. Today, it's valued at over $500 million, with customers like PG&E, Marriott, and FedEx. Meanwhile, the BPOs it considered buying still trade at single-digit multiples.

Here's what investors are missing: services businesses aren't inefficient by accident. They're inefficient by design. The inefficiency is the product. Clients pay for flexibility, customization, and someone to blame when things go wrong. Automating away the human doesn't just reduce costs; it fundamentally changes what you're selling. BPO technology capability has never been the constraint, and clients who wanted software would have already bought software. The most successful services firms understand this. They use AI to augment their humans, not replace them. They maintain margins through pricing power and relationships, not through headcount reduction. Ultimately, they still trade at services multiples because that's what they are.

The AI rollup thesis represents a familiar pattern in technology investing: the conflation of technological capability with business model transformation. We've seen this movie before. In the early 2000s, believers thought e-commerce would transform retail margins. Amazon proved them right by building a native digital retailer, not by acquiring and transforming Sears or Barnes & Noble.
In the 2010s, investors believed software would eat traditional industries. The winners built new software-native businesses rather than retrofitting old ones. The same lesson applies today, but with a narrower scope. AI may well transform some corners of professional services, especially when existing firms are pushed to adopt new tools by private equity owners with clear control and incentives. We've seen this in sectors like health care and financial services, where PE firms have driven adoption of AI-driven tooling. But this is different from the AI rollup thesis that VCs are chasing—one that assumes low-margin, labor-heavy service businesses can be turned into software-like platforms simply by embedding AI. For those firms, transformation won't come from owning the service layer. It will come from new, AI-native companies with fundamentally different economics.

The AI rollup thesis is venture capital's attempt to arbitrage the multiple gap between services and software. But that gap exists for a reason. Services businesses, even highly automated ones, face different constraints, different economics, and different customer relationships than software companies. PolyAI saw it in 2019. Public markets see it now.

The AI revolution is real. The opportunity to improve services businesses with AI is real. The idea that this improvement transforms them into software companies? It's unlikely to be real today, just as it wasn't in 2019. AI rollups may still deliver returns, but not the kind VCs are underwriting. At best, they're tech-enabled private equity: operationally heavy, valuation-capped, and unlikely to scale like software.

The opinions expressed in commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
