
How leaders can use AI to improve performance management
Shortcomings in performance management ripple beyond individual performance and can affect organizational success. A May 2024 Gartner survey of 1,456 employees found that only 52% believe performance management is helping their organization achieve its business goals.
What prevents employees from getting the most out of performance management is likely a perception of bias or a lack of fairness in the process. Surprisingly, employees are starting to view AI as less biased than humans when it comes to performance decisions. An October 2024 Gartner survey of nearly 3,500 employees found that 87% of employees think algorithms could give fairer feedback than their managers right now, and an additional Gartner survey from June 2024 found that 58% of employees believe humans are more biased than AI when it comes to making compensation decisions.
Generative AI in performance management
Employees are embracing the idea that AI or generative AI (GenAI) can increase, rather than erode, fairness in the workplace. Understandably, a healthy level of skepticism still exists. At Gartner, we found that only 34% of employees agree or strongly agree that if an algorithm provided performance feedback (instead of their manager), the feedback would be fairer.
It's the duty of CHROs to improve the effectiveness and fairness of performance management at their organizations. If integrating GenAI is part of achieving that goal, they need to take the following steps.
Step one: Evaluate the benefits of GenAI against performance management pain points
To leverage GenAI to improve performance management, HR leaders need to understand the pain points at their organization. They also need to have an idea of how GenAI capabilities might be useful in addressing them.
Data from Gartner employee and manager surveys, as well as interviews with CHROs and heads of talent management, revealed two common complaints about performance management. First, the effort required is too high: employees and managers report that the process demands too much of them, is overly complex, and relies on cumbersome technology. Second, many question how useful it actually is: employees and managers shared that performance management was not relevant to how they work, not aligned with business needs, and disengaging and unmotivating.
To have a greater understanding of the pain points within their unique organization, CHROs and heads of talent management should ask managers and employees across the organization to provide feedback on their biggest pain points. From there, HR leaders can assess whether GenAI is the right tool to address those issues.
For example, if fairness is an issue, leaders can implement GenAI as a tool to evaluate text for bias. If time spent and disparate technologies are the issue, companies can use GenAI to summarize data and generate insights from multiple HR systems.
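To make the first of those ideas concrete, below is a minimal sketch of what a bias screen for written feedback could look like. It assumes the OpenAI Python SDK (v1.x) and an API key in the environment; the model name, prompt wording, and screening criteria are illustrative placeholders rather than a recommended configuration.

```python
# Minimal sketch: screen draft performance feedback for potentially biased
# language before it reaches an employee. Assumes the OpenAI Python SDK
# (v1.x) with OPENAI_API_KEY set in the environment; the model name and
# prompt below are illustrative assumptions, not a recommended setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCREEN_PROMPT = (
    "You review written performance feedback for potential bias. Flag "
    "gendered language, personality critiques unrelated to work results, "
    "and vague superlatives. For each flagged phrase, give a one-line "
    "explanation. If nothing is flagged, reply 'NO ISSUES'."
)

def screen_feedback(feedback_text: str) -> str:
    """Ask the model to flag potentially biased phrasing in draft feedback."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; substitute your approved model
        messages=[
            {"role": "system", "content": SCREEN_PROMPT},
            {"role": "user", "content": feedback_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = "Sam is abrasive but gets results, and should smile more in client meetings."
    print(screen_feedback(draft))
```

The point of a tool like this is to surface phrasing a manager may want to reconsider; the manager, not the model, still makes the final call, which keeps the human oversight discussed below in the loop.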
Step two: Gauge readiness for GenAI in performance management
Not all workplaces are alike, and some may be more open to the full spectrum of GenAI capabilities than others. Surveys can be a great tool to assess workforce readiness for GenAI in performance management. This way, leaders can ensure that the technology enhances, rather than detracts from, the employee experience.
Leaders should combine quantitative survey data with qualitative feedback by equipping managers with tools to get a fuller picture of workforce GenAI readiness. This might mean sharing standardized GenAI statements reflecting the desired performance state with managers. For example, that might mean using GenAI to reduce bias in performance management, increase efficiency, and improve employee satisfaction.
In addition, question guides can support managers in gathering candid employee input, such as whether employees are comfortable with GenAI drafting goals or suggesting performance ratings (with human oversight). Managers should collate feedback to assess GenAI's limitations in performance management.
Step three: Secure employee trust to boost adoption and satisfaction
Trust is a top barrier to AI adoption. This is why building a foundation of trust is important when integrating GenAI in performance management. CHROs and talent management leaders can build employee trust by increasing visibility into decision-making and establishing an open dialogue about GenAI.
HR leaders should start by equipping managers and employees with the rationale for how and why the organization is introducing GenAI in performance management. A simple view into the 'why' behind a decision helps employees accept and trust the decision. Employees also need to understand how decisions will directly impact their roles, so they can process, adapt, and move forward in good faith.
Lastly, leaders should establish mechanisms for employees to share feedback on GenAI in performance management to build trust and improve processes. These kinds of mechanisms help leaders identify when there is an erosion of trust, so they can rectify it by incorporating more human touch.
Effective performance management leads to better organizational performance
Improving performance management boosts employee engagement and business success. Gartner research shows that when HR aligns performance management with employee and business needs, organizations see higher perceptions of fairness and accuracy. They also see increases in employee performance (40%), engagement (59%), and overall workforce performance (60%). Increasing performance management utility drives better outcomes for everyone.

Related Articles


Forbes
8 hours ago
Engineering Excellence In The Age Of AI
Abhi Shimpi is the Vice President of Software Engineering in a Financial Services organization.

As engineering leaders, many of us are racing to integrate GenAI into our development life cycles. The tools are powerful and the potential is massive, but amid all the buzz about velocity and automation, I believe we're overlooking a critical element: engineering excellence. If we don't start reshaping our engineering culture for AI, AI will reshape it for us, and it might not be in our favor. I don't mean just a technical shift, but also a cultural one. If we lose sight of the foundational practices that make engineering sustainable, secure and scalable, then we're moving forward recklessly.

What Engineering Excellence Used To Mean

Before GenAI entered the scene, engineering excellence had a clear definition. We talked about code quality, test automation, secure development practices, peer reviews, resiliency, architecture rigor and continuous delivery. We had internal maturity models to measure and reinforce those principles. Those models gave teams an understanding of what 'good' looked like and how to build clean, maintainable and trustworthy software at scale. It was about process and discipline. We created feedback loops, fostered coaching and mentorship, and made space for design thinking and technical judgment. Now, GenAI is rewriting the rules, and we need to make sure we don't allow it to erase those fundamentals along the way.

Speed Without Discipline

AI has transformed the developer experience. Tools like GitHub Copilot, Google Gemini and Microsoft Copilot can generate code for entire functions or workflows in seconds. Non-technical users can build apps using natural language prompts. In theory, this is empowerment. In practice, it's often chaos.

I've seen firsthand how easy it is to bypass core engineering principles in the rush to adopt GenAI and ship faster. A developer asks Copilot for a script, drops it into a PowerApps app and deploys. No design review, no security scan and no consideration given to how security is handled or data is managed. It works, but it doesn't scale. It creates anti-patterns that violate the architectural standards we've spent years putting in place.

And it's not just developers; citizen developers (those with minimal technical training) are building and deploying internal applications without understanding the implications. What kind of data are they handling? What access are they exposing? What guardrails are missing? And it's happening across industries. The real risk isn't that GenAI makes mistakes; it's that we stop asking questions.

FOMO Is Not A Strategy

Let's be honest: A lot of organizations are embracing GenAI out of fear of missing out. Once the floodgates opened, everyone rushed in. The intent was good, but the pace? Unsustainable. There's nothing wrong with moving fast if you're moving with intention, but if you don't know what you're measuring, you're just reacting. And when you prioritize output over outcome, you miss the real opportunity.

This is why I keep emphasizing outcome over output. GenAI can help you generate more code. That doesn't mean it's better code. We need to slow down just enough to ask: Does this solution create long-term value? Is it secure? Is it explainable? Is it maintainable?

Rebuilding Development Culture For AI

Embedding AI into our workflows is not enough. We have to embed engineering judgment alongside it.
That means reinvesting in the things that made us strong in the first place: coaching, mentorship, engineering excellence and craftsmanship. Peer reviews still matter, clean architecture still matters, release and maintenance discipline still matters, and code design is not optional.

In one example from my experience, developers unfamiliar with a programming language were able to deliver time-sensitive solutions faster using GenAI tools. We layered in strong governance: design reviews, peer oversight, security assessment and architectural alignment. Without those guardrails, the same project could have introduced serious risks. AI doesn't eliminate the need for engineering culture; it amplifies the consequences of not having one.

Redefining Maturity For An AI-First World

We used to measure engineering maturity using KPIs like velocity, defect rates, time to market and code coverage. Those still matter, but they're no longer enough. Now we need to measure how efficiently and responsibly we're using AI. That includes measuring aspects such as:

• How much human oversight is required?
• Are AI outputs explainable?
• Are they aligned with our architectural patterns?
• Do we trust the AI engine's recommendations? And if not, why?

If we allow AI to review our code, we must also define a trust framework. What is the trust score? What patterns is the AI referencing? Do those patterns match what we've codified as best practice? Which LLM should be used? The maturity model must evolve and be assessed continuously. Otherwise, we're shooting in the dark. (A sketch of what such a gate could look like appears after this article.)

Psychological Safety And Performance In A Machine-Driven World

There's another piece to this puzzle: psychological safety. When we're using AI, safety is about trust in systems. We need to build environments where developers feel safe questioning AI outputs, rejecting them when necessary and adding human judgment. Blind faith in GenAI is just as dangerous as blind rejection. At the same time, we need to hold teams accountable for performance and outcomes. The tools may change, but excellence still requires clarity, consistency and commitment.

What Good Looks Like

So, what does success look like? From our experience, it includes:

• Less rework
• Fewer defects
• Lower tech debt
• Faster and more efficient onboarding, even for junior engineers
• Enhanced developer productivity and satisfaction

In the example I shared earlier, we saw measurable gains using GenAI: faster delivery, broader developer capacity and successful outcomes even when teams were new to the tech stack. But those benefits only came after we added extra oversight to ensure architectural compliance and secure development practices. Over time, that governance load decreased because the cultural foundation was strong. That's the path forward: short-term governance for long-term gain.

Shape Or Be Shaped

The real test of GenAI is cultural. Tools will continue to evolve. But if we fail to adapt our engineering practices and mindsets, those tools will define our future for us. The future is about moving with purpose. If we can redefine our maturity models, enforce meaningful guardrails and keep engineering excellence at the center, AI will be a powerful ally. If we don't, it will become a force we no longer control. And by then, it might be too late.
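To ground the trust-framework idea above in something concrete, here is a minimal sketch of a merge gate for AI-assisted changes. Every field name, the trust score, and the 0.8 threshold are hypothetical illustrations; a real framework would be derived from the maturity model and architectural patterns the author describes.

```python
# Hypothetical sketch of a "trust framework" gate for AI-assisted changes.
# All fields and thresholds are illustrative assumptions, not a real policy.
from dataclasses import dataclass

@dataclass
class ChangeReview:
    ai_generated: bool           # was the change produced with GenAI assistance?
    human_reviewed: bool         # did a person actually review the diff?
    security_scan_passed: bool   # static analysis / dependency scan outcome
    matches_arch_patterns: bool  # conforms to the codified architecture patterns
    trust_score: float           # 0.0-1.0, from whatever scoring the team codifies

TRUST_THRESHOLD = 0.8  # hypothetical cut-off; tune to the team's risk appetite

def may_merge(review: ChangeReview) -> bool:
    """Gate merges on human oversight, scans, and (for AI code) a trust score."""
    baseline = review.human_reviewed and review.security_scan_passed
    if not review.ai_generated:
        return baseline
    return (
        baseline
        and review.matches_arch_patterns
        and review.trust_score >= TRUST_THRESHOLD
    )

# Example: a Copilot-drafted change that skipped design review is rejected,
# no matter how high its trust score.
change = ChangeReview(
    ai_generated=True,
    human_reviewed=False,
    security_scan_passed=True,
    matches_arch_patterns=True,
    trust_score=0.9,
)
assert may_merge(change) is False
```

The design point is the asymmetry: AI-assisted changes clear every gate a human change does, plus explicit checks for architectural alignment and model trust.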
Yahoo
8 hours ago
AI proliferation in healthcare shines light on HIPAA shortcomings
The use of artificial intelligence (AI) and generative AI (GenAI) in the healthcare space is skyrocketing. GlobalData analysis reveals that the AI market in healthcare is projected to reach a valuation of around $19bn by 2027. While the White House recently unveiled plans to 'remove barriers to American leadership' with an AI action plan, for now, entrants into the healthcare space providing AI tools to healthcare providers (HCPs) must comply with the US's Health Insurance Portability and Accountability Act (HIPAA), a regulation from 1996 that outlines rules around protecting patient healthcare data.

Aaron T. Maguregui, partner at law firm Foley & Lardner, told Medical Device Network: 'HIPAA was intended to scale with time and with technology. What I don't think HIPAA ever contemplated was the fact that AI would be able to essentially take in data from multiple sources, match it together, and create the potential for the reidentification of data that was never intended to be used for reidentification.'

Technology has far outpaced regulation, and while Maguregui does not view HIPAA as being incompatible 'in and of itself', he states that it needs updating to account for the growing technology and compute power that exists, and for how data is now being used to train AI.

'An AI vendor that provides a service to an HCP that is regulated by HIPAA is a subcontractor, and their role in healthcare is very regulated, and this becomes a somewhat limiting force for AI vendors trying to innovate and move the needle with their product, because their permitted usage and disclosures of the data as regulated by HIPAA is very restrictive,' Maguregui explained. 'It's restricted to the services that the vendor has agreed to provide, so any additional innovation, including, for example, additional training provisions the vendor may need, usually requires the HCP's, and sometimes patients', consent.'

Navigating HIPAA for HCPs and vendors

Maguregui advises clients to start with a privacy impact assessment and bake in data governance from day one. 'On the provider side, it's important to know the types of data you have, who you're sharing data with, and what your responsibilities with respect to that data are,' Maguregui said. 'With virtual health exploding, and clinical intake going virtual, there are chatbots and workflows that are collecting data and information almost constantly, and it is important to understand whether information is regulated by HIPAA or by state law.'

An awareness of these factors is especially important for HCPs that want to engage an AI vendor: the provider must be able to tell the vendor which regulations it needs to comply with, because the vendor inherits the same obligations.

Maguregui continued: 'In some cases, from an AI vendor's perspective, this may seem a bit unfair, because they have to rely on another party's assertion that they are complying with all of the laws they are required to comply with. The vendor then has to figure out whether they can comply with the relevant regulation and provide their service in compliance with the law and legally use the data at hand for purposes that are going to make their product better.'

The direction of HIPAA regulation

According to Maguregui, if the US cannot get on board with a single federal privacy law, then HIPAA should be expanded to cover the other entities that interact with health information.
'We have a desegregated regime in the US where the Federal Trade Commission (FTC) tries to regulate when HIPAA does not regulate, and that leads to more confusion and results in uncertainty for vendors and HCPs alike in understanding what their roles and obligations are,' Maguregui said.

'My wish for HIPAA would be to expand and update it, to understand where technology has gone, where compute has gone, and to improve the ability for innovation, the ability for vendors to have better access to data that will help them create better products, and to ultimately improve the patient and provider experience, and healthcare overall.'

"AI proliferation in healthcare shines light on HIPAA shortcomings" was originally created and published by Medical Device Network, a GlobalData owned brand.
Yahoo
9 hours ago
Federal Reserve economists aren't sold that AI will actually make workers more productive, saying it could be a one-off invention like the light bulb
A new Federal Reserve Board staff paper concludes that generative artificial intelligence (genAI) holds significant promise for boosting U.S. productivity, but cautions that its widespread economic impact will depend on how quickly and thoroughly firms integrate the technology.

Titled 'Generative AI at the Crossroads: Light Bulb, Dynamo, or Microscope?', the paper, authored by Martin Neil Baily, David M. Byrne, Aidan T. Kane, and Paul E. Soto, explores whether genAI represents a fleeting innovation or a groundbreaking force akin to past general-purpose technologies (GPTs) such as electricity and the internet. The Fed economists ultimately conclude their 'modal forecast is for a noteworthy contribution of genAI to the level of labor productivity,' but caution that they see a wide range of plausible outcomes, both in terms of its total contribution to making workers more productive and how quickly that could happen.

To return to the light-bulb metaphor, they write that 'some inventions, such as the light bulb, temporarily raise productivity growth as adoption spreads, but the effect fades when the market is saturated; that is, the level of output per hour is permanently higher but the growth rate is not.' Here's why they regard it as an open question whether genAI may end up being a fancy tech version of the light bulb.

GenAI: a tool and a catalyst

According to the authors, genAI combines traits of GPTs (those that trigger cascades of innovation across sectors and continue improving over time) with features of 'inventions of methods of invention' (IMIs), which make research and development (R&D) more efficient. The authors do see potential for genAI to be a GPT like the electric dynamo, which continually sparked new business models and efficiencies, or an IMI like the compound microscope, which revolutionized scientific discovery.

The Fed economists did caution that it is early in the technology's development, writing that 'the case that generative AI is a general-purpose technology is compelling, supported by the impressive record of knock-on innovation and ongoing core innovation.' Since OpenAI launched ChatGPT in late 2022, the authors said, genAI has demonstrated remarkable capabilities, from matching human performance on complex tasks to transforming frontline work in writing, coding, and customer service. That said, the authors said they're finding scant evidence about how many companies are actually using the technology.

Limited but growing adoption

Despite such promise, the paper stresses that most gains are so far concentrated in large corporations and digital-native industries. Surveys indicate high genAI adoption among big firms and technology-centric sectors, while small businesses and other functions lag behind. Data from job postings shows only modest growth in demand for explicit AI skills since 2017.

'The main hurdle is diffusion,' the authors write, referring to the process by which a new technology is integrated into widespread use. They note that typical productivity booms from GPTs like computers and electricity took decades to unfold as businesses restructured, invested, and developed complementary innovations. 'The share of jobs requiring AI skills is low and has moved up only modestly, suggesting that firms are taking a cautious approach,' they write. 'The ultimate test of whether genAI is a GPT will be the profitability of genAI use at scale in a business environment and such stories are hard to come by at present.'
They note that many individuals are using the technology, 'perhaps unbeknownst to their employers,' and they speculate that future use of the technology may become so routine and 'unremarkable' that companies and workers no longer know how much it's being used.

Knock-on and complementary technologies

The report details how genAI is already driving a wave of product and process innovation. In healthcare, AI-powered tools draft medical notes and assist with radiology. Finance firms use genAI for compliance, underwriting, and portfolio management. The energy sector uses it to optimize grid operations, and information technology is seeing multiple uses, with programmers using GitHub Copilot completing tasks 56% faster. Call center operators using conversational AI saw a 14% productivity boost as well.

Meanwhile, ongoing advances in hardware, notably rapid improvements in the chips known as graphics processing units, or GPUs, suggest genAI's underlying engine is still accelerating. Patent filings related to AI technologies have surged since 2018, coinciding with the rise of the Transformer architecture, a backbone of today's large language models.

'Green shoots' in research and development

The paper also finds genAI increasingly acting as an IMI, enhancing observation, analysis, communication, and organization in scientific research. Scientists now use genAI to analyze data, draft research papers, and even automate parts of the discovery process, though questions remain about the quality and originality of AI-generated output. The authors highlight growing references to AI in R&D initiatives, both in patent data and corporate earnings calls, as further evidence that genAI is gaining a foothold in the innovation ecosystem.

Cautious optimism, and open questions

While the prospects for a genAI-driven productivity surge are promising, the authors warn against expecting overnight transformation. The process will require significant complementary investments, organizational change, and reliable access to computational and electric power infrastructure. They also emphasize the risks of investing blindly in speculative trends, a lesson from past tech booms.

'GenAI's contribution to productivity growth will depend on the speed with which that level is attained, and historically, the process for integrating revolutionary technologies into the economy is a protracted one,' the report concludes.

Despite these uncertainties, the authors believe genAI's dual role, as a transformative platform and as a method for accelerating invention, bodes well for long-term economic growth if barriers to widespread adoption can be overcome. Still, what if it's just another light bulb?

For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing.
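To make the paper's light bulb versus dynamo distinction concrete, here is a toy calculation, with entirely made-up numbers, contrasting a one-off level gain with a persistent increase in the growth rate:

```python
# Toy illustration of the Fed paper's distinction: a "light bulb" raises the
# level of output per hour once, while a "dynamo" raises the growth rate.
# All numbers are invented for illustration; none come from the paper.
BASE_GROWTH = 0.015   # 1.5% baseline annual productivity growth
LEVEL_SHIFT = 0.10    # light bulb: one-time 10% level gain as adoption saturates
EXTRA_GROWTH = 0.005  # dynamo: +0.5 percentage points of growth every year

def productivity_path(years: int, extra_growth: float = 0.0,
                      level_shift: float = 0.0) -> list[float]:
    """Index of output per hour (starting at 1.0) under a given scenario."""
    level, path = 1.0, []
    for year in range(1, years + 1):
        level *= 1 + BASE_GROWTH + extra_growth
        if year == 1:
            level *= 1 + level_shift  # one-off gain, applied once
        path.append(level)
    return path

light_bulb = productivity_path(20, level_shift=LEVEL_SHIFT)
dynamo = productivity_path(20, extra_growth=EXTRA_GROWTH)

# The light bulb path is permanently higher but grows at the old rate;
# the compounding dynamo path overtakes it (here, by about year 20).
print(f"Year 20 index: light bulb {light_bulb[-1]:.2f}, dynamo {dynamo[-1]:.2f}")
```

Run for 20 years, the light bulb scenario ends near 1.48 and the dynamo near 1.49, and the gap then widens every year, which is why the authors care so much about whether genAI keeps compounding rather than delivering a single level shift.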