Measuring Performance with Objective Evaluations


PERFORMANCE MANAGEMENT FOR HIGH-PERFORMANCE CULTURES
PART 3 of 5: Measuring Performance with Objective Evaluations

TABLE OF CONTENTS

I. CREATING A CULTURE OF HIGH PERFORMANCE
- Decouple performance development & performance measurement
- Build growth & learning into your company's culture

II. ACCELERATING PERFORMANCE WITH EMPLOYEE-OWNED DEVELOPMENT
- Empower employees with feedback
- Create a framework for manager check-ins
- Allow for agile goals management
- Facilitate employee-driven career planning

III. MEASURING PERFORMANCE WITH OBJECTIVE EVALUATIONS
- Reframe your rubric to reduce manager bias
- Enable manager-requested peer feedback

IV. TEMPLATES AND RESOURCES
- Sample timeline for performance events
- Interactive templates for manager check-ins
- Sample objective evaluation rubric
- Performance Management Software checklist

V. HIGH-PERFORMANCE CULTURE SPOTLIGHTS

PART III: Measuring Performance with Objective Evaluations

In Part 2 of Performance Management for High-Performance Cultures, we detailed the framework for employee-owned development. However, companies should not throw away the performance review altogether. Most companies still believe in pay for performance and need an objective way to evaluate employees. In Part 3 of our series, we share how to create a performance measurement process optimized for reliability and objectivity.

Once companies have introduced a performance development system, it effectively separates the conversation about development from the conversation about evaluation. This improves the review experience for employees, who receive year-round data to drive improvement without worrying about how it may affect their compensation. However, HR teams still need a way to reliably evaluate employees for promotion and compensation decisions.

Don't be tempted to throw away the performance review altogether. Without a clear system in place, the process becomes a black box. Leadership on the People team at Facebook wrote an article for Harvard Business Review called "Let's Not Kill Performance Evaluations Just Yet." They write, "The reality is, even when companies get rid of performance evaluations, ratings still exist. Employees just can't see them. Ratings are done subjectively, behind the scenes, and without input from the people being evaluated." At Facebook, 87% of people wanted to keep performance evaluations. Instead of eliminating them, improve your performance measurement process to optimize for reliability.

Reframe your rubric to reduce manager bias

If you ask people to rate one another, you end up with a flawed system. In 2015, Harvard Business Review published an article titled "Reinventing Performance Management" by Marcus Buckingham and Ashley Goodall. The authors describe in detail how Deloitte built a new performance management system based on research in the science of ratings.

What they discovered is that the typical process of skills assessment produces unreliable data. The article cites extensive research published in the Journal of Applied Psychology, which concludes that ratings actually tell you more about the skills of the rater than of the ratee. These idiosyncratic rater effects lead to a system in which managers evaluate members of their team on skills relative to their own. So, for example, if a manager is meant to rate an employee on her project management skills, and she has weak project management skills herself, she'll rate the employee highly, with low correlation to the employee's actual performance.

Asking more people through peer or 360-degree surveys does not solve this issue. Remember, people are not reliable raters of others' skills. Instead, Deloitte found that people are highly consistent when rating their own feelings and intentions: "To see performance at the individual level, then, we will ask team leaders not about the skills of each team member but about their own future actions with respect to that person."

Here are a few sample questions to illustrate this approach:
- Given what I know of this person's performance, I would always want him or her on my team.
- I can always rely on this person to solve the most challenging issues.
- What qualities does this person demonstrate in this role?

Unfortunately, the idiosyncratic rater effect is not the only type of bias that creeps into performance reviews. Recency and availability bias play a large role in the outcomes of a performance evaluation, and these can in turn feed other types of bias, including gender and confirmation bias. It seems like a lot to tackle at once, but a few small adjustments can have a big impact in reducing manager bias:

1. Provide access to historical feedback

To start, make sure that reviewers have access to a large volume of historical feedback when evaluating performance. This way, when managers need to make a decision about an employee, they have historical data points in front of them, as well as a log of the employee's goals, check-ins, and feedback (see Part 2 of this series for more details on these). The best way to avoid recency bias is to conduct evaluations more than once per year. Consider having a review cycle twice per year (or more) to collect additional data and to shorten the evaluation period. This also meets the needs of today's fast-paced companies, where roles and goals can change often and substantially within a year.

2. Don't use a rating scale

Second, don't use a rating scale when evaluating employees. Rating scales can lead to something called Central Tendency Error: the well-documented tendency of reviewers, when using a rating scale, to rate everyone in the middle of the pack.

For example, on a 1-5 scale, most reviewers will pick a 3 or a 4. That average score may not be aligned with an employee's true performance; when the tendency is to rate everyone straight down the middle, an employee's evaluation may be distorted. Instead, encourage managers to submit Strong No, No, Yes, or Strong Yes answers to questions like those provided above, as these encourage managers to think about what actions they would take with employees. The result is far more accurate evaluation data.

* You can download a sample evaluation template in Part 4 of this series.

Enable manager-requested peer feedback

Peer feedback can be a critical component of the performance management process. This is especially true in cases where managers do not work closely enough with employees to fairly offer an assessment.

In traditional peer feedback systems, co-workers are asked to rate an employee's skills, but as the research above suggests, rating systems are not the best tool for evaluating an employee's performance. In every interview we've conducted, employees say they prefer peer-provided commentary over a rating score. Commentary gives employees insight into not only where, but how, they can improve.

Consider these steps for successful peer-provided feedback:
- Come up with a good set of questions that managers can use to solicit peer feedback.
- Allow managers to collect peer feedback at any time, not just at review time: for example, at the end of a project or prior to a monthly one-on-one.
- Encourage managers to seek feedback from a few employees on the success of the peer feedback session itself. This is helpful information for coaching purposes.

A manager does not have a full picture of an employee's work, and peer feedback is one way to collect input from those who can more accurately weigh in on someone's impact and contributions.

USE CALIBRATION SESSIONS

An extra step to ensure reliability is to hold manager calibration sessions once ratings have been submitted. Referring again to the wisdom of Laszlo Bock in Work Rules!: "It's fair to say that without calibration, our rating process would be less fair, trusted, and effective. I believe that calibration is the reason why Googlers were twice as favorable toward our rating system as people at other companies were to theirs."

At Google, before ratings are finalized, a group of five to ten managers meets to review fifty to a thousand employees, discussing individuals and landing on a fair rating together. Bock says that this not only relieves the pressure managers feel from employees to inflate ratings, but also produces a shared performance standard across groups; managers with different expectations of performance will lead to a biased process. Calibration also allows managers to hold one another accountable and to flag any other biases that may be affecting reviews. The ultimate goal, as it should be, is to be fair, and for employees to feel confident that promotions are based on merit alone.

To create an objective calibration process, keep these things in mind:
- Make sure bell curves aren't forced. Let the distribution happen organically.
- Don't rank people against one another.
- Managers should come to the calibration meeting with data (see Part 2). Without it, you risk ratings being based on a manager's ability to advocate for employees.
- This is not a popularity contest. Make sure the focus of the conversation is on the impact someone is having, not how often you've heard her name.

These small adjustments can mitigate manager bias in performance reviews, ensuring your performance decisions are rooted in reliable data.

Next, get started right away by downloading interactive resources in Part 4: Performance Management Tools & Templates, or learn specific examples of how companies have rebuilt their performance management in Part 5: High-Performance Culture Spotlights. Jump right to them from here: Part 1 | Part 2 | Part 3 | Part 4 | Part 5

13 ABOUT ZUGATA Zugata is Performance Management Software for high-performance cultures. It is the only solution that enables both performance evaluation and performance development, driving performance forward and impacting your company s bottom line. Our robust platform accelerates employee performance by using sophisticated algorithms to gather continuous feedback, enabling meaningful check-in conversations, and delivering personalized resources to help employees advance their skills. Over 1,000 companies like Lyft, Gusto, and Greenhouse use Zugata to accelerate employee performance. Join them and create a high-performance culture at your organization Sign up for a demo Empowering employees to reach their potential Copyright 2017 Zugata Inc.