Five Myths about Measuring Training's Impact


1 Five Myths about Measuring Training's Impact: Why They'll Get You into Trouble. Shawn Overcast, Managing Consultant, CEB Metrics That Matter

2 The World's Leading Learning Analytics Software
Built-In Expertise: key metrics, SmartSheet evaluations, optimization roadmap
Learning Impact Benchmarks: >1 billion data points, integrated in reporting, filterable
Automated Insights: scrap learning, business results, ROI

3 Who We Are. CEB is the world's leading member-based advisory company. We have a unique view into what matters and what works when capitalizing on drivers of business performance. With 30 years of experience working with top companies to share, analyze, and apply proven practices, we begin with great outcomes and reverse engineer to help you unlock your full potential. As a result, our members achieve outsized returns by more effectively optimizing talent investments, creating new sources of efficiency, reducing risk, and enabling and accelerating growth.
30+ Years of Experience; 110+ Countries Represented; 6,000+ Participating Organizations; 300,000+ Business Professionals; 90% of the Fortune 500; % of the FTSE; % of the Dow Jones Asian Titans
Best Practices & Decision Support; Talent Management Leadership Councils; Market Insights; Talent Acquisition; Talent Development; Talent Strategy & Analytics

4 Five Myths: Insights & Tips to Counter
1. Level 1 Evaluation Results Provide Valuable Information
2. Self-Reported Data are Biased and Lack Value
3. Statistical Significance Testing is Necessary
4. Customization is More Valuable than Standardization
5. Evaluation Begins After Training is Deployed

5 Five Myths: Insights & Tips to Counter #1: Level 1 Evaluation Results Provide Valuable Information

6 ACTION: Level 1 is No Longer Level 1. Change Your Paradigm: an evaluation can do more than Level 1.
Smile Sheet: If you are going to bother the learners by asking them about their reactions to training, then ask them the really important questions. Ask more than Level 1.
Smart Sheet: Did you learn new knowledge and skills? Will you be able to apply them? Will your performance improve due to training? Will you have managerial support? Will your improvement impact the performance of the organization?

7 ACTION: Level 1 is No Longer Level 1. Change Your Paradigm: an evaluation can do more than Level 1.
Reaction: Change the purpose. Don't just confirm that they liked the training.
Prediction: Use the data provided by learners to predict performance improvement, as sketched below. "I did learn. I will apply. I will improve. So will the company."
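The prediction idea lends itself to a simple model: treat Smart Sheet ratings as predictors of self-reported performance improvement. A minimal sketch follows, assuming illustrative question ratings and improvement figures; this is not CEB's actual Metrics That Matter model.

```python
# A minimal sketch (illustrative data, not CEB's actual model): use Smart Sheet
# ratings to predict self-reported performance improvement via least squares.
import numpy as np

# Each row: one learner's 5-point ratings for
# [learned new skills, will apply, manager support, expected org impact]
X = np.array([
    [5, 5, 4, 4],
    [4, 4, 3, 3],
    [3, 2, 2, 2],
    [5, 4, 5, 4],
    [2, 2, 1, 2],
    [4, 5, 4, 5],
    [3, 3, 3, 3],
], dtype=float)

# Self-reported % performance improvement from the same learners
y = np.array([40.0, 30.0, 10.0, 35.0, 5.0, 38.0, 20.0])

# Add an intercept column and fit ordinary least squares
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Predict improvement for a new learner's Smart Sheet responses
new_learner = np.array([1.0, 4, 5, 4, 3])  # leading 1.0 is the intercept term
print(f"Predicted improvement: {new_learner @ coef:.1f}%")
```

In practice the coefficients would be estimated on a large historical evaluation set and used to flag courses whose predicted improvement falls below a benchmark.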

8 Five Myths: Insights & Tips to Counter #2: Self-Reported Data are Biased and Lack Value. "It's just someone's opinion." "My boss needs highly accurate statistical results, not opinion data." "Where is the scientific rigor?"

9 Various Levels of Value
Anecdotal. Value: rich qualitative information; details about value. Drawbacks: not easy to summarize; not statistically reliable.
Self-report. Value: quantitative and qualitative; easy to gather on surveys; can standardize to allow analysis across courses; can create predictive models. Drawbacks: requires rigor to develop valid and reliable instruments; opinion-based and biased.
Business/operations data. Value: non-biased; shows actual business performance; outcome measures specific to individual courses/programs. Drawbacks: difficult to extract from systems; requires a business analyst; takes months/years to observe the outcome; inconsistent data quality, accuracy, and comprehensiveness.

10 ACTION: Crowds and Rigor
Rely on the Wisdom of Crowds: people who know their roles and their skills provide accurate estimates of how much their performance will improve.
Trust but verify: whether you build your own data collection instruments or use those created by others, apply appropriate rigor to determine if the survey tools are valid and reliable (see the sketch below).
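"Trust but verify" can be made concrete with a basic reliability check. The sketch below computes Cronbach's alpha for a small set of survey items; the item scores and the common alpha >= 0.7 rule of thumb are illustrative assumptions rather than a prescribed CEB procedure.

```python
# A rough sketch of "trust but verify": Cronbach's alpha as a basic internal
# consistency check on a set of survey items. Scores are made up.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
], dtype=float)

alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha: {alpha:.2f}")  # investigate the instrument if this is low
```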

11 Five Myths: Insights & Tips to Counter #3: Statistical Significance Testing is Necessary. "Is there a significant relationship or not?" "If it's not significant, it doesn't matter." "Burden of proof?"

12 What is Statistical Significance? Statistical significance testing is a way of determining if the observed data occurred by chance. Are there differences between groups (e.g., learner performance improvement under low vs. high manager support)? Is there a relationship between two or more variables (e.g., a regression of learner performance improvement on scrap learning, y = mx + b + e)?
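For readers who want to see the two questions above as concrete tests, here is a minimal sketch using SciPy: an independent-samples t-test for the group difference and a simple linear regression (y = mx + b + e) for the relationship between two variables. All numbers are made up for illustration.

```python
# A minimal sketch of the two significance questions, with made-up data:
# 1) group difference (low vs. high manager support) via an independent t-test,
# 2) relationship between two variables via simple linear regression.
from scipy import stats

low_support = [12, 18, 15, 10, 14, 16]    # % improvement, low manager support
high_support = [25, 30, 22, 28, 27, 24]   # % improvement, high manager support
t_stat, p_value = stats.ttest_ind(high_support, low_support)
print(f"Group difference: t = {t_stat:.2f}, p = {p_value:.4f}")

scrap_learning = [10, 20, 30, 40, 50, 60]   # % of training not applied (illustrative)
improvement = [32, 28, 22, 18, 12, 8]       # % performance improvement
fit = stats.linregress(scrap_learning, improvement)
print(f"Relationship: slope m = {fit.slope:.2f}, "
      f"intercept b = {fit.intercept:.2f}, p = {fit.pvalue:.4f}")
```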

13 ACTION: Balance Statistical and Practical Significance
- Determine for your teams and stakeholders what differences are meaningful: 5%, 10%, 15%? What difference is practical and would spur action?
- Be cautious: large data sets can create false positives (small differences come out significant) and small data sets can create false negatives (large differences that are not significant), as the sketch below illustrates.
- The CEO wants to know: "What is the impact of training?"
- Educate the C-suite that timely, valid, reliable Smart Sheet data are just as valuable as statistically significant results from impact studies.
- Impact studies: time consuming, costly, and resource intensive. Scalable Smart Sheets: reliable, timely indicators of program quality.
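The caution about false positives and false negatives is easy to demonstrate with simulated data: a trivially small difference tests as significant when the sample is huge, while a practically large difference can fail the test when the sample is tiny. A rough sketch, assuming simulated normal scores:

```python
# Demonstration of the sample-size caution with simulated scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Large n, practically meaningless difference (70.0 vs. 70.5 points)
big_a = rng.normal(70.0, 10.0, 50_000)
big_b = rng.normal(70.5, 10.0, 50_000)
print(f"large n, tiny effect:  p = {stats.ttest_ind(big_a, big_b).pvalue:.4f}")

# Small n, practically large difference (70 vs. 80 points); may not reach significance
small_a = rng.normal(70.0, 15.0, 6)
small_b = rng.normal(80.0, 15.0, 6)
print(f"small n, large effect: p = {stats.ttest_ind(small_a, small_b).pvalue:.4f}")
```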

14 Five Myths: Insights & Tips to Counter #4: Customization is More Valuable than Standardization. Content varies per course, so evaluation questions should vary as well. Custom questions provide deep insights for stakeholders. Stakeholders often demand questions about certain aspects of the course.

15 ACTION: Incorporate Standards
Standardization is the use of a core set of questions for every course. Some variations are necessary for learning methodology (e.g., ILT, SPWB, Virtual ILT) and possibly business unit.
Benefits: simplicity in design, which reduces time and effort; allows for benchmarking and comparison of results across courses.
Governance: reduce or eliminate the library of evaluation forms; prevent the deletion of core questions; allow for additional questions in an "Additional Questions" section on a limited basis (e.g., only for strategic, visible, and costly programs), as sketched below.
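One way to enforce these governance rules is to encode the core question set in the evaluation tooling so it cannot be deleted, with additional questions permitted only for flagged strategic programs. The function name and question wording below are hypothetical, shown only to illustrate the idea:

```python
# A hypothetical sketch of evaluation-form governance: a locked core question
# set plus an "Additional Questions" section only for strategic programs.
CORE_QUESTIONS = (
    "I learned new knowledge and skills.",
    "I will be able to apply what I learned.",
    "My performance will improve as a result of this training.",
    "I will have my manager's support in applying this training.",
)

def build_evaluation(course: str, strategic: bool = False,
                     additional: tuple = ()) -> list:
    """Return the question list for a course; core questions cannot be removed."""
    questions = list(CORE_QUESTIONS)
    if strategic:
        questions.extend(additional)   # extra questions only for strategic programs
    return questions

print(build_evaluation("New Manager Essentials", strategic=True,
                       additional=("The case studies reflected my day-to-day work.",)))
```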

16 Five Myths: Insights & Tips to Counter #5: Evaluation Begins After Training is Deployed

17 ACTION: Ongoing Evaluation. Planning is essential to good evaluation:
1. When forecasting the value of a training program before it is designed (a simple forecast sketch follows this list)
2. When selecting training methods and tools that will maximize learning while minimizing costs
3. While demonstrating the effectiveness of training after it has been deployed
4. While making decisions about future versions of the program, especially in comparison to other programs in the curriculum
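Item 1, forecasting value before design, can be as simple as the standard ROI formula applied to estimated figures. The sketch below discounts the expected benefit by an assumed scrap-learning rate; every number, and the discounting choice itself, is an illustrative assumption.

```python
# A minimal ROI forecast sketch: standard ROI formula on estimated figures,
# with the expected benefit discounted by an assumed scrap-learning rate.
expected_benefit = 250_000.0   # estimated monetary benefit if all learning is applied
program_cost = 100_000.0       # estimated design, delivery, and learner-time cost
scrap_learning_rate = 0.45     # assumed share of training that is never applied

applied_benefit = expected_benefit * (1 - scrap_learning_rate)
roi_percent = (applied_benefit - program_cost) / program_cost * 100
print(f"Forecast ROI: {roi_percent:.0f}%")   # (137,500 - 100,000) / 100,000 = 37.5%
```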

18 Key Takeaways

19 Thank You. Shawn Overcast, Managing Consultant, CEB. Danny Brown, Account Manager, CEB.