Judith S. Shaw, EdD, MPH, RN, FAAP, Executive Director, Vermont Child Health Improvement Program; Editor, Bright Futures; National Improvement Partnership Network (NIPN)


Transcription

1 Judith S. Shaw, EdD, MPH, RN, FAAP
Executive Director, Vermont Child Health Improvement Program
Associate Professor of Pediatrics, UVM College of Medicine
Editor, Bright Futures
National Improvement Partnership Network (NIPN)
MCH-EPI Pre-conference Training, December 11, 2012, San Antonio

Session Objectives
Data for improvement vs. accountability or research
Share examples of visual data displays from states
Recognize different ways of visually communicating data to engage and activate specific audiences
Discuss the advantages and limitations of selected data displays

2 Tools for Improvement 1

Category | Tool | Use of Tool
Viewing Systems and Processes | Flow Diagram | Develop a picture of a process
Gathering Information | Data Collection Form; Operational Definition | Data collection plan and how it is applied
Organizing Information | Affinity Diagram; Cause and Effect Diagram; Tree Diagram; Interrelationship Diagram | Summarize information, current knowledge, and visual structure
Understanding Variation | Run Chart; Pareto Chart; Frequency Plot; Shewhart Chart | Study variation over time; focus on areas of improvement with greatest impact; distinguish between special and common cause variation
Understanding Relationships | Scatter Plot | Analyze associations or relationships between two variables
Project Management | Gantt Chart; PERT Chart | Organize and display sequential relationships of project tasks

1 Provost, Lloyd, and Sandra Murray. The Health Care Data Guide: Learning from Data for Improvement. San Francisco: Jossey-Bass, 2011.

Data for Improvement
Typically collected to:
Observe process performance
Obtain ideas
Test changes
Determine sustainability
Collecting data for improvement usually relies on data that are already available or easy to obtain.

3 Data for Accountability
Typically collected to:
Evaluate or judge the performance of a group or organization
For an external customer
Summarizes information (percentiles) and is often compared to similar organizations (benchmarking)
Not viewed over time

Data for Improvement, Accountability, or Research 1
Aspect | Improvement | Judgment or Accountability | Research
Aim/Purpose | Improvement of process, system, and outcomes | Judgment, choice, reassurance, to create urgency for change | To generate new generalizable knowledge
Test observability | Test observable | No test, evaluate current performance | Test blinded
Bias | Accept stable bias | Measure and adjust to reduce bias | Design to eliminate bias

1 Provost, Lloyd, and Sandra Murray. The Health Care Data Guide: Learning from Data for Improvement. San Francisco: Jossey-Bass, 2011.

4 Data for Improvement, Accountability, or Research 1
Aspect | Improvement | Judgment or Accountability | Research
Sample size | Just enough data; small, sequential samples | Obtain 100% of relevant and available data | "Just in case" data
Flexibility of hypothesis | Very flexible; changes as learning takes place | No hypothesis | Fixed hypothesis
Testing strategy | Sequential tests | No tests | One large test
Determining if a change is an improvement | Run charts or statistical process control charts | No focus on change | Hypothesis tests (t-test, chi-square, etc.), focus on p-value

1 Provost, Lloyd, and Sandra Murray. The Health Care Data Guide: Learning from Data for Improvement. San Francisco: Jossey-Bass, 2011.

Challenges in Moving from Accountability/Research to Improvement
Many assume you need the highest level of rigor
If not of research quality, the data may be discounted or dismissed
Data collected for improvement do not meet the standards for accountability and thus do not meet the organization's needs (the same data can rarely be used for both purposes)
People may be suspicious of your reasons for data collection; improvement data should not be used for accountability
Data for accountability may reveal deficiencies, but will not determine what to change in current processes to achieve improvement.

5 Types of Data
Quantitative (preferred for improvement)
Qualitative may be used when:
Quantitative data are difficult or expensive to obtain
The information is dramatic enough that qualitative data are sufficient to meet all needs
Observations often best describe the phenomena of interest

Relationship of Nature of Data to Improvement Terms 1
Nature of Data | Continuous | Attribute
Quantitative | Continuous | Count
Qualitative | - | Classification

1 Provost, Lloyd, and Sandra Murray. The Health Care Data Guide: Learning from Data for Improvement. San Francisco: Jossey-Bass, 2011.

6 Example
A team is working to improve wait times in a clinic. The goal is for 75% of patients to wait less than 30 minutes.
Outcome measure: % of patients waiting < 30 min
Changes: many
The perception was that patients were happier and patient satisfaction had improved. 1

Attribute data: collect and document each week the count of patients who wait less than 30 minutes.
[Run chart: % of patients waiting < 30 min by week, with goal line at 75%]

1 Provost, Lloyd, and Sandra Murray. The Health Care Data Guide: Learning from Data for Improvement. San Francisco: Jossey-Bass, 2011.

7 What else might they look at?
Continuous data: collect and document each week the average wait time.
[Run chart: average patient waiting time in minutes by week]
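To make the clinic example concrete, here is a minimal sketch (in Python with matplotlib, which the slides do not reference) of computing and plotting both weekly measures as run charts. The per-patient wait times are invented for illustration; only the two measures and the 75% goal come from the slides.

```python
# Minimal sketch of the clinic wait-time example using hypothetical data.
import matplotlib.pyplot as plt

# Hypothetical per-patient wait times (minutes), one list per week
weekly_waits = [
    [42, 28, 55, 31, 22, 48],   # week 1
    [35, 27, 29, 51, 24, 33],   # week 2
    [26, 31, 22, 44, 29, 27],   # week 3
    [25, 21, 33, 28, 19, 24],   # week 4
]

weeks = list(range(1, len(weekly_waits) + 1))
# Attribute measure: percent of patients waiting < 30 minutes each week
pct_under_30 = [100 * sum(w < 30 for w in wk) / len(wk) for wk in weekly_waits]
# Continuous measure: average wait time each week
avg_wait = [sum(wk) / len(wk) for wk in weekly_waits]

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(weeks, pct_under_30, marker="o")
ax1.axhline(75, linestyle="--", label="Goal = 75%")
ax1.set_ylabel("% waiting < 30 min")
ax1.legend()

ax2.plot(weeks, avg_wait, marker="o")
ax2.set_ylabel("Average wait (min)")
ax2.set_xlabel("Week")
plt.show()
```

The weekly percentage is the attribute (count-based) measure and the weekly average is the continuous measure, mirroring the distinction drawn on the earlier "Nature of Data" slide.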

8 Annotated Run Chart: flu shot uptake in pediatric CF patients
[Annotated run chart: % of patients known to be immunized by week, goal = 95%. Annotations mark the changes tested: flu shots arrive; MDs and nurses reminded to give flu shots, clinic nurses screen patients, and a flu shot packet is attached to the clinic chart; letter sent to CF families about the flu epidemic asking them to call the office with flu shot information; list generated of patients with unknown flu shot status, office staff call 60 families and get information on 43.]

Outcome, Process, and Balancing Measures 1
Type of Measure | Description | Example (perioperative)
Outcome | The voice of the customer or patient. How is the system performing? What is the result? | % patients harmed; % unplanned returns to the OR; % surgical readmissions
Process | The voice of the workings of the process. Logically linked to obtaining the outcomes. Addresses how key parts or steps of the system are performing. | % patients with on-time antibiotics; % with DVT prophylaxis; % with appropriate beta blocker use
Balancing | Looks at the system from different directions or dimensions. What happened to the system as we improved the outcome and process measures? Could relate to unintended consequences or competing explanations for project success. | Volume of surgical workload; % of prophylactic antibiotics appropriately discontinued

1 Provost, Lloyd, and Sandra Murray. The Health Care Data Guide: Learning from Data for Improvement. San Francisco: Jossey-Bass, 2011.
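An annotated run chart like the flu-shot example can be sketched the same way, by marking the week each change was introduced. In the rough illustration below, the weeks and percentages are hypothetical; only the kinds of changes and the 95% goal come from the slide.

```python
# Sketch of an annotated run chart: weekly % of CF patients known to be
# immunized, with each tested change labeled. All values are made up.
import matplotlib.pyplot as plt

weeks = list(range(1, 13))
pct_immunized = [20, 22, 30, 38, 45, 52, 60, 66, 74, 80, 86, 90]  # hypothetical
changes = {
    2: "Flu shots arrive",
    4: "MD/nurse reminders; packet on chart",
    7: "Letter to CF families",
    9: "Call list for unknown status",
}

fig, ax = plt.subplots()
ax.plot(weeks, pct_immunized, marker="o")
ax.axhline(95, linestyle="--", label="Goal = 95%")
for wk, label in changes.items():
    # Annotate the point for the week the change was introduced
    ax.annotate(label, xy=(wk, pct_immunized[wk - 1]),
                xytext=(wk, pct_immunized[wk - 1] + 8),
                arrowprops=dict(arrowstyle="->"), fontsize=8)
ax.set_xlabel("Week")
ax.set_ylabel("% known to be immunized")
ax.legend()
plt.show()
```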

9 NIPN States
Reported on their data collection activities:
Conducting qualitative assessments
Data for quality improvement
Customizing data reports for specific (non-practice) target audiences

Qualitative Assessments
Process-oriented
Detail the story behind the numbers
Obtain a pulse on a practice's progress
Identify barriers/challenges
Share successes and lessons learned
Assess team functioning
Promote senior leadership support for change

10 VCHIP

11 Data for Quality Improvement
Data collection process
Understand variation through run charts
Using dashboards
Compare practices to one another
Identify and measure progress towards target goals
Identifying vs. de-identifying data
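As a rough illustration of the dashboard idea, the sketch below assembles a small de-identified comparison of practices against a target rate. The practice names, rates, and the 80% target are all hypothetical; the slides do not specify how any state built its dashboards.

```python
# Hypothetical sketch of a de-identified practice dashboard using pandas.
import pandas as pd

raw = pd.DataFrame({
    "practice": ["Green Mountain Peds", "Lakeside Family Med", "Riverside Clinic"],
    "baseline_rate": [0.42, 0.55, 0.38],
    "current_rate": [0.71, 0.83, 0.66],
})

target = 0.80  # hypothetical target rate
dashboard = raw.copy()
# De-identify: replace practice names with code letters so practices can
# compare themselves to peers without being singled out
dashboard["practice"] = [f"Practice {c}" for c in "ABC"]
dashboard["change"] = dashboard["current_rate"] - dashboard["baseline_rate"]
dashboard["met_target"] = dashboard["current_rate"] >= target

print(dashboard.to_string(index=False))
```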

12

13

14 Reporting Findings to Key Stakeholders
Start where you want to end
Ask them for input at each point
Strength-based approach to data reporting
Consider the needs of the audience and anchor the information so it is relevant to them
What do they worry about? What can they leverage?

[Four-panel figure: Changes in 9, 18, 24, and 30 Month Screening Rates by Practice. Each panel shows pre and post screening rates (0-100%) for practices with a goal and practices with no goal.]
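One way a single pre/post panel like those above could be drawn is sketched below for one screening age; the practices and rates are hypothetical.

```python
# Sketch of one panel of the pre/post screening-rate display: paired bars
# per practice for a single screening age. All rates are made up.
import numpy as np
import matplotlib.pyplot as plt

practices = ["A", "B", "C", "D"]
pre = [0.35, 0.50, 0.42, 0.60]    # hypothetical pre-intervention rates
post = [0.72, 0.88, 0.65, 0.90]   # hypothetical post-intervention rates

x = np.arange(len(practices))
width = 0.35
fig, ax = plt.subplots()
ax.bar(x - width / 2, pre, width, label="Pre")
ax.bar(x + width / 2, post, width, label="Post")
ax.set_xticks(x)
ax.set_xticklabels(practices)
ax.set_ylim(0, 1)
ax.set_xlabel("Practice")
ax.set_ylabel("9-month screening rate")
ax.legend()
plt.show()
```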

15