Control Charts for Customer Satisfaction Surveys

Robert Kushler, Department of Mathematics and Statistics, Oakland University
Gary Radka, RDA Group

ABSTRACT

Periodic customer satisfaction surveys are used in many industries. Control charting principles can and should be applied when interpreting the results of such surveys, in order to detect changes over time and to avoid reacting to random variation. However, traditional control charts for count data are not adequate. Alternative probability models provide a more appropriate basis for constructing such charts. The issues involved are discussed, and an implementation using standard statistical software is described and illustrated using data collected in an automotive quality tracking study that measures customer reported Things Gone Wrong (TGW).

INTRODUCTION

It is customary in the automotive industry (and others) to obtain customer satisfaction data by regularly surveying a random sample of recent purchasers. The surveys typically include a checklist of potential problem items, and respondents are asked to mark any items where they have experienced a problem. One of the key measures from these surveys is the number of items checked as "things gone wrong" (TGWs) by the survey respondent. Manufacturers pay close attention to the resulting average number of complaints per customer (or, equivalently, the rate per 100 or per 1000 customers). Managers are interested in trends over time and comparisons with competitors, and have been known to have compensation packages tied to TGW levels.

Given the high profile of these customer reported data, and the finite resources available for analyzing them, it is important to ensure that valid and efficient methods of analysis are used. There is a strong analogy between a sequence of such surveys and a series of defect count samples in quality control, so it is natural to consider applying control charting concepts to the analysis of such survey results. In this article the issues involved are discussed, and procedures for analyzing such data are described and illustrated. Section 1 discusses some basic control charting concepts and issues, section 2 introduces some alternative probability models for count data, and section 3 describes the proposed control chart methodology.

1. CONTROL CHARTING ISSUES

Several practical issues arise whenever a control chart is being developed for a particular application. In addition, some special features of the situation considered here require modification of the standard methods. Some of the important general issues are discussed in this section.

Given the long time lag between samples in this situation, the goal of control charting is not the usual one of finding assignable causes and removing them. Instead, the purpose is to provide a rational basis for interpreting the results of new surveys as they are collected, so that both unwarranted complacency and unreasonable panic can be avoided.

In constructing and using control charts the main issue is the trade-off between two kinds of mistakes: (1) reacting to random, common cause variation when the process is actually stable (a false alarm), and (2) failing to react when a change (assignable cause) has occurred (sometimes referred to as a "beta error"). For the standard three sigma control chart, the false alarm rate is nominally 0.0027, based on the tail area of the normal distribution. The probability of a beta error depends on the magnitude of the effect of the assignable cause.

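These nominal rates come directly from normal tail areas and are easy to verify; a minimal sketch in R (one of the packages mentioned in section 2):

    # Nominal false alarm rates for three sigma limits under the normal distribution
    2 * pnorm(-3)  # both limits: 0.0027
    pnorm(-3)      # one limit only: 0.00135
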
Setting control limits that achieve the desired balance between the two types of mistakes requires a thorough understanding of the process, and traditional three sigma limits should not be used without considering whether they produce a reasonable balance in a given situation.

For defect count control charts (and many others, including R or s charts), there is a fundamental difference between out of control signals at the upper control limit and signals at the lower limit. From a practical perspective, points below the lower control limit indicate improvement, while points above the upper control limit indicate trouble. Thus it makes sense to consider the two directions separately, and possibly to use asymmetric control limits.

The standard control chart procedure for defect count data is a c chart if the actual count is used, or a u chart if the counts are adjusted for the size of the inspection unit (which in this case is the survey sample size).

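For reference, the traditional three sigma c chart limits are simple functions of the average count per inspection unit; a minimal sketch (the `counts` vector is illustrative):

    # Traditional three sigma c chart limits (Poisson assumption: variance = mean)
    cbar <- mean(counts)                   # counts = defects per inspection unit
    ucl  <- cbar + 3 * sqrt(cbar)
    lcl  <- max(0, cbar - 3 * sqrt(cbar))  # truncated at zero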

These charts are based on the assumption that the observed number of defects follows the Poisson distribution, which is often reasonable when the defects are rare events with a large area of opportunity for their occurrence. The traditional version of these charts actually requires an additional assumption: that the normal distribution provides an accurate approximation to the Poisson distribution, so that the actual false alarm rate will be close to the nominal rate. This assumption is often seriously in error. For example, a standard c chart with centerline at 3 has an actual false alarm rate (at the upper control limit) of 0.0038, almost three times the nominal rate of 0.00135; in other words, false alarms will occur almost three times as often as they are supposed to. Due to the discreteness of the Poisson distribution, small changes in the settings can cause large changes in the probabilities. If the centerline is at 2.89 instead of 3, the false alarm rate for the three sigma upper control limit rises to 0.0097!

Fortunately, reliance on the normal approximation is completely unnecessary. Rather than using the standard three sigma control limit formula, limits for a c chart should be determined from the actual tail area of the Poisson distribution. This ensures that the actual false alarm rate is understood, and it also clarifies the trade-offs required when dealing with a discrete distribution (in the previous example, the large difference in the false alarm rate arises from whether the occurrence of 8 defects is treated as an out of control signal or not).

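The exact tail areas quoted above are easy to reproduce; a minimal sketch:

    # Actual false alarm rates at the three sigma upper limit, by Poisson tail area
    1 - ppois(8, lambda = 3)     # UCL = 3 + 3*sqrt(3) = 8.2, signal at 9+ defects: 0.0038
    1 - ppois(7, lambda = 2.89)  # UCL = 2.89 + 3*sqrt(2.89) = 8.0, signal at 8+ defects: 0.0097
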
For analyzing survey data, it will often make sense to start a fresh control chart at the beginning of each model year, with the initial centerline and control limits determined from an analysis of historical data. When the start up of production for the new model year involves a large number of changes, it may quickly become clear that the historically based limits are inappropriate. Since surveys are conducted (at most) monthly, it may be necessary to apply short run SPC techniques to revise the chart.

Traditional control charts have a flat centerline, since they are based on the concept of a stable, in control process. An alternative model, based on the concept of continuous improvement, suggests that the centerline on a defect count control chart should show a downward trend. Historical data can be analyzed to determine whether this model is plausible, and if so a trended centerline can be used for the current year.

2. ALTERNATIVE MODELS FOR COUNT DATA

In practice, count data often exhibit variability exceeding that predicted by the Poisson distribution, a phenomenon known as overdispersion. One way this can arise is when the data result from a mixture of Poisson processes with different rate parameters. In the context of TGW count data, the practical idea is that each vehicle has a different propensity to produce problems, and each customer subjects the vehicle to a different level of stress (duty cycle) and has a different threshold for what they consider to be a problem with a vehicle. In combination, these factors produce a different Poisson rate parameter for each customer, and the resulting overall distribution of TGW counts exhibits more variability than a pure Poisson distribution for which the rate parameter is the same for everyone. (Note that this idea could also apply to defect count data in other contexts where a large number of quality control inspectors are involved, since each inspector may have a different propensity for detecting problems.)

Figure 1 shows a histogram of TGW counts for a typical survey, with a best fit Poisson distribution superimposed. It is obvious that the Poisson distribution does not provide a suitable fit to the data. It is sensible to view the excess variation as part of the common cause system rather than as arising from removable assignable causes; from this point of view, applying a standard c or u chart to such data would produce excessive false alarms. Operationally, the purpose of a control chart is not only to identify problems, but also to avoid allocating resources to chasing random variation. Thus it is imperative when designing a control chart that the underlying distribution of the data be accurately modeled.

Fortunately, some relatively simple alternative models are available, and experience has shown that they can provide a much more accurate basis for analyzing TGW data. One particularly useful model arises from the mixture idea described above: it can be shown that if the mixing distribution for the rate parameter is from the gamma family, then the unconditional distribution of the TGW counts is the negative binomial distribution. See Sheaffer and Leavenworth (1976) for a discussion of industrial applications of this distribution, and Johnson, Kemp and Kotz (2005) for a comprehensive treatment of this and many other models for discrete data.

The negative binomial is usually characterized as the distribution of the number of failures that occur prior to the kth success in a sequence of independent trials with success probability p. The special case k = 1 is the geometric distribution, which corresponds to an exponential mixture of Poisson distributions. It is convenient to reparameterize the negative binomial distribution before proceeding. Let µ represent the mean of the distribution (this is the mean of the mixing distribution and also the mean of the resulting TGW distribution); the success probability parameter is then p = k/(k + µ), and the resulting distribution is the same. A very important fact is that the variance of the negative binomial distribution is µ + µ²/k. As k becomes very large this reduces to µ, and in fact the distribution converges to the Poisson distribution, but for finite k the variance is larger than µ (i.e., there is overdispersion). This relatively simple formula can be used to obtain an estimate of the process sigma from the estimates of µ and k for the fitted distribution.

Another phenomenon that can arise is the occurrence of "extra zeros": a particular distribution (Poisson, negative binomial, or some other family) may fit the data well except that there seem to be too many zeros. Allowing for this possibility simply means including an additional parameter (call it p0) in the model fitting process. In a manufacturing environment the extra zeros pattern can be generated by a process that operates in two modes, one producing no defects and the other producing defects that can be described by a Poisson (or some other) distribution. See Jackson (1972) for a general discussion of this issue and some other examples.

The extra zeros pattern has been seen in TGW data often enough to make it worth considering. More than one plausible argument can be made to explain why extra zeros could occur, and unfortunately these arguments have different implications for how the p0 parameter should be interpreted. As in the manufacturing example, one possibility is that the extra zeros come from a subset of customers who never complain about anything. In contrast, marketing research describes the "halo effect": a customer who purchases a vehicle (and perhaps pays a premium for it) because it has a reputation for quality may confirm the purchase decision by not reporting any problems. Another possibility is respondent fatigue: when faced with a long list of items, a respondent may skip that section of the survey rather than read all the questions, and thereby indicate no TGWs. (This could also occur in the middle of the list, suggesting a general downward bias in all the reported counts.)

The values of µ and k must be estimated in order to determine the best fit negative binomial distribution for a given set of data. Maximum likelihood estimation provides a very general criterion (and method) for estimating the parameters of any distribution, and the related likelihood ratio tests provide a way to assess the goodness of fit of a model and to compare the fits of different models; see Agresti (2002) for a general introduction to these and other procedures for analyzing count data. For some distributions the maximum likelihood estimates (MLEs) can be computed directly from a simple formula; for example, the MLE of the Poisson rate parameter is just the sample mean. The sample mean is also the MLE of µ for the negative binomial, but computing the MLE of k requires a nonlinear optimization algorithm, as does including the extra zeros parameter p0 in a model. Modern statistical software packages such as SAS, S-PLUS, and R contain procedures that can perform the required calculations and also compute likelihood ratio test statistics.

Figure 2 shows fits of the Poisson, geometric and negative binomial distributions, with and without an extra zeros parameter, to another example data set. The graphs make it clear that the fit of the Poisson distribution, even with the extra zeros parameter, is inferior to that of the other models. Formal likelihood ratio tests (see Display 1) confirm these impressions.

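To make the gamma mixture idea concrete, the following sketch simulates Poisson counts whose rates vary according to a gamma distribution, checks the variance formula, and recovers µ and k by maximum likelihood with R's MASS package (the parameter values are illustrative, not data from the study):

    # A gamma mixture of Poissons is negative binomial: a small simulation
    library(MASS)
    set.seed(1)
    mu <- 1.84; k <- 0.74                             # illustrative values
    lambda <- rgamma(5000, shape = k, rate = k / mu)  # per-customer rates with mean mu
    tgw <- rpois(5000, lambda)                        # observed counts
    c(mean(tgw), var(tgw), mu + mu^2 / k)             # sample variance is close to mu + mu^2/k
    fitdistr(tgw, "negative binomial")                # MLEs of size (= k) and mu
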
For this example the fit of the geometric distribution is adequate, but the fit of the geometric with extra zeros model is significantly better (in a statistical, if not a practical, sense), and the negative binomial model (i.e., a value of k other than 1) is also marginally better.

3. CONSTRUCTING A CONTROL CHART

Any of the models described in the previous section can be used as the basis for constructing a control chart for evaluating the results of new surveys as they become available. The basic idea is to estimate the parameters of the fitted distribution for each new survey and assess whether they are consistent with the results of past surveys.

The first step is to perform a retrospective analysis of historical data. A natural choice for the historical time period is the previous model year. The usual control charting assumption must be made: that the process was stable (i.e., in control) during the period when the historical data were obtained, and the usual process of removing out of control samples may need to be applied. One of the models described in the previous section must then be selected; ideally, one of the simpler models will fit all (or at least most) of the historical surveys adequately. If there is strong evidence that the nature of the distribution (as opposed to just the parameter values) was changing from month to month, that should be taken as a sign that the process was not in control.

The next issue is whether the parameter values were stable over the time period in question. As mentioned in section 1, a steady trend in the mean could be considered a stable process. Note that the k parameter of the negative binomial distribution plays the role of the process sigma, so the stability of this parameter is particularly crucial. These questions can be addressed using likelihood ratio tests, comparing models with common parameter values across the months to models that allow the parameter(s) to vary.

Regardless of which model is used, the main (and perhaps only) control chart is for the mean of the TGW counts. The centerline and control limits for this chart are calculated from the stable parameter estimates for the historical data. The key point is that use of a probability model that fits the historical data leads to properly calibrated control limits.

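Under the negative binomial model, the limit calculation is straightforward once µ and k have been estimated; a minimal sketch using the year 1 estimates reported below (the monthly sample sizes are illustrative):

    # Centerline and control limits for the mean TGW chart
    mu <- 1.84; k <- 0.74
    sigma <- sqrt(mu + mu^2 / k)              # process sigma = 2.53 (see Display 2)
    n <- c(250, 180, 310)                     # illustrative monthly sample sizes
    ucl <- mu + 3 * sigma / sqrt(n)           # limits narrow as n grows,
    lcl <- pmax(0, mu - 3 * sigma / sqrt(n))  # as on a u chart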

Note that since the sample size for these surveys often varies considerably from month to month, the width of the control limits will vary (i.e., this will be a version of a u chart rather than a c chart). Also, the sample sizes tend to be relatively large (enough to make even a statistician happy), so the caveats about the accuracy of the normal approximation illustrated in section 1 should not be a problem here.

This procedure is illustrated using two consecutive years of survey data for a particular vehicle model, with year 1 serving as the historical period. The procedure described in section 2 was applied to each survey, leading to the general conclusion that the negative binomial distribution (without extra zeros) would be the best choice (see Figure 3 and Display 2). Likelihood ratio tests showed no significant variation in either parameter over the course of the year, so the parameter estimates µ = 1.84 and k = 0.74 were used to establish the chart, and the process sigma was estimated from the variance formula given in section 2. The resulting chart for the year 2 data (see Figure 4) shows that the TGW counts were generally lower, but there are no out of control points. (Supplemental rules would produce a signal, but note that some of these rules must be modified when used with varying sample sizes.)

In preparation for year 3, the same historical data analysis is applied to the year 2 data. In this case the downward trend that can be seen on the control chart turns out to be statistically significant (see Display 3), suggesting that a chart with a trended centerline should perhaps be used in year 3 (promoting further continuous improvement).

A control chart can also be developed for the k parameter, but a relatively simple alternative is to apply a likelihood ratio test to each new survey to see whether the historical value of k provides an adequate fit. A similar method can be applied to monitor the stability of p0 if the extra zeros feature is part of the assumed model. In fact, it makes sense to apply the analysis methods described in section 2 (and illustrated in Display 1) to each new survey, as a general check on the stability of the system.

CONCLUSION

Control charting methods can be a useful aid when interpreting the results of customer satisfaction surveys. However, since such survey data often exhibit overdispersion relative to the Poisson distribution, the traditional charts will tend to produce excessive false alarms. The alternative methods described here correct this problem and produce decision making tools that help the voice of the customer reach the ears of management.

REFERENCES

1. Agresti (2002), Categorical Data Analysis, 2nd ed., Wiley.
2. Jackson (1972), "All Count Distributions Are Not Alike," Journal of Quality Technology, Vol. 4, No. 2, pp. 86-92.
3. Johnson, Kemp, and Kotz (2005), Univariate Discrete Distributions, 3rd ed., Wiley.
4. Sheaffer and Leavenworth (1976), "The Negative Binomial Model for Counts in Units of Varying Size," Journal of Quality Technology, Vol. 8, No. 3, pp. 158-163.

Figure 1. Observed TGW counts with fitted Poisson distribution.

Figure 2. Example comparison of fitted models (panels: Poisson; geometric; negative binomial; Poisson with extra zeros; geometric with extra zeros; negative binomial with extra zeros).

Display 1. Test statistics for the above data and models

Poisson model: rate parameter = 1.1983
    Lack of fit: G2 = 74.17, df = 7, p ≈ 0
    Contribution of extra zeros: G2 = 61.58, p ≈ 0
    With extra zeros (p0 = 0.47555): lack of fit G2 = 12.59, df = 6, p = 0.05
    vs negative binomial: G2 = 66.42, p ≈ 0

Geometric model: p = 0.45489
    Lack of fit: G2 = 11.06, df = 7, p = 0.1359
    Contribution of extra zeros: G2 = 4.92, p = 0.0266
    With extra zeros (p0 = 0.22380): lack of fit G2 = 6.15, df = 6, p = 0.4069
    vs negative binomial: G2 = 3.31, p = 0.0688

Negative binomial model: k = 0.61196, mu = 1.1982
    Lack of fit: G2 = 7.75, df = 6, p = 0.257
    Contribution of extra zeros: G2 = 2.64, p = 0.1043
    With extra zeros (p0 = 0.36565): lack of fit G2 = 5.11, df = 5, p = 0.4025

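Statistics like those in Display 1 follow directly from maximum likelihood fits; a minimal sketch of the "vs negative binomial" comparison for the Poisson model (the count vector `x` is assumed):

    # Likelihood ratio test: Poisson vs negative binomial
    library(MASS)
    pois_ll <- sum(dpois(x, mean(x), log = TRUE))  # Poisson MLE is the sample mean
    nb_fit  <- fitdistr(x, "negative binomial")
    G2 <- 2 * (nb_fit$loglik - pois_ll)            # cf. G2 = 66.42 above
    pchisq(G2, df = 1, lower.tail = FALSE)         # p value on 1 df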

Figure 3. Year 1 data with negative binomial fits (one histogram panel per month, months 1 through 10).

Display 2. Testing stability of parameters for year 1 data

'In control' model coefficients: (Intercept) 0.607
'Continuous improvement' model coefficients: (Intercept) 0.62089, month -0.00363
'Out of control' model coefficients: (Intercept) 0.6213, month2 -0.1155, month3 0.0588, month4 0.0181, month5 0.0336, month6 -0.0359, month7 -0.0367, month8 -0.0607, month9 -0.1016, month10 -0.5303

Likelihood ratio tests of negative binomial models:

    Model              k        Resid. df   2 x log-lik.   Test     df   LR stat.   Pr(Chi)
    In control         0.73625  136         -15163
    Cont. improvement  0.73630  135         -15163         1 vs 2   1    0.13049    0.71793
    Out of control     0.73985  127         -15153         2 vs 3   8    10.01896   0.26370

Test for common k: G2 = 6.58, df = 9, p = 0.6807

Conclusion: centerline = exp(0.607) = 1.84; sigma = sqrt(CL + CL²/k) = 2.53

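The three fits in Display 2 have the structure of negative binomial regressions with a log link for the mean; a sketch of how they might be produced with MASS::glm.nb (a data frame `year1` with columns `tgw` and `month` is assumed):

    # The three models compared in Display 2
    library(MASS)
    fit_ic <- glm.nb(tgw ~ 1, data = year1)              # 'in control': flat centerline
    fit_ci <- glm.nb(tgw ~ month, data = year1)          # 'continuous improvement': log-linear trend
    fit_oc <- glm.nb(tgw ~ factor(month), data = year1)  # 'out of control': separate monthly means
    anova(fit_ic, fit_ci, fit_oc)  # likelihood ratio tests, as tabulated above
    exp(coef(fit_ic))              # centerline on the count scale: exp(0.607) = 1.84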

Figure 4. Control chart applied to year 2 data (example TGW chart: mean TGWs by month, months 1 through 10).

Display 3. Testing stability of parameters at the end of year 2

'In control' model coefficients: (Intercept) 0.51341
'Continuous improvement' model coefficients: (Intercept) 0.622470, month -0.030854
'Out of control' model coefficients: (Intercept) 0.587647, month2 -0.0437, month3 -0.0364, month4 -0.0627, month5 -0.1887, month6 -0.1424, month7 -0.2741, month8 -0.1289, month9 -0.2468, month10 -0.3645

Likelihood ratio tests of negative binomial models:

    Model              k        Resid. df   2 x log-lik.   Test     df   LR stat.   Pr(Chi)
    In control         0.71267  128         -16812
    Cont. improvement  0.71637  127         -16800         1 vs 2   1    11.8880    0.000565
    Out of control     0.71726  119         -16797         2 vs 3   8    2.9154     0.939553

Test for common k: G2 = 15.97, df = 9, p = 0.0674

Conclusion: continuous improvement! Next year's chart could have a trended centerline. Possible variation in k.

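If a trended centerline is adopted for year 3, one plausible convention is to extend the fitted year 2 trend forward; a minimal sketch (assuming a `fit_ci` object as in the previous sketch, refit to the year 2 data, with months numbered continuously):

    # Hypothetical trended centerline for year 3, extrapolating the year 2 fit
    b <- coef(fit_ci)                             # approx. (Intercept) 0.6225, month -0.0309
    centerline <- exp(b[1] + b[2] * (10 + 1:10))  # months 11 through 20 continue the trend
    round(centerline, 2)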