Customer Satisfaction Survey Report Guide


OVERVIEW

The survey is designed to obtain valuable feedback and assess customer satisfaction among departments at CSU San Marcos. The survey consists of ten standard items measured on a 5-point scale ranging from 1 ("Not at all Satisfied") to 5 ("Extremely Satisfied"). The ten standard items are measured each year to enable tracking of performance trends. Departments may also provide supplemental items to identify key customer segments and address department-specific needs and goals.

The survey begins with an Overall Satisfaction score based on the statement, "Thinking of your OVERALL experience with this department, how would you rate your satisfaction with it in meeting your or your department's needs during the past 12 months?" The other standard survey items address understanding of customer needs, accessibility, responsiveness, effectiveness of advice, facilitation of problem resolution, knowledgeability, helpfulness, effective use of the website, and movement in a positive direction to meet customer needs.

The goal of the survey is to identify customer service Strengths (areas where departments are doing well) and Opportunities (areas where departments can address issues). Responses to a given item are placed on a Scatterplot of Strengths and Opportunities based on two factors: 1) how strongly satisfied people were with the item, and 2) how strongly the item was related to Overall Satisfaction.

NOTES ON STATISTICS AND ANALYSIS

Consider the number of responses (n) that your department received. The larger the number of responses, the more confidence you can have that responses reflect what you would find if you were able to ask all your customers.

Keep the item response scale in mind when looking at item and dimension MEAN scores: scores below 3.00 are considered Low, scores of 4.30 and above are considered Excellent, and scores in between fall into the Marginal and Good ranges.

The size of the CORRELATION reflects the strength of the relationship between an item and Overall Satisfaction:
- .10: Weak
- .30: Moderate
- .50: Strong

Statistical significance does not always translate to real-world significance. Whether or not they are significant, differences between means are probably more important when they:
- Change direction (e.g., move from neutral to positive, neutral to negative, or negative to positive)
- Cross a boundary (Marginal to Low; Good to Excellent)
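The mean and correlation figures described above can be reproduced directly from raw response data. The following is a minimal Python sketch with hypothetical responses (the example data and variable names are assumptions for illustration, not values from an actual report); it applies the correlation guidelines listed above.

```python
# Minimal sketch (hypothetical data): mean score and correlation with
# Overall Satisfaction for one survey item, using the guide's thresholds.
from statistics import mean, correlation  # correlation requires Python 3.10+

# Hypothetical 5-point responses from the same set of respondents.
item_responses = [4, 5, 3, 4, 5, 4, 2, 5, 4, 3]
overall_satisfaction = [4, 5, 3, 5, 5, 4, 3, 5, 4, 3]

item_mean = mean(item_responses)                       # item MEAN score
r = correlation(item_responses, overall_satisfaction)  # Pearson r, -1 to +1

# Correlation strength labels from the guide (.10 weak, .30 moderate, .50 strong).
if r >= 0.5:
    strength = "strong"
elif r >= 0.3:
    strength = "moderate"
elif r >= 0.1:
    strength = "weak"
else:
    strength = "little or no relationship"

print(f"n={len(item_responses)}, mean={item_mean:.2f}, r={r:.2f} ({strength})")
```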

READING THE STATISTICAL REPORT

The following describes the figures and tables contained in the report and where they can be found:

Highlights (Page 1, top left)
Contains an overview of survey responses, including the survey response for the department in the current and previous years and the items representing Strengths & Opportunities. Strengths are ranked by Correlation Coefficient x Mean Score if the question falls into the Influential Strengths category. Opportunities are ranked by Correlation Coefficient / Mean Score if the question falls into the Primary Opportunities category.

Overall Satisfaction (Page 1, top right)
Shows the mean and standard deviation and the number and percentage of responses in each response category for the Overall Satisfaction question.

Item Breakdown for Standard Items (Page 1, center)
Table shows mean scores for each of the 10 standard survey items, compared across two years. Changes are indicated by color-coded bars to the right of the table, and stars indicate statistically significant changes from the previous year.

Survey Background (Page 1, bottom)
Details of survey history, distribution dates, number of respondents, and response rates.

Net Promoter Score (NPS) (Page 2)
See page 2 of the statistical report for a detailed description. Departments that opted out of the NPS question will have different page numbers.

Strengths and Opportunities Scatterplot (Page 3, top)
Shows where each survey item falls on the map of Strengths versus Opportunities, based on the mean and correlation of each item with Overall Satisfaction. See page 3 of this guide for a full description.

List of Strengths and Opportunities by Survey Question (Page 3, bottom)
Exhibits Strengths and Opportunities by item in graphic and table forms, showing item means, correlation with Overall Satisfaction, and a Strength/Opportunity category for each item.

By Question by Respondent Classification (Page 4)
Summarizes the mean scores by Faculty, Staff, and Student and the total number of responses.*

By Division and Respondent Classification (Page 5 to end of statistical report)
Shows mean evaluation scores for standard survey items for comparison across respondent departments and classifications.*

*Respondent numbers are not shown when the number of total responses is less than five.
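The Highlights ranking rules above (Strengths ordered by correlation x mean, Opportunities by correlation / mean) can be illustrated with a short sketch. The item names, scores, and category labels below are hypothetical examples, not values from an actual report.

```python
# Minimal sketch (hypothetical values): rank Strengths and Opportunities the way
# the Highlights panel describes - correlation * mean for Influential Strengths,
# correlation / mean for Primary Opportunities.

# (item, mean score, correlation with Overall Satisfaction, category)
items = [
    ("Responsiveness", 4.4, 0.78, "Influential Strength"),
    ("Helpfulness", 4.5, 0.74, "Influential Strength"),
    ("Website", 3.1, 0.76, "Primary Opportunity"),
    ("Accessibility", 3.2, 0.80, "Primary Opportunity"),
]

strengths = [i for i in items if i[3] == "Influential Strength"]
opportunities = [i for i in items if i[3] == "Primary Opportunity"]

# Strengths: higher correlation * mean ranks first.
strengths.sort(key=lambda i: i[2] * i[1], reverse=True)
# Opportunities: higher correlation / mean ranks first (high impact, low score).
opportunities.sort(key=lambda i: i[2] / i[1], reverse=True)

print("Top strength:", strengths[0][0])
print("Top opportunity:", opportunities[0][0])
```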

STRENGTH/OPPORTUNITY SCATTERPLOT

How to develop and interpret a Scatterplot for customer satisfaction questions:
1. List the mean scores of all questions (except Overall Satisfaction).
2. List the correlation coefficient of each question with Overall Satisfaction. This identifies whether a question has a strong (>0.5) or weak association with Overall Satisfaction.
3. Obtain the average mean score and average correlation coefficient across all questions (excluding Overall Satisfaction).
4. Use the average mean (3.36) and average coefficient (0.71) to define the boundaries of the four quadrants.
5. Plot the satisfaction questions onto the quadrants using their respective mean score and correlation coefficient (a quadrant-classification sketch follows this section).

The four quadrants:
- Strengths: higher than average mean score, lower than average correlation ("keep up the good work").
- Influential Strengths: higher than average mean score, higher than average correlation ("keep an eye on it").
- Secondary Opportunities: lower than average mean score, lower than average correlation ("low priority").
- Primary Opportunities: lower than average mean score, higher than average correlation ("concentrate efforts").

Mean of attribute/question (1 = lowest, 5 = highest): the higher the score, the stronger the attribute. Correlation coefficient: the strength of the linear relationship between an attribute and Overall Satisfaction (scale: -1 to 1; weak: 0.1, moderate: 0.3, strong: 0.5). The higher the coefficient, the stronger the relationship. Correlation does not imply a causal relationship.

Note: if all attributes' mean scores are above 4.30 (Excellent) and all coefficients are above 0.5, all items should be considered Strengths. The opportunity for the following year will be to sustain the excellent scores.
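As a complement to steps 1 through 5 above, the quadrant assignment can be expressed in a few lines of code. This is a minimal sketch: the item names and (mean, correlation) pairs are hypothetical, while the 3.36 and 0.71 boundaries are the averages cited above.

```python
# Minimal sketch (hypothetical items): assign each question to a quadrant
# using the average mean and average correlation as the boundaries.
AVG_MEAN = 3.36   # average mean score across questions (guide's example value)
AVG_CORR = 0.71   # average correlation with Overall Satisfaction (guide's example value)

def quadrant(mean_score: float, corr: float) -> str:
    """Classify one question by its mean score and its correlation with Overall Satisfaction."""
    if mean_score >= AVG_MEAN:
        return "Influential Strength" if corr >= AVG_CORR else "Strength"
    return "Primary Opportunity" if corr >= AVG_CORR else "Secondary Opportunity"

# Hypothetical (mean, correlation) pairs for a few questions.
questions = {
    "Responsiveness": (4.10, 0.80),
    "Website": (3.00, 0.75),
    "Accessibility": (3.90, 0.60),
    "Problem resolution": (3.10, 0.65),
}

for name, (m, r) in questions.items():
    print(f"{name}: mean={m:.2f}, r={r:.2f} -> {quadrant(m, r)}")
```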

HOW TO USE THE RESULTS

The results feed a continuous improvement cycle spanning Planning & Goal Setting, Project and Program Implementation, and Evaluation and Recognition for Actions Taken. Its steps include:
- Refine additional survey questions
- Deploy the survey and obtain feedback (statistical reports & verbatim comments)
- Identify customer needs and priorities and realign with strategic goals
- Identify strengths and opportunities using the worksheet
- Develop and implement action plans
- Follow up and assess the impact of action plans
- Evaluate whether changes resulted in goal attainment; communicate impact and share results with Senior Leadership

SHARING THE RESULTS IS THE MOST IMPORTANT STEP!

- Discuss the meaning of the results with leadership and staff. Ask what these results mean to them. Are there any surprises? Were there any particular cases or exceptional situations that may put the results in context? Is there a common understanding of what the questions mean?
- For the Strengths/Primary Opportunities Scatterplot, look at the overall picture. Is the overall mean score already at or above 4.30? If so, be realistic about the ROI of investing resources to further raise these scores.
- Be sure to look at patterns across time as well as the current year.
- Commit to taking specific needed action based on your results.
- Identify benchmarks you are trying to meet (e.g., how do your results compare to previous surveys?).
- Communicate the results. All results will be posted on the Quality Improvement Website.
- Utilize the data to assist with developing annual department goals and objectives.

GLOSSARY

Correlation: A measure of the strength of the relationship between responses on two survey items. Correlations can range from -1 to +1, with values close to zero representing little or no relationship.

Customer: Any stakeholder in the university's mission and success. Students, faculty, and staff are all customers of the services provided by campus departments.

Face validity: A survey or survey item has face validity if it seems, subjectively, to be measuring what it is intended to measure.

Frequency: A count of the number of times a survey item response choice or scale value is chosen. Frequency can also be represented as a percentage of the total responses.

Heat map: A color-coded table of mean item values broken down by demographic characteristics or survey units, showing regions of strengths and opportunities.

Mean: An average response value. The mean is computed by adding the responses of all survey participants on a given item and dividing by the number of participants.

N: The total number of potential respondents to a survey (i.e., all students, faculty, and staff).

n: The number of respondents in a sample or subsample.

Net Promoter Score (NPS): A measure of willingness to recommend a survey unit's products or services, calculated by subtracting the percentage of highly unwilling respondents from the percentage of highly willing respondents (based on an 11-point measure of willingness to recommend).

Opportunity: An area where the surveyed department is currently receiving relatively low ratings.

p-value: The p (probability) value indicates the likelihood that the result of a statistical test (such as a t-test or correlation) could have occurred due to chance. Generally, values below .05 are considered statistically significant, although more stringent criteria (p < .01) may be used if many tests are being conducted on the same set of data. The size of the p-value is based on a total probability of 100 percent; a p-value of .05 means the observed result has only a 5% probability of occurring by chance, while a p-value of .01 means there is only a 1% likelihood of the result occurring by chance.

Scatterplot: A visual representation of the relationship between overall satisfaction and individual survey items or dimensions.

Standard deviation (Std. dev.): A statistical measure of the spread of a set of data points. Larger standard deviations indicate responses are more variable, rather than clustered around a particular value.

Statistical significance: A statistically significant result is one unlikely to have occurred by chance or random variation. The likelihood of detecting significant results increases with sample size.

Strength: An area where the surveyed department rates relatively highly.

t-test: A statistical test of the difference between two means (e.g., survey item means across two samples). A statistically significant t-test indicates the difference is larger than could reasonably be expected by chance.

Verbatim: The exact content of a response to an open-ended survey item as written by the respondent.
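The NPS definition above lends itself to a short worked example. The response counts and the cut-points used to label respondents as "highly willing" or "highly unwilling" are hypothetical assumptions for illustration; the statistical report defines the exact categories used on campus.

```python
# Minimal sketch (hypothetical data and cut-points): compute an NPS-style score
# from an 11-point (0-10) willingness-to-recommend item.
from collections import Counter

# Hypothetical responses on the 0-10 willingness-to-recommend scale.
responses = [10, 9, 9, 8, 7, 10, 6, 5, 9, 3, 10, 8, 2, 9, 7]

counts = Counter(responses)
total = len(responses)

# Assumed cut-points: 9-10 = highly willing, 0-6 = highly unwilling.
highly_willing = sum(counts[v] for v in (9, 10))
highly_unwilling = sum(counts[v] for v in range(0, 7))

# NPS = percentage highly willing minus percentage highly unwilling.
nps = 100 * (highly_willing / total - highly_unwilling / total)
print(f"NPS = {nps:.0f}")
```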