MODELLING THE 2009 WORK ENVIRONMENT SURVEY RESULTS THE TECHNICAL REPORT APRIL 2010


CONTACT INFORMATION

This paper was prepared by Taylor Saunders. If you have any questions about the information in this report, please email taylor.saunders@gov.bc.ca or phone 250-387-8972.

BC Stats, 2010

ABSTRACT

This technical report details the statistical analyses supporting the development and subsequent testing of the 2009 Work Environment Survey (WES) employee engagement model. A discussion of the methods leading up to and including the identification of the key drivers of engagement is provided, including a discussion of the sampling implications, data screening steps and the factor identification procedures used to isolate a set of theoretically and empirically supported latent variables. Modelling of the corporate-wide 2009 WES results was achieved through the confirmation of pre-existing 2008 WES structural equation models (SEM). Direct comparisons were made between the 2008 and 2009 WES model results, focusing both on a basic model of employee engagement (consisting only of two management drivers and three engagement outcomes) and a comprehensive house model of engagement (consisting of two management drivers, three engagement outcomes and 10 mediating drivers). Organizational level model comparisons were also performed, in an effort to confirm the applicability of the basic and house models to a subset of BC Public Service organizations. For both the corporate-wide and organization level analyses, slightly modified versions of the 2008 WES models were also tested on the 2009 WES data in order to assess the impact of potential model improvements. Finally, consideration was given to the methodological and statistical limitations of the 2009 employee engagement modelling process, as well as the direction of SEM analysis for future iterations of WES.

TABLE OF CONTENTS

ABSTRACT
INTRODUCTION
SAMPLE CHARACTERISTICS
    Sample Size, Completion Rate and Response Rate
    Sample Weighting
PRELIMINARY ANALYSIS
    Survey Scale Transformation
    Data Screening
    Correlations between Variables
FACTOR ANALYSIS
    The Identification and Grouping of Questions into Latent Variables
    Exploratory Principal Component Analysis
    Checking the Factor Analysis Results
    Overall Conclusions
STRUCTURAL EQUATION MODELLING ANALYSIS
    Variance-Covariance Matrices
    Establishing Model Fit Criteria and SEM Requirements
    Understanding Structural Path Diagrams
    Basic Model Tests: Validating the Links between Management and Engagement
    Full Model Tests: Validating the Links between Management, Workplace Functions and Engagement
    Assessment of Organizational Differences
        Basic Model Fit by Organization
        Full Model Fit by Organization
LIMITATIONS AND RECOMMENDATIONS

INTRODUCTION

Between 2006 and 2008, BC Stats contracted ERIN Research to develop and maintain a structural equation model (SEM) based on the BC Public Service Work Environment Survey (WES) results. Using WES data collected in 2006, ERIN Research created the initial iterations of two distinct models: a basic model and a comprehensive house model of engagement. The basic model provided a simplified representation of the BC Public Service work environment, with a focus on how perceptions of management drive employee engagement. While the basic model provided a summary representation of the drivers of engagement, the house model offered a more complete depiction of the work environment through the incorporation of several additional drivers. Each additional driver represented a unique workplace function, and when taken in combination with the management focused drivers, formed the underlying foundation and building blocks of the house model of engagement.

After 2006, rather than redeveloping the basic and house models from the ground up, survey results from 2007 and 2008 were fitted to the existing 2006 models. In the event that the 2006 basic and/or house model ceased to provide a well fitted representation of the BC Public Service work environment, modifications were made to the models' composition of latent variables (both the number and type of observed variables that comprised each latent variable) as well as the structural weights that defined the causal relationships between the latent variables. The overall complement of latent variables, however, remained the same across all three years.

In 2009, BC Stats opted to perform the SEM analysis in-house by implementing a modelling procedure similar to that used by ERIN Research. While the modelling analysis for the 2009 WES results focused primarily on the confirmation of the existing 2008 WES model, a description of the sampling considerations, preliminary analysis and factor analysis steps that are typically needed prior to SEM analysis has also been included in this report. In combination with a detailed summary of the 2009 WES modelling procedure, the analytic process contained within this report provides a thorough description of the methodology used for both the development and confirmation of the WES models of employee engagement.

1 Within the context of WES analysis, drivers are almost entirely synonymous with latent variables. The exceptions to this rule are the engagement characteristics Job Satisfaction and Organization Satisfaction. As the two satisfaction drivers are each measured by a single variable, they do not represent latent variables.

SAMPLE CHARACTERISTICS

In most survey based research, the sample frame plays a critical role in determining the final composition and distribution of the response data. Given the considerable scope of the 2009 WES's target population, it is possible to obtain a large sample with even a moderate response rate. However, while large samples can improve the accuracy of many statistical analyses, they are still subject to response bias and sampling error. As such, consideration was given to the 2009 WES's sample characteristics prior to the start of higher level analyses. Provided below is a description of the relevant summary statistics and response rates for the 2009 WES.

Sample Size, Completion Rate and Response Rate

The in-scope population for the 2009 WES consisted of 26,985 employees, of which survey completions were received from 23,574 respondents. Given the magnitude of the response rate (87%), as well as the large overall sample size (n = 23,574), the resulting margin of error was, as to be expected, quite small (± 0.2%, 19 times out of 20). Equivalent margins of error were also found at the organizational level, with margins ranging from a high of ± 3.6% for the Olympic Games Secretariat, to a low of ± 0% for the Environmental Assessment Office (representing a complete census of the organization). The following table (Table 1) provides a breakdown of the in-scope population, completion rate and associated 95% margin of error 2 for each organization.

Table 1: 2009 WES Response Rates

Organization | Completed Survey | Total Population | Completion Rate | 95% Margin of Error (±)
Aboriginal Relations and Reconciliation | 164 | 171 | 96% | 1.55%
Agriculture and Lands | 332 | 366 | 91% | 1.64%
Attorney General | 2,604 | 3,111 | 84% | 0.78%
BC Public Service Agency | 400 | 423 | 95% | 1.14%
Children and Family Development | 3,598 | 4,468 | 81% | 0.72%
Citizens' Services | 1,779 | 1,988 | 89% | 0.75%
Education | 333 | 353 | 94% | 1.28%
Energy, Mines and Petroleum Resources | 287 | 300 | 96% | 1.21%
Environment | 1,318 | 1,504 | 88% | 0.95%
Environmental Assessment Office | 53 | 53 | 100% | 0.00%
Finance | 1,375 | 1,506 | 91% | 0.78%
Forests and Range | 2,754 | 3,043 | 91% | 0.58%
Integrated Land Management Bureau | 488 | 532 | 92% | 1.28%
Labour | 196 | 222 | 88% | 2.40%
Office of the Premier | 87 | 89 | 98% | 1.58%
Public Affairs Bureau | 226 | 245 | 92% | 1.82%
Public Safety and Solicitor General | 2,182 | 2,601 | 84% | 0.84%
Advanced Education and Labour Market Development | 321 | 342 | 94% | 1.36%
Community Development | 235 | 252 | 93% | 1.66%
Health Services | 861 | 921 | 93% | 0.85%
Healthy Living and Sport | 146 | 161 | 91% | 2.48%
Housing and Social Development | 2,283 | 2,600 | 88% | 0.72%
Olympic Games Secretariat | 38 | 40 | 95% | 3.60%
Small Business, Technology and Economic Development | 190 | 196 | 97% | 1.25%
Tourism, Culture and the Arts | 131 | 142 | 92% | 2.39%
Transportation and Infrastructure | 1,193 | 1,356 | 88% | 0.98%
Total | 23,574 | 26,985 | 87% | 0.23%

2 This assumed a proportion of 50% for the estimate of interest, which in turn provided the largest variance (and therefore the largest margin of error) for a given sample and population size. As the error calculations were based on the full sample size, the actual margins of error for many survey questions would likely be higher due to the incidence of Don't Know and Not Applicable responses.

While the ratio of sample size to population size (defined here as completion rate) played an important role in determining the margin of error for a given sample 3, large sample and population counts had a greater impact on a margin's final calculation. As a result, high completion rate organizations with small populations, such as the Office of the Premier (98%), had a greater level of uncertainty surrounding their estimates than did larger organizations with significantly smaller response rates, such as the Ministry of Children and Family Development (81%). For this reason, the need for high completion rates is even more critical when considering the representativeness of results for smaller organizations.

In terms of question specific response trends for the 23,574 respondents who completed the survey, a consistently high response rate was maintained throughout the majority of the questionnaire. Focusing specifically on the 72 agreement scale questions, a high response rate of 99.9% was obtained for both the "My workload is manageable" and the "I have the tools (i.e., technology, equipment, etc.) I need to do my job well" questions. Conversely, the "Essential information flows efficiently from staff to senior leadership" question had the lowest proportion of responders across all of the 72 agreement scale questions, with a response rate of 87.6%.

As response rates for each question approached 100%, their associated margins of error also approached that of the overall completion rate. Due to this relationship, low response rate questions led to margins of error that were inflated beyond what would be obtained based purely on the sample of respondents who completed the survey. Table 2 provides the 10 agreement scale questions with either the highest or lowest response rates, as well as their corresponding 95% margins of error.

Table 2: Top 5 and Bottom 5 Response Rate Questions

Response Rate Rank | Question Wording | Response Rate | 95% Margin of Error (±)
1 | My workload is manageable. | 99.9% | 0.23%
2 | I have the tools (i.e., technology, equipment, etc.) I need to do my job well. | 99.9% | 0.23%
3 | I have the information I need to do my job well. | 99.9% | 0.23%
4 | I am proud of the work I do. | 99.8% | 0.23%
5 | My job is a good fit with my skills and interests. | 99.8% | 0.23%
68 | In my work unit, the process of selecting a person for a position is fair. | 92.0% | 0.29%
69 | My Ministry/organization is taking steps to ensure the long-term success of its vision, mission and goals. | 92.0% | 0.29%
70 | In my work unit, the selection of a person for a position is based on merit. | 91.5% | 0.30%
71 | Executives in my organization communicate decisions in a timely manner. | 91.5% | 0.30%
72 | Essential information flows efficiently from staff to senior leadership. | 87.6% | 0.33%

3 This assumes a finite population where the sample size exceeds 10% of the population from which it was drawn.
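The margins of error above are consistent with the standard formula for a proportion under simple random sampling with a finite population correction, using the conservative 50% proportion noted in footnote 2. The following is a minimal illustrative sketch, not necessarily the exact routine BC Stats used:

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion, with a finite
    population correction (FPC). p = 0.5 gives the most conservative
    (largest) margin, as noted in footnote 2."""
    if n >= N:                                # a complete census has no sampling error
        return 0.0
    se = math.sqrt(p * (1 - p) / n)           # simple random sampling standard error
    fpc = math.sqrt((N - n) / (N - 1))        # finite population correction
    return z * se * fpc

# Example: the overall 2009 WES sample from Table 1
print(round(100 * margin_of_error(23574, 26985), 2))  # ~0.23 percentage points
```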

Sample Weighting

For many surveys, sampling bias can have a considerable impact on both the analysis and interpretation of survey results. In instances where a sample has not been proportionately drawn from a population, there is a possibility that the disproportionate characteristics of the sample may distort some or all of the research findings. Weighting procedures provide a means of correcting for this bias, and in doing so help ensure survey results are representative of the population being investigated.

Unfortunately, the incorporation of sample weights can also complicate the interpretation of results, particularly for surveys that are primarily used for benchmarking purposes. In the case of WES, the increase or decrease of an un-weighted mean score represents an easily understood change in the work environment. Due to the scope of the survey, this change can be tracked across several levels of resolution, including corporate, organizational, divisional and work unit levels. In all cases, the mean scores are obtained by generating a simple average of the scores for all respondents within a particular work environment. With the introduction of weights though, both the driver and engagement scores for each individual are adjusted to meet the specifications of the weighting scheme. The result is a set of synthetic scores for all respondents, where an employee's contribution to a particular level of analysis may either exceed or fall below their corresponding un-weighted scores 4.

From a respondent's point of view, this weighting adjustment could be exceedingly difficult to contextualize. Whereas the calculations and longitudinal changes for an un-weighted mean score can be easily understood by many public servants, it is likely that a weighted score would be both poorly understood and widely misinterpreted. This would not only reduce the utility of WES as an educational tool, but also the ownership respondents have for their results.

Fortunately, the high response rate for the 2009 WES helped minimize the impact of sampling bias for many groups across the BC Public Service. In cases where an entire organization, division or work unit completed the survey, sampling bias was entirely eliminated and a completely representative sample was ensured. As a result of these considerations, sample weights were not incorporated into any level of analysis for the 2009 WES. However, in the event that response rates sharply decrease for future iterations of the survey, the benefits of sample weighting will be revisited.

4 This distortion could become particularly pronounced at the work unit level, where the sum of weights within the work unit may in some instances exceed the total population of the work unit. While a work unit level weighting adjustment would address this concern, due to the small size of many work units, the adjustment would become prohibitively complex. It would also run the risk of over-stratifying the sample, which in turn could lead to a greatly inflated weighted margin of error for the overall corporate results.
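For illustration only, the difference between an un-weighted and a weighted mean score described above can be sketched as follows; the scores and weights here are hypothetical values, not WES data:

```python
import numpy as np

# Hypothetical 0-100 driver scores for five respondents in one work unit
scores = np.array([75.0, 50.0, 100.0, 25.0, 75.0])

# Hypothetical post-stratification weights (e.g., correcting for an
# under-represented demographic group)
weights = np.array([1.0, 1.4, 0.8, 1.4, 0.9])

unweighted_mean = scores.mean()
weighted_mean = np.average(scores, weights=weights)

print(unweighted_mean, weighted_mean)  # the two summaries can diverge
```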

PRELIMINARY ANALYSIS

As the 2009 WES SEM analysis was constrained to a confirmation of the preceding year's (2008) WES model with 2009 data, the analytic steps typically implemented prior to the construction of a model were not performed in 2009. However, a description of this preliminary analysis is still beneficial, as it provides a summary of how the original 2006 WES model was developed, as well as offering direction for future SEM analysis.

The primary goal of a SEM preliminary analysis is to identify which survey questions best support the modelling process. While all of the WES 2009 survey questions offer excellent insights into the respondents' work environments, both the wording and scale of a question can limit their applicability during a SEM analysis. To determine which questions should be excluded from further analysis, a systematic analysis is performed across all survey questions. This identification process is made possible through the following steps:

- Review the response scales for each question and determine the need for scale transformations
- Consider the distribution and response characteristics for each question
- Test the associations between certain variables through correlation analysis

Survey Scale Transformation

As variance-covariance matrices play a central role in SEM analysis, variables with a larger range of values (and therefore greater variance) tend to produce more accurate results. Based on this premise, the response data for all 5-point questions in the survey should be linearly converted to a 100-point scale. This process is known as a percent to maximum conversion (PMT) and is based on the work of Miller and Miller 5. The result is a set of response data that can be analyzed under SEM, while also providing a range of values that can be easily standardized and interpreted.

A limitation of the conversion is that it should only be performed on 5-point scale questions, as questions with shorter scales cannot provide a sufficient amount of variance for SEM analysis. Furthermore, shorter scales limit comparability with the larger 5-point scales due to differing interval sizes, which in turn significantly constrains the number of possible interactions a 3-point scale question can have within a model. With regards to the WES 2009 survey, the scale transformation step would require the removal of 4 2-point questions from further analysis, 2 of which focus on whether an employee has experienced a change in their work environment within the past year.

One final consideration with regards to scale transformation is that it is sometimes necessary to invert the scale for certain questions. This is required when the positive and negative ends of a scale are in opposition to the majority of other scale questions throughout the survey. As an inverted scale may complicate the creation of model factors, as well as the interpretation of model results, aligning the inverted and non-inverted scales facilitates the analysis of all survey questions.

5 Miller, T.I. & Miller, M.A. (1991). Citizen surveys: how to do them, how to use them and what they mean. Washington: International City/County Management Association.
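A linear percent-to-maximum conversion of a 5-point item to the 0-100 scale, together with the optional scale inversion described above, might look like the following sketch; the column name is a hypothetical placeholder rather than an actual WES item:

```python
import pandas as pd

def pmt_convert(series, scale_max=5, invert=False):
    """Linearly map a 1..scale_max response onto 0..100 (percent to maximum):
    1 -> 0 and scale_max -> 100. Optionally invert the scale first so that
    all items share the same positive/negative orientation."""
    s = series.astype(float)
    if invert:
        s = (scale_max + 1) - s          # 1 <-> 5, 2 <-> 4, 3 stays 3
    return (s - 1) / (scale_max - 1) * 100

# Example with a hypothetical 5-point agreement item
df = pd.DataFrame({"q_workload": [1, 2, 3, 4, 5]})
df["q_workload_pmt"] = pmt_convert(df["q_workload"])
print(df)  # converted values: 0, 25, 50, 75, 100
```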

Data Screening

By generating a comprehensive set of descriptive statistics for each of the remaining 5-point scale questions, it is possible to identify questions that have potential data quality issues. The statistics of interest should focus on the distribution of responses for each question and include measures of skewness, kurtosis, and measures of central tendency. Supporting these statistics is the inclusion of response rate and non-response trends, which provide a more complete picture of the distributions for each question.

Normality of distributions (i.e., skewness and kurtosis)

One of the assumptions implicit to SEM analysis is that the data being modeled is normally distributed, as well as multivariate normally distributed. Violations of these normality assumptions can result in either the inflation or minimization of the chi-square statistic, which in turn has a significant impact on the interpretation of a model's fit. As a result, identification of non-normal distributions is critical for determining which survey questions should and should not be used for SEM analysis.

The criteria for flagging problematic response distributions are based on measures of skewness, kurtosis and mode. If a distribution has an absolute skewness value greater than two, the question's distribution should be noted as problematic. Similarly, absolute kurtosis values greater than two can also be used to identify non-normal distributions. With respect to a distribution's mode, in cases where the mode is at the end point of the response scale (0 or 100), the question and its distribution should be flagged as a concern.

Missing responses

The response rates for survey questions are frequently impacted by their wording and response options. If the wording of a question is confusing or only applies to a limited subset of the survey population, many respondents may be inclined to answer the question with "Don't Know" or "Not Applicable". Depending on the proportion of respondents that answer in this way, the missing responses can have a significant effect on the resulting response distributions. If the rate of missing responses is large enough for certain questions, the absence of response data may actually represent a systematic response trend in which certain demographics have been excluded from the survey population. If this is the case, then concerns of representativeness and response bias become an important consideration.

Using a missing response rate criterion of 10%, the "Don't Know" and "Not Applicable" response distributions should be reviewed for all unfiltered survey questions. Questions that have a cumulative missing response rate of 10% or higher can then be flagged as having potential bias concerns.
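The screening rules above (absolute skewness greater than two, absolute kurtosis greater than two, a mode at a scale end point, and a combined Don't Know/Not Applicable rate of 10% or more) could be automated roughly as follows; the data layout, including coding missing answers as NaN, is an assumption for illustration:

```python
import pandas as pd
import numpy as np

def screen_item(responses, scale_ends=(0, 100), missing_threshold=0.10):
    """Flag one survey item against the screening criteria described above.
    `responses` holds PMT-converted scores, with Don't Know / Not Applicable
    answers coded as NaN (an assumption about the data layout)."""
    valid = responses.dropna()
    return {
        # pandas reports excess kurtosis; the report does not state which form was used
        "skewness_gt_2": abs(valid.skew()) > 2,
        "kurtosis_gt_2": abs(valid.kurtosis()) > 2,
        "mode_at_scale_end": valid.mode().iloc[0] in scale_ends,
        "missing_10pct_or_more": responses.isna().mean() >= missing_threshold,
    }

# Hypothetical item: heavily skewed toward 100, with some missing answers
item = pd.Series([100, 100, 100, 75, 100, np.nan, 100, 100, np.nan, 100])
print(screen_item(item))
```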

Correlations between Variables

Up to this point, each variable in WES will have been analyzed on its own. While this process provides an effective means of filtering out several of the SEM incompatible questions, the primary focus of SEM analysis is the relationships between variables. As such, a preliminary analysis of the relationships between variables is useful to begin developing the framework of the engagement model.

To this end, a comprehensive correlation matrix should be generated to examine the relationship each variable has with every other remaining variable. As SEM requires variables to have at least a moderate connection with each other, correlation coefficients provide a clear indicator as to whether a linear relationship exists between two questions. Due to this requirement, variables with either extremely low or non-existent correlations with all other variables should be flagged as potentially incompatible with SEM analysis.
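A correlation matrix and a flag for items with no meaningful linear association to any other item might be sketched as follows; the 0.3 cut-off is an assumption borrowed from the PCA criteria in Table 3, since the report does not state the exact threshold used at this stage:

```python
import pandas as pd

def weakly_correlated_items(items_df, threshold=0.3):
    """Return items whose strongest absolute correlation with any other item
    falls below `threshold`, i.e. candidates for exclusion from SEM."""
    corr = items_df.corr()                     # pairwise Pearson correlations
    weak = []
    for item in corr.columns:
        others = corr[item].drop(item).abs()   # correlations with every other item
        if others.max() < threshold:
            weak.append(item)
    return weak

# Example usage on a data frame of PMT-converted items
# print(weakly_correlated_items(wes_items))
```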

FACTOR ANALYSIS

While the preliminary analysis helps identify which variables are potentially SEM incompatible, it is necessary to further reduce the variable count for reasons of parsimony. Although SEM provides a powerful tool for explaining the predictive relationships of variables, the usefulness of SEM is compromised when models become unnecessarily complicated. With this in mind, factor analysis can be applied to the remaining variables as a means of reducing the variable count as well as helping to better understand the structure of relationships between variables.

The Identification and Grouping of Questions into Latent Variables

Using results from previous WES iterations and existing employee engagement research as a guideline, the basic framework of the model is created by establishing a set of tentative latent variables 6. Each latent variable is comprised of a small number of questions that best represent theoretically established drivers of engagement. The ultimate goal of developing this rough engagement framework is to facilitate the exploratory and confirmatory analysis of the drivers leading to engagement. Exploratory analysis provides a means of discerning how well the questions within each proposed latent variable represent a statistically sound conceptual grouping of variables. Through the exploratory analysis, the latent variables are refined in an a posteriori fashion, until the factor analysis criteria are satisfied. Supporting this process is a SEM based confirmatory analysis of the refined latent variables directed by established research findings. This confirmatory analysis is a priori, requiring a specified model structure prior to the start of the analysis.

Exploratory Principal Component Analysis

Exploratory factor analysis was performed through principal components analysis (PCA). PCA works by identifying a linear grouping of variables and extracting the maximum amount of variance from the group. This process is continued iteratively, group after group, extracting the maximum remaining variance as each group is identified. The result is a set of independent latent variables that are each comprised of a closely related group of questions.

As the statistics required for PCA utilize the complete response set for each variable, the relationships between variables are based on a list-wise analysis. As a result, the list-wise statistics for each variable can sometimes provide differing sample totals. While this offers a less than ideal scenario, the typically high response rates for WES survey questions help to minimize any inconsistencies the varying sample totals may create.

As PCA is exploratory in nature, the initial question groupings under investigation are typically large and based primarily on hypothesis rather than established theory. Due to the large size of these initial groupings, it is not unusual for several factors to be present in a set of questions. To determine which factors are contained within a grouping of questions, the variables are loaded into a PCA routine, and the resulting coefficients and tests are compared against a set of predetermined criteria.

6 In SEM, there are certain terms that have similar, if not identical, definitions. The term latent variable is one such example, and can sometimes be used interchangeably with the following terms: factor, component, unobserved variable. For the purposes of the current WES model, latent variables can be further distinguished as being either engagement drivers (split between the foundational management drivers and the building block, workplace function drivers) or one of the engagement characteristics (specifically the BC Public Service Commitment outcome latent variable that comprises a third of employee engagement). It should be noted that both the Job Satisfaction and Organization Satisfaction engagement characteristics each consist only of a single observed variable, and therefore, are not technically unobserved variables.

These criteria, and their associated coefficients and tests, can be found in the table summary below.

Table 3: Criteria for Principal Components Analysis

Statistical Tests and/or Coefficients | Criteria
Correlations | 0.3
Communalities/Extractions | 0.5
Factor Loadings | 0.6
Kaiser-Meyer-Olkin Measure of Sampling Adequacy* | 0.6
Diagonal of Anti-Image Correlation Matrix* | 0.05
Bartlett's Test | 0.0
Eigenvalues | 1.0
Total Variance Explained | 60%

* Criteria used only when factoring more than two questions

Beginning with Correlations and ending with Total Variance Explained, the PCA results are reviewed step-by-step for each question. The Factor Loadings provide perhaps the most important finding of the PCA process, as they offer a tentative structure for an initial set of latent variables. Based on the Factor Loadings for a group of variables, a set of questions is flagged as representing a potential latent variable. The remaining, ungrouped questions are also flagged, and their introduction to an existing group or removal from the PCA process is considered. The newly grouped latent variables are then analyzed, either with the addition or removal of a separate variable. Improvements and deterioration in the PCA criteria are reviewed and further adjustments to the question grouping for each factor are made.

According to statistical theory, latent variables should ideally be comprised of three to four observed variables 7. For the WES analysis, in cases where a factor is identified as having more than four variables, the four variables with the strongest contribution to the factor (based on criteria values) would be selected. In some cases, individual variables not grouped to other latent variables are identified as independent indicators of a construct 8. Once the PCA criteria values stabilize and the inclusion or removal of additional variables provides no further benefit to the characteristics of a latent variable, a finalized set of latent variables can be defined.

7 D. Baer, personal communication, April 22, 2010
8 This was the case for the Job Satisfaction and Organization Satisfaction drivers in the WES engagement model.
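A minimal principal components pass over a candidate question grouping, applying the eigenvalue and loading criteria from Table 3, could look like the sketch below. The use of scikit-learn and standardized items is an assumption for illustration; the original analysis was not carried out in Python:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_summary(items_df, loading_cutoff=0.6):
    """Run PCA on a candidate grouping of items and report the pieces used in
    Table 3: eigenvalues, total variance explained by retained components,
    and the loading of each item on those components."""
    X = StandardScaler().fit_transform(items_df.dropna())   # list-wise deletion
    pca = PCA().fit(X)
    eigenvalues = pca.explained_variance_
    keep = eigenvalues >= 1.0                                # Kaiser criterion
    variance_explained = pca.explained_variance_ratio_[keep].sum()
    # Component loadings: eigenvectors scaled by the square root of the eigenvalue
    loadings = pca.components_[keep].T * np.sqrt(eigenvalues[keep])
    strong = np.abs(loadings) >= loading_cutoff              # candidate question groupings
    return eigenvalues, variance_explained, loadings, strong

# eigenvalues, var_expl, loadings, strong = pca_summary(wes_items[candidate_columns])
```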

Checking the Factor Analysis Results

A check on internal consistency

Following PCA, the internal consistency of each latent variable is examined through the application of a Cronbach alpha analysis. The Cronbach alpha is an indicator of how well a group of variables measures a single construct. Typically, alpha values of 0.7 or greater represent a uni-dimensional construct, whereas values of less than 0.7 suggest that a group of variables is measuring a multi-dimensional construct. In the event that the variables in a factor produce an alpha value of less than 0.7, the latent variable should be flagged prior to commencing SEM analysis. The flagged latent variable can then be subjected to a higher degree of scrutiny during the modelling phase, to help minimize the impact of internally inconsistent variables on the final model.

A check on multicollinearity

Multicollinearity provides an indication of how closely predictor variables are correlated in a multiple regression equation. While models with high multicollinearity values don't necessarily lose overall predictive power, individual relationships between variables and latent variables can be compromised due to question redundancy or overlap. The decision rules for multicollinearity checks are based on two criteria: tolerance values at or below 0.20 and variance inflation factors (VIF) at or above 4.0.

The first of two multicollinearity checks should involve regressing all proposed model questions on the job satisfaction question ("I am satisfied with my job") as the outcome variable. The second multicollinearity check should regress all proposed model questions on the organization satisfaction question ("I am satisfied with my organization"). If for either check a problematic tolerance and/or VIF value is obtained for a particular question or questions, then further consideration should be given to the potentially redundant questions. If the redundancy is substantial, then the removal of the question should be considered prior to beginning SEM.

Overall Conclusions

The end result of factor analysis should be a significant reduction in the overall question count as well as a set of independent latent variables. These represent a preliminary set of latent variables for the engagement model and will provide a basis for the subsequent SEM analysis.
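The two checks described above, internal consistency and multicollinearity, can be sketched as follows. The use of statsmodels for the VIF calculation is an assumption about tooling rather than a description of the original workflow, and the column selections are hypothetical:

```python
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def cronbach_alpha(items_df):
    """Cronbach's alpha for one proposed latent variable (list-wise deletion).
    Values of 0.7 or greater are read as a uni-dimensional construct."""
    X = items_df.dropna()
    k = X.shape[1]
    item_variances = X.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_variance = X.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def multicollinearity_flags(predictors_df, tol_cutoff=0.20, vif_cutoff=4.0):
    """Tolerance and VIF for every proposed model question, returning the
    questions whose values cross the decision rules described above."""
    X = sm.add_constant(predictors_df.dropna())
    flagged = []
    for i, col in enumerate(X.columns):
        if col == "const":
            continue                               # skip the intercept column
        vif = variance_inflation_factor(X.values, i)
        tolerance = 1.0 / vif
        if tolerance <= tol_cutoff or vif >= vif_cutoff:
            flagged.append((col, round(tolerance, 3), round(vif, 2)))
    return flagged

# alpha = cronbach_alpha(wes_items[latent_variable_columns])
# redundant = multicollinearity_flags(wes_items[model_question_columns])
```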

STRUCTURAL EQUATION MODELLING ANALYSIS

Compared to the PCA portion of the analysis, which consists of an exploratory examination of latent variables present in the WES, SEM represents an analysis that is more confirmatory in scope. Whereas PCA identifies potential latent variables based on the exploratory established relationships between variables, SEM describes the causal relationships between already clearly defined latent variables. Therefore, prior to performing SEM analysis, the relationships between latent variables must be described by pre-existing theory and empirically confirmed findings.

For the 2009 WES, confirmation of the WES engagement house model was achieved by incorporating WES 2009 data into the pre-existing WES 2008 SEM model. As the structure of the WES 2008 model was based on both empirically and theoretically established findings, the existing model offered a strong framework against which the WES 2009 data could be tested. This process was made possible through the implementation of AMOS, SPSS's structural equation modelling software. As both BC Stats and ERIN Research had access to and familiarity with AMOS, it was possible for BC Stats to load the 2009 WES model data directly into the 2008 WES house model files developed by ERIN Research. By utilising the original model files, BC Stats was able to ensure that the default model parameters remained constant over time, thereby guaranteeing direct comparability of the 2008 and 2009 models.

Variance-Covariance Matrices

The first step in performing a SEM analysis involves the generation of a variance-covariance matrix of all model variables. A variance-covariance matrix provides a structure for all the joint probability distributions contained within a dataset. While AMOS allows for the loading of both matrices and complete datasets, due to the large size of the WES dataset, matrices were used in the 2009 WES analysis in an effort to minimize the load on workstation processors. This in turn made it possible to include a large number of variables in the final matrix without adversely impacting file size considerations.

Using the full set of 72 5-point scale questions, 13 variance-covariance matrices were generated for the purposes of SEM analysis. The matrix of greatest importance was based on the overall sample (n = 23,574), whereas the remaining 12 matrices were based on an organization level stratification of the overall sample's dataset. These 12 matrices consisted only of organizations with samples greater than 400, each of which provided a sufficiently large sample size for individual SEM analysis.
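Producing the overall and organization-level variance-covariance matrices from the item data could be sketched as follows; the column names and the handling of the 400-respondent cut-off are assumptions for illustration:

```python
import pandas as pd

def covariance_matrices(df, item_cols, org_col="organization", min_n=400):
    """Build the overall variance-covariance matrix plus one matrix for each
    organization whose list-wise complete sample exceeds `min_n`."""
    complete = df.dropna(subset=item_cols)          # AMOS-style list-wise use of cases
    matrices = {"overall": complete[item_cols].cov()}
    for org, grp in complete.groupby(org_col):
        if len(grp) > min_n:
            matrices[org] = grp[item_cols].cov()
    return matrices

# mats = covariance_matrices(wes_df, item_cols=five_point_items)
# len(mats)  # 1 overall matrix plus the qualifying organizations
```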

Establishing Model Fit Criteria and SEM Requirements

To determine how well a proposed model fits the response data, a set of criteria was established. These criteria represent the minimum acceptable thresholds commonly used for SEM analysis. The fit indices of interest consisted of the following: relative chi-square statistic (CMIN/df), significance for the chi-square statistic (p), Standardized Root Mean Squared Residual (SRMR), Comparative Fit Index (CFI), Normed Fit Index (NFI), Tucker-Lewis Index (TLI), Parsimony-Adjusted Measures (PCFI), and Root Mean Square Error of Approximation (RMSEA). The decision criteria used for all key indices and tests are shown in the table below. The criteria for specific squared multiple correlations (R²) and standardized regression estimates (R) are listed as well.

Table 4: Criteria for Structural Equation Modelling Analysis

Type of Index / Estimate | Name of Index / Estimate | Criteria
Absolute Fit Indexes | CMIN/df | Lower the better
Absolute Fit Indexes | p | > 0.05*
Absolute Fit Indexes | SRMR | < 0.05
Baseline Comparisons | CFI | > 0.95
Baseline Comparisons | NFI | > 0.95
Baseline Comparisons | TLI | > 0.97
Parsimony Adjusted Measures | PCFI | Higher the better
Absolute Fit Indexes | RMSEA | < 0.05
Regression Weight | R for Drivers | p ≤ 0.05
Squared Multiple Correlation | R² for Outcomes | Higher the better

* Represents a test of non-significance

In addition to model fit criteria, several analytic decisions needed to be made prior to the beginning of SEM. Perhaps the most important of these choices was the type of estimation method to be used during the analysis. Depending on the unique characteristics of a dataset (e.g. sample size, normality, etc.), a particular estimation method may be better suited than others when generating parameter estimates and/or performing fit analysis. In the case of the 2009 WES data, it was decided that the well tested Maximum Likelihood (ML) estimator offered the most reliable and robust results.
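The decision rules in Table 4 lend themselves to a simple programmatic check. The sketch below takes a dictionary of fit statistics, however they were produced, and reports which pass/fail criteria are met; "lower/higher the better" indices such as CMIN/df and PCFI are better used for comparing competing models than as pass/fail tests:

```python
def check_fit(stats):
    """Compare a dictionary of fit statistics against the Table 4 cut-offs."""
    rules = {
        "p":     lambda v: v > 0.05,    # test of non-significance
        "SRMR":  lambda v: v < 0.05,
        "CFI":   lambda v: v > 0.95,
        "NFI":   lambda v: v > 0.95,
        "TLI":   lambda v: v > 0.97,
        "RMSEA": lambda v: v < 0.05,
    }
    return {name: rule(stats[name]) for name, rule in rules.items() if name in stats}

# Example with the unmodified 2009 basic model values reported in Table 6
print(check_fit({"p": 0.00, "SRMR": 0.008, "CFI": 0.998,
                 "NFI": 0.998, "TLI": 0.995, "RMSEA": 0.032}))
# Only the chi-square p-value fails, which the report attributes to the very large sample.
```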

Understanding Structural Path Diagrams

To better understand the relationships between the engagement model's latent variables and their constituent survey questions, an example of a structural diagram is presented in Figure 1. The example diagram provides a simplified depiction of the structural relationship that exists between the Executive-level Management and Supervisory-level Management drivers. As structural models allow for the estimation of complex causal relationships between variables, diagrams can be useful in understanding both the direction and composition of the relationships. For the purposes of WES, all measurement and structural paths were assumed to be linearly related, and as such, the coefficients presented are associated with each path's resulting linear regression equation. For more detailed structural diagrams, refer to Figure 2 and Figure 3.

Figure 1: Example Diagram of the Relationship between the Two Management Drivers

[Example structural diagram: the Executive-level Management and Supervisory-level Management latent variables, connected by a structural weight of 0.45, each measured by two observed questions (Q1 and Q2), with estimated error terms attached to every observed and unobserved variable.]

Ovals represent unobserved variables (also referred to as factors, latent variables or components). The light blue ovals are estimated error terms that are associated with observed variables as well as other unobserved variables. Dark blue ovals are the theoretically determined constructs that are identified by the measured variables.

Rectangles represent observed variables (also referred to as indicator variables, measured variables or, in the case of WES, survey questions).

Arrows represent paths between variables and are quantified by a regression coefficient. If the arrow connects two latent variables it is called a structural weight (represented by the value of 0.45 in the example above), whereas a path connecting a latent variable with an observed variable is called a measurement weight.

Basic Model Tests: Validating the Links between Management and Engagement

Much of the research into employee engagement supports a model that is multipartite and hierarchical in structure 9. For the BC Public Service, the foundation of this structure is comprised of management drivers, which support workplace function drivers, which in turn lead to the characteristics of engagement. While the workplace functions can have a significant impact on engagement, engagement is primarily influenced by the structure's foundation.

With this in mind, SEM analysis of the 2009 WES data commenced with the confirmation of the WES 2008 basic engagement model. Representing a reduced version of the full house model, the basic model focuses exclusively on management drivers and engagement characteristics. As with the full house model, the foundation of the basic model is comprised of two management drivers: Executive-level Management 10 and Supervisory-level Management. Moving upward, the management foundation directly impacts the three engagement characteristics: Job Satisfaction, Organization Satisfaction and BC Public Service Commitment. The following figure (Figure 2) provides a structural diagram of the basic model, including all observed and unobserved variables, as well as all of the model's paths and their associated standardized regression weights.

9 Schmidt, F. (2009). Employee engagement: a review of the literature. Schmidt and Carbol Consulting Group, Inc. for the Office of the Chief Human Resources Officer, Treasury Board of Canada Secretariat.

10 It is important to note that the questions which comprise both the 2008 and 2009 WES Executive-level Management model drivers are not entirely the same as those found in the 2008 and 2009 WES reports. Whereas the model driver consists of two questions ("Executives in my organization communicate decisions in a timely manner." and "Executives in my organization provide clear direction for the future."), the report driver includes a third question ("Executives in my organization clearly communicate strategic changes and/or changes in priorities."). It should also be noted that the reports for the 2010 WES have addressed this issue by eliminating the third Executive focused question.

Figure 2: Diagram of the Basic Model of Engagement

[Structural diagram of the basic model: the Executive-level Management and Supervisory-level Management drivers (connected by a covariance of 0.45), each measured by two observed questions, with paths leading to the three engagement characteristics (Job Satisfaction, Organization Satisfaction and BC Public Service Commitment). Estimated error terms are attached to all observed and unobserved variables, and standardized regression weights are shown on each path.]
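For readers without AMOS, a model of the same general shape can be expressed in lavaan-style syntax and estimated with the open-source semopy package. This is an illustrative approximation only: the variable names are hypothetical placeholders, and the exact paths and covariances should be read off Figure 2 rather than taken from this sketch:

```python
import semopy  # open-source SEM package; the original analysis was carried out in AMOS

# Two management drivers measured by two questions each, a latent Commitment
# outcome, and two single-item satisfaction outcomes (hypothetical column names)
basic_model_desc = """
Executive =~ q_exec_1 + q_exec_2
Supervisory =~ q_super_1 + q_super_2
Commitment =~ q_commit_1 + q_commit_2
Commitment ~ Executive + Supervisory
q_job_sat ~ Executive + Supervisory + Commitment
q_org_sat ~ Executive + Supervisory + Commitment
Executive ~~ Supervisory
"""

model = semopy.Model(basic_model_desc)
# model.fit(wes_items)                 # DataFrame of PMT-converted responses
# print(semopy.calc_stats(model))      # chi-square, CFI, TLI, RMSEA, etc.
# print(model.inspect())               # parameter estimates
```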

Using the SEM criteria described above, the 2008 WES basic model was analyzed using the 2009 WES overall sample's covariance matrix. A review of the fit indices revealed that the 2009 dataset fit well with the 2008 basic model, suggesting that the model continued to provide a suitable description of the work environment for the overall public service. However, as with the 2008 basic model, the 2009 model produced a significance value (p) for the Chi-square statistic (CMIN) that did not satisfy the index's minimum requirements. One explanation for this result is that the large sample size (n = 23,574, model n = 23,250) inflated the Chi-square statistic, leading to a significant discrepancy between the expected model (the hypothetical model based on the defined structure) and the observed model (the actual measured basic model data). As the chi-square statistic is known to become distorted by large sample sizes, regardless of model fit, it was decided that the final assessment of fit should instead focus on the results of the remaining fit indices. With this in mind, Table 5 provides a summary of the results for both the 2008 and 2009 basic models.

Table 5: Comparison of 2008 and 2009 WES Basic Model Findings

WES Year | Model n* | CMIN/df | p | RMSEA | R² Job Satisfaction | R² Organization Satisfaction | R² BC Public Service Commitment
2008 | 2,03 | 26.2 | 0.00 | 0.035 | 54% | 60% | 58%
2009 | 23,250 | 24.5 | 0.00 | 0.032 | 55% | 58% | 58%

* As the AMOS version used for SEM analysis did not have imputation capabilities, analysis could not be performed on missing data. As a result, the Model n refers to the number of respondents who answered every WES question.

Consistent with the 2008 WES findings, the 2009 WES model offered further confirmation of the strong relationships between the management foundation and employee engagement. While the deterioration in the Organization Satisfaction R² value suggested the 2009 basic model represented a less complete description of employee engagement as compared to 2008's basic model, the results for the remaining two engagement characteristics indicated this effect was fairly moderate. These results also further substantiated the underlying structure of the engagement model and supported the subsequent development of a fully detailed house model.

Once the 2009 WES data had been successfully fitted to the 2008 WES basic model, consideration was given to how the model could be further improved. While the criteria for all fit indices were sufficiently met, it was hypothesized that refinements to the 2009 WES basic model may elicit additional improvements to the fit indices, representing an increase in model fit. The scope of the modifications was limited to either the alteration of structural weights or the composition of latent variables, and in both cases, the changes were directed by theory and/or AMOS's modification indices. In total, five separate basic models were tested and compared. The fit indices for each model, as well as a description of the model adjustments, are summarised in Table 6.

Reviewing the fit indices for each model variation revealed that Model 1, the model with the same structure as the 2008 basic model, provided the best model fit for the 2009 data. While some of the alternate models offered slightly improved results for one or two fit indices, none of the model adjustments led to a consistent improvement across all indices. Furthermore, an examination of the squared multiple correlations suggested that Model 1 explained the greatest amount of variance across all outcome measures. It should be noted that the Model 4 adjustment resulted in no discernible impact to the model's fit. In other words, the change from a covariance relationship to a direct path leading from Executive- to Supervisory-level Management did not affect the model in any observable way. In SEM terms, a covariance relationship differs from a direct regression path in that two variables are reciprocally rather than uni-directionally connected. As a result, two covariant variables can influence one another.

Table 6: Comparison of 5 Basic Model Variations

Model Number | Type of Modification | Description of Modification* | CMIN/df | p | SRMR | CFI | NFI | TLI | PCFI | RMSEA | R² Job Satisfaction | R² Organization Satisfaction | R² BC Public Service Commitment
1 | Unmodified | N/A | 24.5 | 0.00 | 0.008 | 0.998 | 0.998 | 0.995 | 0.428 | 0.032 | 55% | 58% | 58%
2 | Latent Variable (Executive-level Management) | Replaced question "Executives in my organization communicate decisions in a timely manner." with "Executives clearly communicate changes and/or changes in priorities." | 22.1 | 0.00 | 0.008 | 0.998 | 0.998 | 0.996 | 0.428 | 0.030 | 55% | 57% | 58%
3 | Latent Variable (Executive-level Management) | Added question "Executives clearly communicate changes and/or changes in priorities." | 37.6 | 0.00 | 0.02 | 0.996 | 0.996 | 0.992 | 0.526 | 0.040 | 55% | 56% | 58%
4 | Structural Weight | Replaced covariance between Executive- and Supervisory-level Management with a path leading from Executive- to Supervisory-level Management | 24.5 | 0.00 | 0.008 | 0.998 | 0.998 | 0.995 | 0.428 | 0.032 | 55% | 58% | 58%
5 | Structural Weight | Replaced path leading from Commitment to Job Satisfaction with path leading from Job Satisfaction to Commitment | 24.5 | 0.00 | 0.008 | 0.998 | 0.995 | 0.995 | 0.428 | 0.032 | 33% | 60% | 67%

* All model adjustments are in relation to the unmodified Model 1. The modifications in Models 2 through 5 are not cumulative.

STRUCTURAL EQUATION MODELLING ANALYSIS Full Model Tests Validating the Links between Management, Workplace Functions and Engagement Once satisfactory results were obtained for the basic model, work began on the confirmation of model fit between the 2009 WES data and the 2008 house model. Whereas the basic model was comprised of a management foundation and three engagement characteristics, the house model built upon the basic model s framework by introducing several workplace functions. Acting as the house model s building blocks, the workplace functions helped support and mediate the foundation s impact on the engagement outcomes. This in turn developed a depiction of the work environments that was both broader and more nuanced than the basic model, while also helping to explain more of the variance in the engagement outcomes. Incorporation of WES 2009 data into the WES 2008 house model led to the confirmation of 2 unique drivers of engagement. Of these 2 drivers, 0 represented various workplace functions throughout the work environment, while the remaining two drivers characterized the executive and supervisory aspects of the management foundation. Once the three engagement characteristics were accounted for, a final count of 5 drivers was obtained for the house model. The location and connection amongst all 5 drivers is provided in Figure 3. Due to the complexity of the house model, it was necessary to suppress the majority of regression coefficients in order to make the structural diagram readable. The coefficients that are present, all of which are equal to one, represent either reference indicators or specific parameter constraints that are needed to correctly perform a SEM analysis. Similar to the basic model, the survey questions that comprised each of the house model s 5 drivers were nearly identical to those found in the standard WES reports. However, differences were present, specifically with regards to the Executive-level Management driver and the Empowerment driver 2. As with the basic model, these differences were partly a reflection of shifts that have occurred in respondents perceptions since the engagement model s initial development in 2006. Additionally, efforts to improve both the model fit and questionnaire over time have resulted in minor refinements to the model, including the modification of the Executive-level Management and Empowerment drivers. In terms of model fit, the 2009 WES data appeared to work well within the structure of the WES 2008 model. Using the same variance-covariance matrix that was employed in the 2009 basic model SEM analysis, the resulting fit indices for the 2009 house model were closely comparable to the 2008. Despite the difficulties associated with interpretation of Chi-square statistics for large samples, the remaining fit indices provided a strong confirmation that the WES 2008 house model was not only a good representation of the 2009 WES data, but actually had a better fit with the 2009 data than the 2008 data. A summary of the relevant fit indices for the 2006 through 2009 house models is provided in the Error! Reference source not found.. 2 While the Empowerment driver for both the 2009 house model and the WES reports is comprised of three survey questions, one of the questions differs between the model and reports. For the WES reports, the Empowerment driver includes the question: I am encouraged to be innovative in my work. In the WES reports, this is replaced by the question: I have the opportunities I need to implement new ideas. 

Table 7: Comparison of 2006-2009 WES Full Model Findings (R² = squared multiple correlation)

WES Year | Model n* | CFI | SRMR | RMSEA | R² Job Satisfaction | R² Organization Satisfaction | R² BC Public Service Commitment
2006** | 4,392 | 0.993 | 0.023 | 0.034 | 62% | 69% | 65%
2007 | 7,469 | 0.993 | 0.02 | 0.033 | 50% | 68% | 73%
2008 | 2,03 | 0.98 | 0.024 | 0.034 | 54% | 66% | 69%
2009 | 23,250 | 0.982 | 0.023 | 0.033 | 55% | 66% | 70%

* As the AMOS version used for the SEM analysis did not have imputation capabilities, respondents with missing data could not be included. The Model n therefore refers to the number of respondents who answered every WES question.
** The in-scope population for 2006 did not include the Ministry of Transportation. As such, the 2006 model results should not be directly compared to the results for subsequent years.
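For reference, the fit indices reported in Table 7, and in the tables that follow, are conventionally defined in terms of the model chi-square as shown below; the exact values produced by AMOS may differ slightly in how boundary cases are handled. Here χ²_M and df_M denote the chi-square and degrees of freedom of the fitted model, χ²_B and df_B those of the baseline (independence) model, N the sample size, s_ij and σ̂_ij the observed and model-implied covariances, and p the number of observed variables. These formulas also show why the raw chi-square inflates with sample size while indices such as the CFI and RMSEA remain comparatively stable.

\[
\mathrm{CMIN}/df = \frac{\chi^2_M}{df_M}, \qquad
\mathrm{NFI} = \frac{\chi^2_B - \chi^2_M}{\chi^2_B}, \qquad
\mathrm{TLI} = \frac{\chi^2_B/df_B - \chi^2_M/df_M}{\chi^2_B/df_B - 1}
\]
\[
\mathrm{CFI} = 1 - \frac{\max(\chi^2_M - df_M,\, 0)}{\max(\chi^2_M - df_M,\, \chi^2_B - df_B,\, 0)}, \qquad
\mathrm{PCFI} = \frac{df_M}{df_B}\,\mathrm{CFI}
\]
\[
\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_M - df_M,\, 0)}{df_M\,(N-1)}}, \qquad
\mathrm{SRMR} = \sqrt{\frac{2\sum_{i \le j}\left(\dfrac{s_{ij} - \hat{\sigma}_{ij}}{\sqrt{s_{ii}\,s_{jj}}}\right)^2}{p\,(p+1)}}
\]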

Figure 3: Diagram of the House Model of Engagement

[Structural diagram not reproduced here. The figure shows the two management foundation drivers (Executive-level Management and Supervisory-level Management), the ten workplace function drivers (Vision, Mission & Goals; Professional Development; Empowerment; Recognition; Physical Tools & Environment; Pay & Benefits; Teamwork; Stress & Workload; Respectful Environment; Staffing Practices) and the three engagement characteristics (Job Satisfaction, Organization Satisfaction and BC Public Service Commitment), together with their survey-question indicators and associated error terms.]
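To make the structure sketched in Figure 3 more concrete, the fragment below shows how a small piece of a house-style model, two workplace functions, one management driver and one engagement outcome, could be specified in lavaan-style syntax using the open-source semopy package for Python. This is purely illustrative: the actual analysis was carried out in AMOS, the item names (q1_emp, q2_team and so on) and the input file are hypothetical placeholders rather than real WES item codes, and the fragment omits most of the house model's drivers and paths.

    import pandas as pd
    import semopy

    # Lavaan-style description of a small fragment of a house-style model.
    # "=~" defines a latent driver and its survey-question indicators;
    # "~" defines a structural (regression) path. By convention the first
    # indicator of each latent variable acts as the reference indicator,
    # with its loading fixed to 1 (the 1s visible in Figure 3).
    MODEL_DESC = """
    Empowerment    =~ q1_emp + q2_emp + q3_emp
    Teamwork       =~ q1_team + q2_team + q3_team
    ExecManagement =~ q1_exec + q2_exec

    Empowerment ~ ExecManagement
    Teamwork    ~ ExecManagement
    job_satisfaction ~ Empowerment + Teamwork
    """

    # Hypothetical respondent-level item scores; one column per question above.
    data = pd.read_csv("wes_items.csv")

    model = semopy.Model(MODEL_DESC)
    model.fit(data)                   # maximum likelihood estimation
    print(semopy.calc_stats(model))   # chi-square, CFI, TLI, RMSEA, etc.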

As the 2009 WES data was found to have a slightly better fit with the 2008 WES house model than the 2008 WES data did, it was hypothesized that minor modifications to the house model would result in even further improvements. As such, a set of adjustments was defined and then systematically applied to the unmodified 2009 house model through an iterative process. To ensure the adjustments were both theoretically and empirically sound, the choice of each modification was informed by a combination of established research and AMOS's modification indices. Including the unmodified house model, seven models in total were tested and compared. The results of this comparison are summarised in Table 8.

A comparison of the fit indices for the seven house model variations revealed only slight differences, with each modified version offering a combination of slight improvements and moderate deteriorations when contrasted with the unmodified version. Due to the similarity of the fit indices across all model variations, a small subset of criteria was selected in order to determine which of the seven models represented the best-fitting model. This subset of criteria was assessed hierarchically, such that models that did not satisfy the requirements of the initial sub-criterion were dropped from further analysis.

As the engagement characteristics represent the final outcomes of the model, their associated R² values provided the first sub-criterion by which the seven models could be differentiated. Applying this criterion, the decrease in the R² values for models 2, 5 and 6 indicated that each of these variations explained less of the variance in employee engagement than model 1, and as a result, they were removed from further consideration. The second criterion used to distinguish the remaining four models focused on the rule of parsimony, which suggests that, when considering two equivalent models, preference should be given to the simpler, less complex model. As the PCFI provides a parsimony-adjusted measure of model fit, a comparison of the PCFI values for models 1, 3, 4 and 7 offered a clear means of distinguishing these similarly fitted models. Based on this analysis, the PCFI indicated that model 1 was the most parsimonious model, and therefore the best representation of the 2009 WES data.
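The two-step selection rule described above can be expressed in a few lines of code. The sketch below is illustrative only: the report does not spell out exactly how the R² comparison was operationalised, so the sketch uses one plausible rule (no engagement characteristic may explain less variance than in the unmodified model) before falling back on the highest PCFI, and the class and field names are hypothetical.

    # Illustrative sketch of the hierarchical model-selection rule described in
    # the text: first screen variants on the engagement R² values, then prefer
    # the most parsimonious of the remaining models (highest PCFI).
    # Names and the exact R² comparison are assumptions, not the report's code.
    from dataclasses import dataclass

    @dataclass
    class CandidateModel:
        name: str
        pcfi: float
        r2: dict  # R² per engagement characteristic, e.g. {"job": 0.545, ...}

    def select_best(unmodified: CandidateModel,
                    variants: list) -> CandidateModel:
        # Criterion 1: drop variants that explain less variance in engagement
        # (here: any characteristic whose R² falls below the unmodified model's).
        retained = [unmodified] + [
            m for m in variants
            if all(m.r2[k] >= unmodified.r2[k] for k in unmodified.r2)
        ]
        # Criterion 2: rule of parsimony -- prefer the highest PCFI.
        return max(retained, key=lambda m: m.pcfi)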

Table 8: Comparison of 7 House Model Variations* (R² = squared multiple correlation)

Model 1 (Unmodified): no modification.
  CMIN/df 26.2 | p 0.00 | SRMR 0.023 | CFI 0.982 | NFI 0.98 | TLI 0.978 | PCFI 0.88 | RMSEA 0.033 | R²: Job Satisfaction 54.5%, Organization Satisfaction 65.6%, BC Public Service Commitment 69.7%

Model 2 (Latent Variable, Executive-level Management): replaced the question "Executives in my organization communicate decisions in a timely manner." with "Executives clearly communicate changes and/or changes in priorities."
  CMIN/df 26.1 | p 0.00 | SRMR 0.023 | CFI 0.982 | NFI 0.982 | TLI 0.979 | PCFI 0.88 | RMSEA 0.033 | R²: Job Satisfaction 54.5%, Organization Satisfaction 65.3%, BC Public Service Commitment 69.6%

Model 3 (Structural Weight): added a path leading from Teamwork to Job Satisfaction.
  CMIN/df 25.5 | p 0.00 | SRMR 0.022 | CFI 0.983 | NFI 0.982 | TLI 0.979 | PCFI 0.86 | RMSEA 0.032 | R²: Job Satisfaction 54.8%, Organization Satisfaction 65.5%, BC Public Service Commitment 69.8%

Model 4 (Structural Weight): added paths leading from Teamwork to Job Satisfaction and from Pay & Benefits to Vision, Mission & Goals.
  CMIN/df 25.3 | p 0.00 | SRMR 0.022 | CFI 0.983 | NFI 0.982 | TLI 0.979 | PCFI 0.84 | RMSEA 0.032 | R²: Job Satisfaction 54.8%, Organization Satisfaction 65.6%, BC Public Service Commitment 69.9%

Model 5 (Structural Weight): added paths leading from Teamwork to Job Satisfaction, from Pay & Benefits to Vision, Mission & Goals, and from Teamwork to Commitment.
  CMIN/df 24.9 | p 0.00 | SRMR 0.022 | CFI 0.983 | NFI 0.982 | TLI 0.98 | PCFI 0.83 | RMSEA 0.032 | R²: Job Satisfaction 56.4%, Organization Satisfaction 65.5%, BC Public Service Commitment 69.1%

Model 6 (Latent Variable, Executive-level Management): added the question "Executives clearly communicate changes and/or changes in priorities."
  CMIN/df 27.2 | p 0.00 | SRMR 0.023 | CFI 0.98 | NFI 0.98 | TLI 0.978 | PCFI 0.825 | RMSEA 0.034 | R²: Job Satisfaction 54.6%, Organization Satisfaction 65.4%, BC Public Service Commitment 69.6%

Model 7 (Latent Variable, Empowerment): replaced the question "I have the opportunities I need to implement new ideas." with "I am encouraged to be innovative in my work."
  CMIN/df 26.7 | p 0.00 | SRMR 0.023 | CFI 0.982 | NFI 0.98 | TLI 0.978 | PCFI 0.87 | RMSEA 0.033 | R²: Job Satisfaction 54.5%, Organization Satisfaction 65.6%, BC Public Service Commitment 69.7%

* All model adjustments are in relation to the unmodified Model 1. The modifications in Models 2 through 7 are not cumulative.

Assessment of Organizational Differences

By using the overall sample (n = 23,574), it was possible to obtain confirmation of both the basic and house models within the context of the entire BC Public Service. While this provided a clear depiction of employee engagement corporate-wide, it came at the expense of finer-grained results. With all high-level estimates, variation within the population of interest can sometimes be obscured by the aggregation of demographic and geographic groups. Given the BC Public Service's size and breadth, it became clear that a higher-resolution analysis was necessary to fully understand model fit issues at the organization level.

Before the modelling of organizations could be performed, it was necessary to reconsider some of the requirements of SEM due to the wide variation in organization-level sample sizes. One of the critical assumptions of SEM is that the dataset being analysed is comprised of a sufficiently large sample (n > 400). Clearly, the sample obtained for the overall public service exceeded this threshold. However, at the organization level, 11 of the 23 in-scope organizations did not satisfy this criterion, and as such, the SEM analysis had to be constrained to the 12 organizations with sample sizes greater than 400.

Basic Model Fit by Organization

Once the 12 organizations were identified, their respective variance-covariance matrices were individually loaded into the basic model, and a unique set of model parameters and fit indices was generated for each organization. After the parameters and indices were obtained for all 12 organizations, comparisons were made between each organization's results as well as with the results for the overall public service.

In comparison to the overall public service, the comparatively small sample sizes for each organization led to a substantial shift in the significance levels for many of the parameter estimates. While this impact was most pronounced for the smaller organizations (n ≤ 400), half of the organizations under analysis were found to have at least one non-significant parameter (based on a total of nine structural weights in the basic model). As the intent of the organization-level analysis was to confirm whether the structure of the corporate-wide basic model could be applied to individual organizations, focus was given specifically to the non-significant structural weights. Given that structural weights represent the paths leading to engagement, a non-significant structural weight indicated that certain relationships between the management drivers and the engagement characteristics were not universally applicable to all organizations. It should be noted, though, that the composition of the latent variables was not subjected to similar scrutiny, as model findings suggested there were no significant issues with the latent variables' factor loadings. In other words, concerns over the basic model's structure were limited to the paths between drivers and not to the drivers themselves. A summary of the significant and non-significant structural weights for each organization is provided in Table 9.

Table 9: Organization Level Basic Model Structural Weights

Organization | Significant Structural Weights | Non-Significant Structural Weights
Attorney General | 8 | 1
BC Public Service Agency | 8 | 1
Children and Family Development | 8 | 1
Citizens' Services | 9 | 0
Environment | 9 | 0
Finance | 9 | 0
Forests and Range | 9 | 0
Health Services | 8 | 1
Housing and Social Development | 9 | 0
Integrated Land Management Bureau | 8 | 1
Public Safety and Solicitor General | 9 | 0
Transportation and Infrastructure | 8 | 1
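As context for Table 9, the significance of an individual structural weight is typically judged from its critical ratio, the unstandardized estimate divided by its estimated standard error (reported by AMOS as C.R.). Because standard errors shrink roughly in proportion to 1/√n, a path of the same underlying strength can be clearly significant in the overall sample of more than 23,000 respondents yet fall short of significance in an organization of a few hundred.

\[
\mathrm{C.R.} = \frac{\hat{\beta}}{\widehat{SE}(\hat{\beta})}, \qquad
\widehat{SE}(\hat{\beta}) \propto \frac{1}{\sqrt{n}}, \qquad
|\mathrm{C.R.}| > 1.96 \;\Leftrightarrow\; p < 0.05 \ \text{(two-tailed, asymptotically)}.
\]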

Keeping the structural weight results in mind, the fit indices for each organization (summarised in Table 10) were reviewed to determine whether their corresponding variance-covariance matrices fit well with the basic model for the overall BC Public Service. Beginning with the Chi-square statistics, non-significant discrepancies were obtained for Health Services and the Integrated Land Management Bureau (ILMB). This indicated that, based on the Chi-square statistic, the basic model worked well for both of these organizations. Unfortunately, the remaining 10 organizations, much like the corporate-wide results, were found to have a significant difference between the observed and expected model results. While the large sample for the overall public service prevented a clear interpretation of its Chi-square statistic, the less extreme sample sizes at the organization level suggested that the basic model was not an ideal representation of the work environment for these 10 organizations.

Despite the problematic Chi-square statistics, a review of the remaining fit indices suggested that all but two of the 12 organizations had a strong fit with the basic model. For both the BC Public Service Agency and the Ministry of Environment, the corresponding RMSEA values exceeded the 0.05 cut-off. As the RMSEA is largely independent of sample size, the low BC Public Service Agency sample size and the moderate Ministry of Environment sample size likely had no distorting influence on the index's values. Based on this finding, both the BC Public Service Agency and the Ministry of Environment were determined to have a poor fit with the basic model.

Finally, turning to the squared multiple correlation results, a wide range of R² values was obtained for all three engagement characteristics, both within and between organizations. For four of the 12 organizations, the R² values for all three engagement characteristics were either equivalent to or higher than the corresponding corporate-level results. This indicated that, for these four organizations, the basic model provided a more comprehensive explanation of engagement than the overall sample results initially suggested. Conversely, the remaining eight organizations offered less consistent results, with each having at least one engagement characteristic R² value that dropped below the corresponding result for the overall sample.

Taking both the parameter estimates and fit indices into consideration, it was determined that, while the basic model fit moderately well for some organizations, it did not provide an ideal representation of any organization's work environment. Although this does not represent a best-case scenario, it is not entirely surprising, as the development of the original basic model was based upon corporate-wide results. Furthermore, the fact that the basic model fit as well as it did for several organizations points to its ability to describe both specific and more general work environments.

Table 10: Comparison of Organization Level Basic Model Findings (R² = squared multiple correlation)

Organization | Model N | CMIN/df | p | SRMR | CFI | NFI | TLI | PCFI | RMSEA | R² Job Satisfaction | R² Organization Satisfaction | R² BC Public Service Commitment
Attorney General | 2,564 | 3.4 | 0.00 | 0.009 | 0.998 | 0.997 | 0.996 | 0.428 | 0.03 | 60% | 60% | 67%
BC Public Service Agency | 397 | 2.6 | 0.00 | 0.022 | 0.997 | 0.984 | 0.977 | 0.424 | 0.064 | 59% | 57% | 49%
Children and Family Development | 3,533 | 4.7 | 0.00 | 0.008 | 0.998 | 0.997 | 0.995 | 0.428 | 0.033 | 49% | 54% | 55%
Citizens' Services | 1,755 | 3.3 | 0.00 | 0.0 | 0.997 | 0.996 | 0.994 | 0.427 | 0.036 | 57% | 58% | 6%
Environment | 1,303 | 4.8 | 0.00 | 0.04 | 0.994 | 0.992 | 0.985 | 0.426 | 0.054 | 60% | 56% | 55%
Finance | 1,356 | 2.5 | 0.00 | 0.00 | 0.998 | 0.996 | 0.995 | 0.428 | 0.033 | 56% | 63% | 55%
Forests and Range | 2,73 | 2.6 | 0.00 | 0.009 | 0.999 | 0.998 | 0.997 | 0.428 | 0.025 | 56% | 53% | 60%
Health Services | 85 | 1.7 | 0.05 | 0.00 | 0.998 | 0.996 | 0.996 | 0.428 | 0.030 | 58% | 58% | 58%
Housing and Social Development | 2,224 | 4.0 | 0.00 | 0.009 | 0.997 | 0.996 | 0.993 | 0.427 | 0.037 | 53% | 57% | 50%
Integrated Land Management Bureau | 483 | 1.7 | 0.06 | 0.08 | 0.997 | 0.992 | 0.993 | 0.427 | 0.038 | 56% | 60% | 49%
Public Safety and Solicitor General | 2,60 | 3.2 | 0.00 | 0.009 | 0.998 | 0.997 | 0.995 | 0.428 | 0.032 | 59% | 62% | 63%
Transportation and Infrastructure | ,80 | 1.8 | 0.04 | 0.006 | 0.999 | 0.997 | 0.997 | 0.428 | 0.027 | 53% | 56% | 62%

Full Model Fit by Organization

Similar to the process used for the basic model, the 12 organization-level variance-covariance matrices were individually loaded into the house model. Once loaded, the parameter estimates and fit indices were calculated for each organization and then contrasted with the other organizations' results and the overall public service results.

Beginning with the parameter estimates, both the structural weights and their associated significance levels were examined for each organization. In the case of the overall sample, the large sample size helped ensure the statistical significance of even minimally strong structural weights. Unfortunately, the comparatively small organization-level sample sizes did not provide the same statistical power 3, and as a result, nine organizations were found to have at least one non-significant structural weight. As expected, the incidence of non-significant structural weights was highest for the smaller organizations, while only a handful of the weights for larger organizations were found to be non-significant.

For the majority of organizations with non-significant parameters, the number of problematic structural weights was relatively low, and remained roughly consistent with each organization's respective basic model results. This was particularly noteworthy, as the total number of structural weights in the full model was 40, whereas the simpler basic model included only nine. However, for both the BC Public Service Agency and the Integrated Land Management Bureau, the high incidence of non-significant weights suggested a deterioration in fit between the house model and each of the two organizations' variance-covariance matrices. A summary of the parameter results is provided in Table 11.

Table 11: Organization Level House Model Structural Weights

Organization | Significant Structural Weights | Non-Significant Structural Weights
Attorney General | 39 | 1
BC Public Service Agency | 25 | 15
Children and Family Development | 40 | 0
Citizens' Services | 37 | 3
Environment | 37 | 3
Finance | 40 | 0
Forests and Range | 36 | 4
Health Services | 38 | 2
Housing and Social Development | 40 | 0
Integrated Land Management Bureau | 30 | 10
Public Safety and Solicitor General | 39 | 1
Transportation and Infrastructure | 36 | 4

Consistent with the parameter estimate findings, the model fit indices provided encouraging results for the larger organizations, while some of the smaller organizations had one or more fit indices that did not meet their corresponding criteria. However, the Chi-square statistics for all organizations, regardless of sample size, showed a significant difference between the observed and expected models.

3 Statistical power refers to the probability that a statistical test will correctly reject a false null hypothesis (that is, avoid a Type II error). Factors that influence statistical power include sample size, effect size and the statistical significance criterion.

Due to the significance of these discrepancies, a Chi-square-focused interpretation of the models suggested that none of the organization-level variance-covariance matrices fit well with the full house model. Looking at the remaining fit indices, a total of four organizations (the BC Public Service Agency, the Ministry of Environment, Health Services, and the Integrated Land Management Bureau) were found to have at least one index that did not satisfy its minimum requirement. For all four organizations, the TLI proved to be an issue, with values falling slightly below the index's 0.97 criterion. Both the BC Public Service Agency and the Integrated Land Management Bureau had similar difficulties with the NFI, with values of less than 0.95. Finally, the BC Public Service Agency was found to have an RMSEA value greater than the 0.05 cut-off, a result that occurred during both the basic and full model SEM analyses. A summary of these results can be found in Table 12.

Focusing on the R² values, the house model provided four organizations with a more complete explanation of all three engagement characteristics as compared to the overall public service results. For the remaining eight organizations, the R² value for at least one of the engagement characteristics was less than that for the overall public service, indicating that the organization-level model explained less of the variance for one or more of the outcomes.

Due to the increase in both non-significant structural weights and problematic fit indices, the house model proved to offer a poorer organization-level fit than the basic model. This lack of model fit was particularly pronounced for the BC Public Service Agency and the Integrated Land Management Bureau, suggesting that substantial alterations to the house model would be necessary if a well-fitted model were to be obtained for either organization. To address these concerns, individual adjustments, based on the removal of non-significant structural weights, were made to the models for both the BC Public Service Agency and the Integrated Land Management Bureau. Unfortunately, even after all such alterations were exhausted for each model, the improvements in the fit indices were only slight. Based on these results, the organization-level fit of the house model was determined to be moderate for the majority of organizations, and very poor for these two organizations.

Whether model fit was simply a result of sample size, or a reflection of cultural differences between organizations, is unclear at this time. What is evident is that the house model offered a close, but not exact, representation of the work environment for the majority of organizations. Should improvements be necessary, it is likely that the majority of refinements will be constrained to the structural weights rather than to the composition of the latent variables. This offers confirmation that the drivers of engagement are sound, even at the organization level, whereas the differences between organizations may originate in the paths that connect the drivers with one another.

Table 12: Comparison of Organization Level House Model Findings (R² = squared multiple correlation)

Organization | Model N | CMIN/df | p | SRMR | CFI | NFI | TLI | PCFI | RMSEA | R² Job Satisfaction | R² Organization Satisfaction | R² BC Public Service Commitment
Attorney General | 2,564 | 4.4 | 0.00 | 0.027 | 0.980 | 0.974 | 0.976 | 0.86 | 0.036 | 58% | 66% | 75%
BC Public Service Agency | 397 | 2.1 | 0.00 | 0.042 | 0.953 | 0.95 | 0.943 | 0.793 | 0.054 | 29% | 67% | 70%
Children and Family Development | 3,533 | 5.0 | 0.00 | 0.026 | 0.980 | 0.975 | 0.976 | 0.86 | 0.034 | 46% | 63% | 66%
Citizens' Services | 1,755 | 3.5 | 0.00 | 0.026 | 0.978 | 0.978 | 0.973 | 0.84 | 0.037 | 54% | 64% | 7%
Environment | 1,303 | 3.4 | 0.00 | 0.033 | 0.969 | 0.956 | 0.963 | 0.807 | 0.043 | 45% | 66% | 7%
Finance | 1,356 | 3.0 | 0.00 | 0.026 | 0.977 | 0.966 | 0.973 | 0.84 | 0.038 | 55% | 69% | 67%
Forests and Range | 2,73 | 4.4 | 0.00 | 0.026 | 0.978 | 0.972 | 0.973 | 0.84 | 0.035 | 45% | 6% | 7%
Health Services | 85 | 2.6 | 0.00 | 0.033 | 0.969 | 0.95 | 0.963 | 0.807 | 0.043 | 62% | 66% | 67%
Housing and Social Development | 2,224 | 4.0 | 0.00 | 0.028 | 0.977 | 0.969 | 0.972 | 0.83 | 0.037 | 55% | 63% | 60%
Integrated Land Management Bureau | 483 | 1.8 | 0.00 | 0.034 | 0.972 | 0.939 | 0.967 | 0.80 | 0.040 | 54% | 67% | 62%
Public Safety and Solicitor General | 2,60 | 3.7 | 0.00 | 0.025 | 0.980 | 0.972 | 0.976 | 0.86 | 0.035 | 57% | 70% | 74%
Transportation and Infrastructure | ,80 | 2.8 | 0.00 | 0.029 | 0.976 | 0.963 | 0.97 | 0.82 | 0.039 | 52% | 67% | 73%
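The index screening applied in this section can be summarised as a simple rule: an organization is flagged when its TLI falls below 0.97, its NFI falls below 0.95, or its RMSEA exceeds 0.05, the cut-offs referenced above. The sketch below applies that rule to a small table of fit indices; the column names and example values are hypothetical placeholders rather than output from the actual analysis.

    # Illustrative screening of organization-level fit indices against the
    # cut-offs used in this report (TLI >= 0.97, NFI >= 0.95, RMSEA <= 0.05).
    # Column names and example values are hypothetical placeholders.
    import pandas as pd

    CUTOFFS = {"TLI": ("min", 0.97), "NFI": ("min", 0.95), "RMSEA": ("max", 0.05)}

    def flag_poor_fit(fit: pd.DataFrame) -> pd.DataFrame:
        """Add a column listing any fit indices that miss their cut-off."""
        flags = []
        for _, row in fit.iterrows():
            missed = [
                f"{name}={row[name]:.3f}"
                for name, (direction, threshold) in CUTOFFS.items()
                if (direction == "min" and row[name] < threshold)
                or (direction == "max" and row[name] > threshold)
            ]
            flags.append(", ".join(missed) if missed else "none")
        return fit.assign(flagged_indices=flags)

    example = pd.DataFrame({
        "organization": ["Org A", "Org B"],
        "TLI": [0.976, 0.943],
        "NFI": [0.972, 0.915],
        "RMSEA": [0.036, 0.054],
    })
    print(flag_poor_fit(example)[["organization", "flagged_indices"]])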

LIMITATIONS AND RECOMMENDATIONS

For the 2009 WES results, the primary limitation of the modelling process had less to do with the statistical shortcomings of the model and more to do with the underlying method of analysis. As the SEM process focused on confirming the 2009 WES data against an existing model of engagement, it is unclear whether the resulting basic and house models were the best possible representation of the 2009 results. If, as was done during the development of the initial house model in 2006, the 2009 models had been built from the ground up following the steps outlined above in the preliminary and factor analysis sections, it is possible that a different basic and/or house model would have been obtained. However, it is also possible that such a process would have led to a model nearly identical to the existing 2008 model.

While it is difficult to address such hypothetical considerations, it is important to note that the 2009 basic and house models were established through the confirmation of an existing model. As a result, the introduction of new elements, such as new questions or entirely new latent variables, was limited to drivers that had already undergone modifications (such as altering the combination of questions in the Executive-level Management or Empowerment drivers). This presented perhaps the greatest limitation of the confirmation method, as the underlying structure of the engagement model was forced to remain largely static. While the 2006 WES model offered an excellent representation of the BC Public Service, many theoretical and empirical findings in the field of engagement research have likely come to light since that time 4. One such example is the recent investigation performed by BC Stats into the benefits of an expanded organization satisfaction engagement characteristic (e.g. including work unit satisfaction as an additional indicator of engagement). Unfortunately, due to the WES modelling method's focus on confirmation, many of these findings have been precluded from incorporation into the model. Ideally, as the theoretical understanding of engagement evolves over time, a similar evolution should also be reflected in the BC Public Service's model of engagement.

For future iterations of the WES, it may be possible to replicate the confirmation method as well as develop a model from scratch. This would offer an opportunity to contrast the resulting models and determine whether the differing analytical processes lead to similar or distinct engagement models. In the event that the two processes lead to similar models, particularly with respect to the composition of the latent variables, the reliability of the confirmation method would be further substantiated. This would also offer deeper insight into how extensively the BC Public Service work environment has changed since the initial 2006 model was established.

In terms of the underlying structure of the engagement models, a specific concern was identified with regards to the basic model. The basic model was originally conceived as a limited representation of the work environment, in which the management foundation's influence on engagement could be modelled in isolation. However, as some of the structural paths present in the basic model were not also present in the house model, a comparison between the basic and house models became less clear-cut.
While both the basic and house models offer well-fitted representations of the BC Public Service work environment, the role management plays in each model may not be equivalent.

4 A literature search performed on April 29, 2010, using EBSCOhost.com indicated that 46 journal articles had been published in the field of employee engagement since 2007. Expanding the search to include non-peer-reviewed sources increased the article count to several hundred.

LIMITATIONS AND RECOMMENDATIONS model, such that the same structural weights appear in both the basic and house models. This would help ensure directly comparability between models, as well as more fully define the relationships the management drivers have throughout the work environment. Finally, focusing on more technical concerns, the difficulty in interpreting the Chi-square statistics presented a significant challenge during the modelling process. As Chi-square statistics represent one of the primary means of assessing model fit, distortion of the statistic can make a meaningful determination of fit unclear. To address this issue for future WES iterations, BC Stats intends to perform the SEM analysis on a sub-sample of respondents. As sample sizes were found to be a confounding factor for both the corporate wide and organization level analyses, a representative-sample of approximately,000 respondents could minimize distortions to the Chi-square statistics as well as the remaining fit indices. This in turn would allow for the reduction of sampling bias present in certain organizations, while also facilitating greater comparability between the organization level models. The sub-sample could also be tested against the overall sample, to confirm representativeness of the sub-sample s model results. On a similar note, whereas sampling may offer direct comparability of results between organizations, a sector level comparison of model results may facilitate a more comprehensive organization-level analysis. Since some organizations were excluded from the organization-level analysis due to small sample sizes (n<400), it was not possible to confirm model fit for their respective work environments. By aggregating similar organizations into a small number of sectors, it would be possible to incorporate the results of smaller organizations into a comparison, while at the same time establishing sub-populations that are more homogeneous in scope. This in turn may reveal unique differences between sectors that an organization-level or corporate wide analysis would obscure. MODELLING THE 2009 WORK ENVIRONMENT SURVEY RESULTS Page 33

If you have any questions about the information in this report, please contact BC Stats at 250-387-8972.