ABSTRACT
WRIGHT, NATALIE ANN. New Strategy, Old Question: Using Multidimensional Item Response Theory to Examine the Construct Validity of Situational Judgment Tests. (Under the direction of Dr. Adam W. Meade). Situational judgment tests (SJTs) are common in many organizational selection systems, as they have demonstrated high criterion-related validity in predicting job performance (e.g., Chan & Schmitt, 2002; Christian, Edwards, & Bradley, 2010; Clevenger, Pereira, Wiechmann, Schmitt, & Harvey, 2001; McDaniel, Morgeson, Finnegan, Campion, & Braverman, 2001; Motowidlo, Dunnette, & Carter, 1990; Weekley & Jones, 1999). However, much of the research on SJTs has failed to take a construct-based focus, which has led to a lack of understanding of what constructs SJTs measure (Christian et al., 2010). To complicate efforts at SJT construct validation, SJTs are multidimensional, even at the level of response options (Schmitt & Chan, 2006), and factor analyses of SJT data are generally uninterpretable (Chan & Schmitt, 2002). As pointed out by Cronbach and Meehl (1955), determining the internal structure of a measure is one of the criteria needed for evaluating construct validity. Thus, the inability to evaluate the internal structure of SJTs via factor analytic techniques has severely hampered SJT construct validation attempts. Multidimensional item response theory (MIRT) holds promise in evaluating SJT construct validity, as it can account for within-item multidimensionality better than factor analysis. The present study used both MIRT and factor analysis to analyze SJT data from 1,012 test takers. MIRT and factor analysis were compared in terms of convergent validity of recovered dimensions with personality and cognitive ability scores, interpretability of recovered dimensions, and criterion-related validity of the recovered dimensions. Results indicated that

both MIRT and factor analysis were useful tools for evaluating SJT construct validity, and the dimensions recovered by both methods offered incremental validity to the prediction of job performance above overall SJT scores, cognitive ability, and personality measures. The dimensions recovered by MIRT were not more interpretable than the dimensions recovered by factor analysis. Evidence from both MIRT and factor analysis suggested that the SJT used in the study was measuring practical intelligence.

Copyright 2013 Natalie Ann Wright
All Rights Reserved

New Strategy, Old Question: Using Multidimensional Item Response Theory to Examine the Construct Validity of Situational Judgment Tests
by Natalie Ann Wright
A dissertation submitted to the Graduate Faculty of North Carolina State University in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Psychology
Raleigh, North Carolina
2013
APPROVED BY:
Adam W. Meade, Ph.D., Committee Chair
Mark A. Wilson, Ph.D.
S. Bartholomew Craig, Ph.D.
Samuel B. Pond III, Ph.D.

DEDICATION
To my wonderful parents, who always encouraged me to do my best, and to my husband, who always believed in me.

BIOGRAPHY
Natalie Wright grew up in Loveland, Colorado, and graduated from Thompson Valley High School. After spending a year at Lewis and Clark College in Portland, Oregon, she transferred to Colorado State University. This was a fortuitous decision, as it was there that she was introduced to the field of Industrial and Organizational (I/O) Psychology. Natalie graduated from Colorado State in 2008 with a B.S. in psychology, summa cum laude. After graduating, she moved to Raleigh, NC and joined the I/O Psychology program at North Carolina State University. She earned her M.S. in I/O Psychology in 2010, and her Ph.D. in the same discipline in 2013.

ACKNOWLEDGEMENTS
I'd like to thank all my friends, family, colleagues, and professors for their guidance and support during graduate school and during the writing of this dissertation. Thank you to my advisor, Dr. Adam Meade, for his advice and mentoring during my time at NC State. He provided me with a wealth of guidance not only on my dissertation, but on my master's thesis and other research projects as well. Thank you also to my committee members, Drs. Mark Wilson, Bart Craig, and Bob Pond, both for their invaluable suggestions on this research and their excellent teaching and support during my graduate school career. I owe a debt of gratitude to my parents, Eric and Teri, for their love and support. Throughout my life, they've encouraged me to live up to my potential. Without them, I would have never had the motivation to get a Ph.D. And finally, to my husband Ryan, who has stood by my side throughout this whole graduate school business: words cannot express how grateful I am to have you in my life.

TABLE OF CONTENTS
LIST OF TABLES...vi
INTRODUCTION...1
SJT validity...2
Multidimensional item response theory...7
Research questions...11
METHOD...13
Participants...13
Measures...14
Analysis...16
RESULTS...23
Descriptive statistics and item screening...23
MIRT model dimensionality...24
Evaluating presence of testlet effects...24
CFA model estimation...25
Construct and criterion-related validity...27
DISCUSSION...31
Limitations...37
Directions for future research...38
Conclusion...40
REFERENCES...41

LIST OF TABLES
Table 1. Descriptive statistics...50
Table 2. Orthogonal versus oblique MIRT model comparisons...51
Table 3. MIRT model dimensionality determination...52
Table 4. Three-dimensional MIRT model item parameter estimates...53
Table 5. Testlet screening model fit results...54
Table 6. CFA model fit comparisons...55
Table 7. CFA standardized factor loadings for 4-factor model...56
Table 8. MIRT dimension construct and criterion-related validity...57
Table 9. CFA factor construct and criterion-related validity...58
Table 10. MIRT dimension-CFA factor correlations...59
Table 11. Hierarchical regression examining prediction of job performance using CFA and MIRT SJT scores...60
Table 12. Relative weight analysis for CFA and MIRT scores predicting job performance...62
Table 13. Hierarchical regression examining prediction of job performance including overall SJT score...63
Table 14. Relative weight analysis for overall SJT, CFA, and MIRT scores predicting job performance...65

INTRODUCTION
Situational judgment tests (SJTs) are a popular feature of many organizational selection systems, and have accordingly generated a large amount of research. SJTs are "measurement methods that present respondents with work-related situations and ask them how they would or should handle the situations" (Ployhart & MacKenzie, 2011, p. 237). SJTs have relatively high validity for predicting job performance (e.g., Chan & Schmitt, 2002; Christian, Edwards, & Bradley, 2010; Clevenger, Pereira, Wiechmann, Schmitt, & Harvey, 2001; McDaniel, Morgeson, Finnegan, Campion, & Braverman, 2001; Motowidlo, Dunnette, & Carter, 1990; Weekley & Jones, 1999). Despite this high criterion-related validity, the question of construct validity has continually plagued SJT research and development. SJTs are multidimensional and tend to demonstrate significant correlations with cognitive ability and personality measures (e.g., Chan & Schmitt, 2002; Christian et al., 2010; Clevenger et al., 2001; McDaniel et al., 2001; Weekley & Ployhart, 2005). To further complicate matters, SJTs may be multidimensional not only across items, but within a given item. As Chan and Schmitt (2002) noted, factor analyses of SJTs are generally uninterpretable, and Schmitt and Chan (2006) suggested that SJTs may be multidimensional even at the level of the item response options. Establishing internal structure is one of the criteria for evaluating a measure's construct validity (Cronbach & Meehl, 1955). Due to difficulties in evaluating the internal structure of SJTs, the examination of SJT construct validity has been fraught with difficulty. One reason that SJT construct understanding has been hampered is the lack of use of appropriate analytical methods for assessing measures which are so thoroughly

multidimensional. Although cross-loaded confirmatory factor analysis (CFA) strategies have been proposed for assessing SJT constructs (Ployhart, 2006; Schmitt & Chan, 2006), factor analyses which allow items to cross-load on factors can be difficult to interpret (Schmitt & Chan, 2006). A more appropriate strategy to assess SJT construct validity may be found in multidimensional item response theory (MIRT). Although MIRT is popular in educational applications, it has rarely been applied to problems faced in organizational research and applications (for an exception, see Li, 2010). MIRT assesses the relationship between items and test takers across multiple dimensions and can account for within-item multidimensionality (Reckase, 2009). Because of these features, MIRT holds promise in examining the internal structure of SJTs. Thus, the purpose of this study was to investigate the application of MIRT to the evaluation of SJT construct validity.
SJT validity
Criterion-related and incremental validity. Across numerous studies, SJTs have fared well in terms of criterion-related and incremental validity. In the first primary study of SJT criterion-related validity in the industrial-organizational psychology literature, Motowidlo et al. (1990) found that SJT scores correlated moderately with supervisory ratings of performance. Confirming the early results of Motowidlo et al., most studies reporting SJT criterion-related validity have found a moderate validity coefficient for SJTs related to job performance. The most recent meta-analytic estimates of SJT criterion-related validity (McDaniel et al., 2007) indicate that the corrected validity coefficient for SJTs related to overall job performance is .26 (uncorrected = .20).
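The corrected and uncorrected coefficients reported here differ because meta-analytic estimates adjust observed correlations for statistical artifacts such as criterion unreliability. A minimal sketch of the classical correction for attenuation in the criterion; the reliability value used below is purely illustrative and is not a figure reported by McDaniel et al.:

```python
import math

def correct_for_attenuation(r_xy, r_yy):
    """Disattenuate an observed validity coefficient r_xy for
    unreliability in the criterion measure.

    r_xy : observed predictor-criterion correlation
    r_yy : reliability of the criterion (0 < r_yy <= 1)
    Returns the estimated operational validity rho = r_xy / sqrt(r_yy).
    """
    return r_xy / math.sqrt(r_yy)

# With an illustrative (assumed) criterion reliability of .60, an
# observed validity of .20 corrects to roughly .26.
print(round(correct_for_attenuation(0.20, 0.60), 2))  # → 0.26
```

With a perfectly reliable criterion (r_yy = 1.0) the correction leaves the coefficient unchanged, which is why corrected estimates are always at least as large as observed ones.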

Construct validity. Because SJTs have been shown to be a useful predictor of job performance, there has been a long-standing interest in determining what exactly SJTs measure so that their relationship with job performance can be better understood. Most researchers today are in agreement that SJTs are a method that can be used to measure a variety of different constructs, rather than an indicator of a particular construct (Arthur & Villado, 2008), although Schmitt and Chan (2006) suggest that SJTs might be both constructs and methods. Thus, some differences in the constructs measured across SJTs are expected. However, research has shown substantial commonalities across the constructs measured by SJTs. Most studies that have examined the construct-related validity of SJTs have focused on SJTs' convergent validity with cognitive ability, the Big Five personality traits (especially conscientiousness, agreeableness, and emotional stability), and practical intelligence or judgment, the results of which will be elaborated upon below.
Cognitive ability. SJT scores have been found to correlate positively with cognitive ability. Via meta-analysis, McDaniel et al. (2001) concluded that the value for the population correlation between SJT scores and cognitive ability test scores is .46, and McDaniel et al. (2007) found a mean correlation of .32 (uncorrected = .28). In both meta-analyses, there was a substantial amount of variance around this value. To illustrate this variability, consider the correlations between SJT scores and cognitive ability test scores from several primary studies: .42 and .48 (Weekley & Jones, 1999), .36 (Weekley & Ployhart, 2005), and -.02 (Chan & Schmitt, 2002).
Conscientiousness. SJTs have consistently demonstrated moderate, positive correlations with measures of conscientiousness. Christian et al. (2010), using a construct

matching approach, found that many SJTs are designed to measure conscientiousness. McDaniel et al. (2007) reported a meta-analytic correlation of .27 (uncorrected = .23) between SJTs and conscientiousness, although there was a significant amount of variability around this estimate. By way of example, reported correlations between SJTs and conscientiousness range from .33 (O'Connell et al., 2007) to .23 (Chan & Schmitt, 2002) to .13 (Weekley & Ployhart, 2005).
Agreeableness. SJTs demonstrate a moderate correlation with agreeableness. McDaniel et al. (2007) found a meta-analytic correlation between SJTs and agreeableness of .25 (uncorrected = .22). As is the case with cognitive ability and conscientiousness, this correlation varies significantly across the primary studies investigating the relationship. For example, Weekley and Ployhart (2005) found a correlation of .06 between SJTs and agreeableness, while Chan and Schmitt (2002) found an SJT-agreeableness correlation of .29.
Emotional stability. Along with conscientiousness and agreeableness, emotional stability has been shown to be related to SJT scores. McDaniel et al. (2007) reported a meta-analytic correlation of .22 (uncorrected = .19) between SJTs and emotional stability. Unlike conscientiousness and agreeableness, this correlation is relatively constant across studies. For example, Chan and Schmitt (2002) found a correlation of .20 between SJTs and emotional stability, while Weekley and Ployhart (2005) found a correlation of .17.
Practical intelligence. Motowidlo et al. (1990) argued that SJT criterion-related validity may be driven by SJTs' measurement of practical intelligence. Other researchers have taken this stance as well. Sternberg and Hedlund (2002) argued that successfully

responding to SJT situations is a function of practical intelligence, defined by Schmitt and Chan (2006) as "the ability or expertise to effectively respond and successfully adapt to a variety of practical problems or situational demands" (p. 150). Chan and Schmitt (2002) also suggested that SJTs, although multidimensional, assess practical intelligence to some extent. They also noted that SJTs appear to measure something that is not accounted for by personality, cognitive ability, and job knowledge measures. In factor analyzing SJT data, they found support for a single factor, but this factor accounted for only a very small percentage of the variance in SJTs (16% for one SJT version, and 18% for another). Stemler and Sternberg (2006) argue that this factor is practical intelligence. Schmitt and Chan (2006) suggest that this factor might be a method factor, or situational judgment, or practical intelligence, but acknowledge that it is unclear exactly what the nature of the single factor is.
Lingering questions about SJT construct validity. SJTs are generally considered to be a measurement method rather than a measure of one particular construct (e.g., Arthur & Villado, 2008; McDaniel & Nguyen, 2001). However, it is very difficult to determine what construct(s) a particular SJT is measuring. One potential complication in determining construct validity is that SJTs are low-fidelity simulations (Lievens & Patterson, 2011) built from critical incidents (McDaniel & Nguyen, 2001; Weekley, Ployhart, & Holtz, 2006; Ployhart & MacKenzie, 2011) for the job in question. Jobs differ widely in the knowledge, skills, and abilities required to perform them adequately. As such, an SJT created using critical incidents for one job will likely measure different constructs than an SJT created for use with another job. However, it is still important to identify the constructs measured by SJTs even if these constructs vary across SJTs.
One issue that arises in evaluating SJT construct validity

is a result of the way in which previous SJT research has been conducted. As noted by Christian et al. (2010), many SJT researchers do not report construct-relevant information, and focus only on SJT-performance relationships. Additionally, although SJTs are viewed as a method rather than a construct, Christian et al. (2010) point out that little work to date has been done to separate construct variance from method variance in SJT scores. They suggest that future SJT research should take a construct, rather than a method, approach to the development of SJTs, both by reporting convergent and discriminant validity evidence and by developing SJTs to measure a particular construct. Even if researchers were to heed this advice, however, the inherent multidimensionality of SJTs makes construct validity examination extremely difficult. As suggested by Schmitt and Chan (2006), job performance is multidimensional, and as a result SJTs created to accurately sample this criterion will also tend to be multidimensional. Schmitt and Chan (2006) suggest that situational judgment requires multiple traits and abilities, and this multiplicity of traits and abilities manifests itself on SJTs at the level of the response option. Unfortunately, most common analysis techniques do a poor job of handling multidimensionality when it occurs at the item level, as is the case with SJTs (Li, 2010). In factor analysis, the most straightforward interpretations of results occur when each variable in the model loads on only one factor, a situation known as simple structure (e.g., Finch, 2011; McDonald, 2000). When simple structure is present, a test as a whole can measure multiple factors, but each item is constrained to be an indicator of only one factor. To correctly address the multidimensional structure of SJT items, however, it is necessary to allow items to load on more than one factor (Schmitt & Chan, 2006), as each item is

measuring more than one construct (in other words, SJTs are factorially complex; McDonald, 1999). Incorporating cross-loading items is not an ideal solution, though, as interpreting factors when cross-loaded items are present is difficult (e.g., Judge & Welbourne, 1994; Schmitt & Chan, 2006). Furthermore, generating a substantive interpretation of factors with cross-loaded items has the potential to become very ambiguous if items cross-load on more than two dimensions, or if some items cross-load on only two dimensions while other items cross-load on three or four dimensions. Given the ambiguity associated with interpreting factor analyses that incorporate cross-loaded items, it is clear that this is a less-than-ideal approach for modeling SJT responses. Additionally, as pointed out by Ackerman, Gierl, and Walker (2003) and Embretson and Reise (2000), CFA models often confound difficulty with dimensionality for dichotomous items. This occurs because in factor analysis, the highest correlations between items are observed when items are of equal difficulty (Gulliksen, 1945). Thus, items which are very difficult or very easy will correlate with other items of similar difficulty to a greater extent than with the other items in the scale, even if the items are all measuring the same underlying dimension. The possibility of these artifactual factors can make it difficult to determine the number of substantive dimensions underlying the data. In order to appropriately model SJTs, it is necessary to use techniques which can model the factorially complex nature of SJT items and responses. Multidimensional item response theory is one analytical technique which can accomplish this.
Multidimensional item response theory
Multidimensional item response theory, or MIRT, is an analytical technique which has emerged as a way to accurately model relationships between test takers and test items

when the items are located along more than one latent dimension (Reckase, 1997). MIRT models are used to model the interaction between test takers and items when an item response requires test takers to use more than one skill or ability (Ackerman, 1994). As discussed by Reckase (1997), MIRT can be viewed as either an extension of unidimensional IRT models or an extension of factor analysis. MIRT models differ from unidimensional IRT models in that rather than having a single θ-coordinate, they have a linear combination (vector) of θ-coordinates (Reckase, 2009). Lord and Novick (1968) are generally credited with detailing the basic requirements for a MIRT model (Reckase, 2009). These basic requirements included a complete latent space summarized by a θ vector, and local independence in a multidimensional θ space such that the response to one item is independent of the responses to any other item after controlling for item parameters and dimensions in the θ vector (Reckase, 2009). Factor analysis and MIRT share a number of similarities. Both methods define a set of hypothetical dimensions that can be used to recreate the original data (Reckase, 2009), and both seek to define scales based upon arbitrary origins and units of measurement (Reckase, 1997). However, despite the similarities between MIRT and factor analysis, there are several fundamental differences between the two methods. Factor analysis focuses on defining factors, rather than examining the interaction between test takers and items (Reckase, 1997). Factor analysis also treats features of the data such as means and standard deviations as nuisance factors and focuses on the correlation matrix of the data, while MIRT actively uses this information (Reckase, 1997). Furthermore, factor analysis is often used as a data reduction technique, reducing data

into a set of common factors, while MIRT does not aim to reduce and group data in this way (Reckase, 2009). MIRT also has the advantage of putting item and person parameters on a common metric to facilitate cross-sample and cross-measure comparisons, a feature that factor analysis does not have (Reckase, 2009). MIRT models can be divided into two classes: those that require simple structure (between-item multidimensionality), and those that allow items themselves to be multidimensional (within-item multidimensionality). As discussed by Hartig and Höhler (2008), between-item multidimensionality occurs when the test is multidimensional, yet each item only measures one dimension. Within-item multidimensionality occurs when items in the test each measure more than one dimension. Such items are also called factorially complex (Ackerman et al., 2003). Choice of a within-item or between-item model is generally driven by theoretical considerations. If there is reason to believe that more than one skill or ability is required to answer the item, then the within-item model is the preferred choice (Hartig & Höhler, 2008).
Parts of MIRT models. A basic representation of a MIRT model is presented by Reckase (2009):

P(U_ij = u | θ_j) = f(θ_j, η_i, u)    (1)

This model demonstrates that the probability of responding to an item (U) given a vector of abilities (θ) is a function of parameters describing the test item (η) and the possible responses to the item (u). MIRT models share the same assumptions as unidimensional IRT models. First, the probability of a correct response to an item increases as any element in the θ-vector increases (the monotonicity assumption). In unidimensional IRT models, this

assumption is seen in the traditional S-shaped curve, which shows the increasing probability of a correct response to an item as θ increases; this relationship depends on item location (b), item discrimination (a), and the guessing parameter (c). In MIRT models, this assumption is seen in the item characteristic surface. As MIRT models include more than one θ dimension, the item cannot be represented in two-dimensional space, and must instead be represented by a multidimensional surface, where the number of dimensions is dependent on the number of elements in the θ vector. Second, item responses are dependent only on a test taker's θ-vector and the parameters of the item (η), not on the answers to any other item in the test (the local independence assumption; Reckase, 2009). MIRT models also incorporate item discrimination (a). Because this parameter is extended into multidimensional θ-space, however, it is conceptualized somewhat differently. As in unidimensional IRT, the a parameter, also known as the slope or discrimination parameter (Reckase, 2009), is an indicator of how well the item distinguishes between test takers of different levels of theta. In MIRT, however, a parameters indicate "the orientation of the equiprobable contours and the rate that the probability of a correct response changes from point to point in the θ-space" (Reckase, 2009, p. 89). In other words, MIRT a parameters index how much the probability of answering an item correctly increases as an element in the θ vector increases, and this discrimination is contingent on the direction being travelled in θ space. As is the case with θ parameters, each item in MIRT has a vector of a parameters. The d parameter in MIRT does not have a direct correlate in unidimensional IRT. The d parameter is the intercept parameter (Reckase, 2009), or easiness intercept (Embretson & Reise, 2000). It is not equivalent to the concept of item location in a unidimensional IRT

model, as it does not provide a unique location parameter for an item because there are several locations in the θ space which would yield the same result (Reckase, 2009). The following equation provided by Reckase (2009) summarizes the function of the d parameter:

a_i θ'_j + d_i = k    (2)

In this equation, k is equivalent to the exponent in a MIRT equation. If this exponent is set to 0, then this equation defines the line in θ space along which all locations with a .5 probability of answering the item correctly lie (Reckase, 2009).
MIRT model fit. As with all analytic models, it is necessary to assess the fit of MIRT models. Various global fit indices for MIRT models have been proposed, such as the application of the Akaike Information Criterion and the Bayesian Information Criterion (Bolt & Johnson, 2009), and χ² difference tests (Yao & Schwarz, 2006). McDonald (2000) suggested that model dimensionality should be determined by substantive or theoretical considerations, but questions about the appropriateness of a specific dimensionality can be answered by testing increasingly complex models until a more complex model either cannot be identified or cannot be interpreted. Ackerman et al. (2003) recommended assessing MIRT model fit by comparing chi-square values, or comparing MIRT model fit to unidimensional model fit.
Research questions
Research has repeatedly demonstrated that SJTs are multidimensional measurement methods (e.g., Chan & Schmitt, 2002; Christian et al., 2010; Clevenger et al., 2001; McDaniel et al., 2001; Weekley & Ployhart, 2005). Determining what constructs SJT items measure is of primary importance for using SJTs for personnel selection. Many SJT researchers (Christian et al., 2010; McDaniel et al., 2007; Gessner & Klimoski, 2006;

McDaniel & Nguyen, 2001; Schmitt & Chan, 2006) have called for more construct-focused SJT research. This construct focus is important for several reasons. First, it is important to understand how and why a predictor relates to job performance (Arthur & Villado, 2008; Christian et al., 2010; Messick, 1995). Second, as demonstrated by Christian et al. (2010), SJTs demonstrate higher criterion-related validity when theory is used to match SJT content to the criterion of interest. Thus, obtaining estimates of test taker ability at the construct level is necessary in order to accurately match SJTs with criteria of interest. A better examination of SJT constructs at the item level would allow for this issue to be examined. Taking a construct-focused approach to SJT research is problematic, however, as SJT items contain within-item multidimensionality which cannot be easily accounted for by factor analysis (Schmitt & Chan, 2006). MIRT, on the other hand, is a promising analytic method for assessing SJT dimensionality, as it can easily incorporate within-item multidimensionality (Hartig & Höhler, 2008). However, despite the clear advantages that MIRT offers to SJT researchers, to date little research has incorporated MIRT analyses as a way to investigate the constructs measured by SJTs. One exception is the work of Li (2010), who evaluated MIRT as a method of modeling SJTs. Although Li (2010) found that MIRT was not needed to correctly model SJT data, the model used treated the SJT as measuring a single judgment dimension and several testlet dimensions. No work has been done developing MIRT models which treat SJTs as measures with multiple substantive dimensions. Hence, this study seeks to investigate the following:

Research Question 1. How do the dimensions recovered by MIRT analyses relate to test takers' scores on cognitive ability and personality measures? That is, do the dimensions recovered by MIRT map onto constructs known to be measured by SJTs, and are they interpretable as such?
Research Question 2. When used to analyze SJT data, how does MIRT compare to factor analytic techniques in terms of the interpretability of recovered factors? Which method offers more useful, interpretable results?
Research Question 3. Do MIRT-based ability estimates yield higher criterion-related validity than a) overall SJT scores, and b) factor analytically derived factor scores?
METHOD
Participants
Data were collected as part of a concurrent validation study for a series of managerial selection measures in 2007 and Participants included 1,799 front line managers currently employed in the financial, insurance, medical services, automotive, and telecommunications industries across the United States. Participants were not required to report demographic information; thus, demographic information was not available for all participants. Of the 76% of participants who reported their race, 74% were White, 12% were Black/African American, and 10% were Hispanic or Latino, with the remaining 4% being of another race. Gender was reported by 79% of participants; of these, 56% were male and 44% were female. Of the 56% of respondents who reported their age, 55% were under 40 years of age, and 45% were 40 years of age or older. To prevent missing data

problems in the MIRT analyses, all respondents with missing data for the SJT were removed. This resulted in a final sample size of 1,012 participants.
Measures
Situational judgment test. The SJT used in this study was developed specifically for front-line managerial positions, and assesses how effectively managerial personnel interact in coaching situations with their direct reports (SHL, 2008). The stimuli in the SJT were presented to test takers in a video-based format, and the instructions were knowledge-based. There were 39 SJT items, which were embedded within six scenarios. For each scenario, test takers were presented with a written description of a problem. After reading the scenario, the test takers proceeded to the associated items. Each item had a short video clip, approximately five to ten seconds in length, associated with it, and test takers responded based on the interaction presented in the video clip. For each item, test takers were asked to choose the most effective and the least effective response option from four different response options. The scoring key for the test was developed using a rational scoring strategy, with keying determined by SME judgments of the most appropriate response to each situation. However, to facilitate the MIRT analyses in this study, the response options for each part of the item (most effective and least effective) were coded from zero (most incorrect) to three (most correct), and then scores for the best response and worst response were added together. Thus, scores for each item ranged from 0 to 6. Note that for some items, not all response options were chosen. In these cases, the response options for each part of the item were coded from 0 (most incorrect) to 2 (most correct), making the score range from 0 to 4 for

Cognitive ability. The cognitive ability proxy measures used included two inbox (in-basket) assessments targeted specifically toward managerial and supervisory personnel (SHL, 2008). The inbox assessments measure managerial decision-making and monitoring skills; they are a technologically advanced version of the in-basket assessments that are a common type of simulation used in assessment centers. As previous research (e.g., Goldstein, Yusko, Braverman, Smith, & Chung, 1998) has demonstrated a positive correlation between in-basket scores and cognitive ability test scores, scores on the inbox assessments are an acceptable substitute for cognitive ability scores. During the inbox assessments, participants were told to assume the role of a leader as they responded to requests and information delivered via technological communication methods, such as email.

Personality. The personality test used in this study is a computer-adaptive measure targeted specifically toward professional, supervisory, and managerial personnel (SHL, 2009). Because it is computer adaptive, the number of items presented to any one test taker varies, but the maximum number of items that can be presented is 300. The test measures achievement, collaboration, composure, confidence, flexibility, independence, influence, innovation, reliability, self-development, sense of duty, sociability, and thoroughness. These facets can be rationally grouped onto the Big Five dimensions of personality as follows: conscientiousness (sense of duty, achievement, reliability, thoroughness), agreeableness (collaboration), emotional stability (composure, independence, confidence), openness to experience (innovation, self-development, flexibility), and extraversion (sociability, influence).
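One way to encode the facet-to-factor grouping described above is a simple mapping; the grouping is transcribed from the text, with the assignment of confidence to emotional stability assumed, as it is the one facet from the instrument's list not clearly assigned in the source.

```python
# Rational grouping of the instrument's 13 facets onto the Big Five,
# transcribed from the description above (emotional stability set assumed).
BIG_FIVE_FACETS = {
    "conscientiousness": ["sense of duty", "achievement", "reliability", "thoroughness"],
    "agreeableness": ["collaboration"],
    "emotional stability": ["composure", "independence", "confidence"],
    "openness to experience": ["innovation", "self-development", "flexibility"],
    "extraversion": ["sociability", "influence"],
}
all_facets = [f for facets in BIG_FIVE_FACETS.values() for f in facets]
```

Each of the 13 measured facets appears exactly once, which makes the grouping easy to audit.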

Job performance. Job performance was assessed via supervisory ratings, all of which were made using the same rating instrument. The instrument consisted of twenty-seven items assessing specific facets of job performance, including areas such as conflict resolution, teambuilding, problem analysis, leadership, and integrity.

Analysis

MIRT model. As the SJT in this study required test takers to choose both the best response and the worst response from the options presented, it was necessary to use a partial credit model. This was needed because a test taker could, for example, choose the correct option for the best response but an incorrect option for the worst response; partial credit models can account for situations such as this. The model used to analyze the SJT data in this study was the multidimensional generalized partial credit (MGPC) model. This model is compensatory, such that a high level of one ability can compensate for a low level of another ability (Reckase, 2009; Yao & Schwarz, 2006). The equation for this model, as presented by Reckase (2009), is:

P(x_{iv} = k \mid \boldsymbol{\theta}_v) = \frac{\exp\left(k\,\mathbf{a}_i'\boldsymbol{\theta}_v - \sum_{u=0}^{k} \beta_{iu}\right)}{\sum_{m=0}^{K_i} \exp\left(m\,\mathbf{a}_i'\boldsymbol{\theta}_v - \sum_{u=0}^{m} \beta_{iu}\right)}  (3)

In this model, K_i represents the maximum possible score for item i, where the lowest possible score is assumed to be 0, and k indicates the score earned on the item. As there are K_i + 1 score categories, β_iu indicates the threshold parameter for score category u (with β_i0 fixed at 0), a_i is a vector of

discrimination parameters, and θ_v is the θ-vector for person v (Reckase, 2009). One notable feature of this model is that each β_iu is a scalar parameter that functions as a combination of intercept (d) and threshold parameters. Threshold parameters describe the location in θ space of the intersection of two score categories (for example, between earning a score of 1 on the item and earning a score of 2; Embretson & Reise, 2000). Thus, the difficulty parameters cannot be separated from the threshold parameters in the model, and cannot be subtracted from θ (Reckase, 2009).

Item screening. Prior to the analyses, poor-quality items were excluded from the MIRT and CFA models. This screening was done independently for each set of analyses, such that the excluded items were not the same for the MIRT analyses and the CFA analyses. To identify poor items for the MIRT analyses, a unidimensional generalized partial credit IRT model was estimated. Ten items had a-parameter estimates below the screening cutoff; these items also did not meet the assumptions of IRT analyses, as the likelihood of responding to an item in a particular way did not depend on θ. These items were excluded from subsequent analyses. To identify poor items for the CFA analyses, a unidimensional CFA model was estimated. Eight items had non-significant loadings on the single factor and were excluded from further analysis. A ninth item was later removed, as it did not demonstrate significant loadings on the final CFA model factors.

MIRT estimation software. To estimate the MGPC model used in this study, the IRTPRO 2.1 software program (Cai, Thissen, & du Toit, 2011) was used. This program can estimate both unidimensional and multidimensional IRT models, including multidimensional partial credit models, and does not require the items to display simple structure.
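The MGPC category probabilities described above can be computed directly; the following is a minimal numpy sketch (function name and parameterization illustrative), with thresholds β_i1, …, β_iK and β_i0 = 0.

```python
import numpy as np

def mgpc_probs(theta, a, betas):
    """Category probabilities for one item under the multidimensional
    generalized partial credit (MGPC) model.

    theta : (d,) ability vector for one test taker
    a     : (d,) discrimination vector for the item
    betas : (K,) thresholds for score categories 1..K (beta_0 is 0)
    Returns a (K+1,) vector of probabilities for scores 0..K.
    """
    K = len(betas)
    cum_beta = np.concatenate(([0.0], np.cumsum(betas)))   # sum of beta_iu for u <= k
    logits = np.arange(K + 1) * (a @ theta) - cum_beta     # k * a'theta - cumulative thresholds
    logits -= logits.max()                                 # guard against overflow
    p = np.exp(logits)
    return p / p.sum()
```

Because the model is compensatory, `a @ theta` pools the abilities, so a deficit on one dimension can be offset by a surplus on another.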

Although the program has several options for model estimation, the Metropolis-Hastings Robbins-Monro algorithm (Cai, 2010) was used to estimate the models in the present study. This algorithm is well suited to models with a large number of dimensions, as it converges faster than other estimation procedures, such as Markov chain Monte Carlo methods (Cai, 2010). Expected a posteriori (EAP) estimation was used to generate person-level θ estimates.

MIRT model dimensionality and fit. The dimensionality of the SJT data could not be determined a priori. As such, model estimation proceeded in two steps. First, a series of four initial MIRT models were generated, each adding one dimension to the previous model. In these models, all item discrimination parameters were free to vary, but means were fixed to 0 and variances were fixed to 1 under the assumption that the data approximated a multivariate normal distribution; this was required in order to set the scale so that item parameters could be estimated. In one set of models, the covariances between dimensions were fixed to 0. In a second set of models, the covariances between dimensions were fixed to 0.20, also to set the scale. The rationale for fixing the covariances to 0.20 was twofold. First, psychological variables are rarely uncorrelated (e.g., Lorr, 1957), so fixing covariances to 0 would not have been theoretically justified. Second, as variances were fixed to 1, a covariance of 0.20 was equivalent to a correlation of 0.20. Correlations between personality variables, such as those between extraversion and agreeableness or between openness to experience and conscientiousness, are often around 0.20 (e.g., van der Linden, te Nijenhuis, & Bakker, 2010). Given that SJTs generally measure several aspects of personality, and that there was no a priori knowledge of what constructs the SJT in the present study was

measuring, 0.20 was a reasonable estimate of the correlation between factors. After an initial run in which all items were allowed to load on all dimensions, each model was refined. First, all negative loadings were fixed to 0. Second, each dimension was defined by ensuring that there were at least two items which loaded solely on that dimension, as this improves model estimation (McDonald, 2000). For dimensions without two naturally occurring simple-structure items, items with high loadings on that dimension and low loadings on the other dimensions had their loadings on the low-loading dimensions fixed to 0.

Three criteria were used to determine the number of dimensions. First, Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) values were examined, as suggested by Bolt and Johnson (2009) and Yao (2003); decreasing AIC and BIC values upon adding a dimension suggest improved fit for the larger model. Second, likelihood ratio (-2LL) tests, with degrees of freedom equal to the difference in the number of free parameters between the two models being compared (DeMars, 2012), were used to examine model fit. Third, model convergence was used as a final indicator of fit, as McDonald (1999) stated that dimensions can be added to MIRT models one at a time until a model fails to identify.

Screening of SJT data for testlets. The items within the SJT were nested, as items were arranged around six video-based scenarios. This nesting created testlets, or sets of items sharing a common scenario (DeMars, 2012). The common scenario may add additional variance, and this shared variance between items may create nuisance dimensions (DeMars, 2006). Although such dimensions are not of substantive interest, failing to account for testlets violates the local independence assumption of IRT models and can bias parameter estimates (e.g., DeMars, 2006; Ip, 2010; Rijmen, 2010).
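The first two dimensionality criteria (AIC/BIC comparison and the -2LL difference test) can be sketched as a small helper; the log-likelihoods and parameter counts below are placeholders, not the study's values.

```python
import math

def fit_comparison(ll_small, k_small, ll_big, k_big, n):
    """Compare two nested IRT models.

    ll_*: maximized log-likelihoods; k_*: numbers of free parameters;
    n: sample size (used by BIC). The -2LL difference is referred to a
    chi-square distribution with df = k_big - k_small.
    """
    return {
        "lr": -2.0 * (ll_small - ll_big),  # -2LL difference statistic
        "df": k_big - k_small,             # difference in free parameters
        "delta_aic": (2 * k_big - 2 * ll_big) - (2 * k_small - 2 * ll_small),
        "delta_bic": (math.log(n) * k_big - 2 * ll_big)
                     - (math.log(n) * k_small - 2 * ll_small),
    }

# Placeholder values; a negative delta_aic/delta_bic favors the larger model.
out = fit_comparison(ll_small=-5100.0, k_small=40, ll_big=-5050.0, k_big=80, n=1012)
```

Note how AIC and BIC can disagree: BIC's log(n) penalty weighs the extra parameters far more heavily at this sample size.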

To date, no work has been done in developing models which account for testlets in measures that demonstrate within-item multidimensionality. The bifactor model, discussed by DeMars (2006, 2012) and Rijmen (2010), was determined to hold some promise in accounting for within-item multidimensionality and testlets simultaneously. Under the bifactor model, each item loads on two dimensions: a substantive dimension and a single testlet dimension. Loadings on all other testlet dimensions are fixed to zero, as are the covariances between dimensions (DeMars, 2006). Following this logic, the model was altered to allow for within-item multidimensionality by permitting items to load on multiple substantive dimensions, while still allowing each item to load on only one testlet dimension and restricting its loadings on all other testlet dimensions to zero.

It is not always necessary to model testlet effects. As noted by DeMars (2012), testlet effects are sometimes very small, and modeling negligible testlet effects can capitalize on error variance and bias parameter estimates. To determine whether it was necessary to account for testlets in the MIRT model, a procedure developed by DeMars (2012) was used. In this procedure, a full bifactor model accounting for all testlets is generated, followed by a series of all-but-one bifactor models, each omitting one testlet while including all the others. This allows the importance of accounting for each individual testlet to be evaluated. Model fit is then compared between the full bifactor model and each all-but-one model via -2LL tests, as well as BIC values. A better fit for the more complex model (i.e., the model in which all testlets are accounted for) indicates that the testlet effect for the omitted testlet is significant.
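The modified bifactor loading pattern can be illustrated with a free/fixed design matrix; the item and dimension counts below are hypothetical, not the study's.

```python
import numpy as np

# Hypothetical layout: 30 items, 3 substantive dimensions, 6 testlets of 5 items.
n_items, n_sub, n_testlets = 30, 3, 6

# Substantive block: 1 = freely estimated loading, 0 = fixed at zero. Items may
# load on several substantive dimensions (within-item multidimensionality).
sub_block = np.ones((n_items, n_sub), dtype=int)

# Testlet block: each item loads only on the testlet for its own scenario.
testlet_of_item = np.repeat(np.arange(n_testlets), n_items // n_testlets)
testlet_block = np.zeros((n_items, n_testlets), dtype=int)
testlet_block[np.arange(n_items), testlet_of_item] = 1

design = np.hstack([sub_block, testlet_block])  # items x (substantive + testlet)
```

An all-but-one model corresponds to deleting one testlet column from this pattern while leaving the substantive block untouched.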

To conduct this analysis, a full bifactor model with all testlets included was specified based on the results of the best-fitting MIRT model identified in the model estimation process. Then, a series of six all-but-one bifactor models were generated, and each was compared to the full bifactor model.

CFA model estimation and fit. All factor analytic models were estimated using Mplus version 5.21 (Muthén & Muthén, 2007). As with the MIRT models, the dimensionality of the SJT data could not be determined a priori, so model estimation proceeded in two steps. First, an exploratory factor analysis (EFA) using a promax rotation was conducted on one-third of the data, with the maximum number of dimensions set at seven. Dimensionality was determined by examining the eigenvalues and fit statistics (chi-square goodness of fit and root mean square error of approximation). After using the EFA to determine dimensionality, a CFA was conducted on the remaining two-thirds of the data using the dimensionality and factor loadings identified by the EFA model. Factor loadings were adjusted in subsequent analyses until an adequately fitting model was obtained. Model fit was evaluated using the chi-square goodness-of-fit value and several fit indices, including the comparative fit index (CFI), Tucker-Lewis index (TLI), and root mean square error of approximation (RMSEA); these three indices were identified by Hu and Bentler (1999) as being sensitive to factor loading misspecification. Note that for both the EFA and CFA analyses, data were treated as categorical because of the way in which the SJT was scored. Each score was best viewed as a category because there was more than one way to obtain each score. For example, a test taker could earn a score of 3 on an item by choosing the most correct response for the most effective action and the most incorrect response option for the least effective action. A test taker could also earn a score of 3 by choosing the second-best option for the most effective action and the second-worst option for the least effective action.
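The EFA/CFA holdout split and the eigenvalue check can be sketched as follows; the data here are simulated stand-ins for the 0-6 item scores, and the Pearson correlation matrix is used as a simple proxy for the categorical (polychoric-type) correlations Mplus would compute.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.integers(0, 7, size=(1012, 30))   # simulated stand-in for SJT item scores

# Hold out one-third of respondents for the EFA; reserve two-thirds for the CFA.
idx = rng.permutation(len(scores))
n_efa = len(scores) // 3
efa_sample, cfa_sample = scores[idx[:n_efa]], scores[idx[n_efa:]]

# Eigenvalues of the item correlation matrix, one input to the dimensionality
# decision (Pearson here; the study treated item scores as categorical).
eigvals = np.linalg.eigvalsh(np.corrcoef(efa_sample, rowvar=False))[::-1]
```

Splitting the sample this way keeps the CFA from simply re-fitting the idiosyncrasies the EFA already capitalized on.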

Job performance model estimation and fit. As the dimensionality of the job performance ratings was unknown, an EFA utilizing a promax rotation was conducted on one-third of the data to determine dimensionality. As with the SJT analyses, dimensionality was determined by examining the eigenvalues and fit statistics for each EFA model. A CFA was then conducted on the remaining two-thirds of the data using the dimensionality and factor loadings identified by the EFA model, and factor loadings were adjusted until an adequately fitting model was obtained. Model fit was evaluated using the chi-square goodness-of-fit value and the CFI, TLI, and RMSEA values.

Construct and criterion-related validity. To determine what was measured by each of the dimensions/factors recovered by the MIRT and CFA models, Pearson product-moment correlations were used to evaluate the relationships between person-level θ estimates and factor scores on the SJT for each dimension/factor and overall inbox test scores and scores on the Big Five personality dimensions. Similarly, to evaluate the criterion-related validity of each recovered dimension/factor, Pearson product-moment correlations were calculated between person-level θ and factor scores for each dimension and overall job performance scores. To further examine criterion-related validity, two hierarchical regression analyses were conducted with job performance as the dependent variable. For the first regression, the first step included both inbox decision-making and monitoring scores, and Big Five personality scores were added in the second step; CFA factor scores were added in the third step, and MIRT θ scores in the fourth step. The second regression analysis was identical for the first two steps, but added overall SJT scores in the third step, CFA factor scores in the fourth step, and MIRT θ scores in the fifth step. To further investigate the contributions of each predictor in explaining variance in job performance, relative weight analyses were conducted for both regression models. Relative weight analysis is useful for examining the variance in the dependent variable explained by each predictor when the predictors are correlated (e.g., Tonidandel & LeBreton, 2011). These analyses were completed in the R statistical software program, using code developed by Tonidandel and LeBreton (2011).
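The study used Tonidandel and LeBreton's published R code; purely as an illustration, Johnson's (2000) relative weights, the procedure commonly used for relative weight analysis, can be sketched in a few lines (all names here are illustrative):

```python
import numpy as np

def relative_weights(X, y):
    """Johnson-style relative weights: apportions the model R^2 among
    correlated predictors. X: (n, p) predictor matrix; y: (n,) criterion.
    Returns (p,) nonnegative raw weights that sum to R^2. Illustrative
    sketch, not the Tonidandel-LeBreton R code used in the study."""
    Xz = (X - X.mean(0)) / X.std(0)               # standardize predictors
    yz = (y - y.mean()) / y.std()                 # standardize criterion
    Rxx = np.corrcoef(Xz, rowvar=False)           # predictor intercorrelations
    rxy = Xz.T @ yz / len(yz)                     # predictor-criterion correlations
    vals, vecs = np.linalg.eigh(Rxx)
    Lam = vecs @ np.diag(np.sqrt(vals)) @ vecs.T  # symmetric square root of Rxx
    beta = np.linalg.solve(Lam, rxy)              # weights on orthogonalized predictors
    return (Lam ** 2) @ beta ** 2                 # raw relative weights
```

Because the orthogonalized predictors are uncorrelated, their squared weights partition R^2 cleanly, and the squared loadings then map that partition back onto the original correlated predictors.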

RESULTS

Descriptive statistics and item screening

Descriptive statistics, including sample sizes, means, and standard deviations, can be found in Table 1. Prior to further analysis, items were screened for quality. When evaluated in a unidimensional IRT model, ten items were eliminated from the MIRT analyses because their a-parameter estimates fell below the screening cutoff: items 1, 7, 9, 11, 14, 20, 21, 25, 27, and 28. When evaluated in a unidimensional CFA model, eight items were eliminated from the CFA analyses because they had non-significant factor loadings on the single dimension: items 1, 7, 9, 14, 25, 28, 29, and 33. Note that item 21 was later excluded from analysis during CFA modeling because it did not demonstrate a significant positive loading on any of the factors in the CFA model.
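Screening on the discrimination (a) parameters amounts to a simple filter; the slope values and the 0.25 cutoff below are illustrative placeholders, not the study's actual estimates or criterion.

```python
# Hypothetical slope (a-parameter) estimates from a unidimensional GPC
# calibration, keyed by item number; values and cutoff are illustrative.
a_estimates = {1: 0.08, 2: 0.91, 7: 0.12, 9: 0.19, 14: 0.05, 30: 1.10}
CUTOFF = 0.25

dropped = sorted(item for item, a in a_estimates.items() if a < CUTOFF)
kept = sorted(set(a_estimates) - set(dropped))
```

A near-zero slope means responses barely depend on θ, which is why such items both fail the cutoff and violate the IRT assumptions noted above.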

MIRT model dimensionality

A series of seven multidimensional generalized partial credit (MGPC) models were generated. Models two, four, and six were orthogonal models, in which the covariance between each pair of dimensions was fixed to 0; models three, five, and seven were oblique models, in which each covariance was fixed to 0.2. Means were fixed to 0 and variances were fixed to 1 in all models. To determine whether the data were best represented by oblique or orthogonal dimensions, comparisons between the orthogonal and oblique versions of each model were made using the -2LL, AIC, and BIC values (note that the degrees of freedom did not differ between orthogonal and oblique models, preventing -2LL difference testing). As can be seen in Table 2, the -2LL, AIC, and BIC values were lower for the oblique models than for the orthogonal models; thus, oblique models were determined to most accurately represent the data and were used to evaluate dimensionality. Several criteria were used to evaluate model fit, including AIC and BIC values, -2LL test values, and convergence. All four models converged on a solution. As can be seen in Table 3, the three-dimensional model was the best fit to the data. Item parameters for the three-dimensional model can be found in Table 4.

Evaluating the presence of testlet effects

Screening for the presence of significant testlet effects was conducted according to the process suggested by DeMars (2012), with a modification to allow multiple substantive dimensions in the bifactor models. Seven models were generated: a full bifactor model with

all testlets included, and six all-but-one models which successively eliminated one testlet each. Each all-but-one model was compared to the full bifactor model to evaluate model fit (see Table 5). Note that a negative -2LL difference indicated a better fit for the full bifactor model; for the purpose of testing fit, its absolute value was used. In three cases the full bifactor model fit better than the all-but-one model, although the difference was significant only for the comparison with the all-but-one model missing testlet one. In the other three cases the all-but-one model fit better than the full bifactor model, and the difference was significant for the comparisons with the models missing testlet two and testlet five. Because the full bifactor model was a significantly better fit in only one of the comparisons, it was determined that the testlet effects were not sufficiently large to justify accounting for them in the MIRT model. As pointed out by DeMars (2012), accounting for negligible testlet effects adds unnecessary complexity to the model and can also bias parameter estimates.

CFA model estimation

EFA. A four-factor model was determined to be the best fit to the data. The eigenvalues for the first five factors were 4.51, 1.75, 1.63, 1.54, and 1.47, respectively. The four-factor EFA model was an adequate fit to the data, χ2(136) = , p < .05, RMSEA = . Although the eigenvalue for the five-factor model was not substantially lower than that for the four-factor model, the addition of a fifth factor caused many of the factor loadings for the second factor to be drastically reduced, such that the second factor had an


More information

SITUATIONAL JUDGMENT TESTS: CONSTRUCTS ASSESSED AND A META-ANALYSIS OF THEIR CRITERION-RELATED VALIDITIES

SITUATIONAL JUDGMENT TESTS: CONSTRUCTS ASSESSED AND A META-ANALYSIS OF THEIR CRITERION-RELATED VALIDITIES PERSONNEL PSYCHOLOGY 2010, 63, 83 117 SITUATIONAL JUDGMENT TESTS: CONSTRUCTS ASSESSED AND A META-ANALYSIS OF THEIR CRITERION-RELATED VALIDITIES MICHAEL S. CHRISTIAN Eller College of Management University

More information

Chapter 11. Multiple-Sample SEM. Overview. Rationale of multiple-sample SEM. Multiple-sample path analysis. Multiple-sample CFA.

Chapter 11. Multiple-Sample SEM. Overview. Rationale of multiple-sample SEM. Multiple-sample path analysis. Multiple-sample CFA. Chapter 11 Multiple-Sample SEM Facts do not cease to exist because they are ignored. Overview Aldous Huxley Rationale of multiple-sample SEM Multiple-sample path analysis Multiple-sample CFA Extensions

More information

UK Clinical Aptitude Test (UKCAT) Consortium UKCAT Examination. Executive Summary Testing Interval: 1 July October 2016

UK Clinical Aptitude Test (UKCAT) Consortium UKCAT Examination. Executive Summary Testing Interval: 1 July October 2016 UK Clinical Aptitude Test (UKCAT) Consortium UKCAT Examination Executive Summary Testing Interval: 1 July 2016 4 October 2016 Prepared by: Pearson VUE 6 February 2017 Non-disclosure and Confidentiality

More information

Internet Shoppers Perceptions of the Fairness of Threshold Free Shipping Policies

Internet Shoppers Perceptions of the Fairness of Threshold Free Shipping Policies Internet Shoppers Perceptions of the Fairness of Threshold Free Shipping Policies Wen-Hsien Huang, Department of Marketing, National Chung Hsing University. Taiwan. E-mail: whh@nchu.edu.tw George C. Shen,

More information

Conjoint analysis based on Thurstone judgement comparison model in the optimization of banking products

Conjoint analysis based on Thurstone judgement comparison model in the optimization of banking products Conjoint analysis based on Thurstone judgement comparison model in the optimization of banking products Adam Sagan 1, Aneta Rybicka, Justyna Brzezińska 3 Abstract Conjoint measurement, as well as conjoint

More information

Kristin Gustavson * and Ingrid Borren

Kristin Gustavson * and Ingrid Borren Gustavson and Borren BMC Medical Research Methodology 2014, 14:133 RESEARCH ARTICLE Open Access Bias in the study of prediction of change: a Monte Carlo simulation study of the effects of selective attrition

More information

A standardization approach to adjusting pretest item statistics. Shun-Wen Chang National Taiwan Normal University

A standardization approach to adjusting pretest item statistics. Shun-Wen Chang National Taiwan Normal University A standardization approach to adjusting pretest item statistics Shun-Wen Chang National Taiwan Normal University Bradley A. Hanson and Deborah J. Harris ACT, Inc. Paper presented at the annual meeting

More information

IRT-Based Assessments of Rater Effects in Multiple Source Feedback Instruments. Michael A. Barr. Nambury S. Raju. Illinois Institute Of Technology

IRT-Based Assessments of Rater Effects in Multiple Source Feedback Instruments. Michael A. Barr. Nambury S. Raju. Illinois Institute Of Technology IRT Based Assessments 1 IRT-Based Assessments of Rater Effects in Multiple Source Feedback Instruments Michael A. Barr Nambury S. Raju Illinois Institute Of Technology RUNNING HEAD: IRT Based Assessments

More information

The previous chapter provides theories related to e-commerce adoption among. SMEs. This chapter presents the proposed model framework, the development

The previous chapter provides theories related to e-commerce adoption among. SMEs. This chapter presents the proposed model framework, the development CHAPTER 3: RESEARCH METHODOLOGY 3.1 INTRODUCTION The previous chapter provides theories related to e-commerce adoption among SMEs. This chapter presents the proposed model framework, the development of

More information

Tutorial Segmentation and Classification

Tutorial Segmentation and Classification MARKETING ENGINEERING FOR EXCEL TUTORIAL VERSION v171025 Tutorial Segmentation and Classification Marketing Engineering for Excel is a Microsoft Excel add-in. The software runs from within Microsoft Excel

More information

GREEN PRODUCTS PURCHASE BEHAVIOUR- AN IMPACT STUDY

GREEN PRODUCTS PURCHASE BEHAVIOUR- AN IMPACT STUDY ORIGINAL RESEARCH PAPER Commerce GREEN PRODUCTS PURCHASE BEHAVIOUR- AN IMPACT STUDY KEY WORDS: Green Product, Green Awareness, Environment concern and Purchase Decision Sasikala.N Dr. R. Parameswaran*

More information

THE MICRO-FOUNDATIONS OF DYNAMIC CAPABILITIES, MARKET TRANSFORMATION AND FIRM PERFORMANCE. Tung-Shan Liao

THE MICRO-FOUNDATIONS OF DYNAMIC CAPABILITIES, MARKET TRANSFORMATION AND FIRM PERFORMANCE. Tung-Shan Liao THE MICRO-FOUNDATIONS OF DYNAMIC CAPABILITIES, MARKET TRANSFORMATION AND FIRM PERFORMANCE Tung-Shan Liao Thesis submitted to the Business School, The University of Adelaide, in fulfilment of the requirements

More information

Overview. Approaches to Addressing Adverse Impact: Opportunities, Facades, and Pitfalls. What is Adverse Impact? The d-statistic

Overview. Approaches to Addressing Adverse Impact: Opportunities, Facades, and Pitfalls. What is Adverse Impact? The d-statistic Approaches to Addressing Adverse Impact: Opportunities, Facades, and Pitfalls John M. Ford Michael D. Blair Overview What is Adverse Impact? Importance Considerations Regarding Adverse Impact Practical

More information

Confirmatory Factor Analyses of

Confirmatory Factor Analyses of Confirmatory Factor Analyses of Multitrait-Multimethod Data: A Comparison of Alternative Models Herbert W. Marsh University of Western Sydney, Australia Michael Bailey University of Sydney, Australia Alternative

More information

A Comparison of Segmentation Based on Relevant Attributes and Segmentation Based on Determinant Attributes

A Comparison of Segmentation Based on Relevant Attributes and Segmentation Based on Determinant Attributes 30-10-2015 A Comparison of Segmentation Based on Relevant Attributes and Segmentation Based on Determinant Attributes Kayleigh Meister WAGENINGEN UR A Comparison of Segmentation Based on Relevant Attributes

More information

Research Note. Community/Agency Trust: A Measurement Instrument

Research Note. Community/Agency Trust: A Measurement Instrument Society and Natural Resources, 0:1 6 Copyright # 2013 Taylor & Francis Group, LLC ISSN: 0894-1920 print=1521-0723 online DOI: 10.1080/08941920.2012.742606 Research Note Community/Agency Trust: A Measurement

More information

Thinking Ahead: Assuming Linear Versus Nonlinear Personality-Criterion Relationships in Personnel Selection

Thinking Ahead: Assuming Linear Versus Nonlinear Personality-Criterion Relationships in Personnel Selection Patrick D. Converse & Frederick L. Oswald (2014) Thinking Ahead: Assuming Linear Versus Nonlinear Personality-Criterion Relationships in Personnel Selection, Human Performance, 27:1, 61-79, DOI: 10.1080/08959285.2013.854367

More information

Copyright is owned by the Author of the thesis. Permission is given for a copy to be downloaded by an individual for the purpose of research and

Copyright is owned by the Author of the thesis. Permission is given for a copy to be downloaded by an individual for the purpose of research and Copyright is owned by the Author of the thesis. Permission is given for a copy to be downloaded by an individual for the purpose of research and private study only. The thesis may not be reproduced elsewhere

More information

Designing item pools to optimize the functioning of a computerized adaptive test

Designing item pools to optimize the functioning of a computerized adaptive test Psychological Test and Assessment Modeling, Volume 52, 2 (2), 27-4 Designing item pools to optimize the functioning of a computerized adaptive test Mark D. Reckase Abstract Computerized adaptive testing

More information

Masters Theses & Specialist Projects

Masters Theses & Specialist Projects Western Kentucky University TopSCHOLAR Masters Theses & Specialist Projects Graduate School 5-2011 An Evaluation of the Convergent Validity of Situational Assessment of Leadership-Student Assessment (SALSA

More information

CASE STUDY. Incremental Validity of the Wonderlic Motivation Potential Assessment (MPA)

CASE STUDY. Incremental Validity of the Wonderlic Motivation Potential Assessment (MPA) Michael C. Callans, M.S.W. Daniel Nguyen, Ph.D. Brett M. Wells, Ph.D. Introduction The Wonderlic Motivation Potential Assessment (MPA) is a 30-item questionnaire that measures the extent to which employees

More information

An Empirical Investigation of Consumer Experience on Online Purchase Intention Bing-sheng YAN 1,a, Li-hua LI 2,b and Ke XU 3,c,*

An Empirical Investigation of Consumer Experience on Online Purchase Intention Bing-sheng YAN 1,a, Li-hua LI 2,b and Ke XU 3,c,* 2017 4th International Conference on Economics and Management (ICEM 2017) ISBN: 978-1-60595-467-7 An Empirical Investigation of Consumer Experience on Online Purchase Intention Bing-sheng YAN 1,a, Li-hua

More information

Methodology. Inclusive Leadership: The View From Six Countries

Methodology. Inclusive Leadership: The View From Six Countries Inclusive Leadership: The View From Six Countries Methodology Participant survey responses were submitted to multiple-group structural equation modeling (MGSEM) following guidelines outlined by Kline 1

More information

Longitudinal Effects of Item Parameter Drift. James A. Wollack Hyun Jung Sung Taehoon Kang

Longitudinal Effects of Item Parameter Drift. James A. Wollack Hyun Jung Sung Taehoon Kang Longitudinal Effects of Item Parameter Drift James A. Wollack Hyun Jung Sung Taehoon Kang University of Wisconsin Madison 1025 W. Johnson St., #373 Madison, WI 53706 April 12, 2005 Paper presented at the

More information

FACTORS AFFECTING JOB STRESS AMONG IT PROFESSIONALS IN APPAREL INDUSTRY: A CASE STUDY IN SRI LANKA

FACTORS AFFECTING JOB STRESS AMONG IT PROFESSIONALS IN APPAREL INDUSTRY: A CASE STUDY IN SRI LANKA FACTORS AFFECTING JOB STRESS AMONG IT PROFESSIONALS IN APPAREL INDUSTRY: A CASE STUDY IN SRI LANKA W.N. Arsakularathna and S.S.N. Perera Research & Development Centre for Mathematical Modeling, Faculty

More information

The Concept of Organizational Citizenship Walter C. Borman

The Concept of Organizational Citizenship Walter C. Borman CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE The Concept of Organizational Citizenship Personnel Decisions Research Institutes, Inc., Tampa, Florida, and University of South Florida ABSTRACT This article

More information

ADVERSE IMPACT: A Persistent Dilemma

ADVERSE IMPACT: A Persistent Dilemma 1 ADVERSE IMPACT: A Persistent Dilemma David Chan Catherine Clause Rick DeShon Danielle Jennings Amy Mills Elaine Pulakos William Rogers Jeff Ryer Joshua Sacco David Schmidt Lori Sheppard Matt Smith David

More information

A study on the relationship of contact service employee s attitude and emotional intelligence to coping strategy and service performance

A study on the relationship of contact service employee s attitude and emotional intelligence to coping strategy and service performance , pp.75-79 http://dx.doi.org/10.14257/astl.2014.70.18 A study on the relationship of contact service employee s attitude and emotional intelligence to coping strategy and service performance Kim, Gye Soo

More information

Practical Exploratory Factor Analysis: An Overview

Practical Exploratory Factor Analysis: An Overview Practical Exploratory Factor Analysis: An Overview James H. Steiger Department of Psychology and Human Development Vanderbilt University James H. Steiger (Vanderbilt University) Practical Exploratory Factor

More information

ITEM RESPONSE THEORY FOR WEIGHTED SUMMED SCORES. Brian Dale Stucky

ITEM RESPONSE THEORY FOR WEIGHTED SUMMED SCORES. Brian Dale Stucky ITEM RESPONSE THEORY FOR WEIGHTED SUMMED SCORES Brian Dale Stucky A thesis submitted to the faculty of the University of North Carolina at Chapel Hill in partial fulfillment of the requirements for the

More information

Daniels College of Business University of Denver MSBA Program (58 Credit-Hours) and MSBA DUGG (48 Credit-Hours) Revised: May 17, 2018

Daniels College of Business University of Denver MSBA Program (58 Credit-Hours) and MSBA DUGG (48 Credit-Hours) Revised: May 17, 2018 University of Denver MSBA Program (58 Credit-Hours) and MSBA DUGG (48 Credit-Hours) Revised: May 17, 2018 Program 1. Graduates will develop and execute architectures, policies, and practices that properly

More information

How to Get More Value from Your Survey Data

How to Get More Value from Your Survey Data Technical report How to Get More Value from Your Survey Data Discover four advanced analysis techniques that make survey research more effective Table of contents Introduction..............................................................3

More information

A Method Factor Predictor of Performance Ratings. Michael D. Biderman University of Tennessee at Chattanooga. Nhung T. Nguyen Towson University

A Method Factor Predictor of Performance Ratings. Michael D. Biderman University of Tennessee at Chattanooga. Nhung T. Nguyen Towson University Method factor - 1 A Method Factor Predictor of Performance Ratings Michael D. Biderman University of Tennessee at Chattanooga Nhung T. Nguyen Towson University Billy Mullins Jason Luna Vikus Corporation

More information

Estimation of multiple and interrelated dependence relationships

Estimation of multiple and interrelated dependence relationships STRUCTURE EQUATION MODELING BASIC ASSUMPTIONS AND CONCEPTS: A NOVICES GUIDE Sunil Kumar 1 and Dr. Gitanjali Upadhaya 2 Research Scholar, Department of HRM & OB, School of Business Management & Studies,

More information

Running head: THE MEANING AND DOING OF MINDFULNESS

Running head: THE MEANING AND DOING OF MINDFULNESS Running head: THE MEANING AND DOING OF MINDFULNESS Supplementary Materials Fully latent SEM version of model 1 Supplementary Fig 1 outlines the direct effects for the fully latent equivalent to the path

More information

Factors Affecting Implementation of Enterprise Risk Management: An Exploratory Study among Saudi Organizations

Factors Affecting Implementation of Enterprise Risk Management: An Exploratory Study among Saudi Organizations Factors Affecting Implementation of Enterprise Risk Management: An Exploratory Study among Saudi Organizations Yousef Aleisa Abstract Enterprise risk management (ERM) has received significant attention

More information

Mastering Modern Psychological Testing Theory & Methods Cecil R. Reynolds Ronald B. Livingston First Edition

Mastering Modern Psychological Testing Theory & Methods Cecil R. Reynolds Ronald B. Livingston First Edition Mastering Modern Psychological Testing Theory & Methods Cecil R. Reynolds Ronald B. Livingston First Edition Pearson Education Limited Edinburgh Gate Harlow Essex CM20 2JE England and Associated Companies

More information

Poster Title The Situational Judgement Test in Selection: A Medical Application

Poster Title The Situational Judgement Test in Selection: A Medical Application Poster Title The Situational Judgement Test in Selection: A Medical Application Abstract This poster describes the development of an SJT to select applicants for training in General Practice in the UK.

More information

Community Mental Health Journal, Vol. 40, No. 1, February 2004 ( 2004)

Community Mental Health Journal, Vol. 40, No. 1, February 2004 ( 2004) Community Mental Health Journal, Vol. 40, No. 1, February 2004 ( 2004) The Effect of Organizational Conditions (Role Conflict, Role Ambiguity, Opportunities for Professional Development, and Social Support)

More information

Low-Fidelity Simulations

Low-Fidelity Simulations Review in Advance first posted online on January 30, 2015. (Changes may still occur before final publication online and in print.) Low-Fidelity Simulations Annu. Rev. Organ. Psychol. Organ. Behav. 2015.

More information

ANZMAC 2010 Page 1 of 8. Assessing the Validity of Brand Equity Constructs: A Comparison of Two Approaches

ANZMAC 2010 Page 1 of 8. Assessing the Validity of Brand Equity Constructs: A Comparison of Two Approaches ANZMAC 2010 Page 1 of 8 Assessing the Validity of Brand Equity Constructs: A Comparison of Two Approaches Con Menictas, University of Technology Sydney, con.menictas@uts.edu.au Paul Wang, University of

More information

Coping Strategies of Project Managers in Stressful Situations

Coping Strategies of Project Managers in Stressful Situations BOND UNIVERSITY Coping Strategies of Project Managers in Stressful Situations Submitted in total fulfilment of the requirements of the degree of Doctor of Philosophy Alicia Jai Mei Aitken Student Number:

More information

Determining the accuracy of item parameter standard error of estimates in BILOG-MG 3

Determining the accuracy of item parameter standard error of estimates in BILOG-MG 3 University of Nebraska - Lincoln DigitalCommons@University of Nebraska - Lincoln Public Access Theses and Dissertations from the College of Education and Human Sciences Education and Human Sciences, College

More information

Chapter 5. Data Analysis, Results and Discussion

Chapter 5. Data Analysis, Results and Discussion Chapter 5 Data Analysis, Results and Discussion 5.1 Large-scale Instrument Assessment Methodology Data analysis was carried out in two stages. In the first stage the reliabilities and validities of the

More information

Sales Selector Technical Report 2017

Sales Selector Technical Report 2017 Sales Selector Technical Report 2017 Table of Contents Executive Summary... 3 1. Purpose... 5 2. Development of an Experimental Battery... 5 3. Sample Characteristics... 6 4. Dimensions of Performance...

More information

Multivariate G-Theory and Subscores 1. Investigating the Use of Multivariate Generalizability Theory for Evaluating Subscores.

Multivariate G-Theory and Subscores 1. Investigating the Use of Multivariate Generalizability Theory for Evaluating Subscores. Multivariate G-Theory and Subscores 1 Investigating the Use of Multivariate Generalizability Theory for Evaluating Subscores Zhehan Jiang University of Kansas Mark Raymond National Board of Medical Examiners

More information

EFA in a CFA Framework

EFA in a CFA Framework EFA in a CFA Framework 2012 San Diego Stata Conference Phil Ender UCLA Statistical Consulting Group Institute for Digital Research & Education July 26, 2012 Phil Ender EFA in a CFA Framework Disclaimer

More information

WHAT HAPPENS AFTER ERP IMPLEMENTATION: UNDERSTANDING THE IMPACT OF IS SOPHISTICATION, INTERDEPENDENCE AND DIFFERENTIATION ON PLANT-LEVEL OUTCOMES

WHAT HAPPENS AFTER ERP IMPLEMENTATION: UNDERSTANDING THE IMPACT OF IS SOPHISTICATION, INTERDEPENDENCE AND DIFFERENTIATION ON PLANT-LEVEL OUTCOMES WHAT HAPPENS AFTER ERP IMPLEMENTATION: UNDERSTANDING THE IMPACT OF IS SOPHISTICATION, INTERDEPENDENCE AND DIFFERENTIATION ON PLANT-LEVEL OUTCOMES CHAN MING MING FACULTY OF BUSINESS AND ACCOUNTANCY UNIVERSITY

More information

Assessing the Fitness of a Measurement Model Using Confirmatory Factor Analysis (CFA)

Assessing the Fitness of a Measurement Model Using Confirmatory Factor Analysis (CFA) International Journal of Innovation and Applied Studies ISSN 2028-9324 Vol. 17 No. 1 Jul. 2016, pp. 159-168 2016 Innovative Space of Scientific Research Journals http://www.ijias.issr-journals.org/ Assessing

More information

Han Du. Department of Psychology University of California, Los Angeles Los Angeles, CA

Han Du. Department of Psychology University of California, Los Angeles Los Angeles, CA Han Du Department of Psychology University of California, Los Angeles Los Angeles, CA 90095-1563 Email: hdu@psych.ucla.edu EDUCATION Ph.D. in Quantitative Psychology 2018 University of Notre Dame M.S.

More information

Contents Contents... ii List of Tables... vi List of Figures... viii List of Acronyms... ix Abstract... x Chapter 1: Introduction...

Contents Contents... ii List of Tables... vi List of Figures... viii List of Acronyms... ix Abstract... x Chapter 1: Introduction... Exploring the Role of Employer Brand Equity in the Labour Market: Differences between Existing Employee and Job Seeker Perceptions By Sultan Alshathry A thesis submitted to The University of Adelaide Business

More information

ASSESSMENT APPROACH TO

ASSESSMENT APPROACH TO DECISION-MAKING COMPETENCES: ASSESSMENT APPROACH TO A NEW MODEL IV Doctoral Conference on Technology Assessment 26 June 2014 Maria João Maia Supervisors: Prof. António Brandão Moniz Prof. Michel Decker

More information

Innovative Item Types Require Innovative Analysis

Innovative Item Types Require Innovative Analysis Innovative Item Types Require Innovative Analysis Nathan A. Thompson Assessment Systems Corporation Shungwon Ro, Larissa Smith Prometric Jo Santos American Health Information Management Association Paper

More information

*Javad Rahdarpour Department of Agricultural Management, Zabol Branch, Islamic Azad University, Zabol, Iran *Corresponding author

*Javad Rahdarpour Department of Agricultural Management, Zabol Branch, Islamic Azad University, Zabol, Iran *Corresponding author Relationship between Organizational Intelligence, Organizational Learning, Intellectual Capital and Social Capital Using SEM (Case Study: Zabol Organization of Medical Sciences) *Javad Rahdarpour Department

More information

Market Orientation and Business Performance: Empirical Evidence from Thailand

Market Orientation and Business Performance: Empirical Evidence from Thailand Market Orientation and Business Performance: Empirical Evidence from Thailand Wichitra Ngansathil Department of Management Faculty of Economics and Commerce The University of Melbourne Submitted in total

More information

Factor Analysis and Structural Equation Modeling: Exploratory and Confirmatory Factor Analysis

Factor Analysis and Structural Equation Modeling: Exploratory and Confirmatory Factor Analysis Factor Analysis and Structural Equation Modeling: Exploratory and Confirmatory Factor Analysis Hun Myoung Park International University of Japan 1. Glance at an Example Suppose you have a mental model

More information

Han Du. Department of Psychology University of Notre Dame Notre Dame, IN Telephone:

Han Du. Department of Psychology University of Notre Dame Notre Dame, IN Telephone: Han Du Department of Psychology University of Notre Dame Notre Dame, IN 46556 Email: hdu1@nd.edu Telephone: 5748556736 EDUCATION Ph.D. in Quantitative Psychology University of Notre Dame Expected: 2017

More information

2007 Kansas State University Community and Climate Survey

2007 Kansas State University Community and Climate Survey 2007 Kansas State University Community and Climate Survey In the Spring of 2007 the Kansas State University (K-State) Community and Climate Survey was distributed to all faculty to assess their perceptions

More information

Assessing the Business Case for Flexible Work Arrangements

Assessing the Business Case for Flexible Work Arrangements Portland State University PDXScholar Social Work Faculty Publications and Presentations School of Social Work 1-1-2007 Assessing the Business Case for Flexible Work Arrangements Eileen M. Brennan Portland

More information

A Study on the Relationship Between Job Satisfaction and Contextual Performance of Knowledge Workers

A Study on the Relationship Between Job Satisfaction and Contextual Performance of Knowledge Workers Proceedings of the 8th International Conference on Innovation & Management 549 A Study on the Relationship Between Job Satisfaction and Contextual Performance of Knowledge Workers Guo Ying School of Management,

More information

Individual Role Engagement Alignment Profile (ireap) Psychometric Review of the Instrument 2012

Individual Role Engagement Alignment Profile (ireap) Psychometric Review of the Instrument 2012 ireap Technical Psychometric Report October 2012 Individual Role Engagement Alignment Profile (ireap) Psychometric Review of the Instrument 2012 Contents Executive Summary... 3 Purpose... 4 Development...

More information

Supplementary material

Supplementary material A distributed brain network predicts general intelligence from resting-state human neuroimaging data. by Julien Dubois, Paola Galdi, Lynn K. Paul, and Ralph Adolphs Supplementary material Supplementary

More information