The predictive validity of cognitive ability tests: A UK meta-analysis


Running Head: THE PREDICTIVE VALIDITY OF COGNITIVE ABILITY TESTS: A UK META-ANALYSIS

The predictive validity of cognitive ability tests: A UK meta-analysis

Cristina Bertua (Independent Consultant, London, U.K.), Neil Anderson (University of Amsterdam, The Netherlands), and Jesús F. Salgado (University of Santiago de Compostela, Spain)

KEY WORDS: Cognitive ability tests, predictive validity, job performance, validity generalisation.

All three authors contributed equally to this paper, which is based upon the first author's MSc dissertation at Goldsmiths College, University of London. The preparation of this manuscript was partially supported by funding from the Army Personnel Research Establishment to Neil Anderson and a grant BSO from the Ministerio de Ciencia y Tecnología (Spain) to Jesús F. Salgado. The opinions presented in this paper are the authors' and do not relate to either of these funding agencies.

Address for correspondence and reprints: Neil Anderson, Department of Work and Organizational Psychology, University of Amsterdam, Roetersstraat 15, 1018 WB Amsterdam, The Netherlands. N.R.Anderson@uva.nl

Abstract

A meta-analysis of the validity of tests of general mental ability (GMA) and specific cognitive abilities for predicting job performance and training success in the UK was conducted. An extensive literature search resulted in a database of 283 independent samples: 60 with job performance as the criterion (N = 13,262) and 223 with training success as the criterion (N = 75,311). Primary studies were also coded by occupational group, resulting in seven main groups (Clerical, Engineer, Professional, Driver, Operator, Manager, and Sales), and by type of specific ability test (Verbal, Numerical, Perceptual, and Spatial). Results indicate that GMA and specific ability tests are valid predictors of both job performance and training success, with operational validities of substantial magnitude. Minor differences between these UK findings and previous US meta-analyses are reported. As expected, operational validities were moderated by occupational group, with occupational families possessing greater job complexity demonstrating higher operational validities between cognitive tests and job performance and training success. Implications for the practical use of tests of GMA and specific cognitive abilities in the context of UK selection practices are discussed in conclusion.

Introduction

Several recent surveys indicate that tests of general mental ability (GMA) and tests of specific cognitive abilities (e.g. numerical, verbal, spatial) are increasingly popular amongst employer organizations in the UK for selection and assessment purposes (e.g. Hodgkinson & Payne, 1998; Keenan, 1995; Ryan, McFarland, Baron, & Page, 1999; Salgado & Anderson, 2002; Salgado, Ones, & Viswesvaran, 2001). Whereas in the USA numerous meta-analytic studies have provided predictive and criterion-related validity evidence to support the use of GMA tests in selection (e.g. Hunter & Hunter, 1984; Schmidt, 2002; Schmidt & Hunter, 1998), there has been a notable absence of validity generalisation studies in the UK. This is a serious shortcoming in our understanding of the predictive efficacy of such tests. Given their increasing popularity amongst employers, selection psychologists and test suppliers in the UK are potentially open to claims of relying upon tests which have not been fully validated through independent meta-analytic procedures combining multiple proprietary tests.

US Meta-Analyses of Cognitive Ability Tests

A number of meta-analyses have been carried out in the USA investigating the criterion-related validity of GMA and cognitive ability tests (see Schmidt, 2002, Appendix A, for a comprehensive summary of past findings). Amongst these, the largest meta-analyses based on occupational samples are those conducted by Hartigan and Wigdor (1989), Hunter (1986), Hunter and Hunter (1984), and Levine, Spector, Menon, Narayanan, and Cannon-Bowers (1996). Overall, these have shown that the average operational validity for GMA and cognitive ability tests ranges from .38 to .47 for overall job performance and from .54 to .62 for training success (re-estimated using Hunter & Hunter's criterion reliability and range restriction estimates). Furthermore, Hunter and Hunter (1984) demonstrated that despite differences in jobs and organizations, the predictive validity of GMA and cognitive ability tests generalises across samples and settings. Consequently, it has been concluded that GMA and cognitive ability tests are robust predictors for all types of jobs (Salgado, 1999; Salgado, Ones, & Viswesvaran, 2001; Schmidt & Hunter, 1998), and that their validity generalises across occupations in the USA.

However, despite the large body of evidence supporting the validity of GMA and cognitive ability tests, there are a number of limitations within the current body of research. Firstly, there has been a general tendency towards examining general mental ability as a predictor of work behaviour, as opposed to the predictive validity of specific cognitive abilities. Secondly, only limited research has examined the predictive validity of GMA and specific cognitive abilities across different occupational groups. Finally, and perhaps most importantly, in examining these issues there has been a general reliance on predominantly USA samples (Anderson, Born, & Cunningham-Snell, 2001; Schmidt, 2002). As highlighted by Herriot and Anderson (1997), the findings from US meta-analyses have been unreservedly cited as being generalisable to the UK, without consideration of possible cultural, social, legislative, and recruitment and appraisal differences between countries. These differences, it can be argued, may well impact on the magnitude of GMA test validities observed in the USA and the UK (see also Salgado & Anderson, 2002, 2003).

European and UK Meta-Analyses of Cognitive Ability Tests

A comprehensive review of the published studies revealed that no previous meta-analysis considering the criterion-related validity of GMA tests in the U.K. has been published.
Robertson and Kinder (1993; see also Salgado, 1996) published a meta-analysis using data collected in the UK, but this meta-analysis focused on the validity of personality measures. Their meta-analysis did, however, examine the incremental validity of personality measures after partialling out the variance in the criterion measure attributable to cognitive tests. In their series of recently published papers, Salgado, Anderson, and colleagues have investigated the criterion-related validity of cognitive tests across other countries in the European Union, but no UK-specific meta-analysis appears to have been published to date (Salgado & Anderson, 2003; Salgado, Anderson, Moscoso, Bertua, & de Fruyt, 2003; Salgado, Anderson, Moscoso, Bertua, de Fruyt, & Rolland, 2003). This is undoubtedly a notable shortcoming in our understanding of the efficacy of cognitive ability tests for employee selection in the U.K.

According to Levy-Leboyer (1994), there are important differences between US and European organizations in how selection procedures are carried out. This is borne out by subsequent analyses by Salgado and Anderson (2002) into the popularity of cognitive ability tests in Britain, Europe, and the USA, as indicated by previous surveys of GMA test use in these countries. Across 16 major surveys conducted over the last 25 years, Salgado and Anderson found that organizations in the UK tended to use GMA measures substantially more than organizations in the USA, despite the dearth of British meta-analytic evidence to support this widespread popularity. Viswesvaran and Ones (2002) have further pointed out that countries in the European Community, if considered individually, are relatively homogenous compared with the USA, as they have less within-country diversity. Of any European country, of course, it can be argued that the UK is closest to the USA in terms of its employment legislation (having opted out of the EU Social Chapter, for instance), hours of work, job security, and HR practices. As noted by Roe (1989), selection practices and perspectives in other European countries follow the classical American predictivist model less closely.
Instead, they emphasise the social negotiation perspective (e.g. Herriot, 1989; Herriot & Anderson, 1997), prospective employee rights in the procedure, and applicant privacy and expectations of equitable and fair treatment by the prospective employer organization (Levy-Leboyer, 1994). Other researchers have argued that another relevant difference is the difference in size typically found between US and European organizations (see, for instance, Salgado et al., 2003a, 2003b). Again, comparisons between the UK and the US are particularly interesting given the cultural differences between the UK and other European countries, and the adoption by UK organizations of American HR procedures and working practices. Several of the tests upon which primary studies were based in our dataset were either developed in the U.S. but are popular in the U.K. for GMA measurement (e.g. the Minnesota Clerical Test, the Differential Aptitude Test, Bennett's Mechanical Comprehension Test), or were U.K.-developed but are now also used in the U.S.A. (e.g. Raven's Progressive Matrices; Jensen, 1998). These overlaps further suggest that similar predictor-criterion relations could be expected across both countries.

Issues concerning the theoretical groundings, development, and use of cognitive ability measures for employee selection have recently been at the forefront of debate in US industrial, work, and organizational psychology (e.g. Ones & Viswesvaran, 2002; Viswesvaran & Ones, 2002). Indeed, the journal Human Performance has published a seminal special issue entirely dedicated to the role of GMA in selection and job performance. Given that cognitive tests are used considerably more extensively for selection in Britain than in the USA, it is timely and fitting that debate in the cultural and legislative context of the UK is encouraged. Indeed, major issues such as criterion-related validity, adverse impact, test construction, validation procedures, and claims for the efficacy of cognitive tests for employee selection in the UK have received scant attention (see, for instance, Murphy, 2002; Ones & Anderson, 2002; Reeve & Hakel, 2002). As will be highlighted in the following sections, such limitations necessitate a comprehensive analysis of these issues.
What is more, in view of the lack of comparable meta-analyses conducted on British samples, a country-specific analysis of the validity of GMA and specific cognitive ability tests is warranted in order to accurately assess the predictive validity of such tests in the UK. Therefore, the current investigation sought to address these limitations by conducting the first independent and comprehensive meta-analysis of GMA and specific cognitive ability tests across a range of occupations consisting exclusively of United Kingdom samples.

General versus Specific Cognitive Abilities

An extensive body of research conducted over the last 50 years has led to the general consensus that cognitive abilities manifest a hierarchical structure (see, for example, Carretta & Ree, 2000; Carroll, 1993; Jensen, 1998; Ree & Carretta, 1998). In conjunction with this, many tests have been developed to measure both GMA and specific cognitive abilities, such as numerical, spatial, verbal, and perceptual ability. However, even in the US, in contrast to the extensive research regarding the predictive validity of GMA, very little research has been conducted examining the predictive validity of specific cognitive ability tests. For example, Hunter and Hunter (1984) and Hartigan and Wigdor (1989) partially examined this issue by investigating the predictive validity of a cognitive ability composite and a perceptual ability composite (as assessed by the GATB) within civilian settings. The results from both of these studies revealed that the perceptual ability composite had generally lower predictive validity than the cognitive ability composite. For example, in Hunter and Hunter's (1984) presentation of the U.S. Employment Service validation studies, the mean validities found for the cognitive ability composite ranged from .23 to .58 for job performance, and from .50 to .65 for training success (depending on job complexity). However, in the case of the perceptual ability composite, mean validities ranged from .24 to .52 for job performance and from .26 to .53 for training success.
A further piece of research which supports the conclusion that perceptual ability tests have generally lower predictive validity than general cognitive ability is Hunter's (1981; 1984, cited in Hunter, 1986) re-analysis of Ghiselli's data (1966, 1973). These results revealed that for general cognitive ability, validities ranged from .27 to .61 for job performance, and from .37 to .87 for training success (corrected for measurement error and range restriction). However, for perceptual ability, lower estimates ranging from .20 to .46 were found for job performance.

On the question of general versus specific cognitive abilities as predictors of subsequent job performance, findings from meta-analyses conducted in the US have been unequivocal. Several studies indicate GMA to be the most robust predictor, with specific abilities adding little or no incremental validity to predictor-criterion relationships (e.g. Carretta & Ree, 1996; McHenry, Hough, Toquam, Hanson, & Ashworth, 1990; Olea & Ree, 1994; Ree & Carretta, 1994; Ree & Earles, 1991; Ree, Earles, & Teachout, 1994). However, tests of specific cognitive ability are highly popular for selection purposes in the UK, with, for instance, many organizations using notionally separate tests of verbal, numerical, and abstract reasoning (that is, regardless of underlying construct correlations with g). Meta-analyses in the US have typically examined the issue of the incremental validity of tests of specific abilities, not their stand-alone validity if used by selection practitioners as multiple tests of different aspects of cognitive ability. This is typically the way in which specific ability tests are used for selection in the UK, regardless of existing findings that specific abilities correlate very highly with GMA.

Although not examining the validity of specific cognitive ability tests across a range of job groupings, some research has been conducted within narrower job groupings (e.g. Hirsh, Northrop, & Schmidt, 1986; Levine, Spector, Menon, Narayanan, & Cannon-Bowers, 1996; Pearlman, Schmidt, & Hunter, 1980; Vinchur, Schippmann, Switzer, & Roth, 1998). For example, Levine et al. (1996) examined the criterion validity of perceptual and cognitive ability tests for craft jobs in the utility industry. In their study, they found that perceptual tests demonstrated a corrected validity of .34 when predicting job performance, and .36 when predicting training success. However, these validity estimates may not accurately represent the predictive validity of perceptual ability tests, since the classification of tests under their perceptual test category is problematic.

The main conclusion to be drawn from these USA results is that the magnitude of the predictive validities estimated varies according to the type of cognitive ability test used, and that GMA or overall cognitive ability generally appears to be a better predictor of future job performance and training success than specific cognitive ability tests. In addition, as indicated by Hirsh et al.'s results, validity generalisation may not be evident for all tests in all cases. However, as mentioned at the outset, the current body of research is limited by the relative paucity of studies comprehensively examining the predictive validity of a range of specific cognitive ability tests across a range of job groupings. Therefore, one of the main aims of the current research was to provide a more detailed examination of the predictive validity of specific cognitive ability tests across a range of occupational groups. Also, in view of the variability in the validity magnitudes reported in these American meta-analytic investigations, an important issue was to ascertain a more accurate estimate of the predictive validities of GMA and specific cognitive ability tests in the UK.

Criterion Validity Across Occupational Groups

One of the first examinations of the predictive validity of GMA and cognitive ability tests across different occupational groups is Hunter's re-analysis (1981; 1984) of Ghiselli's data (1966, 1973). These covered a range of occupational groups including managerial, clerical, sales, protective professions, service workers, vehicle operators, sales clerks, trades and crafts jobs, and elementary industrial jobs.
Following corrections for sampling error, measurement error, and range restriction, Hunter reported a range of validities from .61 for sales persons to .27 for sales clerks when predicting job performance. For training success, validities ranged from .87 for protective professionals to .37 for vehicle operators. However, due to the unavailability of sample size details and information concerning the variability of the coefficients, Hunter was unable to establish the generalisability of the results across each job family.

Despite this, additional studies have subsequently been conducted which do address this limitation. For example, in Hunter and Hunter's (1984; Hunter, 1986) examination of the validity of the GATB for the US Employment Service, corrected validities were estimated across broad categories of jobs defined by their level of complexity. Overall, they found that the criterion validity of cognitive tests when predicting job performance was moderated by occupational group membership. That is, they found that operational validity was highest for high complexity jobs, and decreased as the level of job complexity decreased. For example, corrected validities for job performance ranging from .58 for high complexity general job groups down to .23 for low complexity industrial job groups were reported. For training success, corrected validities ranged from .65 for high complexity industrial job groups to .50 for lower complexity general job groups. Therefore, the criterion validity of cognitive ability tests appears to be moderated by job complexity for both job performance and training success, but particularly for the former.

Taken as a whole, these studies indicate that occupational group may be a relevant moderator of the predictive validity of cognitive ability tests. Yet the moderating effect of occupational group is an issue which has not been comprehensively examined in previous meta-analyses, even in the USA, let alone in the UK.

In order to achieve these goals, two work-related criteria were examined: overall job performance ratings and training success. This choice was based on three principal factors: 1) USA meta-analyses have only used these two criteria and therefore, since one of the current aims is to compare these results with those of previous USA meta-analyses, the same criteria were used here; 2) the practical consideration that these criteria are the most frequently reported in the literature; and 3) the scarcity of primary studies including alternative criteria (such as turnover, absenteeism, promotion, etc.) would have meant that meta-analyses including such criteria would not have been possible.

To summarise, the current meta-analytic investigation addressed four main research questions:

1: Are GMA and cognitive ability tests valid predictors of job performance and training success in UK samples?
2: Does the operational validity of GMA and cognitive ability tests generalise across UK samples and settings?
3: Does the operational validity of GMA tests generalise across different occupational groups?
4: Are the results attained from this UK investigation comparable to those found in previous US and other European country meta-analyses?

Method

Compilation of Database

The process of compiling a database of sufficient scope and size to permit investigation of the current issues entailed a number of key stages. The first of these involved conducting an exhaustive literature search for potential studies to be included. Firstly, an extensive search was conducted using the PsycInfo and BIDS databases. Secondly, a manual article-by-article search was performed through major journals and other publications in the field of organizational psychology, for example the Journal of Occupational and Organizational Psychology, International Journal of Selection and Assessment, Journal of the National Institute of Industrial Psychology, Occupational Psychology, Personnel Journal, Journal of Applied Psychology, European Journal of Applied Psychology, Psychological Review, Human Factors, The Occupational Psychologist, British Journal of Psychology, and the Guidance and Assessment Review, amongst others. Thirdly, test manuals and books thought likely to include data were also inspected for potential studies. Fourthly, individual well-known researchers, practitioners, and test publishing companies were contacted and asked for reports containing criterion-related validity data. Finally, the reference sections of obtained articles were also inspected for additional papers not located by other means.

Following the collection of studies, two researchers served as judges, independently coding and classifying the studies and the information contained within. The inclusion criteria stipulated were that: (i) studies report a validity coefficient relating GMA and/or cognitive ability measures to overall job performance and/or training success criteria; (ii) only UK samples should be included; (iii) samples should consist of employees or trainees, and not students (unless these were part of a formal occupational apprenticeship training program); and (iv) there should be sufficient information to enable appropriate classification of the cognitive ability tests (e.g. GMA, verbal, and numerical ability) and criterion measures used (i.e. overall job performance, training success).
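Purely as an illustration, the four inclusion criteria above can be expressed as predicates over a coded study record. The record fields and function below are hypothetical and not part of the authors' actual coding scheme:

```python
# Illustrative sketch only: hypothetical field names, not the authors' coding scheme.

def meets_inclusion_criteria(study: dict) -> bool:
    # (i) a validity coefficient must be reported
    reports_validity = study.get("validity_coefficient") is not None
    # (ii) UK samples only
    uk_sample = study.get("country") == "UK"
    # (iii) employees or trainees, not students (formal apprentices excepted)
    occupational = study.get("sample_type") in ("employees", "trainees", "apprentices")
    # (iv) test type and criterion measure must be classifiable
    classifiable = study.get("test_type_known", False) and study.get("criterion_known", False)
    return all([reports_validity, uk_sample, occupational, classifiable])

study = {"validity_coefficient": 0.32, "country": "UK", "sample_type": "trainees",
         "test_type_known": True, "criterion_known": True}
print(meets_inclusion_criteria(study))  # True
```

A study failing any one predicate (for instance, a student sample outside a formal apprenticeship) would be excluded.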
Classification of GMA and Cognitive Ability Tests

The first step in coding the study details involved classifying the mental ability test measures used in primary studies into the GMA or cognitive ability test type categories of interest within the present investigation. These consisted of measures of general mental ability (g or GMA) and of numerical, verbal, spatial-mechanical, and perceptual-clerical ability. As in previous studies (e.g. Ghiselli, 1966), GMA and cognitive ability tests were classified in line with Philip Vernon's classification of tests, according to the construct or ability factors measured (see, for example, Vernon, 1956, 1961; Vernon & Parry, 1949). It is important to note that Vernon's model suggested that two levels capture the hierarchy of abilities, and that more recently the massive factor-analytic work by Carroll (1993) suggested that three levels better capture the hierarchy of abilities. In both models, the highest level corresponds to GMA, and Vernon's first level is very similar to Carroll's second level. Ree and Carretta (1994) have also found that Vernon's model emerged from factor analyses of the ASVAB in the US armed services.

To enable the classification of measures, descriptions and test information available within individual articles were consulted. Where such information was lacking or insufficient, clarification was sought from the psychometric literature. This included consulting relevant books (e.g. Carroll, 1993; Ghiselli, 1966; Vernon, 1961, 1971; Vernon & Parry, 1949), articles, and test manuals which contained test descriptions or statistical information relating to the underlying ability factors measured. Each mental ability test was classified by each researcher into one of the categories mentioned previously (see Appendix 1 for a listing of the tests included under each test type category).

Classification of Jobs into Occupational Categories

The classification of jobs into occupational categories involved using a number of information sources.
Firstly, job and occupational category descriptions from individual articles included within the database were used to group jobs according to naturally occurring job types (e.g. all clerical samples were categorised under the clerical job category). In cases where there was insufficient information, or where such explicit similarities were not available, additional information was sought to clarify the appropriate classification. This included using information such as: (1) job and task descriptions, for the jobs contained within the individual studies, from the Dictionary of Occupational Titles (DOT; US Department of Labor, 1977); and (2) job category classifications used in previous studies (e.g. Hunter & Hunter, 1984; Pearlman et al., 1980). Overall, this resulted in the classification of jobs according to seven broad categories for the job performance ratings criterion database: Clerical and Administrative jobs; Engineers; Professionals; Drivers; Operators and Spotters; Managers and Supervisors; and Sales and Advisors. The training success criterion database consisted of six broad categories: Clerical and Administrative; Engineers; Health Professionals; Drivers; Operators, Coders and Air Traffic Control; and Trade and Skilled Workers. In addition to these, a further category for each criterion database was added, comprising mixed occupational groups cited as such within the original studies (see Appendix 2 for a listing of the jobs included within each occupational category).

Compilation of Validity Distributions

The next stage in developing the current database involved compiling the validity distributions upon which each meta-analysis could be conducted. Only one validity coefficient was included from each sample for each ability test and occupational category combination. In cases where more than one coefficient from the same sample was reported (e.g. two numerical ability tests), these were combined using one of two methods. Where correlations between the measures were available, a composite was calculated using Mosier's formula (see Hunter & Schmidt, 1990, for a full description). In cases where inter-correlation information was unavailable, average correlations were calculated. The resulting single coefficients were those used within the meta-analyses.

Database

The resulting database consisted of 56 individual papers and books reporting 283 independent samples for the ability test-criterion combination database: 60 independent samples with overall job performance as the criterion (N = 13,262), and 223 independent samples with training success as the criterion (N = 75,311). For the ability test-occupation-criterion combination database, there was a total of 105 independent samples: 43 with overall job performance as the criterion (N = 6,644), and 62 with training success as the criterion (N = 20,005). It is important to note that a number of studies were conducted before 1960, which could suggest to some readers that possible changes in the nature of jobs, the type of applicants, and other factors might potentially affect the validity of the tests. This problem was exhaustively examined by the National Academy of Sciences panel (Hartigan & Wigdor, 1989), which found no evidence of a decline in validity over time. Also, it must be noted that an examination of the studies included in the database did not reveal that specific tests (e.g. spatial/mechanical tests) were used more often with one occupational group than with others.

Procedure

Once the database had been compiled, the psychometric meta-analytic formulas developed by Hunter and Schmidt (1990; Hunter & Schmidt, 2000) were applied. These allow the estimation of the percentage of variance in observed validities which can be attributed to artifactual errors, and of the operational validity one can expect once artifactual error sources are removed. The artifactual errors considered within the current investigation included direct range restriction in the predictor scores, predictor and criterion unreliability, and sampling error. However, since our interest lies in the operational validity of GMA and cognitive ability tests (as opposed to their theoretical value), the observed mean validity is only corrected for criterion unreliability and range restriction in the predictor. Predictor unreliability estimates are only used to eliminate artifactual variability in the calculation of the standard deviation of the operational validity (SDrho) (see Hunter & Schmidt, 1990, 2000, for further explanation).

Artifact Distributions

To correct for artifactual errors within the meta-analyses, the most common technique is the development of specific artifact distributions for the error sources of interest. Within the current investigation, this involved recording and collating all relevant information pertaining to range restriction and predictor and criterion unreliability, by consulting a number of information sources: (1) primary studies; (2) general references; and (3) test manuals. Sufficient data regarding range restriction and predictor reliabilities were available to develop specific empirical artifact distributions. These provided a sample-weighted average of .60 (SD = .24) for range restriction. This value is similar to the one used by Hunter and Hunter (1984) and by Hermelin and Robertson (2001). For predictor reliability, the average test-retest reliability was used (as recommended by Schmidt & Hunter, 1999), resulting in an estimate of .85 (SD = .05). In the case of criterion reliability, there was insufficient information to enable the development of specific distributions. Therefore, the alternative of using previously well-established criterion reliabilities was adopted (see Hunter & Schmidt, 1990). Since the estimate of interest in such cases is the inter-rater reliability (Hunter, 1986; Hunter & Hirsh, 1987; Schmidt & Hunter, 1996), the average reliability estimate of .52 (SD = .09) was used (Viswesvaran, Ones, & Schmidt, 1996).
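The two corrections described in this section can be sketched in a few lines. This is a simplified illustration, not the authors' exact computation: it applies a criterion-unreliability correction followed by Thorndike's Case II formula for direct range restriction, using the artifact means reported above (u = .60 for range restriction, .52 for criterion reliability) as defaults:

```python
import math

def correct_criterion_unreliability(r, ryy):
    """Disattenuate an observed validity for measurement error in the criterion."""
    return r / math.sqrt(ryy)

def correct_range_restriction(r, u):
    """Thorndike Case II correction for direct range restriction in the predictor;
    u is the ratio of restricted to unrestricted predictor SDs."""
    return (r / u) / math.sqrt(1 + r ** 2 * (1 / u ** 2 - 1))

def operational_validity(r_obs, ryy=0.52, u=0.60):
    # Correct for criterion unreliability, then for range restriction.
    # (The ordering and reliability level are simplifications of the full procedure.)
    return correct_range_restriction(correct_criterion_unreliability(r_obs, ryy), u)

# e.g. an observed mean validity of .25 rises to roughly .52 after both corrections
print(round(operational_validity(0.25), 2))
```

Note how strongly the two corrections inflate small observed validities; this is why the operational validities (rho) reported in the Results section are much larger than the mean observed validities (r).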
Although this estimate is slightly lower than the estimate of .60 used by Hunter and Hunter (1984), additional research does suggest that this is an accurate estimate of job performance reliability (Rothstein, 1990; Salgado & Moscoso, 1996; Salgado et al., 2003a, 2003b). For training success, the sample-weighted average reliability estimate of .80 (SD = .10) used by Hunter and Hunter (1984; see also Hunter, 1986) was adopted. Note that these artifact distributions were drawn from previous meta-analyses conducted in the USA. There may, of course, be differences in artifact values across studies conducted in other countries, including the UK. However, one previous UK meta-analysis similarly used these distribution values (Hermelin & Robertson, 2001).

Results

GMA and Specific Cognitive Ability Tests

The first series of meta-analyses examined the predictive validity of GMA and specific cognitive ability tests as predictors of job performance and training success. Tables 1 and 2 present, respectively, the results for each ability test-job performance and ability test-training success combination. These show (from left to right) the number of validity coefficients (K) and total sample size (N) upon which the analysis was based. Also shown are the mean observed validities (r) and their standard deviation (SDr), the operational validities one can expect once artifactual error from range restriction in predictor scores and criterion unreliability has been removed (rho), and their standard deviation (SDrho). The next two columns present the percentage of variance explained by artifactual errors (%VE) and the 90% credibility values (90%CV). This last figure denotes the validity value at or above which 90% of all true validities lie and, consequently, the minimum value one can expect in 9 out of 10 cases.
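To make the table columns concrete, the bare-bones sketch below computes a sample-weighted mean validity, the variance expected from sampling error alone, a simplified %VE, and a 90% credibility value from rho and SDrho. It deliberately omits the artifact-distribution corrections of the full Hunter-Schmidt procedure, so it is an approximation for illustration only:

```python
def meta_stats(rs, ns):
    """Sample-weighted mean validity, observed variance, and the percentage of
    that variance attributable to sampling error alone (a simplified %VE)."""
    n_tot = sum(ns)
    r_bar = sum(r * n for r, n in zip(rs, ns)) / n_tot
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / n_tot
    n_bar = n_tot / len(rs)
    var_err = (1 - r_bar ** 2) ** 2 / (n_bar - 1)   # expected sampling error variance
    pct_ve = 100.0 if var_obs == 0 else min(100.0, 100.0 * var_err / var_obs)
    return r_bar, var_obs, pct_ve

def credibility_value_90(rho, sd_rho):
    """Validity value at or above which 90% of true validities are expected to lie."""
    return rho - 1.28 * sd_rho

# The GMA / job performance figures reported below (rho = .48, SDrho = .24)
# give a 90% credibility value of about .17.
print(round(credibility_value_90(0.48, 0.24), 2))
```

When %VE approaches 100, essentially all observed variability is artifactual and SDrho shrinks toward zero, which is why the 90%CV then coincides with rho itself.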

Insert Tables 1 & 2 about here

Job Performance

A total of 60 independent samples, with a total sample size of 13,262, contributed to these meta-analyses. The number of independent samples contributing to each ability test and job performance combination ranged from a maximum of 20 for numerical ability tests to a minimum of 7 for both perceptual and spatial ability tests. As indicated by the operational validities reported, all ability tests demonstrate good predictive validity for overall job performance. Perceptual ability tests emerged as the best predictors, with an operational validity of .50 (SD = .00). All the variance in the observed validity was explained by artifactual errors, and consequently the 90% credibility value was also .50. This indicates that the validity of perceptual tests does generalise across samples and settings. The percentage of variance explained is also indicative of second-order sampling error, in which case the sample of coefficients included within the current analysis may not be totally representative of the general population. However, as highlighted by Hunter and Schmidt (1990), the main impact of second-order sampling error is not on the estimation of means or operational validities, but rather on the estimates of standard deviations. In view of this, the 90% credibility value observed may change as the number of studies and sample sizes increase. The next highest predictor was GMA, which also showed a high validity, with an operational validity of .48 (SD = .24). In this case, the 90% credibility value of .17 also indicated that the validity of GMA tests generalises across samples and settings. However, the percentage of variance explained by artifactual errors (45%) and the standard deviation of the operational validity (SDrho = .24) indicate that other factors may moderate the magnitude of the operational validity of GMA tests.

The third best predictors of job performance ratings were numerical ability tests, which showed an operational validity of .42 (SD = .12) and a 90% credibility value of .26, indicating that the validity of numerical ability tests also generalises across samples and settings. The percentage of variance explained by artifactual errors (75%) also indicates that the remaining variance can be considered attributable to additional artifactual error sources not considered within the current analyses (e.g., imperfect construct measurement, range restriction in criterion scores, and clerical errors; see Hunter & Schmidt, 1990, for a full listing of possible error sources). Verbal and spatial ability tests showed slightly lower operational validities of .39 (SD = .15) and .35 (SD = .00) respectively. Nonetheless, in both cases the 90% credibility values indicate that both have generalised validity across samples and settings. However, there was evidence of second-order sampling error. As can be seen in Table 1, the standard deviation of rho for GMA, verbal ability, and numerical ability is larger than the standard deviation of the observed validity. This is because not all the observed variability was explained by the artifactual errors, and the residual variance is corrected for the effects of predictor and criterion reliability in order to obtain an unbiased estimate of the standard deviation of rho. Some cells in Table 1 have a relatively small number of studies, although the number is still acceptable for meta-analysis. We therefore conducted a so-called file-drawer analysis (Rosenthal, 1979; Hirsh et al., 1996). Ashworth, Callender, Osburn, and Boyle (1992) developed a method for assessing the vulnerability of validity generalization results to unrepresented or missing studies, suggesting that one calculate the effects on validity when 10% of studies are missing and their validity is zero.
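Assuming the hypothetical missing studies have the study pool's average sample size, the Ashworth et al. (1992) adjustment amounts to shrinking the sample-weighted mean validity by the proportion of studies added; a minimal sketch (function name ours):

```python
def file_drawer_adjusted(rho, k, missing_frac=0.10):
    """Ashworth, Callender, Osburn, and Boyle (1992)-style check:
    suppose an extra `missing_frac` * k studies of average sample size
    exist with zero validity.  The sample-weighted mean validity then
    shrinks by 1 / (1 + missing_frac).  Function name is ours; the
    equal-average-sample-size assumption is ours as well."""
    return rho * k / (k + missing_frac * k)

# A rho of .48 based on K = 60 studies falls only to about .44 if six
# unlocated zero-validity studies of average size existed.
```

Because the shrinkage factor is fixed at roughly 0.91 for a 10% allowance, sizeable operational validities cannot be driven toward zero by this check, which is why it leaves the validity generalisation conclusions below unchanged.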
Therefore, we calculated additional estimates to represent what the validity would be if we were unable to locate 10% of the studies carried out and if these studies showed zero validity. The last three columns in Table 1 report these new (hypothetical) estimates for every design cell: the lowest rho value, new standard deviation, and lowest 90%CV. As can be seen, adding 10% of studies with zero validity has no effect on our conclusion that there is validity generalization for GMA and specific cognitive abilities for predicting job performance.

Training Success

As reported in Table 2, the total number of validity coefficients (K = 223) and total sample size (N = 75,311) contributing to this series of analyses was larger than for the job performance analyses. Across the range of ability test and training success combinations, the number of coefficients ranged from a maximum of 53 to a minimum of 33, with sample sizes ranging from 17,982 to 12,679. Consequently, the large number of coefficients and the very large total sample size can be expected to assure the stability of the results. The results indicate that all ability tests are good predictors of training success. Numerical ability tests emerged as the best predictors, with an operational validity of .54 (SD = .09); 81% of the variance was explained by artifactual error and the 90% credibility value was .43. Consequently, it can be concluded that the validity of numerical tests does generalise across samples and settings and, furthermore, that there is little room for moderators. The next best predictors were GMA and perceptual ability tests, both showing an operational validity of .50 (SD = .13 and .12 respectively for GMA and perceptual tests). A similar percentage of the variance was explained in both cases, with 64% explained for GMA tests and 66% for perceptual tests. Finally, the 90% credibility values were also similar: .33 for GMA tests and .35 for perceptual ability tests. Therefore, the validity of both GMA and perceptual ability tests can be seen to generalise across samples and settings. Verbal ability tests were also found to have a high operational validity (.49, SD = .10), and the 90% credibility value of .36 indicates that their validity generalises across samples and settings.
The final ability test type analysed was spatial ability tests. These showed an operational validity of .42 (SD = .00), and the 90% credibility value was identical, as all of the observed variance was explained by artifactual errors. Thus, the validity of these tests also generalises across samples and settings. However, there was also evidence of second-order sampling error. The results of the file-drawer analysis using the Ashworth et al. (1992) method appear in the last three columns of Table 2. Although the number of studies and the total sample size did not require this analysis, it was carried out as an additional confirmation of our conclusions. These file-drawer analyses also showed that, for training success, there is validity generalization and that the magnitudes of the new rho estimates are very similar to the original ones.

Occupational Groups

The following series of analyses examined the predictive validity of GMA tests as predictors of both job performance and training success across the different occupational groups represented within the current database. In this series of meta-analyses we used the same studies included in the previous meta-analyses, but excluded studies in which specific cognitive ability tests were used as an estimate of GMA. This was done because there was not a sufficient number of studies to examine the validity of every specific cognitive ability for each occupational group. This decision resulted in a smaller number of studies in comparison with the meta-analyses reported in Tables 1 and 2. The first series of meta-analyses, looking specifically at job performance, is presented in Table 3, whilst the results for the training success criterion are presented in Table 4.

Insert Tables 3 & 4 about here

Job Performance

The database used for this series of analyses consisted of 43 coefficients with a total sample size of 6,644. The largest operational validity found was for professional occupations: .74 (SD = .23). Furthermore, the 90% credibility value of .45 indicates that validity generalises across professional jobs. However, although a large percentage of the variance was explained by artifactual errors (61%), the results indicate that there may be scope for an examination of possible moderating factors within this occupational group. The operational validities estimated for engineer and manager jobs were also high, with GMA tests showing operational validities of .70 (SD = .42) and .69 (SD = .00) for engineers and managers respectively. In the case of managers, all of the observed variability in validities was explained by artifactual errors, and consequently the 90% credibility value was also .69. Thus, the validity of GMA for predicting overall job performance generalises across managerial occupations. However, there was also evidence of second-order sampling error. The 90% credibility value of GMA tests for engineer occupations (.16) also indicates that their validity generalises across all engineering occupations. However, this value, along with the percentage of variance explained (30%) and the standard deviation of rho (.42), indicates that moderators may affect the validity observed for these measures. The next highest operational validities were for sales and operator occupations, for which GMA tests showed operational validities of .55 (SD = .31) and .53 (SD = .00) respectively. The 90% credibility values (.15 and .53 respectively) also indicated that validity generalises across both occupations. However, in the case of sales occupations, additional moderators may affect the validity of GMA tests. Furthermore, there is also evidence of second-order sampling error for operator occupations.

The final three occupational groups analysed were driver, clerical, and mixed occupational groups. Amongst these groups, GMA tests were found to have moderate to high operational validities, ranging from .32 (SD = .00) for clerical jobs and .37 (SD = .00) for driver jobs to .40 (SD = .00) for mixed occupations. In all cases, all of the variance was accounted for by the artifactual error sources considered here, and consequently validity generalised across all occupations. However, there was also evidence of second-order sampling error. The file-drawer analysis showed that the addition of 10% of new studies with zero validity had no significant effect on the validity magnitudes and that, therefore, the conclusions remain the same for all occupations.

Training Success

The operational validity magnitudes of GMA tests were all large, ranging from .64 for engineering occupations to .47 for driver occupations (see Table 4). Moreover, apart from indicating that GMA tests are very good predictors of training success, the 90% credibility values indicated that their validity generalises across occupational groups. 90% credibility values of .46 and .49 were found for engineer and professional occupations respectively. In both cases the percentage of variance explained was high, with 68% and 88% explained for engineer and professional occupations respectively. Furthermore, in the case of professional occupations the remaining variability in the validity of GMA tests can be considered attributable to additional potential error sources. Clerical, skilled, operator, and mixed occupations all showed very similar operational validity magnitudes (.55 for clerical, skilled, and mixed occupational groups, and .54 for operator jobs). Furthermore, since the variance in validities was largely, if not totally, accounted for by artifactual errors, validity generalised across each occupational group, with 90% credibility values of .41, .37, .45, and .55 for clerical, skilled, operator, and mixed occupational groups respectively. The lowest operational validity, .47 (SD = .00), was for driver jobs, although, as in all other occupations examined, this validity is still of sufficient magnitude to be of practical value. All of the variance in GMA's validity was accounted for by artifactual error and, consequently, the 90% credibility value was identical to the operational validity. Therefore, it can be concluded that the validity of GMA tests generalises across driver occupations. Nevertheless, the evidence of second-order sampling error indicates that the 90% credibility value may vary as sample sizes increase. As was found in the previous meta-analyses reported in this article, the results of the file-drawer analyses also showed that, for training success, there is validity generalization and that the magnitudes of the new rho estimates were very similar to the original ones. Therefore, the conclusions remain the same after this analysis.

Discussion

Taken as a whole, the results of the present investigation indicate that GMA and cognitive ability tests are robust predictors of job performance and training success across a wide range of occupations in the UK. Furthermore, whilst some differences were observed across different occupational groups and different criteria, the findings from the present study are largely in line with those found in earlier meta-analytic studies in the USA.

General Versus Specific Mental Abilities

The crucial overall finding from this series of meta-analyses is that all GMA and cognitive ability tests included within the present investigation were found to be valid predictors of job performance and training success. For job performance, the operational validities observed ranged from .50 (perceptual ability tests) to .35 (spatial tests), indicating that all tests demonstrate moderate to high predictive validity. For training success,

operational validities were even greater, ranging from .54 for numerical ability tests to .42 for spatial ability tests. The larger operational validities observed for training success appear consistent with previous research, which reveals a tendency for higher predictive validities for training criteria compared to job performance (e.g., Pearlman et al., 1980). Furthermore, contrary to previous research (Hirsh et al., 1986), which failed to find validity generalisation for some tests, all the tests analysed here showed positive credibility values, substantially different from zero, thus indicating that validity does generalise when predicting job performance and training success criteria. It is interesting to note that, when comparing the validities of GMA versus specific cognitive ability tests, the 90% credibility intervals overlap completely for both job performance and training success. In other words, the GMA credibility interval included the respective intervals for verbal, numerical, perceptual, and spatial abilities. A note of caution is warranted, however, with regard to direct comparisons between findings emerging from meta-analyses computed using different databases of primary studies. Whilst such comparisons are possible and valuable, we should be mindful of differences in the composition and distribution of primary studies, especially concerning differences in the distribution of job complexity across primary studies (Salgado & Anderson, 2002, 2003). Note, for instance, that Schmidt (2002) also found operational validities in the region of .50 for perceptual ability tests for jobs of similar complexity to those included in the present UK-based meta-analysis. Job complexity has emerged from several meta-analyses internationally as the principal moderator of predictor-criterion relationships, and indeed this was the case in the present meta-analysis of UK studies of tests of GMA and specific abilities.
We do not argue that different meta-analyses internationally cannot be compared per se, simply that some caution is warranted in comparing the distributions of primary studies, especially in terms of job complexity differences. An interesting finding of the present study concerns the variability in the magnitudes of the validities observed for the different ability tests examined. For example, when predicting job performance, GMA and perceptual ability tests demonstrated the highest predictive validities (rho = .48 and .50, respectively). This pattern was similar for the training success criterion, where both GMA and perceptual ability tests showed an operational validity of .50, and numerical ability tests showed an operational validity of .54. Both sets of results are slightly surprising in view of previous research demonstrating that perceptual ability tests have lower predictive validities than GMA tests (e.g., Hartigan & Wigdor, 1989; Hunter, 1981, 1984, cited in Hunter, 1986; Hunter & Hunter, 1984). However, an important point to note within the current analyses is that both the SDrho and the %VE for GMA tests (particularly when predicting job performance) indicate that there is room for moderators. With respect to the findings for perceptual tests, a further point to note is the possibility that tests included within the perceptual-clerical test category may have been more g-saturated than the other types of tests we examined in this study. For example, factor analysis of clerical and instructions tests has revealed that such measures can prove to be as good a measure of general mental ability as abstraction and matrices tests (see, for example, Vernon, 1949). Consequently, the high operational validities observed here for perceptual tests may be partly due to these tests' measurement of general mental ability in addition to pure clerical and perceptual ability. A second point to note is the relative consistency of the operational validity magnitudes across the different ability tests examined here.
These results are of particular interest in view of current research examining the incremental validity of specific cognitive abilities. This has shown that, for training success and job performance, GMA is the best predictor, with little incremental validity for specific cognitive abilities (e.g., Carretta & Ree,


PERSONALITY AND PERFORMANCE 9. Personality and Performance at the Beginning of the New Millennium: What Do We Know and Where Do We Go Next? PERSONALITY AND PERFORMANCE 9 Personality and Performance at the Beginning of the New Millennium: What Do We Know and Where Do We Go Next? Murray R. Barrick, Michael K. Mount and Timothy A. Judge* As we

More information

ADMINISTRATIVE INTERNAL AUDIT Board of Trustees Approval: 03/10/2004 CHAPTER 1 Date of Last Cabinet Review: 04/07/2017 POLICY 3.

ADMINISTRATIVE INTERNAL AUDIT Board of Trustees Approval: 03/10/2004 CHAPTER 1 Date of Last Cabinet Review: 04/07/2017 POLICY 3. INTERNAL AUDIT Board of Trustees Approval: 03/10/2004 POLICY 3.01 Page 1 of 14 I. POLICY The Internal Audit Department assists Salt Lake Community College in accomplishing its objectives by providing an

More information

Dependable on-the-job performance criterion

Dependable on-the-job performance criterion SECURITY OFFICER WORK SAMPLE 29 How Useful are Work Samples in Validational Studies? Douglas N. Jackson*, William G. Harris, Michael C. Ashton, Julie M. McCarthy and Paul F. Tremblay Some job tasks do

More information

The Stability of Validity Coefficients Over Time: Ackerman's (1988) Model and the General Aptitude Test Battery

The Stability of Validity Coefficients Over Time: Ackerman's (1988) Model and the General Aptitude Test Battery Journal of Applied Psychology 2001, Vol. 86, No. 1, 60-79 Copyright 2001 by the American Psychological Association, Inc. 0021-9010/01/S5.00 DOI: 10.1037//0021-9010.86.1.60 The Stability of Validity Coefficients

More information

Opportunities to Improve Testing Research and Practice

Opportunities to Improve Testing Research and Practice Opportunities to Improve Testing Research and Practice Joel P. Wiesen, Ph.D. jwiesen@appliedpersonnelresearch.com IPAC 2011 Conference Washington, D.C. July 18, 2011 Wiesen (2011), International Personnel

More information

Issues In Validity Generalization The Criterion Problem

Issues In Validity Generalization The Criterion Problem University of Central Florida Electronic Theses and Dissertations Masters Thesis (Open Access) Issues In Validity Generalization The Criterion Problem 2010 Raquel Hodge University of Central Florida Find

More information

WorkKeys Innovations: A Holistic Solution

WorkKeys Innovations: A Holistic Solution WorkKeys Innovations: A Holistic Solution presented at the 2007 Michigan WorkKeys Conference Steve Robbins, AVP, Applied Research, ACT, Inc. Overview Why we should care about combining cognitive- and personality-based

More information

A MATTER OF CONTEXT: A META-ANALYTIC INVESTIGATION OF THE RELATIVE VALIDITY OF CONTEXTUALIZED AND NONCONTEXTUALIZED PERSONALITY MEASURES

A MATTER OF CONTEXT: A META-ANALYTIC INVESTIGATION OF THE RELATIVE VALIDITY OF CONTEXTUALIZED AND NONCONTEXTUALIZED PERSONALITY MEASURES PERSONNEL PSYCHOLOGY 2012, 65, 445 494 A MATTER OF CONTEXT: A META-ANALYTIC INVESTIGATION OF THE RELATIVE VALIDITY OF CONTEXTUALIZED AND NONCONTEXTUALIZED PERSONALITY MEASURES JONATHAN A. SHAFFER West

More information

The predictive validity of selection for entry into postgraduate training in general practice: evidence from three longitudinal studies

The predictive validity of selection for entry into postgraduate training in general practice: evidence from three longitudinal studies The predictive validity of selection for entry into postgraduate training in general practice: evidence from three longitudinal studies ABSTRACT Background The selection methodology for UK general practice

More information

7 Conclusions. 7.1 General Discussion

7 Conclusions. 7.1 General Discussion 146 7 Conclusions The last chapter presents a final discussion of the results and the implications of this dissertation. More specifically, this chapter is structured as follows: The first part of this

More information

The Micro-Meritocracy: The distribution of merit throughout big class, micro class and gradational representations of the social structure

The Micro-Meritocracy: The distribution of merit throughout big class, micro class and gradational representations of the social structure Sub-brand to go here The Micro-Meritocracy: The distribution of merit throughout big class, micro class and gradational representations of the social structure Roxanne Connelly R.Connelly@ioe.ac.uk Social

More information

Investment Platforms Market Study Interim Report: Annex 8 Gap Analysis

Investment Platforms Market Study Interim Report: Annex 8 Gap Analysis MS17/1.2: Annex 8 Market Study Investment Platforms Market Study Interim Report: Annex 8 Gap July 2018 Annex 8: Gap Introduction 1. One measure of whether this market is working well for consumers is whether

More information

Brexit Survey November 2016

Brexit Survey November 2016 1 Summary: Brexit Survey November 2016 These results will not be used to take a political stance. They will inform our sector s Brexit negotiations and emphasise to government what our business needs are

More information

Workplace-based assessments in psychiatry: setting the scene

Workplace-based assessments in psychiatry: setting the scene 1 Workplace-based assessments in psychiatry: setting the scene Amit Malik and Dinesh Bhugra Background: changing socio-political context Over the last decade the socio-political world within which medicine

More information

Biographical Data in Employment Selection: Can Validities Be Made Generalizable?

Biographical Data in Employment Selection: Can Validities Be Made Generalizable? Journal of Applied Psychology 199, Vol. 75, o. 2, 175-184 Copyright 199 by the American Psychological Association, Inc. 21-91/9/S.75 Biographical Data in Employment Selection: Can Validities Be ade Generalizable?

More information

Investigating the Uniqueness and Usefulness of Proactive Personality in Organizational Research: A Meta-Analytic Review

Investigating the Uniqueness and Usefulness of Proactive Personality in Organizational Research: A Meta-Analytic Review Management Publications Management 7-10-2015 Investigating the Uniqueness and Usefulness of Proactive Personality in Organizational Research: A Meta-Analytic Review Matthias Spitzmuller Queens University

More information

9001:2015, ISO 14001:2015 & ISO

9001:2015, ISO 14001:2015 & ISO Quality management input comprises the standard requirements from ISO 9001:2015 which are deployed by our organization to achieve customer satisfaction through process control. Environmental input comprises

More information

Selection Definition. Selection criteria. Selection Methods

Selection Definition. Selection criteria. Selection Methods Selection Definition Selection is a variety of imperfect methods to aid the task of predicting which applicant will be most successful in meeting the demands of the job and be the best fit with the work

More information

THE Q-SORT METHOD: ASSESSING RELIABILITY AND CONSTRUCT VALIDITY OF QUESTIONNAIRE ITEMS AT A PRE-TESTING STAGE

THE Q-SORT METHOD: ASSESSING RELIABILITY AND CONSTRUCT VALIDITY OF QUESTIONNAIRE ITEMS AT A PRE-TESTING STAGE IE Working Paper DO8-3-I 5// THE Q-SORT METHOD: ASSESSING RELIABILITY AND CONSTRUCT VALIDITY OF QUESTIONNAIRE ITEMS AT A PRE-TESTING STAGE Abraham Y. Nahm Luis E. Solís-Galván S. Subba Rao University of

More information

Attitudes towards personnel selection methods in Lithuanian and Swedish samples

Attitudes towards personnel selection methods in Lithuanian and Swedish samples School of Social Sciences Psychology PS 5424 Spring 2008 Attitudes towards personnel selection methods in Lithuanian and Swedish samples Author: Simona Sudaviciute Supervisor: Abdul H. Mohammed, Ph D Examinor:

More information

GLOSSARY OF COMPENSATION TERMS

GLOSSARY OF COMPENSATION TERMS GLOSSARY OF COMPENSATION TERMS This compilation of terms is intended as a guide to the common words and phrases used in compensation administration. Most of these are courtesy of the American Compensation

More information

Chapter 16 Creating High-Performance Work Systems

Chapter 16 Creating High-Performance Work Systems Chapter 16 Creating High-Performance Work Systems MULTIPLE CHOICE 1 Which of the following statements captures the fundamental logic of high-performance work systems? a These are HR practices used to manage

More information

Kristin Gustavson * and Ingrid Borren

Kristin Gustavson * and Ingrid Borren Gustavson and Borren BMC Medical Research Methodology 2014, 14:133 RESEARCH ARTICLE Open Access Bias in the study of prediction of change: a Monte Carlo simulation study of the effects of selective attrition

More information

ADVERSE IMPACT: A Persistent Dilemma

ADVERSE IMPACT: A Persistent Dilemma 1 ADVERSE IMPACT: A Persistent Dilemma David Chan Catherine Clause Rick DeShon Danielle Jennings Amy Mills Elaine Pulakos William Rogers Jeff Ryer Joshua Sacco David Schmidt Lori Sheppard Matt Smith David

More information

4 RECRUITMENT AND HARD TO FILL VACANCIES

4 RECRUITMENT AND HARD TO FILL VACANCIES 4 RECRUITMENT AND HARD TO FILL VACANCIES This section of the survey investigates the extent to which firms have vacancies and whether this is a source of difficulty because some are difficult to fill.

More information

The circumstances in which anonymous marking is appropriate and when it is either not practical or inappropriate;

The circumstances in which anonymous marking is appropriate and when it is either not practical or inappropriate; College Policy on Marking and Moderation Introduction 1. This policy defines the College policy on the marking and moderation of all work that is formally assessed as part of a College award. It incorporates

More information

The Mahalanobis Distance index of WAIS-R subtest scatter: Psychometric properties in a healthy UK sample

The Mahalanobis Distance index of WAIS-R subtest scatter: Psychometric properties in a healthy UK sample British Journal of Clinical Psychology (1994), 33, 65-69 Printed in Great Britain 6 5 1994 The British Psychological Society The Mahalanobis Distance index of WAIS-R subtest scatter: Psychometric properties

More information

The secret to reducing hiring mistakes?

The secret to reducing hiring mistakes? IBM Software Thought Leadership Whitepaper The secret to reducing hiring mistakes? It s in the metrics By Dr. Rena Rasch, IBM Smarter Workforce The secret to reducing hiring mistakes? It s in the metrics

More information

Saville Consulting Wave Professional Styles Handbook

Saville Consulting Wave Professional Styles Handbook Saville Consulting Wave Professional Styles Handbook PART 4: TECHNICAL Chapter 18: Professional Styles Norms This manual has been generated electronically. Saville Consulting do not guarantee that it has

More information

Performance Appraisal: Dimensions and Determinants

Performance Appraisal: Dimensions and Determinants Appraisal: Dimensions and Determinants Ch.V.L.L.Kusuma Kumari Head of the department, Department of business studies, Malla reddy engineering college for women, Maisammaguda, Secunderabad. Abstract : The

More information

THE WORLD OF ORGANIZATION

THE WORLD OF ORGANIZATION 22 THE WORLD OF ORGANIZATION In today s world an individual alone can not achieve all the desired goals because any activity requires contributions from many persons. Therefore, people often get together

More information

The relationship between cognitive ability, the big five, task and contextual performance: a metaanalysis

The relationship between cognitive ability, the big five, task and contextual performance: a metaanalysis Florida International University FIU Digital Commons FIU Electronic Theses and Dissertations University Graduate School 9-22-2000 The relationship between cognitive ability, the big five, task and contextual

More information

A F E P. Re: Performance Reporting A European Discussion Paper

A F E P. Re: Performance Reporting A European Discussion Paper A F E P Association Française des Entreprises Privées EFRAG 35 Square de Meeûs B-1000 Brussels Paris, December 17, 2009 Re: Performance Reporting A European Discussion Paper We welcome the opportunity

More information

A Note on Sex, Geographic Mobility, and Career Advancement. By: William T. Markham, Patrick O. Macken, Charles M. Bonjean, Judy Corder

A Note on Sex, Geographic Mobility, and Career Advancement. By: William T. Markham, Patrick O. Macken, Charles M. Bonjean, Judy Corder A Note on Sex, Geographic Mobility, and Career Advancement By: William T. Markham, Patrick O. Macken, Charles M. Bonjean, Judy Corder This is a pre-copyedited, author-produced PDF of an article accepted

More information

HISTORY. The range and suitability of the work submitted

HISTORY. The range and suitability of the work submitted Overall grade boundaries HISTORY Grade: E D C B A Mark range: 0-7 8-15 16-22 23-28 29-36 The range and suitability of the work submitted The majority of senior examiners involved in assessment of this

More information

14 Organizing for strategic knowledge creation

14 Organizing for strategic knowledge creation 396 14 Organizing for strategic knowledge creation Often the limiting and enabling factor in organizational renewal is the organizational skill-base, and its capability to adapt. Therefore organizational-level

More information

CHAPTER 8 PERFORMANCE APPRAISAL OF A TRAINING PROGRAMME 8.1. INTRODUCTION

CHAPTER 8 PERFORMANCE APPRAISAL OF A TRAINING PROGRAMME 8.1. INTRODUCTION 168 CHAPTER 8 PERFORMANCE APPRAISAL OF A TRAINING PROGRAMME 8.1. INTRODUCTION Performance appraisal is the systematic, periodic and impartial rating of an employee s excellence in matters pertaining to

More information

GROUP EQUALITY & DIVERSITY POLICY

GROUP EQUALITY & DIVERSITY POLICY GROUP EQUALITY & DIVERSITY POLICY Group Equality & Diversity Policy Introduction Fair treatment is a moral and legal duty. Employers who treat employees fairly and flexibly will be best placed to recruit

More information

Reliability & Validity Evidence for PATH

Reliability & Validity Evidence for PATH Reliability & Validity Evidence for PATH Talegent Whitepaper October 2014 Technology meets Psychology www.talegent.com Outline the empirical evidence from peer reviewed sources for the validity and reliability

More information

INVITATION TO COMMENT: IASB AND IFRS INTERPRETATIONS COMMITTEE DUE PROCESS HANDBOOK

INVITATION TO COMMENT: IASB AND IFRS INTERPRETATIONS COMMITTEE DUE PROCESS HANDBOOK September 5, 2012 IFRS Foundation 30 Cannon Street London EC4M 6XH UNITED KINGDOM By email: commentletters@ifrs.org INVITATION TO COMMENT: IASB AND IFRS INTERPRETATIONS COMMITTEE DUE PROCESS HANDBOOK Dear

More information

TOWES Validation Study

TOWES Validation Study TOWES Validation Study Report: Criterion-Related Studies for the Psychometric Evaluation of TOWES February 2004 Prepared By: The TOWES Joint Venture: SkillPlan and Bow Valley College and Theresa Kline,

More information

Judah Katznelson, Research Psychologist U.S. Army Research institute for the Behavioral and Social Sciences Alexandria, Virginia 22333

Judah Katznelson, Research Psychologist U.S. Army Research institute for the Behavioral and Social Sciences Alexandria, Virginia 22333 , The Great Training Robbery Judah Katznelson, Research Psychologist U.S. Army Research institute for the Behavioral and Social Sciences Alexandria, Virginia 22333 I ^V Abstract i Employee selection and

More information

3. STRUCTURING ASSURANCE ENGAGEMENTS

3. STRUCTURING ASSURANCE ENGAGEMENTS 3. STRUCTURING ASSURANCE ENGAGEMENTS How do standards and guidance help professional accountants provide assurance? What are the practical considerations when structuring an assurance engagement? 3. STRUCTURING

More information

South Lanarkshire Leisure and Culture Job Evaluation Scheme Handbook

South Lanarkshire Leisure and Culture Job Evaluation Scheme Handbook South Lanarkshire Leisure and Culture Job Evaluation Scheme Handbook 1. Introduction 1.1 This handbook is designed to provide managers and employees with an overview of the job evaluation scheme and ensure

More information

CHAPTER - III RESEARCH METHODOLOGY

CHAPTER - III RESEARCH METHODOLOGY CHAPTER - III RESEARCH METHODOLOGY 83 CHAPTER - III RESEARCH METHODOLOGY 3.1 Introduction The earlier chapters were devoted to a discussion on the background of the present study, review of earlier literature

More information

Discussion Paper by the Chartered IIA

Discussion Paper by the Chartered IIA Discussion Paper by the Chartered IIA The Chartered IIA s discussion paper on corporate governance reform in the UK and the update of the UK Corporate Governance Code Introduction Despite the UK s global

More information

Discussion Paper by the Chartered IIA

Discussion Paper by the Chartered IIA Discussion Paper by the Chartered IIA The Chartered IIA s discussion paper on corporate governance reform in the UK and the update of the UK Corporate Governance Code Introduction Despite the UK s global

More information

Predicting Job Performance: Not Much More Than g

Predicting Job Performance: Not Much More Than g Journal of Applied Psychology 1994. Vol. 79, No. 4, 518-524 In the public domain Predicting Job Performance: Not Much More Than g Malcolm James Ree, James A. Earles, and Mark S. Teachout The roles of general

More information

CHAPTER 2 Foundations of Recruitment and Selection I: Reliability and Validity

CHAPTER 2 Foundations of Recruitment and Selection I: Reliability and Validity CHAPTER 2 Foundations of Recruitment and Selection I: Reliability and Validity If Nothing Else, My Students Should Learn Personnel recruitment and selection strategies based on information obtained through

More information

Saville Consulting Assessment Suite

Saville Consulting Assessment Suite Saville Consulting Assessment Suite www.peoplecentric.co.nz info@peoplecentric.co.nz +64 9 963 5020 Overview Swift Aptitude Assessments (IA& SA)... 3 Analysis Aptitudes (IA)... 4 Professional Aptitudes

More information

INTERNATIONAL STANDARD ON AUDITING 620 USING THE WORK OF AN AUDITOR S EXPERT CONTENTS

INTERNATIONAL STANDARD ON AUDITING 620 USING THE WORK OF AN AUDITOR S EXPERT CONTENTS INTERNATIONAL STANDARD ON 620 USING THE WORK OF AN AUDITOR S EXPERT (Effective for audits of financial statements for periods beginning on or after December 15, 2009) CONTENTS Paragraph Introduction Scope

More information

Chapter 5 RESULTS AND DISCUSSION

Chapter 5 RESULTS AND DISCUSSION Chapter 5 RESULTS AND DISCUSSION 5.0 Introduction This chapter outlines the results of the data analysis and discussion from the questionnaire survey. The detailed results are described in the following

More information

Multilevel Modeling and Cross-Cultural Research

Multilevel Modeling and Cross-Cultural Research 11 Multilevel Modeling and Cross-Cultural Research john b. nezlek Cross-cultural psychologists, and other scholars who are interested in the joint effects of cultural and individual-level constructs, often

More information

Nowack, K. (2006). Emotional Intelligence: Leaders Make a Difference. HR Trends, 17, 40-42

Nowack, K. (2006). Emotional Intelligence: Leaders Make a Difference. HR Trends, 17, 40-42 Nowack, K. (2006). Emotional Intelligence: Leaders Make a Difference. HR Trends, 17, 40-42 What is Emotional Intelligence? The most widely accepted model of emotional intelligence (EI) has been influenced

More information

The Validity of Broad and Narrow Personality Traits For Predicting Job Performance: The Differential Effects of Time

The Validity of Broad and Narrow Personality Traits For Predicting Job Performance: The Differential Effects of Time Florida International University FIU Digital Commons FIU Electronic Theses and Dissertations University Graduate School 7-30-2014 The Validity of Broad and Narrow Personality Traits For Predicting Job

More information

Examiner s report F5 Performance Management December 2017

Examiner s report F5 Performance Management December 2017 Examiner s report F5 Performance Management December 2017 General comments The F5 Performance Management exam is offered in both computer-based (CBE) and paper formats. The structure is the same in both

More information

National Skills Quality Assurance System

National Skills Quality Assurance System 2012 National Skills Quality Assurance System Government of Bangladesh 3 Manual 3: Registration of Training Organizations and Accreditation of Learning and Assessment Programs European Union Overview of

More information

BUSINESS ETHICS AND CORPORATE GOVERNANCE TERMINOLOGY AND CONCEPTS

BUSINESS ETHICS AND CORPORATE GOVERNANCE TERMINOLOGY AND CONCEPTS BUSINESS ETHICS AND CORPORATE GOVERNANCE TERMINOLOGY AND CONCEPTS Timeframe: Learning outcome: Prescribed reading: Recommend ed reading: Section overview: Minimum 20 hours Critically explain corporate

More information

The Accident Risk Management Questionnaire (ARM-Q): A Report on Two Validation Studies. Abstract. Introduction

The Accident Risk Management Questionnaire (ARM-Q): A Report on Two Validation Studies. Abstract. Introduction The Accident Risk Management Questionnaire (ARM-Q): A Report on Two Validation Studies Gerard J. Fogarty (fogarty@usq.edu.au) University of Southern Queensland, Toowoomba QLD 4350 Australia Todd Shardlow

More information