
1 Conducting Interviews: What Can Go Wrong and What Can Go Right. Presented by Dr. Jim Higgins, Executive Director, BCGi

2 Visit BCGi Online
If you enjoy this webinar, don't forget to check out our other training opportunities through the BCGi website.
BCGi Standard Membership (free):
Online community
Monthly webinars on EEO compliance topics
EEO Insight Journal (e-copy)
BCGi Platinum Membership ($199/year):
Fully interactive online community
Includes validation/compensation analysis books
EEO tools, including those needed to conduct adverse impact (AI) analyses
EEO Insight Journal (e-copy and hardcopy)
Members-only webinars and training, and much more

3 HRCI Credit
BCGi is an HRCI Preferred Provider.
CE credits are available for attending this webinar.
Only those who remain with us for at least 80% of the webinar will be eligible to receive the HRCI training completion form for CE submission.

4 About Our Sponsor: BCG
Assisted hundreds of clients with cases involving Equal Employment Opportunity (EEO) / Affirmative Action (AA) (both plaintiff and defense)
Compensation analyses / test development and validation
Published: Adverse Impact and Test Validation, 2nd Ed., as a practical guide for HR professionals
Editor & publisher: EEO Insight, an industry e-journal
Creator and publisher of a variety of productivity software/web tools:
OPAC (Administrative Skills Testing)
CritiCall (9-1-1 Dispatcher Testing)
AutoAAP (Affirmative Action Software and Services)
C4 (Contact Center Employee Testing)
Encounter (Video Situational Judgment Test)
AutoGOJA (Automated Guidelines Oriented Job Analysis)
COMPare (Compensation Analysis in Excel)

5 Conducting Interviews: What Can Go Wrong and What Can Go Right. Dr. Jim Higgins, Ed.D., Executive Director, BCGi

6 Overview
Background
Types of Interviews
Creating Interviews
Categories of Issues: What can go wrong (and what to do about it)
A Case Study
Conclusion

7 What counts as a Selection Procedure?
Selection procedures include the full range of assessment techniques, from:
Traditional paper-and-pencil tests,
Performance tests,
Training programs,
Probationary periods,
Physical, educational, and work experience requirements,
Informal or casual interviews, and
Unscored application forms.
Uniform Guidelines on Employee Selection Procedures, Section 16[Q]

8 Types of Interviews: Unstructured Interviews
No specific questions
Questions vary from applicant to applicant
Allows for probing, follow-up questions
Interviewer has tremendous flexibility in interaction with the applicant
Often involves a single interviewer

9 Types of Interviews: Structured Interviews
Pre-developed questions
Each applicant gets the same questions
No probing or follow-up questions
Interviewer has little flexibility
Often relies on a panel (but not always)

10 The bottom line
Whichever type of interview is employed, remember the requirements outlined by the Civil Rights Act of 1964!
Every business decision must be:
Job related, and
Consistent with business necessity

11 What this means
Legally defensible interview questions must be based on an analysis of the job and measure important KSAs that:
Differentiate applicant abilities as they relate to successful job performance, and
Are required at entry to the job.
[Slide diagram: Duties → KSAs → Interview Questions]

12 But I hate job analysis!
A common response. Job analysis is often ignored until facing a lawsuit or OFCCP audit. However, it does not have to be painful!

13 Consider your risk tolerance
[Slide diagram: risk rises as job-analysis rigor falls, from Comprehensive Analysis, to Any Systematic Documentation, to No Job Analysis]

14 Let's assume
You have analyzed the job and identified appropriate KSAs, and
Selected an interview as the appropriate selection tool.
What should you do next?

15 Create your interview
There are four different types of interview questions, which contribute to varying levels of interview structure. These types are:
Background
Job Knowledge
Situational
Behavioral

16 Select a format based on the facts about interviews
Unstructured:
Easy to develop
Less complex to administer
Less expensive (up front!)
Low validity
Low defensibility
Structured:
Harder to develop
More complex to administer
A bit more expensive (up front!)
Better validity
Higher defensibility

17 So, what can go wrong?
Problems fall into the categories of:
Validity
Reliability
Applicant-specific characteristics
Rater characteristics
Administrative issues

18 Validity: What is it? To say that a test is valid is to claim that it effectively measures what it is supposed to measure.

19 Validity of Interviews
Low to moderate validity:
Reilly and Chao (1982) found an average validity coefficient of .19 (3.6%)
Wiesner and Cronshaw (1988) found a slightly higher validity coefficient of .26 with supervisor ratings of performance (6.8%; see the note below on these percentages)
Low incremental validity
Performance in unstructured interviews tends to rely more upon social skills and personality
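The percentages in parentheses are the squared validity coefficients, i.e., the share of performance-rating variance the interview scores account for (an interpretation the slide implies rather than states):

    r² = 0.19² ≈ 0.036 → 3.6% of variance explained
    r² = 0.26² ≈ 0.068 → 6.8% of variance explained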

20 How to improve the validity of interviews
In a meta-analysis of interviews, McDaniel, Whetzel, Schmidt, and Maurer (1994) reviewed 245 different validity coefficients, and the following factors appeared to improve validity:
Situational and job-related questions
Highly structured
Consistent interviewer
Appropriate criteria
Highly structured interviews have an average validity coefficient of .50 (25% of variance explained)* (* Salgado, 1999)

21 Collection and evaluation of data
Campion (1997) concluded that predictive validity was improved by certain design characteristics (see the sketch after this list):
Answers to questions rated separately
Interviewers took detailed notes and based ratings on objective criteria
Scores based on summing points across questions, not on rater intuition
Interviewers received extensive training
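A minimal sketch of this style of structured scoring in Python, assuming a simple anchored point scale; the rater names, question labels, and scores are illustrative, not from the webinar:

    # Each rater scores each question separately against anchored criteria;
    # the applicant's total is a sum of points, never a holistic impression.
    ratings = {
        "rater_a": {"q1": 4, "q2": 3, "q3": 5},
        "rater_b": {"q1": 4, "q2": 2, "q3": 5},
    }

    def applicant_score(ratings):
        # Sum points across questions for each rater, then average across raters.
        per_rater = [sum(qs.values()) for qs in ratings.values()]
        return sum(per_rater) / len(per_rater)

    print(applicant_score(ratings))  # 11.5

Summing anchored points keeps the overall score auditable: a reviewer can trace every point back to a specific answer and a specific criterion.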

22 Reliability Definition A measure is said to have a high reliability if it produces similar results under consistent conditions. (Wikipedia)

23 Considerations
Temporal Reliability:
Reliability over time
Test-retest reliability: established by administering the exam to the same person over time and correlating the results
Local Reliability:
Interrater reliability: are different raters providing consistent ratings?
Simple method: correlation matrix (see the sketch below)
Complex method: intraclass correlations
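A minimal sketch of the simple method in Python, assuming each rater scores the same set of applicants on a common scale; the data are illustrative:

    import numpy as np

    # Rows = raters, columns = applicants (illustrative scores).
    scores = np.array([
        [4, 3, 5, 2, 4],   # rater A
        [4, 2, 5, 3, 4],   # rater B
        [5, 3, 4, 2, 3],   # rater C
    ])

    # Pairwise correlations between raters; high off-diagonal values
    # suggest the raters are applying the criteria consistently.
    print(np.round(np.corrcoef(scores), 2))

The complex method, an intraclass correlation, instead summarizes agreement across all raters in a single coefficient.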

24 How to improve the reliability of interviews
More questions
Use the same interviewer(s) for all applicants
Use clearly articulated scoring rubrics
Intensive training for interviewers
Consider a formal chairperson
Example rubric template:
Exceptionally Qualified: Applicant's response clearly indicates: [criteria]
Qualified: Applicant's response clearly indicates: [criteria]
Not Qualified: Applicant fails to demonstrate ability to perform the job by: [criteria]

25 Applicant characteristics
Cultural background
Social skills
Verbal skills
Anxiety in testing situations
Confusion about expectations and/or what is being assessed

26 What to do about applicant characteristics
More complex than it seems
The hiring manager has no control over them
The key is to maintain awareness that many applicant characteristics have nothing to do with job performance
Be sensitive to individual differences and do not appear judgmental
Stick to your rating criteria!

27 Rater characteristics
One of the most critical factors influencing validity
Most assume they have good judgment
Lack of awareness of biases

28 Which would make you feel more comfortable?

29 Rating Errors
Many rating errors can result from perceptual biases or individual preferences of the interview panel members.
It is important that interview panel members become aware of these errors and be cautioned to avoid them.
Rating errors diminish the reliability of the interview process.

30 Common Rating Errors: Types
Similar-To-Me Effect: Allowing applicant similarities to one's self to impact ratings
Halo Effect: Forming an overall impression of a candidate based upon his/her responses to one or two questions
Leniency Effect: Giving all candidates high ratings. Its counterpart, the severity effect, is the tendency to give all candidates low ratings
Central Tendency Effect: Using only the middle portion of the rating scale

31 Common Rating Errors: Types
Contrast Effect: Rating a candidate relative to the candidate who was interviewed immediately before him/her
First Impression Error: Making snap judgments based upon responses made in the first part of the interview
Personal Bias: Allowing non-job-related prejudices and attitudes about cultural stereotypes, lifestyles, personalities, appearances, or other idiosyncratic perceptions to affect the ratings

32 Common Rating Errors: Types
Negative Weighting: Placing more weight on negative information than on positive information obtained in the interview process
Expectation Error: Creating a set of expectations about a candidate based upon information reviewed prior to the interview
Projection: Placing an interviewer's own value system into the rating process and believing that only a duplicate of himself/herself can be successful on the job
Change in Standards: Applying the rating scales and benchmark answers differently for some candidates in comparison to others

33 What to do about rater errors
At least five positive steps you can take:
1. Maintain a vigilant awareness that everyone is susceptible to these errors, and take steps to control for them
2. Provide in-depth training to those who will be serving on interview panels
3. Use a panel rather than relying on a single individual
4. Rely on structured interviews
5. Use well-designed, standardized rating criteria

34 Administrative issues
These are not so much problems as things to be aware of; they are simply a given if you rely on interviews.

35 Common issues
Interviews are labor intensive
Most appropriate for spot examinations
Potentially lots of moving parts
Not appropriate for large candidate pools

36 A case study
An examination for a mid-level manager. The traditional approach was an interview; we explored a multimodal assessment:
Writing sample
Leaderless group exercise
Structured interview

37 What we learned
Whatever the interview measured, it did not correlate with either writing or leading
Good writing did not predict either interviewing or leading
The leaderless group clearly demonstrated skills that were highly relevant to being an effective day-to-day leader
Therefore, whenever possible, it is best to rely on multiple assessment techniques

38 Conclusion
Interviews can be a useful tool for making hiring decisions
Structured interviews are far superior to unstructured interviews
Do not let applicant characteristics that are not job-related impact ratings
Maintain an awareness of the rater errors that are common to all people; acknowledge them and take steps to avoid them
Use multi-method assessments when possible

39 Questions? Thank you!