
1 Standard Setting: Establishing Legally Defensible Pass-Points and Cut-Scores. Copyright 2009 Biddle Consulting Group, Inc. BCGi Institute for Workforce Development. Visit BCGi Online. While you are waiting for the webinar to begin, don't forget to check out our other training opportunities through the BCGi website. Join our community by signing up (it's free) and we will notify you of our upcoming free training events as well as other information of value to the HR community.

2 HRCI Credit. BCG is an HRCI Preferred Provider, and CE credits are available for attending this webinar. Only those who remain with us for at least 80% of the webinar will be eligible to receive the HRCI training completion form for CE submission. About Our Sponsor: BCGi is sponsored by Biddle Consulting Group, Inc.

3 About Our Sponsor: BCG. BCG has assisted hundreds of clients with cases involving Equal Employment Opportunity (EEO) / Affirmative Action (AA) (both plaintiff and defense): EEO litigation support, OFCCP (federal contracting) audit support, compensation analyses, and test development and validation. Published Adverse Impact and Test Validation, 2nd Ed., as a practical guide for HR professionals. Editor and publisher of EEO Insight, an industry e-journal. Creator and publisher of a variety of productivity software/web tools: OPAC (administrative skills testing), CritiCall (9-1-1 dispatcher testing), AutoAAP (affirmative action software and services), C4 (contact center employee testing), Encounter (video situational judgment test), Adverse Impact Toolkit (available free online), AutoGOJA (Automated Guidelines Oriented Job Analysis), and Compare 2.0 (compensation analysis and reporting). An industry leader.

4 Contact Information: Daniel Kuang, Ph.D., and Jim Higgins, Ed.D., Biddle Consulting Group, Inc., 193 Blue Ravine, Ste. 270, Folsom, CA. Presentation Outline. Part 1: Background. What is standard setting? Standard setting in a legal context. Part 2: Setting the raw cut-score. A quick primer on the Angoff method; the raw cut-score in a legal context. Part 3: Establishing the Modified Angoff cut-score (MAC). Method 1: setting the MAC with no item test data. Method 2: setting the MAC with item test data.

5 Part 1: Background. What is standard setting? Business necessity; job success; job analysis. The minimally competent examinee / minimally qualified applicant. Test use and purpose: Select-IN identifies qualified candidates; Select-OUT identifies candidates who are not qualified.

6 The Legal Context. An arbitrary 70% cutoff is scientifically wrong and legally vulnerable. Title VII: test validity, test use, and test purpose (select in the qualified / select out the unqualified). Lanning v. SEPTA (2000) equated business necessity with minimum competency. There are many standard-setting methods for obtaining a raw cut-score: the Angoff method (1971), the Judgmental Policy Capturing method (Jaeger, 1995), the Contrasting Groups method (Berk, 1976), the Bookmark method (Mitzel et al., 2001), etc. Part 2: Setting Raw Cut-Scores.

7 Setting Raw Cut-Scores: the Angoff Method. [Slide shows a table of Angoff ratings by Rater ID and Item Number, with the mean and SD of the ratings.] The Legal Context. Title VII: the alternative employment practice. The raw cut-score is not (always) enough; should it be adjusted down by 1, 2, or 3 standard-error units? The US Supreme Court in US v. South Carolina (1978) provides 5 factors to consider: (1) size of the SEM (test reliability and precision); (2) sampling error (representation of the SMEs); (3) inter-rater reliability (internal consistency of the SME ratings); (4) supply/demand (availability of workers); (5) racial composition (adverse impact at the 3 cutoffs). Also consider the context of Ricci v. DeStefano (2009).
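In the Angoff method, each subject-matter expert (SME) estimates the probability that a minimally competent examinee will answer each item correctly, and the raw cut-score is typically the sum of the mean item ratings. A minimal sketch in Python, with invented SME names and ratings:

```python
def angoff_cut_score(ratings):
    """Raw Angoff cut-score: sum of the mean rating for each item.

    ratings: dict mapping rater name -> list of per-item probabilities
    that a minimally competent examinee answers the item correctly.
    """
    n_items = len(next(iter(ratings.values())))
    item_means = []
    for i in range(n_items):
        item_ratings = [r[i] for r in ratings.values()]
        item_means.append(sum(item_ratings) / len(item_ratings))
    # The sum of item means is the expected raw score of a
    # minimally competent examinee -- the raw cut-score.
    return sum(item_means)

# Hypothetical ratings from three SMEs on a four-item test.
ratings = {
    "SME 1": [0.90, 0.75, 0.60, 0.80],
    "SME 2": [0.85, 0.70, 0.55, 0.90],
    "SME 3": [0.95, 0.65, 0.50, 0.85],
}
print(round(angoff_cut_score(ratings), 2))  # 3.0
```

In practice the rating table would also be inspected for inter-rater reliability (factor 3 in the US v. South Carolina list) before the cut-score is accepted.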

8 Part 3: Establishing the Modified Angoff Cut-Score. The underlying logic: measurement error (Rxx). Adjust scores as a function of test unreliability, using the Standard Error of Measurement (SEM) and the Conditional Standard Error of Measurement (CSEM). Banding: test scores within a band based on the SEM/CSEM are statistically equivalent.

9 Score Banding: A Quick Overview. Banding accounts for the measurement error in the test by grouping similar scores into tied groups. In this way, banding gives the benefit of the measurement error to the applicants. It is similar to academic grades (As, Bs, Cs). It sometimes provides greater diversity results with small compromises in utility when compared to other methods. The Classical Standard Error of Measurement: with an observed score of 40 and SEM = 1, ±1 SEM covers 68%, ±2 SEMs cover 95%, and ±3 SEMs cover 99% of the score distribution. Classically: of all examinees who scored 40, 95% will have true scores between 38 and 42.

10 Score Banding: A Quick Overview. Classical Test Theory (CTT) banding: S.E. of Measurement: SEM = SD × √(1 − Rxx). S.E. of Difference: SED = SEM × √2. Confidence interval: CI95% = z × SED. CTT bands fail to model that test reliability varies across test scores and test-taker ability, and that estimated true scores regress to the mean. Advanced measurement practices in the educational field have addressed these issues for decades. Score Banding: SEM v. CSEM. [Figure: conditional v. classical standard error of measurement plotted across the score range; the classical SEM is a constant, while the conditional SEM ranges from 1.0 to 3.3 around the test mean of 25.6.]
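The three CTT formulas above translate directly into code. A minimal sketch; the SD and reliability values are invented for illustration:

```python
import math

def sem(sd, rxx):
    # Standard Error of Measurement: SEM = SD * sqrt(1 - Rxx)
    return sd * math.sqrt(1 - rxx)

def sed(sem_value):
    # Standard Error of Difference: SED = SEM * sqrt(2)
    return sem_value * math.sqrt(2)

def band_half_width(sd, rxx, z=1.96):
    # 95% confidence interval: CI = z * SED
    return z * sed(sem(sd, rxx))

# Hypothetical test with SD = 5 and reliability Rxx = 0.96.
print(round(sem(5, 0.96), 2))              # 1.0
print(round(band_half_width(5, 0.96), 2))  # 2.77
```

Two observed scores closer together than the band half-width would be treated as statistically equivalent (tied) under CTT banding.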

11 Score Banding: SEM-Bands. [Figure: traditional banding assumes the same band width at every score, shown as number of observations by score.] Score Banding: CSEM-Bands. [Figure: conditional SEM banding applies the actual measurement accuracy for each score, so band widths vary across the score range.]

12 Setting MAC: No Item Data Available. The SEM and CSEM are computed from item data, but new tests may not have scored item data. Solution: Lord's binomial standard error: SEM_Lord = √( x(n − x) / (n − 1) ), where x = observed test score and n = number of test items. Setting the MAC with Lord's binomial: compute the SEM at the raw cut-score using Lord's binomial formula, then adjust 1, 2, or 3 SEMs below the raw cut-score. Method 1: simple subtraction. Method 2: iteratively compute statistically non-equivalent intervals.
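Method 1 (simple subtraction) with Lord's binomial SEM can be sketched in a few lines; the 50-item test and raw cut-score of 35 below are hypothetical:

```python
import math

def lord_sem(x, n):
    # Lord's binomial standard error: sqrt( x(n - x) / (n - 1) )
    # x = observed test score, n = number of test items.
    return math.sqrt(x * (n - x) / (n - 1))

def modified_angoff_cut(raw_cut, n_items, n_sems=1):
    # Method 1: subtract n_sems SEMs (computed at the raw cut-score)
    # from the raw Angoff cut-score.
    return raw_cut - n_sems * lord_sem(raw_cut, n_items)

print(round(lord_sem(35, 50), 2))                     # 3.27
print(round(modified_angoff_cut(35, 50, n_sems=1), 2))  # 31.73
```

Whether to subtract 1, 2, or 3 SEMs is a policy decision weighed against the five US v. South Carolina factors discussed earlier.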

13 Setting MAC: Iteratively Set CSEM Bands. [Figure: banding using conditional SEMs and overlapping one-directional 95% confidence intervals (z = 1.65), for a score of 40 with CSEM = 2.0 and a score of 31 with CSEM = 3.0; the 1.65 CI for score 31's CSEM is 4.95, and for score 40's CSEM it is 3.3.] Setting MAC: Item Data Available. The Mollenkopf-Feldt CSEM (MF-CSEM): when item data is available, apply the Mollenkopf-Feldt CSEM model. Computing the MF-CSEM: Step 1: create tau-equivalent split halves, Forms F1 and F2. Step 2: compute the mean score for each form, X̄_Form1 and X̄_Form2. Step 3: compute the difference between the means: X_Diff = X̄_Form1 − X̄_Form2.

14 Setting MAC: Mollenkopf-Feldt CSEM. Computing the MF-CSEM (continued): Step 4: polynomial regression of the form-mean differences (X_Diff) on the test score (Scr): regress X_Diff on the 1st, 2nd, and/or 3rd power of Scr. Step 5: apply the polynomial regression model in computing the MF-CSEM, where the MF-CSEM is predicted from the model: MF-CSEM(Scr) = b0 + b1·Scr + b2·Scr² + b3·Scr³. Step 6: adjust 1, 2, or 3 SEMs below the raw cut-score. Method 1: simple subtraction. Method 2: iteratively compute statistically non-equivalent intervals. Note: these are only the high-level steps. In addition to these steps, BCG computes Estimated True Scores (ETSs) to address regression to the mean. Run through an example.
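The split-half steps above can be sketched as follows. This is only an illustration under several simplifying assumptions: the item responses are invented, the odd/even split stands in for a true tau-equivalent split, absolute per-examinee form differences are used, and the polynomial is limited to the 1st power for brevity (the slides allow up to the 3rd):

```python
def mf_csem_model(responses):
    """Fit a (degree-1) Mollenkopf-Feldt-style CSEM model.

    responses: list of per-examinee 0/1 item-response vectors
    (even item count). Returns a function mapping a raw score
    to a predicted conditional SEM.
    """
    scores, diffs = [], []
    for r in responses:
        # Step 1: split halves (odd/even items stand in for
        # tau-equivalent forms F1 and F2).
        f1 = sum(r[0::2])
        f2 = sum(r[1::2])
        # Steps 2-3: form scores and their (absolute) difference.
        scores.append(sum(r))
        diffs.append(abs(f1 - f2))
    # Step 4: least-squares regression of the form differences
    # on the total score (1st power only in this sketch).
    n = len(scores)
    mx = sum(scores) / n
    my = sum(diffs) / n
    sxx = sum((x - mx) ** 2 for x in scores)
    sxy = sum((x - mx) * (y - my) for x, y in zip(scores, diffs))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    # Step 5: the fitted model predicts the MF-CSEM at any raw score.
    return lambda scr: b0 + b1 * scr

# Invented responses from three examinees on a four-item test.
model = mf_csem_model([
    [1, 1, 1, 1],   # perfect score: no form difference
    [1, 0, 1, 0],   # all odd items correct: maximal form difference
    [1, 1, 0, 0],
])
print(round(model(2), 2))  # predicted CSEM at a raw score of 2
```

The fitted model can then feed the same Step 6 adjustment (simple subtraction or iterative non-equivalent intervals) used with Lord's binomial SEM.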

15 End of Part 3. Should you have any questions, or if we can be of further assistance, please email us or visit us online. Questions? Visit BCGi to learn about more free training opportunities, or contact the presenters at DKuang@biddle.com or Jhiggins@biddle.com.