
Characteristics of Effective Selection Techniques


Presentation Transcript


  1. Characteristics of Effective Selection Techniques

  2. Optimal Employee Selection Systems • Are Reliable • Are Valid • Based on a job analysis (content validity) • Predict work-related behavior (criterion validity) • Reduce the Chance of a Legal Challenge • Face valid • Don’t invade privacy • Don’t intentionally discriminate • Minimize adverse impact • Are Cost Effective • Cost to purchase/create • Cost to administer • Cost to score

  3. Reliability • The extent to which a score from a test is consistent and free from errors of measurement • Methods of Determining Reliability • Test-retest (temporal stability) • Alternate forms (form stability) • Internal reliability (item stability) • Scorer reliability

  4. Test-Retest Reliability • Measures temporal stability • Administration • Same applicants • Same test • Two testing periods • Scores at time one are correlated with scores at time two • Correlation should be above .70
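A quick sketch of the calculation (the applicant scores below are invented for illustration): test-retest reliability is simply the Pearson correlation between the two administrations.

```python
from scipy.stats import pearsonr

# Hypothetical scores for the same eight applicants tested on two occasions
time1 = [22, 31, 28, 35, 19, 27, 33, 24]
time2 = [24, 30, 27, 36, 21, 25, 34, 23]

r, _ = pearsonr(time1, time2)
print(f"Test-retest reliability: r = {r:.2f}")  # acceptable if above .70
```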

  5. Test-Retest Reliability: Problems • Sources of measurement errors • Characteristic or attribute being measured may change over time • Reactivity • Carryover effects • Practical problems • Time consuming • Expensive • Inappropriate for some types of tests

  6. Alternate Forms Reliability: Administration • Two forms of the same test are developed, and to the highest degree possible, are equivalent in terms of content, response process, and statistical characteristics • One form is administered to examinees, and at some later date, the same examinees take the second form

  7. Alternate Forms Reliability: Scoring • Scores from the first form of the test are correlated with scores from the second form • If the scores are highly correlated, the test has form stability

  8. Alternate Forms Reliability: Disadvantages • Difficult to develop • Content sampling errors • Time sampling errors

  9. Internal Reliability • Defines measurement error strictly in terms of consistency or inconsistency in the content of the test. • Used when it is impractical to administer two separate forms of a test. • With this form of reliability the test is administered only once and measures item stability.

  10. Determining Internal Reliability • Split-Half method (most common) • Test items are divided into two equal parts • Scores for the two parts are correlated to get a measure of internal reliability • Spearman-Brown prophecy formula (corrects for the halved test length): (2 x split-half reliability) ÷ (1 + split-half reliability)

  11. Spearman-Brown Formula Corrected reliability = (2 x split-half correlation) ÷ (1 + split-half correlation) If we have a split-half correlation of .60, the corrected reliability would be: (2 x .60) ÷ (1 + .60) = 1.2 ÷ 1.6 = .75
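The correction is easy to verify in code; this small function simply restates the Spearman-Brown formula from the slide.

```python
def spearman_brown(split_half_r: float) -> float:
    """Correct a split-half correlation to estimate full-length test reliability."""
    return (2 * split_half_r) / (1 + split_half_r)

print(spearman_brown(0.60))  # 0.75, matching the worked example above
```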

  12. Common Methods for Estimating Internal Reliability • Cronbach’s Coefficient Alpha • Used with ratio or interval data • Kuder-Richardson Formula • Used for tests with dichotomous items (yes-no, true-false)
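For interval or ratio items, coefficient alpha can be computed directly from an examinee-by-item score matrix; a minimal sketch with made-up data is below. With dichotomous (0/1) items the same formula reduces to Kuder-Richardson 20.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for a matrix with rows = examinees, columns = items."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 examinees x 4 items
scores = np.array([[4, 5, 4, 5],
                   [2, 3, 2, 2],
                   [3, 3, 4, 3],
                   [5, 4, 5, 5],
                   [1, 2, 1, 2]])
print(round(cronbach_alpha(scores), 2))
```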

  13. Interrater Reliability • Used when human judgment of performance is involved in the selection process • Refers to the degree of agreement between 2 or more raters

  14. Going Hollywood: Rate the Waiter’s Performance (Office Space – DVD segment 3)

  15. Reliability: Conclusions • The higher the reliability of a selection test, the better; reliability should be .70 or higher • Reliability can be affected by many factors • If a selection test is not reliable, it is useless as a tool for selecting individuals

  16. Reliability Demonstration

  17. Validity • Definition The degree to which inferences from scores on tests or assessments are justified by the evidence • Common Ways to Measure • Content Validity • Criterion Validity • Construct Validity

  18. Content Validity • The extent to which test items sample the content that they are supposed to measure • In industry the appropriate content of a test or test battery is determined by a job analysis

  19. Criterion Validity • Criterion validity refers to the extent to which a test score is related to some measure of job performance called a criterion • Established using one of the following research designs: • Concurrent Validity • Predictive Validity • Validity Generalization

  20. Concurrent Validity • Uses current employees • Range restriction can be a problem

  21. Predictive Validity • Correlates test scores with future behavior • Reduces the problem of range restriction • May not be practical

  22. Validity Generalization • Validity Generalization is the extent to which a test found valid for a job in one location is valid for the same job in a different location • The key to establishing validity generalization is meta-analysis and job analysis

  23. Typical Corrected Validity Coefficients for Selection Techniques

  24. Construct Validity • The extent to which a test actually measures the construct that it purports to measure • Is concerned with inferences about test scores • Determined by correlating scores on a test with scores from other tests

  25. Face Validity • The extent to which a test appears to be job related • Reduces the chance of legal challenge • Increasing face validity

  26. Locating Test Information: Exercise 6.1

  27. Selection Utility

  28. Utility The degree to which a selection device improves the quality of a personnel system, above and beyond what would have occurred had the instrument not been used.

  29. Selection Works Best When... • You have many job openings • You have many more applicants than openings • You have a valid test • The job in question has a high salary • The job is not easily performed or easily trained

  30. Common Utility Methods • Taylor-Russell Tables • Proportion of Correct Decisions • The Brogden-Cronbach-Gleser Model

  31. Utility Analysis: Taylor-Russell Tables • Estimates the percentage of future employees that will be successful • Three components • Validity • Base rate (successful employees ÷ total employees) • Selection ratio (hired ÷ applicants)

  32. Taylor-Russell Example • Suppose we have • a test validity of .40 • a selection ratio of .30 • a base rate of .50 • Using the Taylor-Russell Tables what percentage of future employees would be successful?

  33. Taylor-Russell Table (Base Rate = 50%); rows = selection ratio (SR), columns = test validity (r)

SR\r   .00  .10  .20  .30  .40  .50  .60  .70  .80  .90
.05    .50  .58  .67  .74  .82  .88  .94  .98  1.0  1.0
.10    .50  .57  .64  .71  .78  .84  .90  .95  .99  1.0
.20    .50  .56  .61  .67  .73  .76  .84  .90  .95  .99
.30    .50  .55  .59  .64  .69  .74  .79  .85  .90  .97
.40    .50  .54  .58  .62  .66  .70  .75  .80  .85  .92
.50    .50  .53  .56  .60  .63  .67  .70  .75  .80  .86
.60    .50  .53  .55  .58  .61  .63  .66  .70  .73  .78
.70    .50  .52  .54  .56  .58  .60  .62  .65  .67  .70
.80    .50  .51  .53  .54  .56  .57  .59  .60  .61  .62
.90    .50  .51  .52  .52  .53  .54  .54  .55  .55  .56
.95    .50  .50  .51  .51  .52  .52  .52  .53  .53  .53
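In practice the table is just a lookup; a tiny sketch using a hand-copied slice of the 50% base-rate table above reproduces the answer to the slide-32 example (validity .40, selection ratio .30).

```python
# Slice of the 50% base-rate Taylor-Russell table, keyed by (selection ratio, validity)
taylor_russell_50 = {
    (0.30, 0.00): 0.50, (0.30, 0.20): 0.59, (0.30, 0.40): 0.69, (0.30, 0.60): 0.79,
    (0.10, 0.40): 0.78, (0.50, 0.40): 0.63, (0.70, 0.40): 0.58,
}

# Slide-32 example: validity .40, selection ratio .30, base rate .50
print(taylor_russell_50[(0.30, 0.40)])  # 0.69 -> about 69% of those hired should succeed
```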

  34. Proportion of Correct Decisions • Proportion of Correct Decisions With Test = (correct acceptances + correct rejections) ÷ total employees = (Quadrants II + IV) ÷ (Quadrants I + II + III + IV) • Baseline of Correct Decisions = successful employees ÷ total employees = (Quadrants I + II) ÷ (Quadrants I + II + III + IV)

  35. [Quadrant diagram: criterion (job performance) on the vertical axis, test score on the horizontal axis. Quadrants are labeled I (upper left), II (upper right), III (lower right), and IV (lower left); employees above the criterion cutoff (I and II) are successful, and those beyond the test cutoff (II and III) would be hired.]

  36. Proportion of Correct Decisions • Proportion of Correct Decisions With Test = (10 + 11) ÷ (5 + 10 + 4 + 11) = 21 ÷ 30 = .70 (Quadrants II + IV over Quadrants I + II + III + IV) • Baseline of Correct Decisions = (5 + 10) ÷ (5 + 10 + 4 + 11) = 15 ÷ 30 = .50 (Quadrants I + II over Quadrants I + II + III + IV)
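The same arithmetic as a small function (quadrant labels follow the plot two slides back: I = successful but not hired, II = successful and hired, III = hired but unsuccessful, IV = not hired and unsuccessful).

```python
def correct_decisions(q1, q2, q3, q4):
    """Return (proportion correct with test, baseline proportion correct)."""
    total = q1 + q2 + q3 + q4
    with_test = (q2 + q4) / total   # correct acceptances + correct rejections
    baseline = (q1 + q2) / total    # successful employees / total employees
    return with_test, baseline

print(correct_decisions(5, 10, 4, 11))  # (0.7, 0.5), matching the slide
```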

  37. Computing the Proportion of Correct Decisions: Exercise 6.3

  38. Answer Exercise 6.3

  39. Answer to Exercise 6.3 • Proportion of Correct Decisions With Test = (8 + 6) ÷ (4 + 8 + 6 + 2) = 14 ÷ 20 = .70 (Quadrants II + IV over Quadrants I + II + III + IV) • Baseline of Correct Decisions = (4 + 8) ÷ (4 + 8 + 6 + 2) = 12 ÷ 20 = .60 (Quadrants I + II over Quadrants I + II + III + IV)

  40. Brogden-Cronbach-Gleser Utility Formula • Gives an estimate of utility by estimating the amount of money an organization would save if it used the test to select employees. Savings = (n)(t)(r)(SDy)(m) - cost of testing • n = number of employees hired per year • t = average tenure • r = test validity • SDy = standard deviation of performance in dollars • m = mean standardized predictor score of selected applicants
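The formula translates directly into a one-line function; the variable names below are just the slide's symbols, and nothing beyond the formula itself is assumed.

```python
def utility_savings(n, t, r, sdy, m, cost_of_testing):
    """Brogden-Cronbach-Gleser estimate of the dollars saved by using the test.

    n = employees hired per year, t = average tenure in years, r = test validity,
    sdy = SD of job performance in dollars, m = mean standardized predictor score
    of the selected applicants, cost_of_testing = total cost of testing.
    """
    return n * t * r * sdy * m - cost_of_testing
```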

  41. Components of Utility • Selection ratio: the ratio of the number of openings to the number of applicants • Validity coefficient • Base rate of current performance: the percentage of employees currently on the job who are considered successful • SDy: the difference in performance (measured in dollars) between a good and an average worker (workers one standard deviation apart)

  42. Calculating m • For example, we administer a test of mental ability to a group of 100 applicants and hire the 10 with the highest scores. The average score of the 10 hired applicants was 34.6, the average test score of the other 90 applicants was 28.4, and the standard deviation of all test scores was 8.3. The desired figure would be: • (34.6 - 28.4) ÷ 8.3 = 6.2 ÷ 8.3 = .75
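Following the slide's definition of m (mean score of the hired applicants minus the mean of the remaining applicants, divided by the standard deviation of all scores):

```python
def mean_standardized_predictor(mean_hired, mean_rest, sd_all):
    """m, per the slide: (mean of hired - mean of remaining applicants) / SD of all scores."""
    return (mean_hired - mean_rest) / sd_all

print(round(mean_standardized_predictor(34.6, 28.4, 8.3), 2))  # about 0.75
```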

  43. Calculating m • You administer a test of mental ability to a group of 150 applicants, and hire 35 with the highest scores. The average score of the 35 hired applicants was 35.7, the average test score of the other 115 applicants was 24.6, and the standard deviation of all test scores was 11.2. The desired figure would be: • (35.7 - 24.6) ÷ 11.2 = ?

  44. Standardized Selection Ratio

  45. Example • Suppose: • we hire 10 auditors per year • the average person in this position stays 2 years • the validity coefficient is .40 • the average annual salary for the position is $30,000, so SDy is estimated at 40% of salary, or $12,000 • we have 50 applicants for ten openings (a selection ratio of .20, which corresponds to m = 1.40) • Our utility would be: (10 x 2 x .40 x $12,000 x 1.40) – (50 x 10) = $133,900
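Reproducing that arithmetic (the $500 subtraction is read off the "(50 x 10)" term, so a testing cost of roughly $10 per applicant is assumed here; SDy is taken as 40% of salary as in the slide):

```python
n, t, r = 10, 2, 0.40      # hires per year, average tenure (years), validity
sdy = 0.40 * 30_000        # SDy estimated as 40% of the $30,000 salary
m = 1.40                   # mean standardized predictor score for a 10/50 = .20 selection ratio
cost_of_testing = 50 * 10  # assumed: 50 applicants at about $10 per test
print(n * t * r * sdy * m - cost_of_testing)  # 133900.0
```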

  46. Putting it all Together: Exercise 6.2 (Utility)

  47. Answer Exercise 6.2

  48. Taylor-Russell Table (Base Rate = 80%); rows = selection ratio (SR), columns = test validity (r)

SR\r   .00  .10  .20  .30  .40  .50  .60  .70  .80  .90
.05    .80  .85  .90  .94  .96  .98  .99  1.0  1.0  1.0
.10    .80  .85  .89  .92  .95  .97  .99  1.0  1.0  1.0
.20    .80  .84  .87  .90  .93  .96  .98  .99  1.0  1.0
.30    .80  .83  .86  .89  .92  .94  .96  .98  1.0  1.0
.40    .80  .83  .85  .88  .90  .93  .95  .97  .99  1.0
.50    .80  .82  .84  .87  .89  .91  .94  .96  .98  1.0
.60    .80  .82  .84  .86  .88  .90  .92  .94  .96  .99
.70    .80  .81  .83  .84  .86  .88  .90  .92  .94  .97
.80    .80  .81  .82  .83  .85  .86  .87  .89  .91  .94
.90    .80  .81  .81  .82  .83  .84  .84  .85  .87  .88
.95    .80  .80  .81  .81  .82  .82  .83  .83  .84  .84
