Selection and Staffing II: Advanced Issues


Presentation Transcript


  1. Selection and Staffing II:Advanced Issues

  2. Learning Objectives • Understand issues beyond predictive power that affect “validity” • Understand and apply: • Selection Ratio • Base Rate • Understand the concept of Utility • Understand ways of combining information to make hiring decisions • Multiple regression vs. multiple hurdles • Clinical vs. mechanical combination

  3. Validity: Concepts • First, let’s define what the terms mean: • Validity coefficient: Correlation of our predictor test with the outcome of interest (e.g., job performance) • Selection Ratio: The proportion of applicants who are actually selected into positions • Base Rate: The proportion of individuals in the population who can perform the job at an at least minimally proficient level
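
A quick numeric sketch of these three definitions. All numbers below are invented for illustration, and “minimally proficient” is assumed here to mean a performance rating of 3.0 or better:

    # Illustrative only: a tiny made-up applicant pool.
    test_scores = [52, 61, 70, 74, 80, 85, 90, 93]          # predictor (x)
    performance = [2.1, 2.4, 3.0, 2.8, 3.5, 3.4, 4.0, 3.9]  # criterion (y)

    def pearson_r(x, y):
        """Validity coefficient: correlation of the predictor with the criterion."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    openings = 2
    n_applicants = len(test_scores)
    n_proficient = sum(p >= 3.0 for p in performance)    # assumed proficiency cutoff

    print("validity coefficient:", round(pearson_r(test_scores, performance), 2))
    print("selection ratio:", openings / n_applicants)   # hires / applicants
    print("base rate:", n_proficient / n_applicants)     # proficient / applicants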

  4. Quantifying Validity: Correct vs. Incorrect Decisions • When hiring, we make a correct decision when we select someone who performs acceptably well, AND when we reject someone who would not have • We make an incorrect decision when we select someone who does not perform acceptably well, AND when we fail to hire someone who would have
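
In code, the four outcomes look like this (a minimal sketch; the labels simply restate the slide’s definitions):

    # Minimal sketch of the four hiring outcomes described above.
    def decision_outcome(hired: bool, would_perform_acceptably: bool) -> str:
        if hired and would_perform_acceptably:
            return "correct: hired a good performer"
        if not hired and not would_perform_acceptably:
            return "correct: rejected a poor performer"
        if hired and not would_perform_acceptably:
            return "incorrect: hired a poor performer"
        return "incorrect: passed over a good performer"

    print(decision_outcome(hired=True, would_perform_acceptably=True))
    print(decision_outcome(hired=False, would_perform_acceptably=True))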

  5. Correct vs. Incorrect Decisions

  6. Correct vs. Incorrect Decisions

  7. Selection Ratios and Base Rates, Take 1 • Alright, so let’s say our firm has 10 openings in Widget Design. Our recruitment has netted us 100 applicants. • Our selection ratio is 10% • Now, widget design is a pretty complicated business, so only about 30% of people can do it well enough to meet our minimum standards. • Our base rate is 30%
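
As arithmetic, the Widget Design scenario works out as follows (this just restates the slide’s numbers):

    openings, applicants = 10, 100
    selection_ratio = openings / applicants   # 10 / 100 = 0.10
    base_rate = 0.30                          # 30% can meet the minimum standard

    print(f"selection ratio = {selection_ratio:.0%}")   # 10%
    print(f"base rate       = {base_rate:.0%}")         # 30%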

  8. Selection Ratios and Base Rates, Take 1

  9. Selection Ratios and Base Rates, Take 1

  10. Selection Ratios and Base Rates, Take 1

  11. Selection Ratio, Take 2 • Selection is just filling our job openings with people from the applicant pool • Usually we hire top down, best to worst • e.g., First we take the people with 4.0 GPAs, then the 3.9s, then the 3.8s, and so on (some of us would never get jobs if the world always worked this way…) • The selection ratio is just the ratio of the number of people we hire to the total number of applicants (hires/applicants) • Those 10 Widget Designers we have to hire • If we do have 100 applicants, our SR is 10% • But, if we’ve only got 20 applicants, our SR bloats to 50%
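
A sketch of top-down hiring and of how the selection ratio changes with pool size; the GPA values are invented:

    # Hypothetical applicant GPAs; hire top-down, best to worst.
    gpas = [4.0, 3.9, 3.9, 3.8, 3.6, 3.5, 3.4, 3.2, 3.0, 2.9, 2.8, 2.5]
    openings = 3

    hired = sorted(gpas, reverse=True)[:openings]     # [4.0, 3.9, 3.9]
    print("selection ratio:", openings / len(gpas))   # 3 / 12 = 0.25

    # Same 10 Widget Design openings, different pool sizes:
    print(f"100 applicants -> SR = {10 / 100:.0%}")   # 10%
    print(f" 20 applicants -> SR = {10 / 20:.0%}")    # 50%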

  12. Base Rate, Take 2 • The base rate is just the percentage of applicants who would be competent workers if we hired them • When the base rate is high, nearly everyone we might hire would be a competent worker • When the base rate is low, nearly everyone we might hire would not be • We’re not too useful when the base rate is high, because almost anyone off the street would be a good worker • We’re also not too useful when the base rate is low, because we’re more apt to make incorrect decisions (just because so few people would be good workers)

  13. Bottom Line • So when can we do the most good? That is to say, when can we make the most effective decisions, all else being equal? • Make a good test • Make sure we can be picky (favorable selection ratio) • And the base rate is…?

  14. Utility: The Short Course • The basic utility equation: U = SD_y × Z_x × r_xy • Where U is the expected utility • SD_y is the standard deviation of job performance in dollars • Z_x is the average standardized score on the selection test of the applicants hired • r_xy is the correlation between the selection test and the job performance criterion

  15. Utility Example
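
The transcript does not include this slide’s worked figures, so here is a hedged sketch of the arithmetic with assumed inputs (SD_y of $10,000, an average hired z-score of 1.0, and a validity of .40):

    # Assumed, purely illustrative inputs for U = SD_y * Z_x * r_xy.
    sd_y = 10_000    # standard deviation of job performance, in dollars (assumed)
    z_x  = 1.0       # mean standardized test score of those hired (assumed)
    r_xy = 0.40      # test-performance correlation (assumed)

    utility_per_hire = sd_y * z_x * r_xy
    print(f"expected gain per hire: ${utility_per_hire:,.0f}")   # $4,000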

  16. Cut Scores • When we have a predictor, we still have to decide whom to hire • Set a cut score: a score on the predictor below which candidates will not be hired • Do this in one of two ways • Criterion-referenced: the cut score corresponds to minimally acceptable performance (set via the regression equation) • Norm-referenced: based on the distribution of scores themselves (e.g., 60% = F)
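
One way to picture the criterion-referenced approach: if we know the regression of performance on the test, solve it backward for the test score whose predicted performance equals the minimum acceptable level (all values below are assumed):

    # Assumed regression of performance (y) on test score (x): y_hat = a + b * x
    a, b = 0.5, 0.03        # intercept and slope (illustrative)
    min_acceptable = 2.5    # minimally acceptable performance level (illustrative)

    cut_score = (min_acceptable - a) / b   # x at which predicted performance hits the minimum
    print(f"criterion-referenced cut score: {cut_score:.1f}")   # ~66.7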

  17. Combining Information • The simplest selection system has one predictor • Based on that predictor, we make an offer • Sometimes we have multiple predictors • Say, a formal test and a job interview • Combine the info, then make an offer • Multiple Hurdles • First we check to see if you have a college degree • Then we give you a test, take top 50% • Then we interview the remaining candidates • Make offer based only on the interview
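
A sketch of that three-hurdle sequence; the applicant records, field names, and tie handling for the top-50% rule are all invented for illustration:

    # Hypothetical applicants; each hurdle shrinks the pool.
    applicants = [
        {"name": "A", "degree": True,  "test": 88, "interview": 7},
        {"name": "B", "degree": True,  "test": 72, "interview": 9},
        {"name": "C", "degree": False, "test": 95, "interview": 8},
        {"name": "D", "degree": True,  "test": 60, "interview": 6},
        {"name": "E", "degree": True,  "test": 80, "interview": 5},
    ]

    pool = [a for a in applicants if a["degree"]]         # hurdle 1: college degree
    pool.sort(key=lambda a: a["test"], reverse=True)
    pool = pool[: max(1, len(pool) // 2)]                 # hurdle 2: top 50% on the test
    offer = max(pool, key=lambda a: a["interview"])       # hurdle 3: best interview gets the offer
    print("offer goes to:", offer["name"])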

  18. Combining Information • Multiple regression – compensatory approach • Again, test + interview • A good interview will compensate for a weak test score • Multiple hurdles – non-compensatory • “Weed out” process • Have to pass each stage of the system
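
A small contrast of the two approaches, using assumed weights and cutoffs (both scores on a 0-100 scale):

    # Compensatory: a weighted composite, so a strong interview can offset a weak test.
    def compensatory(test, interview, w_test=0.6, w_int=0.4):
        return w_test * test + w_int * interview

    # Non-compensatory (multiple hurdles): must clear every cutoff to stay in.
    def passes_hurdles(test, interview, test_cut=70, int_cut=60):
        return test >= test_cut and interview >= int_cut

    # Weak test (65) but an excellent interview (90):
    print(compensatory(65, 90))     # 75.0 -- the interview pulls the composite up
    print(passes_hurdles(65, 90))   # False -- fails the test hurdle outright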

  19. Combining Information • Clinical vs. Mechanical prediction • Mechanical methods use a formula to combine the information (may just be a sum or average) • Clinical methods use a human judge to combine the info based on his/her judgment • Meehl’s “disturbing little book” (1954): Clinical Versus Statistical Prediction • Mechanical methods produce superior decisions • Except for “broken legs”

  20. Sawyer’s (1966) follow-up • Distinguished between method of measurement and method of data combination • Mechanical collection – objective tests (cognitive ability, personality, etc.) • Clinical collection – human judge/rater (interview, simulation rating)

  21. Combining info • Full clinical: clinical collection + clinical combination (all done by a human judge) • Full mechanical: mechanical collection + mechanical combination (all done actuarially) • Mechanical synthesis: mechanical and/or clinical collection + mechanical combination • Clinical synthesis: mechanical and/or clinical collection + clinical combination • Mechanical synthesis outperforms, even when combining clinically collected information

  22. Bottom line • The key thing is to combine the information mechanically • People can be very good at collecting information (e.g., in an interview) • But, we’re unsystematic in putting it all together • This is one of the most robust findings in psychology, but a lot of people like to ignore it • Even if you just add the numbers up, with no fancy statistics, you’re probably way ahead of the game
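
What “just add the numbers up” can look like in practice: a unit-weighted mechanical combination of standardized predictor scores (the predictors and values are invented):

    from statistics import mean, pstdev

    # Hypothetical scores for five applicants on two predictors.
    scores = {
        "cognitive_test": [55, 62, 71, 80, 90],
        "interview":      [6, 9, 5, 7, 8],
    }

    def standardize(xs):
        m, s = mean(xs), pstdev(xs)
        return [(x - m) / s for x in xs]

    z = {name: standardize(vals) for name, vals in scores.items()}
    composite = [sum(pair) for pair in zip(*z.values())]   # unit-weighted sum per applicant

    # Hire top-down on the mechanical composite.
    order = sorted(range(len(composite)), key=lambda i: composite[i], reverse=True)
    print("hire order (applicant indices):", order)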
