
Personnel Selection


Presentation Transcript


  1. Personnel Selection 6

  2. Reliability
  • Reliability—consistency of scores; refers to whether a specific technique, applied repeatedly to the same person or attribute, would yield the same result each time.
  • High reliability is a necessary condition for high validity, but high reliability does not ensure validity.
  • The level of reliability is represented by a correlation coefficient indicating the degree of consistency in scores.
  • Coefficients for reliable selection methods are .8 or higher.
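As a concrete illustration, test-retest reliability can be estimated as the correlation between two administrations of the same test. The applicant scores below are invented for this sketch, not taken from the chapter.

```python
import numpy as np

# Hypothetical scores for five applicants who took the same test twice;
# the numbers are illustrative only.
time1 = np.array([82, 74, 91, 65, 78])
time2 = np.array([80, 76, 89, 68, 77])

# Test-retest reliability is the Pearson correlation between the two
# administrations; .8 or higher indicates consistent scores.
reliability = np.corrcoef(time1, time2)[0, 1]
print(round(reliability, 2))  # → 0.99
```

Here the two administrations correlate at .99, well above the .8 benchmark, so the (made-up) test would count as highly reliable.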

  3. Validity
  • Validity—concerns whether a measure is actually measuring what it claims to be measuring.
  • Empirical or criterion-related validity—the statistical relationship between performance or scores on some predictor or selection method (e.g., a test or an interview) and performance on some criterion measure of on-the-job effectiveness (e.g., sales, supervisory ratings, job turnover, employee theft).
  • The level of validity is represented by a correlation coefficient, ranging from -1 to +1, showing the direction and strength of the relationship; higher correlations indicate stronger validity for the selection method.
  • Content validity—assesses the degree to which the content of a selection method represents (or assesses) the requirements of the job (see Albemarle v. Moody, Ch. 3, and the UGESPs).
  • Validity generalization—invokes evidence from past validation studies and applies it to similar jobs.
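Criterion-related validity uses the same arithmetic as reliability, but correlates predictor scores with a later criterion measure. The test scores and supervisor ratings below are made up for illustration.

```python
import numpy as np

# Hypothetical predictor scores and later supervisor ratings for six hires.
test_scores = np.array([55, 70, 62, 80, 48, 75])
ratings = np.array([3.1, 4.0, 3.5, 4.4, 2.8, 3.9])

# Criterion-related validity is the predictor-criterion correlation,
# ranging from -1 to +1; larger positive values mean a stronger predictor.
validity = np.corrcoef(test_scores, ratings)[0, 1]
print(round(validity, 2))  # → 0.98
```

A real validation study would of course use far more cases and would rarely see a coefficient this large; observed validities like the .26 and .51 cited later in these slides are more typical.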

  4. Utility
  • Utility—concerns the economic gains derived from using a particular selection method.
  • Estimate the increase in revenue as a function of using the selection method, after subtracting the cost of the method.
  • Methods with high validity and low cost usually have high utility.
  • For high utility, an organization must be selective (i.e., have a LOW SELECTION RATIO).
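One common way to put numbers on this logic is the Brogden-Cronbach-Gleser utility model, which estimates the dollar gain from more valid selection and subtracts testing costs. The slides do not name a specific model, and every figure below is an assumed value for illustration.

```python
# Rough utility sketch in the spirit of the Brogden-Cronbach-Gleser model;
# all quantities below are assumed values, not figures from the text.
n_hired = 10               # number of people selected
validity = 0.5             # predictor-criterion correlation
sd_y = 20_000.0            # assumed dollar value of 1 SD of job performance
mean_z = 1.0               # mean predictor z-score of those hired; this is
                           # larger when the selection ratio is low
n_applicants = 100         # everyone tested
cost_per_applicant = 50.0  # cost of the selection method per applicant

gain = n_hired * validity * sd_y * mean_z  # estimated value added by selection
cost = n_applicants * cost_per_applicant   # total cost of the method
utility = gain - cost
print(utility)  # → 95000.0
```

The `mean_z` term is where the low selection ratio enters: hiring only the top scorers raises the average z-score of those selected, and with it the estimated gain.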

  5. The Effectiveness of Personnel Selection (continued on next slide)

  6. Application Blanks and Biographical Data
  • Weighted application blanks (WABs)—an objective weighting system based on an empirical research study; responses are statistically related to one or more important criteria so that the critical predictive relationships can be identified.
  • Application blanks are scored, similar to a test: a total score is computed for each job candidate and compared to other candidates' scores.
  • WABs are a good predictor of employee turnover for non-managerial jobs.
  • Biographical information blanks (BIBs)—similar to WABs, except that BIB items tend to be more personal and experiential, based on personal background and life experiences.
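Mechanically, a WAB score is just a weighted sum over the items a candidate endorses. The items and weights below are invented; in practice the weights would come from the empirical study relating each response to a criterion such as turnover.

```python
# Hypothetical WAB item weights; real weights would be derived empirically
# from the relationship between each response and the criterion.
WEIGHTS = {
    "two_plus_years_experience": 3,
    "lives_within_20_miles": 2,
    "referred_by_employee": 2,
}

def wab_score(responses):
    """Total score: sum the weights of the items the candidate endorses."""
    return sum(WEIGHTS[item] for item, endorsed in responses.items() if endorsed)

candidate = {
    "two_plus_years_experience": True,
    "lives_within_20_miles": False,
    "referred_by_employee": True,
}
print(wab_score(candidate))  # → 5
```

Each candidate's total is then compared against the other candidates' totals, exactly as with a scored test.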

  7. Reference Checks and Background Checks
  • Roles of reference and background checks:
  • verify information provided by the applicant regarding previous employment and experience (negligent-hiring issues)
  • assess the potential success of the person for the new job (validity = .26)
  • Potential problems:
  • lawsuits directed at previous employers for defamation of character, fraud, and intentional infliction of emotional distress stop many former employers from providing any information other than dates of employment and job titles
  • extremely positive letters of reference
  • More useful and valid methods are available:
  • a "letter of reference" or recommendation sent to former employers that is essentially a performance appraisal form
  • a rating of the job-related knowledge, skill, or ability of a candidate

  8. Fair Credit Reporting Act (FCRA)
  • Regulates how agencies provide “investigative consumer reports.”
  • State laws may also curb how reports may be compiled and used.
  • To comply with the law, an employer must:
  • give notice in writing to the job candidate
  • provide the candidate with a summary of rights under federal law
  • certify to the investigating company that it will comply with the law
  • provide a copy of the report in a letter to the candidate

  9. Personnel Testing: Cognitive Ability Tests
  • Measure aptitude, or general mental ability (GMA).
  • Aptitude—the capacity to acquire knowledge based on learning from multiple sources (e.g., Wonderlic, SAT, GRE, Wechsler); such measures are also known as general mental ability tests.
  • Cognitive ability tests have been found to predict performance for nearly all jobs (validity = .51).
  • Other cognitive assessments include achievement and knowledge-based tests.
  • Achievement tests—measure knowledge obtained in a standardized setting (e.g., exams in a college course).
  • Knowledge-based tests—assess a sample of what is required on the job (validity = .48).

  10. Personnel Testing: Cognitive Ability Tests
  • Cognitive ability tests combined with top-down hiring tend to cause disparate impact.
  • They can be difficult to justify even when job related, since there are often other devices that can be used instead of aptitude tests.
  • Avoiding disparate impact when using cognitive ability tests:
  • set a low cutoff score—increases the number of people who pass, but often defeats the purpose of using the measure at all
  • band scores—has less effect on disparate impact than the characteristics of the applicant pool do
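The slides do not specify a banding method, but one common approach treats all scores within roughly two standard errors of the difference of the top score as statistically equivalent. The standard deviation and reliability values below are assumptions for the sketch.

```python
import math

# Sketch of standard-error-based banding; sd and rxx are assumed values.
sd = 10.0         # standard deviation of test scores
rxx = 0.9         # test reliability
top_score = 95.0  # highest observed score

sem = sd * math.sqrt(1 - rxx)        # standard error of measurement
sed = sem * math.sqrt(2)             # standard error of the difference
band_floor = top_score - 1.96 * sed  # 95% band below the top score

# Scores at or above band_floor are treated as statistically
# indistinguishable from the top score.
print(round(band_floor, 1))  # → 86.2
```

Note how the band widens as reliability falls: less reliable tests justify treating a broader range of scores as equivalent, which is what allows selection within the band on other grounds.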

  11. Continued on next slide

  12. Reducing Adverse Impact

  13. Personnel Testing: Personality and Motivation
  • HRM professionals agree that ability means very little without the motivation to use it.
  • Motivation is a function of personality—a person's consistent pattern of behavior across situations.
  • Some people believe behavior is entirely situation-specific (context-based), while others believe that behavior is entirely inherent (personality-based).

  14. Personnel Testing: Personality and Motivation
  • The dominant theory of personality used in business is the Five Factor Model (FFM), in which personality is believed to consist of five meta-traits:
  • Extraversion—related to managerial success (r = .21)
  • Emotional stability—valid across all jobs (r = .24 for management)
  • Agreeableness—related to team success
  • Conscientiousness—valid across all jobs (r = .25 for management)
  • Openness to experience—receptive to training; related to creativity
  • Other traits/characteristics:
  • Emotional Intelligence (EI)—average validity of r = .23
  • Core Self-Evaluations (CSE)—incremental validity over the FFM

  15. Personnel Testing: Personality and Motivation
  • Other forms of personality and motivation testing: projective tests, which present ambiguous stimuli and hide or disguise the scoring.
  • Thematic Apperception Test (TAT)
  • Rorschach test
  • Miner Sentence Completion Scale
  • Graphology/handwriting analysis

  16. Personnel Testing: Predicting Particular Criteria
  • Predicting specific criteria with personality:
  • honesty/integrity tests
  • accident proneness/safety orientation
  • customer service orientation
  • tenure/turnover
  • Reducing voluntary turnover:
  • referrals reduce turnover
  • candidates with more contacts at the organization
  • tenure in previous jobs
  • intention to quit
  • disguised-purpose attitudinal scales (e.g., the JCQ)

  17. Drug Testing
  • Urinalysis testing:
  • immunoassay test—applies an enzyme solution to a urine sample and measures changes in the density of the sample
  • drawbacks: costs about twenty dollars per applicant, and the test is sensitive to some legal drugs as well as illegal drugs, so positive results are followed by a more complete and reliable confirmatory test
  • two causes of false positives: passive inhalation (a rare event, caused by involuntarily inhaling marijuana smoke) and laboratory blunders
  • Hair analysis:
  • more expensive, more reliable, and less invasive

  18. Performance Testing
  • Requires behavioral responses from test takers that are similar to the responses required on the job:
  • work samples
  • assessment centers—measure KASOCs through a series of work samples
  • situational judgment tests
  • Validity and adverse impact of performance testing:
  • validity similar to cognitive ability tests
  • assessment centers are more defensible in court than cognitive ability tests

  19. Performance Appraisals/Competency Assessments
  • There is little research on the validity of performance-based competency assessment, or performance appraisal in general, for predicting performance at a higher level.
  • Assessment center and 360-degree data have higher predictive validity for job performance than supervisor ratings for retail store managers.
  • Assessment center and 360-degree systems have less adverse impact than supervisor ratings for retail store managers.
  • 360-degree competency assessments have incremental validity over assessment center validity.
  Source: Hagan, C. M., Konopaske, R., Bernardin, H. J., & Tyler, C. L. (2007). Predicting assessment center performance with 360-degree, top-down, and customer-based competency assessments. Human Resource Management Journal.

  20. Interviews
  • Most widely used selection “test.”
  • Tend to be conducted toward the end of the hiring process.
  • Structured interviews are more valid (r = .51) than unstructured interviews (r = .31).
  • There are clear discrepancies between research findings and practice (see Figure 6-9).

  21. Underlying Bias in Interviews
  • Interviewer hindered by:
  • first impressions
  • stereotypes
  • lack of adequate job information
  • Interview procedure impaired by:
  • different information utilization
  • different questioning content
  • lack of interviewer knowledge regarding the job requirements

  22. Interviews and Discrimination
  • Issues prompting litigation/bad decisions:
  • vague, inadequate hiring standards
  • subjective, idiosyncratic interview evaluation criteria
  • biased questions unrelated to the job
  • inadequate interviewer training

  23. Interviews and Discrimination
  • Issues for determining interview discrimination:
  • discriminatory intent—occurs when interviewers ask non-job-related questions of only one protected group of job candidates and not of others; do certain questions convey an impression of underlying discriminatory attitudes?
  • discriminatory impact—occurs when the questions asked of all job candidates implicitly screen out a majority of protected-group members; does the interview inquiry result in a differential, or adverse, impact on protected groups? If so, are the interview questions valid and job related?

  24. Improving Interview Effectiveness
  • Keys to more reliable and valid interview formats:
  • base interview questions on job analysis rather than on psychological or trait information
  • use structured interviews—a standardized approach to systematically collecting and rating applicant information
  • carefully define what information is to be evaluated, and systematically evaluate that information using consistent rating standards
  • behavioral interviews have higher validity
  • if the interview is unstructured, use 3 (or more) independent interviewers

  25. Increasing Validity Through Interview Formats
  • Interview formats:
  • structured interviews—a standardized approach to systematically collecting and rating applicant information; can be conducted live or via telephone
  • highly structured—interviewers ask the same questions of all candidates in the same order; questions are based on a job analysis and are periodically reviewed for relevance, accuracy, completeness, ambiguity, and bias
  • semi-structured—general guidelines are provided, with recording forms for note-taking and summary ratings

  26. Increasing Validity Through Interview Formats
  • Interview formats:
  • group/panel interviews—multiple interviewers independently record and rate applicant responses during the interview session
  • situational or behavioral interviews—applicants describe how they would behave in specific situations; questions are based on the critical incident method of job analysis, each question is accompanied by a rating scale, and interviewers evaluate applicants according to the effectiveness or ineffectiveness of their responses
  • behavioral interviews focus on past behaviors and accomplishments, while situational questions ask for reactions to hypothetical situations; validity is stronger for behavioral than for situational interviews

  27. Employment Interviews: Research vs. Practice

  28. Integrating and Evaluating Candidate Information
  • Methods used to combine data from different tests:
  • weigh scores from each approach equally after standardizing the data (convert scores to a common scoring key): each applicant is given a standard score on each predictor, the standard scores are summed, and candidates are ranked according to the summed scores
  • weigh scores based on the extent to which each is correlated with the criterion of interest (e.g., performance, turnover)
  • use expert judgment regarding the weight that should be given to each source: experts review the content and procedures of each method and give each method a relative predictive weight that is then applied to applicant scores
  • individual/clinical assessment (IA)—a holistic, subjective combination of test data by a single individual; not as valid as actuarial/statistical weighting
  • Actuarial prediction has superior validity to clinical assessment or IA.
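The equal-weighting method can be sketched directly: standardize each predictor to z-scores, sum per candidate, and rank. The four candidates' scores below are invented for illustration.

```python
import numpy as np

# Hypothetical scores for four candidates on two predictors.
interview = np.array([6.0, 9.0, 7.0, 8.0])  # interview ratings
test = np.array([68.0, 58.0, 60.0, 64.0])   # cognitive test scores

def standardize(x):
    """Convert raw scores to z-scores (mean 0, SD 1)."""
    return (x - x.mean()) / x.std()

# Equal weighting: sum the z-scores, then rank candidates best-first.
composite = standardize(interview) + standardize(test)
ranking = np.argsort(-composite)
print(ranking.tolist())  # → [3, 1, 0, 2] (candidate indices, best first)
```

Standardizing first matters: without it, the test scores (on a 0-100 scale) would swamp the interview ratings (on a 1-10 scale). Criterion-based weighting would simply multiply each z-score by that predictor's validity before summing.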

  29. Summary of Staffing Decisions
