
The Criterion Choices


Presentation Transcript


  1. The Criterion Choices Dr. Steve, Training & Development, INP6325

  2. Why Evaluate Training? What can be gained from evaluating training? • Determine effectiveness of program. • Demonstrate benefits of training to top management and stakeholders • Demonstrate job-relatedness of training (legal implications) • Research value in aiding future training development • Ability to make personnel decisions (promotion, retention, etc.)

  3. Why not evaluate training? • In a survey of 611 organizations, 92% claimed to evaluate training programs; however, the vast majority of these evaluations captured only trainee reactions rather than learning or transfer. What’s preventing them? • Evaluation not often emphasized by management • Training directors do not know how • HR may not understand importance • View that evaluation is expensive and risky

  4. Determining Criteria To determine criteria, one must know the purpose of the evaluation: • To predict job success, evaluate the relationship between training performance and job performance • To choose among programs, determine whether one training program is better than another, or better than no formal training at all

  5. Pessimistic view of criteria selection Guion (1961) • I/O psychologist has a hunch about a problem • Reads a vague, ambiguous job description • Forms a fuzzy concept of the ultimate criterion • Develops a set of measures that can be combined to approximate the ultimate criterion • Judges the relevance of each measure: deficiency & contamination • Discovers the data required for the measure are not available in company personnel files (and never will be) • Selects the best available criterion

  6. Evaluating training effectiveness • To assess effectiveness of training, must: • Develop criteria that assess trainees’ learning of the KSAs necessary for the job • Assess performance at the end of training (progress) • Assess performance after a period of time on the job (transfer)

  7. Criterion Selection Ultimate criterion: better time management. Actual criterion: files reports on time, avoids overtime, meets deadlines. Comparing the KSAs identified in the needs assessment with those captured by the criterion yields four regions: • A – KSAs in neither the needs assessment nor the criterion • B – Criterion deficiency: KSAs in the needs assessment but not the criterion • C – Criterion contamination (error + bias): KSAs in the criterion but not the needs assessment • D – Criterion relevance: KSAs in both the needs assessment and the criterion
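
To make the four regions concrete, here is a minimal sketch that treats the needs assessment and the criterion as sets of KSAs and computes deficiency, contamination, and relevance as set operations. The KSA labels are invented for illustration, not taken from the slide:

```python
# Hypothetical KSA sets for illustration; labels are invented, not from the slide.
needs_assessment = {"files reports on time", "meets deadlines", "prioritizes tasks"}
criterion = {"files reports on time", "meets deadlines", "avoids overtime"}

# B: required by the job but never measured by the criterion
deficiency = needs_assessment - criterion
# C: measured by the criterion although the job does not require it (error + bias)
contamination = criterion - needs_assessment
# D: the overlap the evaluation actually wants to capture
relevance = needs_assessment & criterion

print("Deficiency (B):", deficiency)        # {'prioritizes tasks'}
print("Contamination (C):", contamination)  # {'avoids overtime'}
print("Relevance (D):", relevance)          # {'files reports on time', 'meets deadlines'}
```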

  8. Criterion Deficiency • Criterion Deficiency – the training program is intended to teach certain KSAs required for the job, but the criteria used to evaluate training are missing some of those KSAs • Example: A postal clerk’s job requires sorting mail, weighing packages, and knowledge of pricing, but mail-sorting skill is not part of the evaluation criterion

  9. Criterion Contamination • Criterion Contamination – extraneous variables included in the criteria that were not part of the training program • Opportunity bias – some individuals may have a greater opportunity for successful job performance that has nothing to do with training • Example: A salesperson’s performance may be affected by geographic location (opportunity) as much as by trained salesmanship
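
One hedged way to see opportunity bias, assuming entirely hypothetical sales data: compare the raw trained-vs-untrained difference (contaminated by territory) against a territory-adjusted difference in which each rep's sales are expressed relative to their territory's mean:

```python
from statistics import mean

# Hypothetical sales records; the territory effect is invented for illustration.
records = [
    {"rep": "A", "territory": "urban", "trained": True,  "sales": 120},
    {"rep": "B", "territory": "urban", "trained": True,  "sales": 118},
    {"rep": "C", "territory": "urban", "trained": False, "sales": 112},
    {"rep": "D", "territory": "rural", "trained": True,  "sales": 68},
    {"rep": "E", "territory": "rural", "trained": False, "sales": 62},
    {"rep": "F", "territory": "rural", "trained": False, "sales": 60},
]

def effect(rows):
    trained = [r["score"] for r in rows if r["trained"]]
    untrained = [r["score"] for r in rows if not r["trained"]]
    return mean(trained) - mean(untrained)

# Raw comparison is contaminated: trained reps happen to hold better territories.
for r in records:
    r["score"] = r["sales"]
print(f"Raw training effect: {effect(records):.1f}")            # 24.0

# Subtract each territory's mean to remove (some of) the opportunity component.
territory_mean = {t: mean(r["sales"] for r in records if r["territory"] == t)
                  for t in {r["territory"] for r in records}}
for r in records:
    r["score"] = r["sales"] - territory_mean[r["territory"]]
print(f"Territory-adjusted training effect: {effect(records):.1f}")  # 6.2
```

In this made-up data, most of the apparent 24-point training effect is actually location; the adjusted effect is far smaller.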

  10. Criterion Reliability • Criterion Reliability – consistency of the criterion measure • Example: inter-rater reliability of supervisory ratings • Reliability depends on: • Competence of the judges • Simplicity of the behaviors rated • Overtness of the behaviors rated • Clarity of the operational definition of the behaviors
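
A minimal sketch of checking inter-rater reliability, assuming hypothetical ratings from two supervisors (Pearson correlation via the standard library; Python 3.10+):

```python
from statistics import correlation

# Hypothetical performance ratings (1-7 scale) from two supervisors for eight trainees.
rater_1 = [5, 6, 4, 7, 3, 5, 6, 2]
rater_2 = [4, 6, 5, 7, 3, 4, 6, 3]

# Pearson correlation between the raters; values near 1.0 indicate consistent judges.
r = correlation(rater_1, rater_2)
print(f"Inter-rater reliability (Pearson r): {r:.2f}")
```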

  11. Two Views of Criterion Development • Composite criterion: combine component measures into a single score, e.g., A (mastery of needs assessment) + B (knowledge of task analysis techniques) + C (skill at presenting results) = X; Grade: A if X > 90%. Problem: not very diagnostic; one may not know where one succeeded or failed • Multiple criteria: evaluate each component separately, e.g., mastery of needs assessment, skill at writing task statements. Problem: a trainee may meet some criteria but not others; what constitutes success?
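
A tiny sketch contrasting the two views, with hypothetical component scores. X is computed as an average so it stays on a percent scale (the slide's A + B + C sum is equivalent up to scaling), and the slide's 90% threshold is kept:

```python
# Hypothetical component scores (percent) for one trainee.
components = {
    "A: mastery of needs assessment": 95,
    "B: knowledge of task analysis techniques": 70,
    "C: skill at presenting results": 98,
}

# Composite view: a single equally weighted score X; compact but not diagnostic.
x = sum(components.values()) / len(components)
print(f"Composite X = {x:.1f}% -> grade {'A' if x > 90 else 'below A'}")

# Multiple-criteria view: each component judged separately (diagnostic, but it
# leaves open what overall 'success' means).
for name, score in components.items():
    print(f"{name}: {'pass' if score >= 90 else 'needs work'} ({score}%)")
```

Here the composite hides that component B alone dragged the grade below A; the multiple-criteria view shows exactly where the trainee failed.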

  12. Proximal vs. Distal Criteria • Proximal – short-term criteria • Distal – long-term criteria • Example: Political training • Proximal: performance during the campaign • Distal: performance in office (e.g., USA TODAY/CNN/Gallup poll results on the question “Do you approve or disapprove of the way George W. Bush is handling his job as president?”)

  13. Levels of Criteria (ordered roughly from proximal to distal) • Reaction – opinions of trainees (survey): affective reactions, utility judgments • Learning – mastery of training material (test): immediate knowledge, knowledge retention, behavior/skill demonstration • Behavioral – trainee job performance (ratings): transfer • Results – organizational payoff from training ($$$$)

  14. Reaction Criteria • Guidelines for Reaction Criteria Development • Base questions on information from the needs assessment • Questionnaire should include quantifiable data (do not use ONLY open-ended questions) • Questionnaire should be anonymous • Should include SOME open-ended questions • Pilot test the questionnaire for length and comprehension • Benefit: provides information from all trainees, not just those with extreme opinions • Disadvantage: reactions may have nothing to do with eventual performance; still, if training is perceived as poor, it is less likely to be taken seriously or its skills retained

  15. Reaction Criteria Example
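
As a minimal sketch of what scoring such a reaction questionnaire might look like, assuming hypothetical 5-point Likert items and responses (per the guidelines above, quantifiable and collected anonymously from all trainees):

```python
from statistics import mean

# Hypothetical anonymous reaction questionnaire: 5-point Likert responses
# (1 = strongly disagree, 5 = strongly agree), one list of responses per item.
responses = {
    "The trainer presented the material clearly": [4, 5, 3, 4, 5],        # affective reaction
    "I will be able to apply this training on my job": [3, 4, 4, 2, 4],   # utility judgment
}

# Quantifiable summary across ALL trainees, not just those with extreme opinions.
for item, scores in responses.items():
    print(f"{item}: mean = {mean(scores):.1f} (n = {len(scores)})")
```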

  16. Learning Criteria • Test at end of training addressing material covered during training • Example: Can receptionist trainee recall the steps to transfer a phone call on the company’s phone system? • Pre-test/Post-test comparison
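
A minimal sketch of the pre-test/post-test comparison, with hypothetical knowledge-test scores and a paired t-test (scipy assumed available):

```python
from statistics import mean
from scipy.stats import ttest_rel

# Hypothetical knowledge-test scores (percent correct) for the same ten trainees,
# measured before and after training.
pre  = [55, 60, 48, 70, 62, 58, 65, 50, 72, 61]
post = [78, 82, 70, 85, 80, 75, 88, 69, 90, 79]

# Paired t-test: did scores improve reliably from pre- to post-training?
result = ttest_rel(post, pre)
print(f"Mean gain: {mean(post) - mean(pre):.1f} points")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```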

  17. Behavior Criteria • Criteria should come directly from Task and KSA analyses. • Use experimental methods to demonstrate improvements due to training. • Assess whether performance goals are met. • Example: Bus Driver performance: • Stops and restarts without rolling back • Tests brakes at tops of hills • Uses mirrors to check traffic • Signals following traffic • Stops before crossing sidewalk when coming out of driveway • Stops clear of pedestrian crosswalks

  18. Results Criteria Measure of the training program in terms of meeting organizational goals • Money saved = lower turnover, lower absenteeism, improved morale, improved productivity, etc. • Utility – whether training saves more money than it costs • Ex: with no formal training in place, senior workers lose time showing junior workers what to do

  19. Utility Analysis U = (T x N x dt x SDy) – (N x C) • U = dollar value of the training program = (Value) – (Cost) • T = number of years the training effect on performance lasts • N = number of trainees • dt = true effect size of training on performance = (X̄e – X̄c) / (SD x √ryy), where X̄e and X̄c are the mean performance of the trained and untrained groups, SD is the standard deviation of the untrained group, and ryy is the reliability of the criterion measure • SDy = standard deviation of job performance in dollars for the untrained group • C = cost per trainee • Example: U = (2 yrs x 100 trainees x .5 x $5,000) – (100 trainees x $200) = $500,000 – $20,000 = $480,000 over 2 years
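
A minimal sketch implementing the slide's utility formula and reproducing its worked example (all parameter values taken from the slide):

```python
def utility(T, N, dt, SDy, C):
    """Dollar utility of a training program: U = (T * N * dt * SDy) - (N * C).

    T   -- years the training effect on performance lasts
    N   -- number of trainees
    dt  -- true effect size (standardized trained-vs-untrained difference)
    SDy -- standard deviation of job performance in dollars (untrained group)
    C   -- cost per trainee
    """
    value = T * N * dt * SDy
    cost = N * C
    return value - cost

# Worked example from the slide: $500,000 - $20,000 = $480,000 over 2 years.
print(f"U = ${utility(T=2, N=100, dt=0.5, SDy=5_000, C=200):,.0f}")
```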

  20. Utility Analysis Reasons for NOT using utility analysis • Data may not be readily available, or may be unreliable • The organization may be seeking non-monetary benefits of training • Other variables may confound the results

  21. Choosing Criteria Conclusions: • Reaction information is important for knowing whether trainees will accept the training program, but it does not translate into effectiveness in terms of learning, transfer, or monetary savings • Learning criteria are good predictors of results • Learning criteria are modest predictors of transfer • Behavior criteria are modest predictors of results

  22. Other Criteria Concerns • Can’t judge effectiveness of training strictly on outcomes (summative evaluation); must also look at process (formative evaluation) • Process measures tell us the source of outcome changes • Outcome alone is not diagnostic • Important variables affecting process include differences between trainers, settings, student samples, motivation of groups, etc.

  23. Subjective vs. Objective Measures • Subjective – ratings, opinions • Problem of rater biases (halo, central tendency, leniency) • Easy to use • Was the rater well trained to make the ratings? • Subjective ratings may be improved by training the rater • Objective – countable, observable measures such as production, absences, defects missed, etc. • Problem of opportunity bias • Objective measures should be used when possible, but one must be aware of potential contamination and account for it
