Human Resources Training and Individual Development



Presentation Transcript


  1. Human Resources Training and Individual Development February 11: Training Evaluation

  2. Objectives • Explain why evaluation is important • Identify & choose outcomes to evaluate • Identify how to measure outcomes • Discuss validity threats and experimental designs • Discuss issues to consider in evaluation design to improve the ability to make inferences

  3. What is Training Evaluation? Assessing the effectiveness of the training program in terms of the benefits to the trainees and the company • The process of collecting outcomes to determine whether the training program was effective • Deciding from whom, what, when, and how information should be collected

  4. Importance of Evaluation • Why evaluate training?

  5. Purpose of Evaluation • Summative Evaluation: collecting data to assess learning and other criteria • Formative Evaluation: collecting data to assess how to make the program better

  6. What Should Be Evaluated? • Cognitive Learning • Skills Learning • Affect • ‘Objective’ results • ROI

  7. How Do You Measure Outcomes? • Cognitive Learning • Skills • Affect • Results • ROI
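
  For the ROI outcome, a common formulation is net program benefits divided by program costs. A minimal sketch in Python, using purely hypothetical cost and benefit figures:

      # Sketch of a training ROI calculation; all figures are hypothetical.
      program_costs = 50_000.0     # design, delivery, materials, trainee time (assumed)
      program_benefits = 80_000.0  # benefits attributed to the training, e.g. productivity gains (assumed)

      net_benefits = program_benefits - program_costs
      roi_percent = net_benefits / program_costs * 100
      print(f"ROI = {roi_percent:.0f}%")  # (80,000 - 50,000) / 50,000 = 60%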

  8. Criteria for Evaluation • Criteria should be based on training objectives • All objectives should be evaluated • Criteria should be relevant (uncontaminated, not deficient), reliable, practical, and able to discriminate • Criteria should include reactions, learning (verbal, cognitive, attitudes), results & ROI

  9. Outcomes: Relevance, Contamination and Deficiency • Criteria relevance • Criterion contamination • Criterion deficiency

  10. Criterion Deficiency, Relevance, and Contamination • [Figure: two overlapping sets, "outcomes identified by needs assessment and included in training objectives" and "outcomes measured in evaluation"; the overlap (objective-related outcomes that are also measured) is relevance, objective-related outcomes that are not measured are deficiency, and measured outcomes unrelated to the objectives are contamination]

  11. Outcomes: Reliability, Discrimination and Practicality • Reliability • Discrimination • Practicality

  12. Evaluation Design: Purpose • What is the objective of the training program? • What do you want to accomplish with the evaluation?

  13. Evaluation Design • How do you determine whether the program has worked or not? • Measuring outcomes • The outcome constructs changed as expected • The training program, and not something else, was responsible for the change • The broader question: how can you infer causality?

  14. Causal Inferences • Knowledge is most applicable when it can be expressed in terms of cause-and-effect relationships • One thing causes another if: • Temporal precedence • Covariation • No alternative explanations • Control

  15. Experimental Designs • Test whether one or more manipulated variables have an effect on specific criteria when controlling for other factors • Treatment – manipulated variables • Two features • A. Timing of treatment and measurement ensures temporal precedence • B. Attempt to eliminate alternative explanations • If A and B, then covariation is interpreted as causality
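
  As an illustration of feature B, random assignment is one common way to try to rule out alternative explanations. A minimal sketch in Python, with hypothetical trainee names:

      # Randomly assign trainees to a treatment group (trained now) and a
      # comparison group (trained later or not at all); names are hypothetical.
      import random

      trainees = [f"trainee_{i:02d}" for i in range(1, 21)]
      random.seed(7)  # fixed seed only so the example is reproducible
      random.shuffle(trainees)

      midpoint = len(trainees) // 2
      treatment_group = trainees[:midpoint]
      comparison_group = trainees[midpoint:]
      print(treatment_group)
      print(comparison_group)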

  16. Experimental Designs: Threats to Validity • Threats to validity refer to a factor that will lead one to question either: • The believability of the study results (internal validity), or • The extent to which the evaluation results are generalizable to other groups of trainees and situations (external validity)

  17. The Correlation Coefficient • A relationship index • What is r for a perfect positive relationship? • How about a perfect negative relationship? • No relationship? • What is the interpretation of the squared correlation coefficient?
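
  A minimal numerical illustration of r and the squared correlation coefficient, using NumPy and made-up scores:

      # Pearson's r between two outcome measures; the data are illustrative only.
      import numpy as np

      learning_scores = np.array([55, 62, 70, 71, 80, 85, 90])
      job_performance = np.array([3.1, 3.4, 3.3, 3.9, 4.2, 4.4, 4.6])

      r = np.corrcoef(learning_scores, job_performance)[0, 1]
      print(f"r   = {r:.2f}")     # +1 = perfect positive, -1 = perfect negative, 0 = no relationship
      print(f"r^2 = {r**2:.2f}")  # proportion of variance shared by the two measures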

  18. Correlation and Causality • Ex. 1: Job satisfaction – job performance • Ex. 2: Reading – IQ

  19. Evaluation Design • No one "best way" • Some ways are definitely better than others • Rely on logic to design an evaluation program that will allow you to make inferences • [Figure: the purpose of the training and practical constraints both feed into the evaluation design, which should be built so that inferences can be made]

  20. Considerations for Evaluation Design • Use pre-test & post-test • Have a comparison group • Random Assignment
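
  Putting the three considerations together, a pretest/posttest design with a randomly assigned comparison group can be analyzed by comparing gain scores across groups. A minimal simulation sketch; group size, gains, and noise levels are all assumptions:

      # Simulated pretest-posttest comparison-group evaluation with random assignment.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      n = 40                                        # trainees per group (assumed)
      pre_train = rng.normal(50, 10, n)             # pretest, training group
      pre_comp = rng.normal(50, 10, n)              # pretest, comparison group
      post_train = pre_train + rng.normal(8, 5, n)  # assumed training gain
      post_comp = pre_comp + rng.normal(1, 5, n)    # practice/maturation only

      gain_train = post_train - pre_train
      gain_comp = post_comp - pre_comp
      t, p = stats.ttest_ind(gain_train, gain_comp)  # did the trained group gain more?
      print(f"mean gain, training   = {gain_train.mean():.1f}")
      print(f"mean gain, comparison = {gain_comp.mean():.1f}")
      print(f"t = {t:.2f}, p = {p:.4f}")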

  21. Evaluation • Why are the “best designs” not used?

  22. Self-Talk Training Example • Goal: increase the confidence of out-of-work managers • Subject pool: managers who have given up the job search • Half were given self-talk training and encouraged to continue the job search • Results: "self-talk training worked because the treatment group had a higher rate of reemployment" • Implications: "self-talk is useful in increasing reemployment of the hard-core unemployed" • What is wrong with this?

  23. Implications for Evaluation Design • Explicitly consider evaluation at all levels • reactions, learning (verbal, skills, attitudes), results, ROI • Make the links from the objectives clear • Specify the types of outcome measures (include examples) • Specify the evaluation strategy

  24. Next Time • Traditional training methods • Noe Chapter 7 • Broadwell and Dietrich (1996)