
Evaluating Training Programs


Presentation Transcript


  1. Evaluating Training Programs

  2. How can training programs be evaluated? • Measures used in evaluating training programs • Various ways of designing the evaluation procedures • A description of the measurement process itself

  3. Donald Kirkpatrick • Kirkpatrick developed a model of training evaluation in 1959 • Arguably the most widely used approach • Simple, Flexible and Complete • 4-level model

  4. Measures of Training Effectiveness • REACTION - how well trainees like a particular training program. Evaluating in terms of reaction is the same as measuring trainees' feelings. It doesn't measure any learning that takes place. And because reaction is easy to measure, nearly all training directors do it.

  5. Reaction (cont) • It's important to measure participants' reactions in an organized fashion using written comment sheets that have been designed to obtain the desired reactions. • The comment sheets should also be designed so that responses can be tabulated and quantified. • The training coordinator or a trained observer should make their own appraisal of the training to supplement participants' reactions. • The combination of the two evaluations is more meaningful than either one by itself.
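The tabulation step above can be sketched in a few lines. This is a hypothetical example, not part of the original slides: the item names and ratings are invented, and it assumes each comment sheet rates the program on a 1-to-5 scale.

```python
# Hypothetical sketch: tabulating Likert-scale reaction sheets so reactions
# can be quantified. Item names and ratings are illustrative only.
from statistics import mean

# Each comment sheet rates the program 1 (poor) to 5 (excellent) per item.
sheets = [
    {"content": 5, "instructor": 4, "materials": 4},
    {"content": 4, "instructor": 5, "materials": 3},
    {"content": 5, "instructor": 5, "materials": 4},
]

def tabulate(sheets):
    """Return the mean rating per item across all comment sheets."""
    items = sheets[0].keys()
    return {item: round(mean(s[item] for s in sheets), 2) for item in items}

print(tabulate(sheets))
```

Quantified means like these are what make it possible to compare reactions across sessions or instructors rather than relying on anecdotal impressions.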

  6. Reaction (cont) • When training directors effectively measure participants' reactions and find them favorable, they can feel proud. But they should also feel humble; the evaluation has only just begun. • They may have done a masterful job of measuring reactions, but that gives no assurance that any learning has taken place. Nor is it an indication that participants' behavior will change because of the training. And any indication of results that can be attributed to the training is still further away.

  7. Collecting reaction measures immediately after training is important: • Memory distortion can affect measures taken at a later point. • There is often a low return rate for questionnaires mailed to people long after they have completed the training.

  8. Learning • Defined in a limited way: What principles, facts, and techniques were understood and absorbed by trainees? (We're not concerned with on-the-job use of the principles, facts, and techniques.)

  9. Here are some guideposts for measuring learning: • Measure the learning of each trainee so that quantitative results can be determined. • Use a before-and-after approach so that learning can be related to the program. • As much as possible, the learning should be measured on an objective basis. • Where possible, use a control group (not receiving the training) to compare with the experimental group that receives the training. • Where possible, analyze the evaluation results statistically so that learning can be proven in terms of correlation or level of confidence.
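The guideposts above — measure each trainee, use a before-and-after approach, and compare against a control group — can be sketched as a simple gain-score computation. The scores below are invented for illustration; a real study would also test whether the difference is statistically significant, per the last guidepost.

```python
# Illustrative sketch of a before-and-after learning measure with a control
# group. All scores are hypothetical.
from statistics import mean

pre_exp  = [52, 48, 60, 55]   # experimental group, pre-test
post_exp = [70, 66, 75, 72]   # experimental group, post-test
pre_ctl  = [50, 54, 58, 49]   # control group (no training), pre-test
post_ctl = [53, 55, 57, 52]   # control group, post-test

def mean_gain(pre, post):
    """Average per-trainee improvement from pre-test to post-test."""
    return mean(p2 - p1 for p1, p2 in zip(pre, post))

gain_exp = mean_gain(pre_exp, post_exp)   # change in the trained group
gain_ctl = mean_gain(pre_ctl, post_ctl)   # change without training
print(gain_exp, gain_ctl, gain_exp - gain_ctl)
```

The excess of the experimental group's gain over the control group's gain is the part of the learning that can plausibly be related to the program rather than to outside factors.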

  10. Behavior • Evaluation of training in terms of on-the-job behavior is more difficult than reaction and learning evaluations, because one must consider many factors. Here are several guideposts for evaluating training in terms of behavioral changes: • Conduct a systematic appraisal of on-the-job performance on a before-and-after basis. • The appraisal of performance should be made by one or more of the following groups (the more the better): trainees, trainees' supervisors, subordinates, peers, and others familiar with trainees' on-the-job performance. • Conduct a statistical analysis to compare before-and-after performance and to relate changes to the training. • Conduct a post-training appraisal three months or more after training so that trainees have an opportunity to put into practice what they learned. Subsequent appraisals may add to the validity of the study.

  11. Results • The objectives of most training programs can be stated in terms of the desired results, such as reduced costs, higher quality, increased production, and lower rates of employee turnover and absenteeism. • It's best to evaluate training programs directly in terms of desired results. But complicated factors can make it difficult to evaluate certain kinds of programs in terms of results. • It's recommended that training directors begin to evaluate using the criteria in the first three steps: reaction, learning, and behavior.

  12. Utility Analysis • Cost-benefit analysis: compare costs of training program with the benefits received (both monetary and non-monetary) • Costs: direct costs, indirect costs, overhead, development costs, and participant compensation • Benefits: improvement in trainee attitudes, job performance, quality of work, creativity
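The cost-benefit comparison above reduces, for the monetary side, to a simple return-on-investment calculation. The figures below are hypothetical, and non-monetary benefits (attitudes, creativity) would still need to be reported separately.

```python
# Hedged sketch of a cost-benefit (ROI) calculation for a training program.
# All dollar figures are invented for illustration.
costs = {
    "development": 12_000,               # design of the program
    "direct": 8_000,                     # trainers, facilities, materials
    "indirect": 3_000,                   # administrative support
    "overhead": 2_000,
    "participant_compensation": 10_000,  # wages for time spent in training
}
monetary_benefit = 55_000  # estimated value of improved job performance

total_cost = sum(costs.values())
net_benefit = monetary_benefit - total_cost
roi_pct = 100 * net_benefit / total_cost
print(total_cost, net_benefit, round(roi_pct, 1))
```

Folding participant compensation into the cost side, as the slide's cost list does, often changes the picture substantially: time off the job is usually one of the largest costs of training.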

  13. How Should a Training Evaluation Study be Designed? • Case Study • Training >>>> Measures Taken After Training • Problem: no measures are taken prior to training, so there is no way to know whether the training brought about any change. • Pretest-Posttest Design • Measures Taken Before Training >>> Training >>> Measures Taken After Training • Of little value on its own, as a multitude of unknown factors could be the real cause of any change in performance.

  14. A. PRETEST - POSTTEST METHOD 1. Most commonly used method in training. 2. Does not clearly identify training as the reason for improved knowledge or performance. B. AFTER-ONLY DESIGN WITH A CONTROL GROUP 1. Control group is used to determine whether training made a difference. 2. No Pretests are given. 3. Both groups take posttest after training. 4. The after-only design with a control group allows trainers to tell whether changes are due to their programs.

  15. C. PRETEST-POSTTEST DESIGN WITH A CONTROL GROUP 1. Employees are randomly assigned to a treatment group or a control group. 2. Only treatment group receives training. 3. Both groups take a posttest. 4. Advantages a. Pretest results ensure equality between the groups. b. Statistical analysis determines whether differences in posttest results are significant. D. TIME-SERIES DESIGN 1. Uses a number of measures both before and after training. 2. Purpose is to establish individuals' patterns of behavior and then see whether a sudden leap in performance followed a training program. 3. Weakness: because of relatively long time period covered, changes in behavior can be attributed to circumstances other than the program.
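The time-series design in item D can be sketched as a comparison of the average performance level before and after the program. The weekly scores below are invented; a real analysis would also check whether the pre-training series was stable, since the slide's stated weakness is that other events over the long observation window can produce the same jump.

```python
# Sketch of a time-series design: repeated measures before and after training.
# Weekly performance scores are hypothetical.
from statistics import mean

weekly_scores = [62, 60, 63, 61, 62,   # five weeks before training
                 70, 72, 71, 73, 72]   # five weeks after training

before, after = weekly_scores[:5], weekly_scores[5:]
jump = mean(after) - mean(before)      # a sudden leap following training?
print(mean(before), mean(after), round(jump, 1))
```

The point of taking several measures on each side, rather than one, is to establish the individual's normal pattern of behavior so a genuine leap can be distinguished from week-to-week noise.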

  16. More Sophisticated Evaluation Designs • Solomon Four Group Design – ideal for ascertaining whether a training intervention had the desired effect on trainee behavior. Unlike the designs discussed above, this design involves the use of more than one control group.
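The structure of the Solomon four-group design can be laid out as data. This sketch assumes the standard form of the design (the slide does not spell out the groups): two groups are pretested and two are not, and within each pair only one receives training, so the effect of the pretest itself can be separated from the effect of the training.

```python
# Hypothetical sketch of the Solomon four-group design's structure.
solomon_groups = [
    {"group": 1, "pretest": True,  "training": True,  "posttest": True},
    {"group": 2, "pretest": True,  "training": False, "posttest": True},
    {"group": 3, "pretest": False, "training": True,  "posttest": True},
    {"group": 4, "pretest": False, "training": False, "posttest": True},
]

# More than one control group, as the slide notes: one pretested, one not.
controls = [g["group"] for g in solomon_groups if not g["training"]]
print(controls)
```

Comparing groups 1 and 3 (both trained, only one pretested) reveals whether taking the pretest sensitized trainees and inflated the apparent training effect.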

  17. Evaluating statistically • The preferred choice for analyzing a training intervention, when considering statistical power and lower costs, is an analysis of variance (ANOVA) with an after-only control-group design. • The next best approach is an analysis of covariance (ANCOVA) using the pretest score as a covariate.
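For the after-only control-group case, the ANOVA mentioned above reduces to comparing between-group and within-group variability. The following hand-rolled sketch computes the F statistic from first principles; the scores are invented, and in practice one would use a statistics package and look up (or compute) the p-value for F.

```python
# Hand-rolled one-way ANOVA F statistic for an after-only control-group
# design. Post-test scores are hypothetical.
from statistics import mean

trained   = [78, 74, 80, 76]   # experimental group post-test scores
untrained = [70, 68, 73, 69]   # control group post-test scores

def anova_f(*groups):
    """One-way ANOVA: F = between-group mean square / within-group mean square."""
    grand = mean(x for g in groups for x in g)
    k = len(groups)                        # number of groups
    n = sum(len(g) for g in groups)        # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

print(round(anova_f(trained, untrained), 2))
```

A large F means the difference between the trained and untrained groups is big relative to the scatter within each group — the evidence the after-only design relies on, since there are no pretest scores to fall back on.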

  18. Self-report • Trainees are asked to evaluate themselves on variables related to the purpose of the training. • Self-report measures complicate the measurement of change because of the problems involved in defining change itself. • 3 types of change with self-report data: • Alpha change • Beta change • Gamma change

  19. Barriers that Discourage Training Evaluation (p. 161-163) • Top management doesn't usually require evaluation • Most senior-level training managers don't know how to go about evaluating training programs • Senior-level training managers don't know what to evaluate • Evaluation is perceived as costly and risky
