
Health Program Evaluation: Impact and Outcome Evaluation Designs


Presentation Transcript


  1. Health Program Evaluation: Impact and Outcome Evaluation Designs. CHSC 433, Module 5/Chapter 10. L. Michele Issel, PhD, UIC School of Public Health

  2. Objectives • Develop a sampling strategy • Develop a design for an effect evaluation • Appreciate the trade-offs between rigor and costs

  3. Avoid the 3rd Failure In addition to the two failures discussed previously, design the effect evaluation to avoid a third: evaluation failure, the failure to detect program effects because of poor design, measurement, or sampling in the evaluation itself.

  4. Rigor Considerations in Effect Evaluation • Sampling strategy • Validity and Reliability • Related to design • Related to method • Related to measures • Power to find effects

  5. Sampling • Plan for selecting participants in the evaluation • Plan for selecting a control or comparison group • Plan for reaching and accessing the evaluation participants or groups • Together, these plans assure generalizability (see the sketch below)
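
As a concrete illustration, the following minimal Python sketch covers the first two plans. The rosters, names, and sample sizes are hypothetical, invented purely for illustration.

    # Minimal sketch of a sampling plan; rosters and sizes are hypothetical.
    import random

    random.seed(42)  # fixed seed so the selection is reproducible

    participants = [f"participant_{i}" for i in range(500)]   # hypothetical program roster
    target_audience = [f"target_{i}" for i in range(2000)]    # hypothetical non-participants

    # Plan 1: simple random sample of program participants for the evaluation
    evaluation_sample = random.sample(participants, k=100)

    # Plan 2: comparison group drawn from the non-participating target audience
    comparison_group = random.sample(target_audience, k=100)

    print(len(evaluation_sample), len(comparison_group))  # 100 100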

  6. Sample Size Influences • Cost of the evaluation • Time and effort required • Power to detect a difference made by the program

  7. Power • Statistical power begins with sample size • Power is the degree of certainty that the evaluation will be able to find a measurable and statistically significant effect of the intervention • Power is a trade-off between the cost of a larger sample and the certainty of detecting an effect (a power calculation is sketched below)
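
To make the trade-off concrete, the sketch below uses the TTestIndPower class from statsmodels to solve for the per-group sample size needed to detect an effect. The effect size (0.4), power (0.80), and alpha (.05) are assumed values chosen only for illustration.

    # Minimal power-calculation sketch for a two-group comparison; requires statsmodels.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Sample size per group needed to detect a standardized effect of 0.4
    # with 80% power at alpha = .05
    n_per_group = analysis.solve_power(effect_size=0.4, power=0.80, alpha=0.05)
    print(f"~{n_per_group:.0f} participants per group")

    # Conversely: the power actually achieved if only 50 per group can be recruited
    achieved = analysis.solve_power(effect_size=0.4, nobs1=50, alpha=0.05)
    print(f"power with n=50 per group: {achieved:.2f}")

A smaller sample is cheaper but, as the second call shows, it lowers the certainty of detecting a real program effect.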

  8. Not so Random • Randomization: randomly assign participants to receive the program or not • Random selection: randomly select from among program participants and the non-participating target audience for inclusion in the evaluation (the two are contrasted in the sketch below)
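
A short Python sketch makes the distinction concrete; the enrollee pool and counts are hypothetical.

    # Minimal sketch contrasting random assignment with random selection.
    import random

    random.seed(7)
    enrollees = [f"person_{i}" for i in range(200)]  # hypothetical enrollee pool

    # Randomization: each person is randomly ASSIGNED to the program or a control arm
    assignment = {person: random.choice(["program", "control"]) for person in enrollees}

    # Random selection: a subset of the pool is randomly SELECTED for the evaluation
    evaluation_sample = random.sample(enrollees, k=60)

    print(sum(arm == "program" for arm in assignment.values()), len(evaluation_sample))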

  9. Effect Evaluation Designs Refer to research or epidemiology textbook chapters on experimental, quasi-experimental, and non-experimental designs

  10. Design Considerations • Ethical issues with random selection or random assignment • Design to assess the impact objectives • Timing of data collection relative to receiving the intervention (a pretest/posttest sketch follows) • Who are the “controls”?
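
To show how the timing of data collection maps onto a design, here is a minimal one-group pretest/posttest sketch using scipy. The outcome scores are fabricated for illustration, and a real effect evaluation would usually add a control or comparison group to rule out rival explanations.

    # Minimal one-group pretest/posttest sketch; scores are fabricated. Requires scipy.
    from scipy import stats

    # Hypothetical outcome scores collected before and after the intervention
    pre_scores = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
    post_scores = [16, 18, 15, 17, 15, 19, 14, 18, 17, 16]

    # Paired t-test: did scores change after participants received the intervention?
    t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")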

  11. Design Cost vs Rigor

  12. Measure Program Impact Across the Pyramid
