Impact Evaluation for Evidence-Based Policy Making


Presentation Transcript


  1. Impact Evaluation for Evidence-Based Policy Making. Arianna Legovini, Lead Specialist, Africa Impact Evaluation Initiative

  2. How to turn this child…

  3. …into this child

  4. Why Evaluate?
     • Fiscal accountability: allocate limited budget to what works best
     • Program effectiveness: managing by results, do more of what works
     • Political sustainability: negotiate budget; inform constituents

  5. Traditional M&E and Impact Evaluation
     • Monitoring tracks implementation efficiency (input → output)
     • Impact evaluation measures effectiveness (output → outcome)
     [Results-chain diagram: $$$ → INPUTS → OUTPUTS → OUTCOMES, with "monitor efficiency" spanning inputs to outputs, "evaluate effectiveness" spanning outputs to outcomes, and BEHAVIOR driving the output-to-outcome link]

  6. Question types and methods
     • M&E (monitoring & process evaluation), descriptive analysis: Is the program being implemented efficiently? Is it targeting the right population? Are outcomes moving in the right direction?
     • Impact evaluation, causal analysis: What was the effect of the program on outcomes? How would outcomes change under alternative program designs? Is the program cost-effective?

  7. Answer with traditional M&E or IE?
     • Are nets being delivered as planned? M&E
     • Do IPTs increase cognitive ability? IE
     • What is the correlation between HIV treatment and prevalence? M&E
     • How does HIV testing affect prevention behavior? IE

  8. Efficacy & Effectiveness
     • Efficacy: proof of concept; a pilot under ideal conditions
     • Effectiveness: at scale, under normal circumstances & capabilities. Lower or higher impact? Higher or lower costs?

  9. Use impact evaluation to…
     • Test innovations
     • Scale up what works (e.g. de-worming)
     • Cut/change what does not (e.g. HIV counseling)
     • Measure the effectiveness of programs (e.g. JTPA)
     • Find the best tactics to change people's behavior (e.g. bringing children to school)
     • Manage expectations

  10. What makes a good impact evaluation?

  11. Evaluation problem
     • Ideally: compare the same individual with & without the program at the same point in time
     • BUT we never observe the same individual both with and without the program at the same point in time
     • Formally, the impact of the program is: α = (Y | P=1) - (Y | P=0)
     • Example: how much does an anti-malaria program lower under-five mortality?
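
  A minimal sketch of this identification problem (Python with numpy; the numbers and variable names are illustrative assumptions, not from the slides): each child carries two potential outcomes, but the evaluator only ever observes one of them.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000

      # Hypothetical potential outcomes for an anti-malaria program:
      # y0 = under-five mortality risk without the program,
      # y1 = mortality risk with it (assumed true impact: -2 points).
      y0 = rng.normal(0.10, 0.02, n)
      y1 = y0 - 0.02

      treated = rng.random(n) < 0.5      # who actually receives the program
      y_obs = np.where(treated, y1, y0)  # all an evaluator ever sees: ONE outcome per child

      # alpha = (Y | P=1) - (Y | P=0) per child is unobservable in real data;
      # only the simulation lets us average the hidden difference.
      print(f"true average impact: {(y1 - y0).mean():+.3f}")   # -0.020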

  12. Solving the evaluation problem
     • Counterfactual: what would have happened without the program
     • Estimate the counterfactual, i.e. find a control or comparison group
     • Counterfactual criteria: treated & counterfactual groups have identical average characteristics at the start, and the only reason for the difference in outcomes is the intervention

  13. “Counterfeit” counterfactuals
     • Before and after: the same individual before the treatment
     • Non-participants: those who choose not to enroll in the program, or those who were not offered it
     • Problem: we cannot determine why some are treated and some are not

  14. Before-and-after example
     • Food aid: compare mortality before and after; mortality is observed to increase
     • Did the program fail? “Before” was a normal year, but “after” was a famine year
     • We cannot separate (identify) the effect of the food aid from the effect of the drought

  15. Before & After
     • Compare Y before & after the intervention: the before & after counterfactual is B, so the estimated impact is A - B
     • Controlling for time-varying factors gives the true counterfactual C and the true impact A - C
     • Here A - B under-estimates the true impact
     [Chart: outcome Y over time, with treatment between t-1 and t; B is the pre-treatment level, A the observed post-treatment outcome, and C the true counterfactual below B]
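
  To make the chart concrete, a small simulation under assumed numbers (Python with numpy; the drought shock and aid effect are hypothetical): the before-after estimate A - B understates the true impact A - C because it absorbs the famine shock.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 5_000

      # Hypothetical child-survival index in villages receiving food aid.
      before = rng.normal(100.0, 5.0, n)   # B: normal year, pre-program
      drought = -15.0                      # famine shock in the "after" year
      aid = 10.0                           # assumed true program impact

      after = before + drought + aid       # A: observed outcome with aid
      counterfactual = before + drought    # C: the "after" year WITHOUT aid

      print(f"before-after estimate (A - B): {(after - before).mean():+.1f}")          # about -5
      print(f"true impact           (A - C): {(after - counterfactual).mean():+.1f}")  # +10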

  16. Non-participants…
     • Compare non-participants to participants; the counterfactual is the non-participants' outcomes
     • Problem: why did they not participate?
     • Estimated impact: α_i = (Y_it | P=1) - (Y_kt | P=0)
     • Hypothesis: (Y_kt | P=0) = (Y_it | P=0), i.e. non-participants' outcomes stand in for what participants' outcomes would have been

  17. Exercise: why might participants and non-participants differ?
     • Mothers who came to the health unit for ORT vs. mothers who did not? Child had diarrhea; access to clinic
     • Communities that applied for funds for IRS vs. communities that did not? Coastal vs. mountain; epidemic vs. non-epidemic
     • People who receive ART vs. people who do not? People with HIV; access to clinic

  18. Health program example
     • Treatment is offered: who signs up? Those who are sick; areas with epidemics
     • Those who sign up have lower health status than those who do not
     • So healthy people/communities are a poor estimate of the counterfactual

  19. What's wrong?
     • Selection bias: people choose to participate for specific reasons
     • Often those reasons are directly related to the outcome of interest
     • We cannot separately identify the impact of the program from these other factors/reasons
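
  A sketch of this selection bias (Python with numpy; the enrollment rule and effect size are assumptions): because sicker people are likelier to sign up, the naive participant-vs-non-participant gap bears no resemblance to the true effect.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 20_000

      # Hypothetical baseline health score; sicker people enroll more often.
      health = rng.normal(50.0, 10.0, n)
      p_enroll = 1.0 / (1.0 + np.exp((health - 50.0) / 5.0))  # falls as health rises
      enrolled = rng.random(n) < p_enroll

      true_effect = 5.0
      outcome = health + true_effect * enrolled + rng.normal(0.0, 2.0, n)

      naive = outcome[enrolled].mean() - outcome[~enrolled].mean()
      print(f"participant vs non-participant gap: {naive:+.1f}")  # negative: program looks harmful
      print(f"true effect:                        {true_effect:+.1f}")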

  20. Need to know…
     • Why some get assigned to the treatment group and others to the control group
     • If those reasons are correlated with the outcome, we cannot separately identify the program impact from these other “selection” factors
     • The process by which the data are generated

  21. Possible Solutions…
     • Guarantee comparability of treatment and control groups, so the ONLY remaining difference is the intervention
     • How? Experimental design/randomization, or quasi-experiments: regression discontinuity, double differences, instrumental variables (see the double-difference sketch below)
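
  Of the quasi-experimental options above, double differences is the easiest to sketch (Python with numpy; the group gap, common trend, and effect size are made-up numbers): differencing twice removes both the level gap between groups and the shock common to both.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 10_000

      treated = rng.random(n) < 0.5   # program villages vs comparison villages
      level_gap = 8.0 * treated       # groups may start at different levels
      trend = 3.0                     # common shock hitting everyone over time
      effect = 6.0                    # assumed true program impact

      y_before = 50.0 + level_gap + rng.normal(0.0, 2.0, n)
      y_after = 50.0 + level_gap + trend + effect * treated + rng.normal(0.0, 2.0, n)

      # Double difference: (change among treated) - (change among comparison).
      dd = (y_after[treated] - y_before[treated]).mean() \
           - (y_after[~treated] - y_before[~treated]).mean()
      print(f"double-difference estimate: {dd:+.1f} (true effect: {effect:+.1f})")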

  22. These solutions all involve…
     • EITHER randomization: give everyone an equal chance of being in the control or treatment group; this guarantees that all factors/characteristics are, on average, equal between the groups, so the only difference is the intervention
     • OR transparent & observable criteria for assignment into the program
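
  A sketch of the randomization claim (Python with numpy; the health scores and effect size are assumptions): a coin flip balances pre-existing characteristics on average, so the simple treated-minus-control difference recovers the true impact.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 20_000

      health = rng.normal(50.0, 10.0, n)   # pre-existing characteristic
      treated = rng.random(n) < 0.5        # an equal chance for everyone eligible

      true_effect = 5.0
      outcome = health + true_effect * treated + rng.normal(0.0, 2.0, n)

      # Balance: pre-existing characteristics match on average across groups.
      print(f"baseline health, treated vs control: "
            f"{health[treated].mean():.1f} vs {health[~treated].mean():.1f}")
      # The difference in mean outcomes now isolates the intervention.
      print(f"estimated impact: {outcome[treated].mean() - outcome[~treated].mean():+.1f}")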

  23. Finding controls: opportunities
     • Budget constraints: eligible people who get the program = potential treatments; eligible people who do not = potential controls
     • Roll-out capacity: those who go first = potential treatments; those who go later = potential controls

  24. Finding controls: ethical considerations
     • Do not delay benefits: base the rollout on budget/capacity constraints
     • Equity: equally deserving populations deserve an equal chance of going first
     • Use a transparent & accountable method: give everyone eligible an equal chance; if ranking is based on criteria, the criteria should be measurable and public

  25. Thank you
