
Impact Evaluation Methods: Difference in difference & Matching

Presentation Transcript


  1. Impact Evaluation Methods: Difference in Difference & Matching. David Evans, Impact Evaluation Cluster, AFTRL. Africa Program for Education Impact Evaluation. Slides by Paul J. Gertler & Sebastian Martinez. AFRICA IMPACT EVALUATION INITIATIVE, AFTRL

  2. Measuring Impact • Randomized Experiments • Quasi-experiments • Randomized Promotion – Instrumental Variables • Regression Discontinuity • Double differences (Diff in diff) • Matching

  3. Case 5: Diff in diff • Compare the change in outcomes between the treatment and non-treatment (comparison) groups • Impact is the difference in the changes in outcomes • Impact = (Yt1 - Yt0) - (Yc1 - Yc0), where Yt and Yc are the treatment and comparison group outcomes at baseline (0) and follow-up (1)
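A minimal sketch of this double-difference calculation, using made-up group means (the numbers below are illustrative, not taken from the deck):

```python
# Difference-in-differences from four group means (hypothetical numbers).
# Yt0, Yt1: treatment-group mean outcome at baseline and follow-up.
# Yc0, Yc1: comparison-group mean outcome at baseline and follow-up.
Yt0, Yt1 = 40.0, 75.0
Yc0, Yc1 = 38.0, 48.0

# Impact = (Yt1 - Yt0) - (Yc1 - Yc0): the treatment group's change,
# net of the change the comparison group experienced over the same period.
impact = (Yt1 - Yt0) - (Yc1 - Yc0)
print(impact)  # 25.0 with these made-up numbers
```

With individual-level data the same estimate can be obtained from a regression of the outcome on a treatment dummy, a post-period dummy, and their interaction; the coefficient on the interaction is the diff-in-diff impact.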

  4. [Figure: Average treatment effect. Outcome plotted over time for the treatment and control groups, with the time of treatment marked.]

  5. [Figure: Measured effect without pre-measurement. Outcome plotted over time for the treatment and control groups, with the time of treatment marked.]

  6. [Figure: Estimated average treatment effect vs. true average treatment effect. Outcome plotted over time for the treatment and control groups, with the time of treatment marked.]

  7. Diff in diff • What is the key difference between these two cases? • The fundamental assumption is that trends (slopes) are the same in the treatment and control groups (sometimes true, sometimes not) • Need a minimum of three points in time to verify this and estimate the treatment effect (two of them pre-intervention), as sketched below
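The point about needing two pre-intervention observations can be illustrated with a small check of pre-intervention trends; a hedged sketch with hypothetical group means:

```python
# Pre-trend check with two pre-intervention rounds (hypothetical means).
treat_pre = [41.0, 43.0]     # treatment-group mean at rounds t-2 and t-1
control_pre = [37.0, 39.5]   # comparison-group mean at the same rounds

# Pre-intervention slope (change per round) in each group.
treat_trend = treat_pre[1] - treat_pre[0]        # 2.0
control_trend = control_pre[1] - control_pre[0]  # 2.5

# Similar slopes support the equal-trends assumption behind diff in diff;
# a large gap is a warning that the diff-in-diff impact may be biased.
print(treat_trend, control_trend, treat_trend - control_trend)
```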

  8. [Figure: Average treatment effect with three observations. Outcome plotted over time for the treatment and control groups, showing a first and second (pre-intervention) observation and a third observation after treatment.]

  9. Examples • Two neighboring school districts • School enrollment or test scores are improving at the same rate before the program (even if at different levels) • One district receives the program, one does not • Neighboring _______

  10. Case 5: Diff in Diff

      Mean change in CPC:
        Not Enrolled: 8.26    Enrolled: 35.92    t-stat: 10.31

      Estimated Impact on CPC:
        Linear Regression:              27.66** (2.68)
        Multivariate Linear Regression: 25.53** (2.77)

      ** Significant at 1% level

  11. Impact Evaluation Example – Summary of Results

      Estimated Impact on CPC (multivariate linear regression; standard errors in parentheses):
        Case 1 - Before and After:          34.28** (2.11)
        Case 2 - Enrolled/Not Enrolled:     -4.15   (4.05)
        Case 3 - Randomization:             29.79** (3.00)
        Case 4 - Regression Discontinuity:  30.58** (5.93)
        Case 5 - Diff in Diff:              25.53** (2.77)

      ** Significant at 1% level

  12. Example • Old-age pensions and schooling in South Africa • Eligible if a household member is over 60; not eligible if under 60 • Used households with a member aged 55-60 as the comparison • Pensions received by women and girls' education

  13. Measuring Impact • Randomized Experiments • Quasi-experiments • Randomized Promotion – Instrumental Variables • Regression Discontinuity • Double differences (Diff in diff) • Matching

  14. Matching • Pick the ideal comparison group that matches the treatment group from a larger survey • The matches are selected on the basis of similarities in observed characteristics • For example? • This assumes no selection bias based on unobserved characteristics • Example: income • Example: entrepreneurship • Source: Martin Ravallion

  15. Propensity-Score Matching (PSM) • Controls: non-participants with the same characteristics as participants • In practice this is very hard: the vector X of observed characteristics can be very large • Match instead on the propensity score, P(Xi) = Pr(participation_i = 1 | Xi) • Instead of trying to ensure that the matched control for each participant has exactly the same value of X, the same result can be achieved by matching on the probability of participation • This assumes that participation is independent of outcomes given X (not true if unobserved characteristics that affect outcomes also affect participation)
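A hedged sketch of this first modelling step: pooling participants and non-participants and estimating the propensity score with a logistic regression. The file and column names (households.csv, participant, age_head, ...) are illustrative placeholders; the covariates mirror the variables shown later for Case 7.

```python
# Estimate the propensity score P(X) = Pr(participation = 1 | X) with a logit.
# Data file and column names here are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("households.csv")  # pooled survey of participants and non-participants
covariates = ["age_head", "educ_head", "age_spouse",
              "educ_spouse", "ethnicity", "female_head"]

logit = LogisticRegression(max_iter=1000)
logit.fit(df[covariates], df["participant"])  # participant: 1 = enrolled, 0 = not

# Propensity score: predicted probability of participating given X.
df["pscore"] = logit.predict_proba(df[covariates])[:, 1]
```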

  16. Steps in Score Matching • Representative & highly comparable survey of non-participants and participants • Pool the two samples and estimate a logit (or probit) model of program participation: gives the probability of participating for a person with X • Restrict samples to assure common support (important source of bias in observational studies) • For each participant find a sample of non-participants that have similar propensity scores • Compare the outcome indicators. The difference is the estimate of the gain due to the program for that observation • Calculate the mean of these individual gains to obtain the average overall gain

  17. [Figure: Density of propensity scores for participants. Density plotted against the propensity score from 0 to 1; the region of common support is marked, with high scores indicating a high probability of participating given X.]

  18. Steps in Score Matching • Representative & highly comparable survey of non-participants and participants. • Pool the two samples and estimate a logit (or probit) model of program participation: Gives the probability of participating for a person with X • Restrict samples to assure common support (important source of bias in observational studies) • For each participant find a sample of non-participants that have similar propensity scores • Compare the outcome indicators. The difference is the estimate of the gain due to the program for that observation. • Calculate the mean of these individual gains to obtain the average overall gain.
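A hedged sketch of the remaining steps: restricting to common support, matching each participant to the nearest non-participant on the propensity score, and averaging the individual gains. It continues the df (with its pscore column) from the sketch above; "outcome" is a hypothetical outcome column, and one-to-one nearest-neighbour matching with replacement is just one of several possible matching rules.

```python
# Common support, nearest-neighbour matching on the propensity score, and
# the average gain. Continues df (with "pscore") from the sketch above.
import numpy as np

treated = df[df["participant"] == 1]
controls = df[df["participant"] == 0]

# Restrict to common support: keep only scores inside the overlap of the
# two groups' propensity-score ranges.
lo = max(treated["pscore"].min(), controls["pscore"].min())
hi = min(treated["pscore"].max(), controls["pscore"].max())
treated = treated[treated["pscore"].between(lo, hi)]
controls = controls[controls["pscore"].between(lo, hi)]

# For each participant, find the non-participant with the closest score
# (matching with replacement) and record the outcome gap.
c_scores = controls["pscore"].to_numpy()
c_outcomes = controls["outcome"].to_numpy()
gains = []
for _, row in treated.iterrows():
    j = np.argmin(np.abs(c_scores - row["pscore"]))
    gains.append(row["outcome"] - c_outcomes[j])

# The mean of the individual gains is the matching estimate of the program's impact.
print(np.mean(gains))
```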

  19. PSM vs an experiment • Pure experiment does not require the untestable assumption of independence conditional on observables • PSM requires large samples and good data

  20. Lessons on Matching Methods • Typically used for impact evaluation when randomization, RD, and other quasi-experimental options are not possible (e.g. no baseline) • Be cautious of ex-post matching: • Matching on variables that change due to participation (i.e., endogenous variables) • What are some variables that won't change? • Matching helps control only for OBSERVABLE differences

  21. More Lessons on Matching Methods • Matching at baseline can be very useful: • Estimation: • Combine with other techniques (e.g. diff in diff) • Know the assignment rule (match on this rule) • Sampling: • Selecting a non-randomized control sample • Need good quality data • Common support can be a problem

  22. Case 7: Matching – Propensity score: Pr(treatment = 1)

      Logit estimates (Coef., Std. Err.):
        Age Head:    -0.03 (0.00)
        Educ Head:   -0.05 (0.01)
        Age Spouse:  -0.02 (0.00)
        Educ Spouse: -0.06 (0.01)
        Ethnicity:    0.42 (0.04)
        Female Head: -0.23 (0.07)
        Constant:     1.6  (0.10)

      Covariate balance by p-score quintile (treatment mean / control mean, t-score in parentheses):
        Age Head:     Q1 68.04/67.45 (-1.2)   Q2 53.61/53.38 (-0.51)  Q3 44.16/44.68 (1.34)   Q4 37.67/38.2 (1.72)    Q5 32.48/32.14 (-1.18)
        Educ Head:    Q1 1.54/1.97 (3.13)     Q2 2.39/2.69 (1.67)     Q3 3.25/3.26 (-0.04)    Q4 3.53/3.43 (-0.98)    Q5 2.98/3.12 (1.96)
        Age Spouse:   Q1 55.95/55.05 (-1.43)  Q2 46.5/46.41 (0.66)    Q3 39.54/40.01 (1.86)   Q4 34.2/34.8 (1.84)     Q5 29.6/29.19 (-1.44)
        Educ Spouse:  Q1 1.89/2.19 (2.47)     Q2 2.61/2.64 (0.31)     Q3 3.17/3.19 (0.23)     Q4 3.34/3.26 (-0.78)    Q5 2.37/2.72 (1.99)
        Ethnicity:    Q1 0.16/0.11 (-2.81)    Q2 0.24/0.27 (-1.73)    Q3 0.3/0.32 (1.04)      Q4 0.14/0.13 (-0.11)    Q5 0.7/0.66 (-2.3)
        Female Head:  Q1 0.19/0.21 (0.92)     Q2 0.42/0.16 (-1.4)     Q3 0.092/0.088 (-0.35)  Q4 0.35/0.32 (-0.34)    Q5 0.008/0.008 (0.83)

  23. Case 7: Matching

      Estimated Impact on CPC:
        Linear Regression:              1.16  (3.59)
        Multivariate Linear Regression: 7.06+ (3.65)

      ** Significant at 1% level, + Significant at 10% level

  24. Impact Evaluation Example – Summary of Results

      Estimated Impact on CPC (multivariate linear regression, except Case 6: 2SLS; standard errors in parentheses):
        Case 1 - Before and After:          34.28** (2.11)
        Case 2 - Enrolled/Not Enrolled:     -4.15   (4.05)
        Case 3 - Randomization:             29.79** (3.00)
        Case 4 - Regression Discontinuity:  30.58** (5.93)
        Case 5 - Diff in Diff:              25.53** (2.77)
        Case 6 - IV (TOT):                  30.44** (3.07)
        Case 7 - Matching:                   7.06+  (3.65)

      ** Significant at 1% level, + Significant at 10% level

  25. Measuring Impact • Experimental design/randomization • Quasi-experiments • Regression Discontinuity • Double differences (Diff in diff) • Other options • Instrumental Variables • Matching • Combinations of the above

  26. Remember… • The objective of impact evaluation is to estimate the CAUSAL effect of a program on outcomes of interest • In designing the evaluation we must understand the data generation process: • the behavioral process that generates the data • how benefits are assigned • Fit the best evaluation design to the operational context
