
Quasi Experimental Methods I


Presentation Transcript


  1. Florence Kondylis Non-Experimental Methods Quasi Experimental Methods I

  2. What we know so far Aim: We want to isolate the causal effect of our interventions on our outcomes of interest • Use rigorous evaluation methods to answer our operational questions • Randomizing the assignment to treatment is the “gold standard” methodology (simple, precise, cheap) • What if we really, really (really??) cannot use it?! >> Where it makes sense, resort to non-experimental methods

  3. When does it make sense? • Can we find a plausible counterfactual? • Natural experiment? • Every non-experimental method is associated with a set of assumptions • The stronger the assumptions, the more doubtful our measure of the causal effect • Question our assumptions • Reality check, resort to common sense!

  4. Example: Fertilizer Voucher Program • Principal Objective • Increase maize production • Intervention • Fertilizer vouchers distribution • Non-random assignment • Target group • Maize producers, land over 1 Ha & under 5 Ha • Main result indicator • Maize yield

  5. Illustration: Fertilizer Voucher Program (1) [Chart: participants' maize yield before and after the program; the observed change combines (+) impact of the program and (+) impact of external factors]

  6. Illustration: Fertilizer Voucher Program (2) [Chart: the simple before-after change gives a (+) BIASED measure of the program impact] "Before-After" doesn't deliver results we can believe in!

  7. Illustration: Fertilizer Voucher Program (3) [Chart: "Before" difference between participants and non-participants; "After" difference between participants and non-participants] >> What's the impact of our intervention?

  8. Difference-in-Differences Identification Strategy (1) Counterfactual: 2 formulations that say the same thing • (1) Non-participants' maize yield after the intervention, accounting for the "before" difference between participants and non-participants (the initial gap between groups) • (2) Participants' maize yield before the intervention, accounting for the "before/after" difference for non-participants (the influence of external factors) • Formulations (1) and (2) are equivalent

  9. Difference-in-Differences Identification Strategy (2) Underlying assumption: Without the intervention, maize yield for participants and non-participants would have followed the same trend >> Graphic intuition coming…
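
The two formulations on the previous slide reduce to one small calculation. A minimal sketch in Python, assuming the data are already summarized as four group-mean yields (the function and variable names are illustrative, not from the slides):

```python
def diff_in_diff(p_before, p_after, np_before, np_after):
    """Difference-in-differences impact estimate from four group-mean outcomes.

    Formulation 1: change for participants minus change for non-participants.
    Formulation 2: "after" gap between groups minus "before" gap between groups.
    The two are algebraically identical.
    """
    by_change = (p_after - p_before) - (np_after - np_before)  # formulation 1
    by_gap = (p_after - np_after) - (p_before - np_before)     # formulation 2
    assert abs(by_change - by_gap) < 1e-9  # same number, up to rounding
    return by_change
```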

  10. Data -- Example 1

  11. Data -- Example 1

  12. Impact = (P2008 - P2007) - (NP2008 - NP2007) = 0.6 - 0.8 = -0.2, where P2008 - P2007 = 0.6 and NP2008 - NP2007 = 0.8

  13. Impact = (P - NP)2008 - (P - NP)2007 = 0.5 - 0.7 = -0.2, where (P - NP)2008 = 0.5 and (P - NP)2007 = 0.7
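
Plugging the Example 1 figures from slides 12 and 13 into the sketch above: the yield levels below are hypothetical (the slides only report differences), but they are chosen to reproduce the 0.6, 0.8, 0.5 and 0.7 shown above.

```python
# Hypothetical maize yield levels (t/ha) consistent with the slides' differences.
p_2007, p_2008 = 2.0, 2.6      # participants:     2.6 - 2.0 = 0.6
np_2007, np_2008 = 1.3, 2.1    # non-participants: 2.1 - 1.3 = 0.8

impact = diff_in_diff(p_2007, p_2008, np_2007, np_2008)
print(round(impact, 2))        # -0.2, whichever formulation is used
```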

  14. Assumption of same trend: Graphic Implication [Chart illustrating the same-trend counterfactual; the remaining gap is the impact] Impact = -0.2

  15. Summary • Negative Impact: • Very counter-intuitive: Increased input use should not decrease yield once external factors are accounted for! • Assumption of same trend very strong • 2 groups were, in 2007, producing at very different levels • Question the underlying assumption of same trend! • When possible, test assumption of same trend with data from previous years

  16. Questioning the Assumption of same trend: Use pre-program data >> Reject counterfactual assumption of same trend!
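
A sketch of the pre-program check suggested on this slide, assuming yields for earlier years are available; the years and values below are hypothetical and only illustrate a case, like Example 1, where the pre-program trends diverge:

```python
# Hypothetical pre-program mean yields (t/ha), for illustration only.
p_yields = {2005: 1.2, 2006: 1.6, 2007: 2.0}    # participants
np_yields = {2005: 1.1, 2006: 1.2, 2007: 1.3}   # non-participants

years = sorted(p_yields)
p_changes = [round(p_yields[b] - p_yields[a], 2) for a, b in zip(years, years[1:])]
np_changes = [round(np_yields[b] - np_yields[a], 2) for a, b in zip(years, years[1:])]

# Participants' yields grew by ~0.4/year, non-participants' by ~0.1/year:
# the "same trend" counterfactual assumption is not credible here.
print(p_changes, np_changes)   # [0.4, 0.4] [0.1, 0.1]
```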

  17. Data – Example 2

  18. Impact = (P2008 - P2007) - (NP2008 - NP2007) = 0.6 - 0.2 = +0.4, where P2008 - P2007 = 0.6 and NP2008 - NP2007 = 0.2
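
The same arithmetic for Example 2, using the changes reported on the slide (the participants' change of 0.6 is implied by the formula; only the non-participants' 0.2 is spelled out there):

```python
p_change = 0.6    # participants' 2007 -> 2008 change in yield
np_change = 0.2   # non-participants' 2007 -> 2008 change in yield
print(round(p_change - np_change, 2))   # 0.4
```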

  19. Assumption of same trend: Graphic Implication [Chart illustrating the same-trend counterfactual; the remaining gap is the impact] Impact = +0.4

  20. Conclusion • Positive Impact: • More intuitive • Is the assumption of same trend reasonable? • Still need to question the counterfactual assumption of same trend! • Use data from previous years

  21. Questioning the Assumption of same trend: Use pre-program data >> Seems reasonable to accept the counterfactual assumption of same trend?!

  22. Caveats (1) • Assuming same trend is often problematic • No data to test the assumption • Even if trends are similar the previous year… • Were they always similar (or were we just lucky)? • More importantly, will they always be similar? • Example: Another project intervenes in our non-participant villages…

  23. Caveats (2) • What to do? >> Be descriptive! • Check similarity in observable characteristics • If not similar along observables, chances are trends will differ in unpredictable ways >> Still, we cannot check what we cannot see… And unobservable characteristics might matter more than observable ones (ability, motivation, patience, etc.)
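
A minimal sketch of the descriptive check suggested here: compare group means of the observable characteristics. The covariates (land area, 2007 yield, household size) and the records are hypothetical placeholders for whatever the survey actually collected.

```python
import statistics

# Hypothetical observables for a handful of farmers in each group.
participants = [
    {"land_ha": 2.1, "yield_2007": 2.0, "hh_size": 6},
    {"land_ha": 3.4, "yield_2007": 2.3, "hh_size": 5},
    {"land_ha": 2.8, "yield_2007": 1.9, "hh_size": 7},
]
non_participants = [
    {"land_ha": 1.2, "yield_2007": 1.3, "hh_size": 4},
    {"land_ha": 1.6, "yield_2007": 1.4, "hh_size": 6},
    {"land_ha": 1.4, "yield_2007": 1.2, "hh_size": 5},
]

# Large gaps in observables are a warning that trends may differ too.
for var in ("land_ha", "yield_2007", "hh_size"):
    p_mean = statistics.mean(row[var] for row in participants)
    np_mean = statistics.mean(row[var] for row in non_participants)
    print(f"{var:12s} participants {p_mean:5.2f}  non-participants {np_mean:5.2f}")
```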

  24. Matching Method + Difference-in-Differences (1) Match participants with non-participants on the basis of observable characteristics Counterfactual: • Matched comparison group • Each program participant is paired with one or more similar non-participant(s) based on observable characteristics >> On average, participants and nonparticipants share the same observable characteristics (by construction) • Estimate the effect of our intervention by using difference-in-differences

  25. Matching Method (2) Underlying counterfactual assumptions • After matching, there are no differences between participants and nonparticipants in terms of unobservable characteristics AND/OR • Unobservable characteristics do not affect the assignment to the treatment, nor the outcomes of interest

  26. How do we do it? • Design a control group by establishing close matches in terms of observable characteristics • Carefully select variables along which to match participants to their control group • So that we only retain • Treatment Group: Participants that could find a match • Comparison Group: Non-participants similar enough to the participants >> We trim out a portion of our treatment group!
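
A minimal sketch of the matching-plus-difference-in-differences steps described in the slides above: pair each participant with the nearest non-participant on observables, trim participants with no close-enough match, then average the difference-in-differences across matched pairs. The data, covariates, and the distance cutoff are all hypothetical.

```python
import math

# Hypothetical records: observables used for matching plus maize yields.
participants = [
    {"land_ha": 2.0, "yield_2007": 1.9, "yield_2008": 2.6},
    {"land_ha": 4.5, "yield_2007": 2.4, "yield_2008": 3.1},
]
non_participants = [
    {"land_ha": 2.2, "yield_2007": 1.8, "yield_2008": 2.2},
    {"land_ha": 1.1, "yield_2007": 1.2, "yield_2008": 1.5},
]

MAX_DISTANCE = 1.0  # hypothetical cutoff: participants with no match this close are trimmed

def distance(a, b):
    """Euclidean distance on the observables used for matching."""
    return math.hypot(a["land_ha"] - b["land_ha"], a["yield_2007"] - b["yield_2007"])

pair_effects = []
for p in participants:
    match = min(non_participants, key=lambda c: distance(p, c))
    if distance(p, match) > MAX_DISTANCE:
        continue  # no similar-enough non-participant: trim this participant out
    did = (p["yield_2008"] - p["yield_2007"]) - (match["yield_2008"] - match["yield_2007"])
    pair_effects.append(did)

impact = sum(pair_effects) / len(pair_effects) if pair_effects else float("nan")
print(f"matched {len(pair_effects)} of {len(participants)} participants; impact = {impact:.2f}")
```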

  27. Implications • In most cases, we cannot match everyone • Need to understand who is left out • Example: [Chart: participants and non-participants plotted by wealth and matching score, showing the matched individuals and the portion of the treatment group trimmed out]

  28. Conclusion (1) • Advantage of the matching method • Does not require randomization

  29. Conclusion (2) • Disadvantages: • Underlying counterfactual assumption is not plausible in all contexts, and hard to test • Use common sense, be descriptive • Requires very high quality data: • Need to control for all factors that influence program placement and the outcome of interest • Requires a large enough sample size to generate a comparison group • Cannot always match everyone…

  30. Summary • Randomized Controlled Trials require minimal assumptions and produce intuitive estimates (sample means!) • Non-experimental methods require assumptions that must be carefully tested • More data-intensive • Not always testable • Get creative: • Mix and match types of methods! • Address relevant questions with relevant techniques

  31. Thank you. Financial support from [funder logo] is gratefully acknowledged
