The Overall Agenda • When Will We Ever Learn: a general introduction to impact evaluation • When Random Assignment Is Possible: implementing and evaluating RCTs • When Random Assignment Is Not Possible: quasi-experimental methods (propensity scores, matching, IV, regression discontinuity and DiD)
When Will We Ever Learn? What is impact evaluation, and when and how should we use it? Session 1 – Scott Rozelle, Stanford University
Amazing Ideas • Sleeping Bag Incubator • Treadle Pump Irrigation • Agricultural Price Services through Mobile Phones • Computer Assisted Learning for Remedial Tutoring
New programs in China (huge fiscal investment) • Rural Health Insurance (NCMS or hezuo yiliao in China) • New Subsidy Program (liangshi butie) • New Education Programs (e.g., raising teacher salaries … or … eliminating tuition for high school) • Financial Crisis Stimulus Package (investments by central gov’t; investments by localities)
How many of the innovations/programs that we heard about on the news … how many of the new technologies/programs that we have become excited about … how many have been rigorously evaluated? • Do we have empirical evidence, based on carefully constructed counterfactuals, that these breakthroughs/programs work, can positively affect the lives of the poor, and do so in a cost-effective way? Unfortunately, the answer is almost certainly: some, but not many …
Huge Global Initiatives • UN Millennium Development Villages • USAID Bilateral Investment Program • World Bank / ADB’s Loan Program
Statement of Facts: “Accelerating social progress in low- and middle-income countries requires knowledge about what kinds of social programs are effective. Yet all too often, such basic knowledge is lacking because governments, development agencies, and foundations/NGOs have few incentives to start and sustain the impact evaluations that generate this important information.” (International Evaluation Gap Working Group) “When it comes to attribution, there is shockingly little concrete evidence about what works and what does not” (author of the report When Will We Ever Learn) I was at a conference about 2 to 3 years ago, where one young researcher claimed (in front of scores of older, experienced development economists) that after 40+ years of development economics we had not learned anything … until his work (of course)
The Excuses • We don’t have time • It costs too much to do rigorous impact evaluation • It is unethical • Project implementation is completely site and context specific • We already know!
Example: • J. Sachs: we already know ITNs (insecticide-treated nets) work for the prevention of malaria • In fact, this time rigorous public health trials support it: • 90 villages: give residents ITNs • 90 villages: give residents “0” (nothing) • In treatment villages, reductions of malaria, anemia and other benefits • even positive spillovers: villages/hamlets around the treatment villages (within 300 meters) also benefitted through a reduction of malaria (although they had no ITNs) … miracle?
Policy implications • People were not buying them … • Despite people being “very afraid” of malaria … • Why? • Stanford University team’s hypothesis: one-time cost too high • Leads to a new RCT
Microcredit or free • Through an NGO that had “cells of members” (10 to 20 to 30 households per village) in 100s of villages in India, we did an RCT with two treatment arms: • Treatment village 1: give ITNs away for free to all NGO members • Treatment village 2: sell ITNs to households as part of a microcredit (peer monitoring) project • Control villages: “0” • What is the outcome?
Impact: ZERO [none for malaria / none for anemia … NONE … none in treatment village 1 / none in treatment village 2 / none in control villages] • Explanation: • We have no definitive proof (though now we may know why villagers do not buy them … they don’t seem to work) • Theory: • Revisit the original trial … and revisit (and live in) our own project villages • People do not always use ITNs … trouble / hard to hang / uncomfortable / too many people, not enough nets
An explanation • How is it that, if people do not use them (even in the original public health trial treatment villages), they have an impact in those villages AND on surrounding villages? • The only real difference between the original trial (100% of households in the trial) and Stanford’s trial (10 to 20% of households in the trial): maybe all mosquitoes are killed and populations collapse when all households have ITNs … this would account for the efficacy in the trial and the spillover … • However, in the partial roll-out villages, the ITNs are not effective!
ITNs do work … but with a caveat • Current most plausible explanation: • In the large public health trial, when all of the villagers received the ITNs … and were encouraged to use them (and did, at first) … ALL of the mosquitoes died … this reduced malaria in the treatment villages and the surrounding hamlets • Jeffrey’s response? • Of course, that is why we give them to all of the families … of course, he had no idea … • Maybe he did “know” … but surely he does not understand … [but then none of us do now]
The Rural Education Action Project of Stanford University is a research organization / NGO / government organization / policy action partnership. [slide: partners at Stanford University and collaborators in China]
Fundamental question which we try to answer: What can be done to overcome the gap in human capital between the rural, unskilled poor and the urban, skilled middle class?
REAP works in two ways to understand the barriers keeping the rural poor from closing the gap AND to learn what can be done … 1.) We design and implement new program interventions AND we do the evaluations 2.) We partner with NGOs and gov’t agencies who are trying to implement projects: we advise, they carry out, we evaluate.
REAP Partners Including our best partner (of course): LICOS
REAP’s Educational Challenge Areas • Health, Nutrition and Education • Technology and Human Capital • Access to Secondary Education and Beyond
REAP Projects in China (1) • Health, Nutrition and Education • Overcoming the Anemia Puzzle in Rural China • Worm Count: Intestinal Worms in Rural China • Is One Egg Enough? School Nutrition Programs in Rural Shaanxi • Vitameal or Vitamins? Grades and Nutrition in Shaanxi • Experimenting with Nutrition: Ningshan County • Paying for Performance in the Battle against Anemia • Conditional Cash Transfers and Cost Effectiveness in the Battle Against Anemia • Nutritional Training in Ningxia • Eggs and Grades • Reducing Transaction Costs: Chewable Vitamins in Gansu • Best Buy Toolkit: Nutrition, Deworming & Vision Interventions in Rural Schools
REAP Projects in China (2) • Technology and Human Capital 12. Computer Assisted Learning in Beijing Area Migrant Schools 13. Computer Assisted Learning in Rural Boarding Schools 14. Computer Assisted Learning in Rural Minority Areas 15. One Laptop Per Child: Does It Help? 16. Nutritional Training and Mobile Messaging
REAP Projects in China (3) Access to Quality Secondary Education and Beyond 17. Boarding School Management 18. Pre School Vouchers for Needy Families 19. Evaluating Pre School Teacher Training (Nokia, China) 20. Early Commitment of Financial Aid for University 21. SOAR Foundation: What if High School Were Free? 22. Scholarships with Strings Attached: Community Service 23. Financial Aid in Shilou County 24. Contracting for Dreams in Ningshan County 25. Summer Fresh Migrant School Teacher Training Program 26. Peer Tutoring versus Paying for Grades 27. Vouchers, Vocational Education and Career Counselling 28. Scholarships at Four Tier One Universities 29. Breaking the Cycle of Poverty: Cash Transfers for Jr. High
REAP Projects in China [map of project locations across China, labeled by project number]
Today’s (this session’s) plan • Introduce the concept of IE • Definitions and examples of what is right and what is not right • RCTs … when possible, the sexy gold standard! • When you can’t randomize (still a lot of excitement) • IE is not enough: supplementary tools • Issues in choosing an IE strategy • Selecting a control group • RCTs or quasi-experimental approaches? A lot more rigor in sessions 2 and 3 …
What is impact? • Impact = the outcome with the intervention compared to what it would have been in the absence of the intervention • Unpacking the definition • Can include unintended outcomes • Can include others, not just intended beneficiaries • No reference to time-frame, which is context-specific • At the heart of it is the idea of attribution – and attribution implies a counterfactual (either implicit or explicit)
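One minimal way to write this definition down (a sketch in standard potential-outcomes notation, not taken from the slides themselves): let Y(1) be a unit’s outcome with the intervention and Y(0) its outcome without it.

```latex
% Impact of the intervention on unit i, written as a counterfactual comparison
\underbrace{\Delta_i}_{\text{impact}}
  \;=\; \underbrace{Y_i(1)}_{\text{outcome with the intervention}}
  \;-\; \underbrace{Y_i(0)}_{\text{outcome without it (the counterfactual)}}
```

Since Y(0) is never observed for a unit that actually received the intervention, every evaluation design in the rest of this session is, in effect, a way of estimating it – usually with a control group.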
Defined in this way – UNFORTUNATELY (as discussed above) – we (the international development community) have little evidence on the impact of development programs [in other words: we don’t (systematically) know the results of many of the countless programs that development agencies, gov’ts and other organizations have been implementing in recent years]
The attribution problem: factual and counterfactual [figure: outcome over time with the project vs. without the project; the gap between the two paths is the project impact] Impact varies over time. Impacts are also defined over time … little attention has been given to the dynamics over time … though people think about this …
Change in the CAL program effect on standardized math test scores over time [figure: estimated effects of 0, 0.11 and 0.12 over the course of the program] The CAL program effect occurred by the midterm evaluation, less than two months after the start of the program.
Change in the CAL program effect on standardized math test scores over time [same figure] There is no improvement between month two and month three.
Impact of nutrition at infancy in Guatemala • After 2 years greater BMI • After 10 years higher grades in school • After 15 years higher school attainment • After 40 years higher wages / income
And, even longer run … What has been the impact of the French Revolution? “It is too early to say” – Zhou Enlai
Let’s examine a less grandiose intervention • The venue: poor areas of South West China … a remote mountainous region … populated by groups of Dai and Dong minorities … • In the 1980s and 1990s only a small share of girls attended school … most were involved with farming, tending livestock and raising siblings … • An NGO began giving scholarships in the early 1990s … objective: increase attendance of girls … they claim in their very polished promotional material and in the many workshops that they attend that they have been effective in their mission …
What do we need to measure impact? Girls’ primary school enrollment NOTE: if you measure this well, what is it? Outcome monitoring The majority of evaluations have just this information … which means we can say absolutely nothing about impact
What does 92 percent mean? • Is it high? • Is it low? • What does a single number mean? • What do we compare this to? Even if done well … outcome monitoring in its simplest form TELLS US NOTHING about impact
“Before versus after” single-difference comparisons Before versus after = 92 – 40 = 52 “scholarships have led to rising schooling of young girls in the project villages” This ‘before versus after’ approach is more careful outcome monitoring, which has become popular recently. Outcome monitoring has its place, but: outcome monitoring ≠ impact evaluation
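A hedged way to see why this can mislead (a sketch, not from the slides): writing Y_t0 and Y_t1 for enrollment in the project villages before and after, the before-versus-after difference bundles the program effect together with whatever would have happened anyway.

```latex
% Before-versus-after single difference in the project villages
\Delta_{\text{before/after}}
  = Y_{t_1}^{\text{project}} - Y_{t_0}^{\text{project}}
  = \underbrace{\bigl(Y_{t_1}^{\text{project}} - Y_{t_1}^{\text{counterfactual}}\bigr)}_{\text{program impact}}
  + \underbrace{\bigl(Y_{t_1}^{\text{counterfactual}} - Y_{t_0}^{\text{project}}\bigr)}_{\text{secular trend (rising wages, off-farm jobs, \dots)}}
```

The next two slides (rising off-farm employment and wages; rising completion rates everywhere in rural China) show that the trend term was large, so the 52-point difference cannot simply be read as the scholarship effect.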
The changing macro environment … and rising employment opportunities and wages [figure: off-farm employment among 16 to 25 year olds (percent of cohort) and the off-farm wage rate (yuan / month), both rising over time]
Rates of completion of elementary school for male and female students in all of rural China’s poor areas [figure: share of rural children completing elementary school, 1993 vs. 2008]
Outcome monitoring does not tell us about effectiveness Results… cannot as a rule be attributed specifically, either wholly or in part, to the intervention
An (important) aside: collecting data in order to measure outcomes “before an intervention” • Can we collect data about pre-intervention outcomes after the intervention (that is: is recollection data valid?) • No (or be careful): work by economists has shown clearly that there are lots of biases introduced into IE by relying on recollection data (most of them psychological) • If individuals have been given a treatment, they often will selectively remember … they will exaggerate the benefits as a way of showing their gratefulness … • Those in the control groups will often want to show they are less fortunate and understate their condition (or improvement) • Empirically, recollection data have lots of biases … hard to determine the direction … • Best practice (only practice?): collect the baseline before the project begins
Post-treatment control comparisons Single difference = 92 – 84 = 8 Another common approach (let’s compare to another set of villages):
But we don’t know if treatment and control groups were similar before … • How often are intervention villages / schools / clinics / etc. chosen in a way that makes them systematically different from control villages? [either for convenience / political necessity / feasibility / cost considerations / or from leaving it to the local partner, who uses who-knows-what type of selection method] • In the SW China villages, the NGO went to a poor county, but the local bureau of education chose the villages … and chose them along the road … • Is attendance in elementary school lower in the control villages because the NGO did not pass out scholarships, or because villagers in control villages had less use for education (or faced a higher cost of going to school)?
Post-treatment control comparisons Single difference = 92 – 84 = 8 Another common approach (let’s compare to another set of villages): Main point: post-treatment control comparisons are only valid if treatments and controls were identical at the time the intervention began …
Therefore: let’s collect data for all of the cells. Double difference = (92 – 40) – (84 – 26) = 52 – 58 = –6 Conclusion: longitudinal (panel) data, with a control group, allow for the strongest impact evaluation design (BUT: still need matching … if they are different at the start of the project … is there something different in the village which would affect the village’s response to the intervention?)
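As a small worked illustration (a sketch using only the numbers on these slides; the variable names are mine, not REAP’s), the three comparisons discussed so far can be computed directly:

```python
# Enrollment rates (percent) from the scholarship example on the slides.
# Two groups (project / control villages) observed before and after the program.
enrollment = {
    "project": {"before": 40, "after": 92},
    "control": {"before": 26, "after": 84},
}

# 1) Before-versus-after single difference in the project villages:
#    mixes the program effect with the secular trend.
before_after = enrollment["project"]["after"] - enrollment["project"]["before"]   # 52

# 2) Post-treatment comparison of project vs. control villages:
#    only valid if the two groups were identical when the program began.
post_treatment = enrollment["project"]["after"] - enrollment["control"]["after"]  # 8

# 3) Double difference (difference-in-differences):
#    nets out both the common trend and the pre-existing gap between groups.
double_diff = (
    (enrollment["project"]["after"] - enrollment["project"]["before"])
    - (enrollment["control"]["after"] - enrollment["control"]["before"])
)  # 52 - 58 = -6

print(f"before vs. after:  {before_after}")
print(f"post-treatment:    {post_treatment}")
print(f"double difference: {double_diff}")
```

The point of the slide stands: once the control-group trend is included, the naive 52-point “gain” turns into an estimated impact of –6, which is why panel data with a credible control group (plus matching if the groups differ at baseline) makes for the strongest design.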
Main points so far • Analysis of impact implies a counterfactual comparison • Outcome monitoring is a factual analysis, and so cannot tell us about impact • The counterfactual is most commonly determined by using a rigorously/carefully chosen control group If you are going to do impact evaluation you need a credible counterfactual using a control group (not necessarily RCT / but, still need control)
“Gold Standard” Randomized Controlled Trials: medical trials [figure] What is the counterfactual? The group that receives “zero” (no treatment)
“Gold Standard” Randomized Controlled Trials: crop field trials [figure] What is the counterfactual? The plots that receive “zero” (no treatment)