
On the Bias in Estimating Program Effects Using Clinic Based Data


Presentation Transcript


  1. On the Bias in Estimating Program Effects Using Clinic Based Data. Ashu Handa, University of North Carolina at Chapel Hill; Mari-Carmen Huerta, London School of Economics

  2. Objective • Can clinic-based data give good estimates of program impact? • Most large-scale nutrition interventions do not have an accompanying social experiment • Nature of program; costs; political constraints • Clinic-based data on nutritional status are available virtually everywhere • Useful to know whether these data can be used for program evaluation

  3. Approach • Compare non-experimental and experimental estimates of program impact • Intervention: Progresa • Experimental estimates of impact on child height already exist • Derive non-experimental estimate using data on beneficiaries from health clinics • Compare the two estimates to assess bias
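
Stated as a formula (the notation is illustrative, not taken from the paper): treating the experimental estimate as the unbiased benchmark, the bias of interest is simply the gap between the two point estimates,

\text{Bias} = \hat{\beta}_{\text{clinic}} - \hat{\beta}_{\text{experimental}},

where the first term is the non-experimental impact estimate built from clinic records and the second is the benchmark from the randomized evaluation.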

  4. Why Experiments? • Randomly selected comparison group allows for • Control of observed characteristics that might affect the outcome • Control of unobserved characteristics that might affect the outcome (selection bias) • Selection bias is less of a problem in a mandatory program such as Progresa • Bias caused by using a non-experimental comparison group may be negative or positive
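
The claim that the bias can go either way follows from the standard evaluation decomposition (textbook material, added here for context rather than taken from the slide). With Y_1 and Y_0 the outcomes with and without the program and D an indicator for participation,

E[Y_1 \mid D=1] - E[Y_0 \mid D=0] = E[Y_1 - Y_0 \mid D=1] + \left( E[Y_0 \mid D=1] - E[Y_0 \mid D=0] \right),

i.e. the naive treated-versus-untreated comparison equals the true effect on participants plus a selection term. If participants would have fared worse than non-participants even without the program, the selection term is negative and the effect is understated; if they would have fared better, it is overstated. Randomization makes the term zero in expectation, which is what the experimental benchmark buys.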

  5. Summary of Experimental Results • Gertler et al. and Behrman & Hoddinott (BH) • Use same basic data set; two measurements taken 12-16 months apart • Sample is kids aged 12-36 months at baseline • Gertler: Estimates growth in height in cm • BH: Estimate growth in height measured by z-score • Both include child, household and community level control variables • Both report positive and significant estimates in the range of 15% of mean growth (1 cm per year) • Gertler: Only impact is on kids 12-17 months of age
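
For reference, the height-for-age z-score used by BH (and in the clinic data below) is the standard anthropometric measure, defined against an external reference population of the same age and sex:

z = \frac{h - m_{\text{ref}}(\text{age, sex})}{\sigma_{\text{ref}}(\text{age, sex})},

where m_ref and σ_ref are the reference median and standard deviation. A change in z between two visits therefore captures height gain over and above what a reference child would be expected to gain over the same interval, which is why impacts expressed in centimetres of growth (Gertler) and in z-score growth (BH) describe the same underlying effect on different scales.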

  6. Clinic Based Sample • Individual data from all Progresa clinics between end-1997 and end-1999 • Different dates of incorporation • Used to identify program impact • Select kids with two measures of height taken 6-18 months apart (median = 13 months) • Measure 1 in early 1998; measure 2 in early 1999 • Estimate growth in height measured by z-score (same as BH) • Kids 0-48 months of age; no control variables
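
As a concrete illustration of this selection step, a minimal Python/pandas sketch follows. The column names (child_id, visit_date, haz) are hypothetical, since the transcript does not describe the actual clinic files, and the sketch ignores the calendar restriction to early-1998 and early-1999 measurements:

import pandas as pd

def build_growth_sample(visits: pd.DataFrame) -> pd.DataFrame:
    # visits: one row per clinic measurement, with columns
    # child_id, visit_date (datetime), haz (height-for-age z-score)
    visits = visits.sort_values(["child_id", "visit_date"])
    first = visits.groupby("child_id").first()   # earliest measurement per child
    last = visits.groupby("child_id").last()     # latest measurement per child
    gap_months = (last["visit_date"] - first["visit_date"]).dt.days / 30.44
    sample = pd.DataFrame({
        "gap_months": gap_months,
        "dz": last["haz"] - first["haz"],        # growth in z-score, as in BH
    })
    # selection rule from the slide: two measures taken 6-18 months apart
    return sample[sample["gap_months"].between(6, 18)]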

  7. Identification Strategy • Use child’s incorporation date to define length of exposure to the program (defining 9 groups) • Selection problem: ‘control’ group initially healthier • Growth specification will eliminate some of this bias

  8. Estimation
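
The transcript preserves only the title of this slide, so the equation itself is missing. As a sketch of the kind of growth specification slides 6 and 7 describe (the notation is illustrative, not the paper's), one could write

\Delta z_i = \alpha + \sum_{g=2}^{9} \beta_g D_{ig} + \varepsilon_i,

where Δz_i is the change in child i's height-for-age z-score between the two clinic visits, the D_ig are indicators for the nine exposure groups defined by incorporation date (with the shortest-exposure group as the omitted reference), and the β_g trace out how growth varies with length of exposure to the program. Because the outcome is a within-child difference, anything fixed about the child that shifts the level of height, but not its growth, drops out; that is the sense in which the growth specification removes part, though not all, of the selection problem noted on slide 7.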

  9. Non-experimental estimates relative to benchmark • 20% for the 12-36 month age group • 25% for the 12-48 month age group • 40% for the 12-36 month age group using listed treatment
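
One natural way to read these figures (an interpretation, since the slide does not spell it out) is as the ratio of the two point estimates,

\text{relative impact} = \hat{\beta}_{\text{clinic}} / \hat{\beta}_{\text{experimental}},

so that 20%, for example, would mean the clinic-based estimate recovers only about one-fifth of the experimentally measured effect for that age group. The alternative reading, that the clinic estimate sits 20% below the benchmark, cannot be ruled out from the slide alone, but under either reading the bias is in the downward direction.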

  10. Discussion • Clinic-based estimates significantly lower • 20% to 40% depending on specification • Downward bias due to measurement error • Listed treatment not equal to actual treatment • Listed treatment measures the average impact of the total program, a different concept • Omitted variable bias reduces the estimate • Omitted control variables (not used in the clinic-based study) positively related to participation but negatively related to growth • Leads to downward bias in the non-experimental impact estimate
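
The sign argument in the last two bullets is the textbook omitted-variable-bias result (standard econometrics, not specific to this paper). Suppose the correct growth model is

\Delta z_i = \beta T_i + \gamma w_i + \varepsilon_i,

where T_i is the treatment (exposure) variable and w_i stands for the household and community controls used in the experimental studies but unavailable in the clinic data. Omitting w_i gives

\mathrm{plim}\, \hat{\beta} = \beta + \gamma \delta,

with δ the coefficient from regressing w_i on T_i. With δ > 0 (the omitted controls are positively related to participation) and γ < 0 (they are negatively related to growth), the bias term γδ is negative, pulling the clinic-based estimate below the true effect, exactly the downward bias described above.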

  11. So how reliable is the non-experimental estimate? • The glass is half empty • Estimates are positive, but significantly lower than the benchmark • Leads to the conclusion that the program is less effective than it actually is • The glass is half full • Gives positive and significant estimates • Cost of measuring impact is virtually zero • Understanding program operation allows assessment of the nature of the bias • How close do we really need to be for policy?
