
Randomized Impact Evaluations


Presentation Transcript


  1. Randomized Impact Evaluations Vandana Sharma, MD, MPH Handicap International Meeting, Dec 4, 2013

  2. Outline • The Abdul Latif Jameel Poverty Action Lab (J-PAL) • What is evaluation • Randomized evaluations • What is randomized evaluation • How to conduct a randomized evaluation • Challenges and difficulties • Example 1: Education and HIV in Kenya • Example 2: Voluntary Health workers in Nigeria

  3. Abdul Latif Jameel Poverty Action Lab (J-PAL): Science into Action • J-PAL is • A center within the MIT Department of Economics, established in 2003 • A network of researchers around the world • Dedicated to ensuring that the fight against poverty is based on scientific evidence • In particular, our focus is on learning lessons from randomized evaluations of anti-poverty projects (poverty broadly defined) • What do we do? • Conduct rigorous impact evaluations • Build capacity • Impact policy

  4. J-PAL: A network of economists running RCTs • 91 academics, 441 evaluations in more than 55 countries worldwide

  5. What is Evaluation? • A systematic method for collecting, analyzing and using information to answer questions about policies and programs • Process Evaluation: • Did the policy/program take place as planned? • Impact Evaluation: • Did the policy/program make a difference?

  6. Evaluation is Crucial • Resources are limited • There is little hard evidence on what is most effective • Many decisions are based on intuition or on what is in fashion • Rigorous evaluations allow accountability

  7. Evaluation is Useful • Helps policymakers invest better • Improves programs • Identifies best practices

  8. The Lancet, February 13, 2010

  9. Impact Evaluations • Impact evaluations measure program effectiveness by comparing the outcomes of those (individuals, communities, schools, etc.) who received the program with the outcomes of those who did not • The objective is to measure the causal impact of a program or intervention on an outcome • Examples: • How much did free distribution of bednets decrease malaria incidence? • How much did an information campaign about HIV reduce risky sexual behavior? • Which of two supply chain models was more effective at eliminating drug shortages?

  10. Impact Evaluations • In order to attribute cause and effect between the intervention and the outcome, we need to measure the counterfactual • What would have happened to beneficiaries in the absence of the program?

  11.-12. Impact: What is it? [Chart: the primary outcome plotted over time; the gap between the intervention trajectory and the counterfactual trajectory is the program's impact]

  13. Impact: What is it? Impact = • The outcome some time after the program has been introduced vs. • The outcome at that same point in time had the program not been introduced (the "counterfactual")
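In symbols (the notation here is ours, for illustration; it is not from the slides), with Y denoting the primary outcome:

```latex
% Impact of a program at evaluation time t (illustrative notation)
\[
  \text{Impact}(t) \;=\; Y_{\text{program}}(t) \;-\; Y_{\text{counterfactual}}(t)
\]
% Y_program(t): the outcome observed at time t with the program in place
% Y_counterfactual(t): the outcome at the same time t had the program
% not been introduced
```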

  14. Impact Evaluations • BUT – we can't observe the same individual with and without the program at the same point in time! • Since the counterfactual is not observable, the key goal of all impact evaluation methods is to construct or mimic the counterfactual • We need an adequate comparison group: individuals who, except for the fact that they were not beneficiaries of the program, are similar to those who received the program • This could be done by: • A before-and-after comparison • Using non-beneficiaries as the control group

  15. Before and After • Before introduction of bednets: 6 malaria episodes in 6 months • After introduction of bednets: 2 malaria episodes in 6 months

  16. Before and After • Was the bednet program effective at reducing malaria incidence? • Are there other factors which could have led to the observed reduction? • Seasonal changes • Rising income: households invest in other measures • Other programs

  17. Before and After • Monitoring before-and-after changes is important • But it is insufficient to show the impact of the program • Too many factors change over time • Counterfactual: what would have happened in the absence of the project, with everything else the same

  18. Participants vs Non-Participants • Compare recipients of the program to: • People who were not eligible for the program • People who chose not to enroll/participate in the program • Example: after bednet distribution, compare households with bednets vs those without: is the difference the impact of bednets?

  19. Participants vs Non-Participants • What else could be going on? • People who choose to get the bednet may be different from those who do not • Observable differences: • Income • Education • Unobservable differences: • Risk factors • Other preventative measures

  20. Participants vs Non-Participants • There is no way to know how much of the observed difference is due to the bednets and how much is due to other factors

  21. Participants vs Non-Participants • Non-beneficiaries may be very different from beneficiaries • Programs are often targeted to specific areas (e.g., poorer areas, or areas that lack specific services) • Individuals are often screened for participation in the program • The decision to participate is often voluntary • Thus non-beneficiaries are often not a good comparison group because of pre-existing differences (selection bias) • Selection bias disappears in the case of randomization

  22. RANDOMIZED EXPERIMENTAL DESIGN IS THE GOLD STANDARD

  23. Random Assignment • Identify a large enough group of individuals who can all benefit from a program • Randomly assign them to either: • Treatment Group: will benefit from the program • Control Group: not allowed to receive the program (during the evaluation period) • Random assignment implies that the distribution of both observable and unobservable characteristics in treatment and control groups is statistically identical.
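As a minimal sketch of how such an assignment might be drawn in practice (Python with numpy; the sample size, seed, and variable names are illustrative assumptions, not from the presentation):

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed so the draw is reproducible

n = 1000                              # assumed number of eligible individuals
ids = np.arange(n)

# Shuffle the pool and split it in half:
# first half -> treatment, second half -> control.
shuffled = rng.permutation(ids)
treatment = shuffled[: n // 2]
control = shuffled[n // 2:]

print(len(treatment), len(control))   # 500 500
```

Because the split is a pure lottery over the whole pool, each individual has the same chance of landing in either group, which is what makes the two groups statistically identical in expectation.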

  24. Random Assignment • Because members of the groups (treatment and control) do not differ systematically at the outset of the experiment, any differences that arise between them can be attributed to the program (treatment) rather than to any other factors • If properly designed and conducted, randomized experiments provide the most credible method to estimate the impact of a program
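With assignment randomized, the impact estimate is simply the difference in mean outcomes between the two groups. A hedged sketch on simulated data (the outcome scale, the true effect of -2 episodes, and the use of scipy's two-sample t-test are all illustrative choices, not from the presentation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated malaria episodes: control mean 6, treated mean 4
# (i.e., a true effect of -2). These numbers are made up for illustration.
control_y = rng.normal(loc=6.0, scale=2.0, size=500)
treat_y = rng.normal(loc=4.0, scale=2.0, size=500)

impact = treat_y.mean() - control_y.mean()        # difference in means
t_stat, p_value = stats.ttest_ind(treat_y, control_y)
print(f"estimated impact: {impact:.2f} episodes, p = {p_value:.4f}")
```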

  25. Random Assignment • Randomization with only two units (individuals or groups) doesn't work: big differences can remain between treatment and control • But differences even out in a large sample: on average there are the same number of red and blue units in treatment and control
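A small simulation makes this slide's point concrete (Python; the covariate and the sample sizes are assumptions for illustration): with two units, the treatment-control gap in a background characteristic is typically large, while with a thousand units it shrinks toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_imbalance(n_units, n_draws=5000):
    """Average |treatment mean - control mean| of a background covariate
    (e.g., baseline income) when n_units are split evenly at random."""
    diffs = []
    for _ in range(n_draws):
        covariate = rng.normal(size=n_units)
        perm = rng.permutation(n_units)
        treat = covariate[perm[: n_units // 2]]
        ctrl = covariate[perm[n_units // 2:]]
        diffs.append(abs(treat.mean() - ctrl.mean()))
    return float(np.mean(diffs))

print(mean_imbalance(2))      # large imbalance, roughly 1.1
print(mean_imbalance(1000))   # small imbalance, roughly 0.05
```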

  26. Can we randomize? • Randomization does not mean denying people the benefits of the project • Usually there are existing constraints within project implementation that allow randomization • Randomization is the fairest way to allocate treatment

  27. How to introduce Randomness • Organize lottery • Randomize order of phase-in of a program • Randomly encourage some more than others • Multiple treatments

  28. Phase-In of a Program • Randomize the order in which clinics receive the program (e.g., cohorts starting Jan 2014, Jan 2015, and July 2015) • Then compare the Jan 2014 group to the Jan 2015 group at the end of the first year
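A sketch of how the phase-in order might be randomized (Python; the number of clinics and the cohort labels are assumptions, with the dates taken from the slide):

```python
import numpy as np

rng = np.random.default_rng(2013)

clinics = [f"clinic_{i}" for i in range(30)]   # assumed list of 30 clinics
phases = ["Jan 2014", "Jan 2015", "July 2015"]

# Shuffle the clinics, then deal them into the three phase-in cohorts.
order = rng.permutation(len(clinics))
cohorts = {phase: [clinics[i] for i in chunk]
           for phase, chunk in zip(phases, np.array_split(order, len(phases)))}

# At the end of the first year, the Jan 2014 cohort serves as treatment
# and the Jan 2015 cohort as the comparison group.
print({phase: len(members) for phase, members in cohorts.items()})
```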

  29. If Some Groups Must Get the Program • Example: a program for children in Kenya • Highly vulnerable children (orphans) must get the program, so they are enrolled directly • Randomize among moderately vulnerable children • Children who are not vulnerable do not receive the program
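A sketch of this constrained design (Python; the stratum sizes and the 50/50 split within the middle stratum are hypothetical): everyone in the highest-vulnerability stratum is enrolled, and the lottery runs only within the middle stratum.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed counts per vulnerability stratum (illustrative only).
n_high, n_mid, n_not = 200, 600, 400

# Highly vulnerable children are all enrolled; no randomization there.
enrolled_high = list(range(n_high))

# Randomize only among the moderately vulnerable.
mid_ids = rng.permutation(n_mid)
treatment = mid_ids[: n_mid // 2]   # enrolled by lottery
control = mid_ids[n_mid // 2:]      # comparison group

# The evaluation compares treatment vs control within the middle stratum;
# the guaranteed enrollees and the excluded group sit outside the experiment.
print(len(enrolled_high), len(treatment), len(control))   # 200 300 300
```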

  30. Vary Treatment Intensity and Nature • Intensity (randomize across communities to measure the additional impact of SMS reminders): HIV/AIDS information campaign, 100 villages, vs HIV/AIDS information campaign + SMS reminders, 100 villages • Nature (randomize across communities to see which approach has a greater impact): HIV/AIDS information campaign via radio, 100 villages, vs HIV/AIDS information campaign via newspaper, 100 villages
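A sketch of assigning villages in the intensity comparison (Python; the arm names are assumptions, while the 100-villages-per-arm split comes from the slide; the nature comparison would be randomized the same way with radio/newspaper arms):

```python
import numpy as np

rng = np.random.default_rng(3)

villages = np.arange(200)           # 200 villages, 100 per arm as on the slide
arms = ["campaign_only", "campaign_plus_sms"]

# Shuffle the villages and split them evenly between the two arms.
perm = rng.permutation(villages)
assignment = {arm: perm[i * 100:(i + 1) * 100] for i, arm in enumerate(arms)}

print({arm: len(v) for arm, v in assignment.items()})   # 100 villages each
```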

  31. Unit of Randomization • At what level should I randomize? • Individual • Household • Clinic • Community • Considerations • Political feasibility of randomization at individual level • Spillovers within groups • Implementation capacity: One clinic administering different treatments

  32. Unit of Randomization • Individual randomization: 630 participants (315 treatment, 315 control) • Clinic-level randomization: 150 clinics (75 treatment, 75 control), 3,000 participants • Bigger unit = bigger study
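A sketch of clinic-level (cluster) randomization (Python with pandas; the clinic and participant counts follow the slide, while 20 participants per clinic is an assumption): every participant inherits the assignment of their clinic, which is why the clustered design needs far more participants for the same statistical power.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)

# 150 clinics x 20 participants each = 3,000 participants.
clinics = np.arange(150)
treated_clinics = set(rng.permutation(clinics)[:75])   # 75 treatment, 75 control

participants = pd.DataFrame({
    "pid": np.arange(150 * 20),
    "clinic": np.repeat(clinics, 20),
})
# Each participant's status is determined entirely by their clinic's draw.
participants["treated"] = participants["clinic"].isin(treated_clinics)

print(participants["treated"].value_counts())   # 1500 True, 1500 False
```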

  33. Advantages of Randomized Evaluations • Results are transparent and easy to share • Difficult to manipulate or to dispute • More likely to be convincing

  34. Limitations • Cannot always be used (e.g., for political or ethical reasons) • Internal validity issues: power, attrition, compliance, etc. • External validity issues: sample size, generalizability of results to the population of interest • These issues often affect the validity of non-experimental studies as well

  35. EXAMPLE #1 Evaluating school-based HIV education programs among youth in Kenya

  36. Education and HIV/AIDS in Kenya • Esther Duflo, Pascaline Dupas, Michael Kremer, Vandana Sharma

  37. Background Information: HIV/AIDS in Kenya • Kenya AIDS Indicator Survey (KAIS) • August – December 2007 • Sampled 18,000 individuals aged 15 to 64 from 10,000 households across Kenya • Overall: 7.4% of Kenyans are HIV+; 8.7% of women are HIV+; 5.6% of men are HIV+ • More than 1.4 million Kenyans are living with HIV/AIDS • National AIDS and STI Control Programme, Ministry of Health, Kenya. July 2008. Kenya AIDS Indicator Survey 2007: Preliminary Report. Nairobi, Kenya.

  38. HIV/AIDS in Kenya

  39. School-Based HIV Prevention Interventions • Education has been called a "social vaccine" for HIV/AIDS • Children aged 5-14 have been called a "window of hope" because: • they have low HIV infection rates • their sexual behaviors are not yet established and may be more easily molded • In Africa, most children now attend at least some primary school • School-based HIV prevention programs are inexpensive and easy to implement and replicate • There is limited rigorous evidence about the effectiveness of these types of programs

  40. Background - Study Design • Between 2003 and 2006, the non-profit organization ICS implemented HIV prevention programs in 328 primary schools in Western Kenya • Schools were randomly assigned to receive none, one, or both of the following interventions: • Teacher Training in Kenya's national HIV/AIDS education curriculum • The national HIV curriculum focuses on abstinence until marriage and does not include condom information • The program provided in-service training to 3 upper-primary teachers to enhance delivery of the curriculum • Uniforms Distribution Program • Provided two free uniforms to one cohort of students (girls and boys), with the aim of helping them stay in school longer (the second uniform was provided 18 months after the first)

  41. Background - Study Design • Study Location: Butere, Mumias, Bungoma South, and Bungoma East districts in Western Province • Study Sample: 19,300 youths (approximately half female) enrolled in Grade 6 in 2003 (~13 years old) • Experimental Design: schools randomly assigned to receive neither intervention, teacher training only, uniforms only, or both (a 2x2 design)

  42. Results • Teacher Training: • Teachers were more likely to discuss HIV in class • Little impact on knowledge, self-reported sexual activity, or condom use • Increased tolerance toward people with HIV • No effect on pregnancy rates 3 years and 5 years later

  43. Results • Uniforms program: • Reduced dropout rates (by 17% in boys and 14% in girls) • Reduced the rate of teen childbearing • From 14.4% to 10.6% after 3 years • From 30.7% to 26.1% after 5 years

  44. HIV/AIDS and Education in Western Kenya: A Biomarkers Follow-up Study • Objective: to study the impact of the teacher training and uniforms programs on actual transmission of STIs and HIV • Self-reported data is often unreliable, especially with respect to sexual behavior • Changes in knowledge or attitudes do not necessarily translate into sustained behavior change

  45. Study Design • A cross-sectional survey to measure HSV-2 prevalence and behavioral outcomes was administered to subjects between February 2009 and March 2011 • Six to eight years after interventions • Note: not powered to estimate impacts on HIV

  46. Study flow: 328 schools in Western Kenya were randomly assigned to Teacher Training, Free Uniforms, or Control (programs offered in 2003); at follow-up in 2009-2010, HSV-2 prevalence and KAP (Knowledge, Attitudes and Practices) were measured in each arm

  47. Results • [Chart: HSV-2 infection 7 years post-intervention]
