What is Randomized Evaluation? Why Randomize? J-PAL South Asia, April 29, 2011
Life Cycle of an Evaluation • Needs assessment • Process evaluation (running alongside implementation) • Impact evaluation • Other types of evaluations: review / cost-benefit analysis / cost-effectiveness analysis
What was the issue? • Class sizes are large • Many children in 3rd and 4th standard are not even at the 1st standard level of competency • Social distance between teacher and some of the students is large
Proposed solution • Hire local women (balsakhis) from the community • Train them to teach remedial competencies • Basic literacy, numeracy • Identify lowest performing 3rd and 4th standard students • Take these students out of class (2 hours/day) • Balsakhi teaches them basic competencies
Possible outcomes • Pros: reduced class size; teaching at the appropriate level; reduced social distance; improved learning for lower-performing students; improved learning for higher performers • Cons: less qualified teacher; teacher resentment; reduced interaction with higher-performing peers
Study design • We want to look at impact • What do we do?
What is the impact of the balsakhi? • What would have happened without the balsakhi program?
Pre-post (Before vs. After) • Look at the average change in test scores over the school year for the balsakhi children
What was the impact of the balsakhi program? [Chart: mean test scores, 2002 vs. 2003; before-vs.-after "impact" = 26.42 points?]
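In code, the pre-post estimate is just the change in the mean score over the year. A minimal sketch; only the 26.42-point figure comes from the study, and the scores below are made up for illustration:

```python
# Pre-post (before vs. after) estimate: mean change in test scores
# for the balsakhi children over the school year.
# Hypothetical scores, for illustration only.
before = [42.0, 55.0, 48.0, 51.0]  # 2002 scores
after = [70.0, 80.0, 77.0, 75.0]   # 2003 scores

impact_prepost = sum(after) / len(after) - sum(before) / len(before)
print(impact_prepost)
```

Note that this attributes the entire year-on-year gain to the program, which is exactly the assumption the next slides question.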
Impact: What is it? [Diagram: primary outcome over time; impact is the gap between the outcome path with the intervention and the counterfactual path without it]
How to measure impact? Impact is defined as a comparison between: • the outcome some time after the program has been introduced • the outcome at that same point in time had the program not been introduced, i.e., the "counterfactual"
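The definition can be illustrated with toy numbers (all hypothetical); the key point is that the counterfactual outcome is never observed and must be estimated by the evaluation design:

```python
# Impact = outcome with the program minus the counterfactual outcome
# (what the same children would have scored without the program).
# Hypothetical numbers, for illustration only.
outcome_with_program = 75.5
counterfactual = 60.0  # never observed directly; must be estimated

impact = outcome_with_program - counterfactual
print(impact)
```

If scores would have risen to 60 even without a balsakhi, the true impact (15.5) is much smaller than the 26-point pre-post change.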
Impact evaluation methods 1. Randomized Experiments • Also known as: • Random Assignment Studies • Randomized Field Trials • Social Experiments • Randomized Controlled Trials (RCTs) • Randomized Controlled Experiments
Impact evaluation methods 2. Non- or Quasi-Experimental Methods • Pre-Post • Simple Difference • Difference-in-Differences • Multivariate Regression • Statistical Matching • Interrupted Time Series • Instrumental Variables • Regression Discontinuity
Other Methods • There are more sophisticated non-experimental methods to estimate program impacts: • Multivariate Regression • Matching • Instrumental Variables • Regression Discontinuity • These methods rely on being able to "mimic" the counterfactual under certain assumptions • Problem: those assumptions are not testable
Methods to estimate impacts • We estimated the program's impact with each of these methods, using data from the schools that received a balsakhi • Pre-Post (Before vs. After) • Simple difference • Difference-in-differences • Other non-experimental methods • Randomized Experiment
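The first three estimators can be sketched on one set of school-level means (all numbers invented for illustration) to show how each one constructs a different counterfactual:

```python
# Hypothetical mean scores: treatment (balsakhi) and comparison
# schools, before and after the school year.
treat_pre, treat_post = 49.0, 75.5
comp_pre, comp_post = 50.0, 70.0

# 1. Pre-post: change over time in the treated schools only.
pre_post = treat_post - treat_pre

# 2. Simple difference: treated vs. untreated schools, after only.
simple_diff = treat_post - comp_post

# 3. Difference-in-differences: change in treated schools minus
#    change in comparison schools.
diff_in_diff = (treat_post - treat_pre) - (comp_post - comp_pre)

print(pre_post, simple_diff, diff_in_diff)
```

Each estimate differs because each method assumes a different counterfactual, which is the "bottom line" of the summary slide below.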
II – What is a randomized experiment? The Balsakhi Example povertyactionlab.org
The basics Start with the simple case: • Take a sample of schools • Randomly assign them to either: • Treatment Group: offered the treatment • Control Group: not allowed to receive the treatment (during the evaluation period)
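The assignment step can be sketched as follows; the school names, sample size, and seed are all placeholders:

```python
import random

# Randomly assign a sample of schools to treatment or control.
# School IDs are hypothetical.
schools = [f"school_{i}" for i in range(1, 21)]

rng = random.Random(2011)  # fixed seed so the assignment is reproducible
shuffled = rng.sample(schools, k=len(schools))
treatment = shuffled[:10]  # offered the balsakhi
control = shuffled[10:]    # no balsakhi during the evaluation period

print(len(treatment), len(control))
```

Because every school has the same chance of landing in either group, the two groups differ only by chance at baseline.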
6 – Randomized Experiment • Suppose we evaluated the balsakhi program using a randomized experiment • QUESTION #1: What would this entail? How would we do it? • QUESTION #2: What would be the advantage of using this method to evaluate the impact of the balsakhi program? Source: www.theoryofchange.org
Random assignment in Vadodara [Chart: mean 2006 baseline test scores were 28 in both the treatment and comparison groups]
Key advantage of experiments Because members of the groups (treatment and control) do not differ systematically at the outset of the experiment, any difference that subsequently arises between them can be attributed to the program rather than to other factors.
Constraints of design • Pratham had received funding to introduce balsakhis in all schools • Principals wanted balsakhis in their school • Teachers wanted balsakhis in their class • If denied a balsakhi, they may block data collection efforts
Rotation design • Round 1: in "red" schools the balsakhi is assigned to Std. 3; in "blue" schools, to Std. 4 • Round 2: the assignment rotates, so the Std. 3 group from Round 1 becomes the Std. 4 group in Round 2, and vice versa
Rotation design • Comparison (Round 1, Std. 3): "red" schools are the treatment group and "blue" schools the control; for Std. 4 the roles are reversed
Key steps in conducting an experiment 1. Design the study carefully 2. Collect baseline data 3. Randomly assign people to treatment or control 4. Verify that assignment looks random 5. Monitor process so that integrity of experiment is not compromised
Key steps in conducting an experiment (cont.) 6. Collect follow-up data for both the treatment and control groups 7. Estimate program impacts by comparing mean outcomes of treatment group vs. mean outcomes of control group. 8. Assess whether program impacts are statistically significant and practically significant.
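Steps 6 through 8 can be sketched in miniature on simulated follow-up data (all numbers invented; a Welch-style t-statistic stands in for a full significance test):

```python
import random
import statistics

# Simulate follow-up test scores for control and treatment children.
# All parameters are hypothetical; the built-in true effect is +6 points.
rng = random.Random(42)
control = [rng.gauss(50, 10) for _ in range(100)]
treatment = [rng.gauss(56, 10) for _ in range(100)]

# Step 7: estimate the impact as the difference in mean outcomes.
impact = statistics.mean(treatment) - statistics.mean(control)

# Step 8: Welch-style t-statistic for the difference in means;
# |t| > ~2 indicates significance at roughly the 5% level.
se = (statistics.variance(treatment) / len(treatment)
      + statistics.variance(control) / len(control)) ** 0.5
t_stat = impact / se
print(round(impact, 1), round(t_stat, 1))
```

With 100 children per arm and a standard deviation of 10, an effect of this size is comfortably detectable; smaller effects would call for larger samples.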
Impact of Balsakhi - Summary [Table: impact estimates under each method] *: Statistically significant at the 5% level
Impact of Balsakhi - Summary *: Statistically significant at the 5% level Bottom Line: Which method we use matters!
Results • The program improved average test scores by nearly 10% • The lowest-performing children in balsakhi schools improved the most (relative to the control group) • Highly cost-effective: the intervention cost less than Rs. 100 per child, and the benefits far outweigh the cost • Scalable, especially in light of the RTE Act's provisions on remedial education