
Impact Evaluation for Real Time Decision Making
Arianna Legovini, Head, Development Impact Evaluation Initiative (DIME), World Bank


Presentation Transcript


  1. Impact Evaluation for Real Time Decision Making. Arianna Legovini, Head, Development Impact Evaluation Initiative (DIME), World Bank

  2. Do we know…
  • Can leadership and motivation alone break the poverty trap? – Rwanda rapid results initiative
  • Can participation change local norms and institutions? – Sierra Leone institutional reform & capacity building (GoBifo)
  • What type of information can enhance local accountability? – Uganda schools’ budget
  • What is the best way to select local projects? – Indonesia direct voting versus representatives’ decisions
  • Do audits increase accountability in public administration? – Indonesia, Brazil

  3. Of course we know!?
  • These are difficult questions…
  • We turn to our best judgment for guidance and pick an incentive, subsidy level, a voting scheme, a package of services…
  • Is there any other incentive, subsidy, scheme or package that will do better?

  4. The decision process is complex
  • A few big decisions are taken during design, but many more are taken during roll-out and implementation

  5. Developing a decision tree for public procurement …

  6. How to select between plausible alternatives?
  • Establish which decisions will be taken upfront and which will be tested during roll-out
  • Scientifically test critical nodes: measure the impact of one option relative to another or to no intervention
  • Pick better and discard worse during implementation
  • Cannot learn everything at once
  • Select carefully what you want to test by involving all relevant partners

  7. Impact evaluation: application of the scientific method to understand and measure behavioral response
  • Hypothesis: if we subdivide contracts, then more companies will bid for the contract
  • Testing: randomly assign which bids will be subdivided into smaller contracts, and compare the number of bidders and the bidding price relative to engineering costs
  • Observations: smaller contract size increases the number of bidders; bidding prices fall; the costs of administering contracts rise
  • Conclusion: smaller contract size increases competition in the bidding procedure
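
  A minimal sketch of the randomized test described on this slide, using simulated data. The sample size and the assumed effect (+2 bidders for subdivided contracts) are illustrative assumptions, not results from the procurement example:

    # Randomly assign contracts to "subdivided" vs. "standard" and compare
    # the average number of bidders. Data here are simulated for illustration.
    import random
    import statistics

    random.seed(0)

    contract_ids = list(range(100))
    random.shuffle(contract_ids)
    subdivided = set(contract_ids[:50])   # contracts randomly chosen to be split into smaller lots

    def observed_bidders(cid):
        """Simulated number of bidders; subdivided contracts attract more (by assumption)."""
        base = random.gauss(5, 2)
        return max(1, base + (2 if cid in subdivided else 0))

    bidders = {cid: observed_bidders(cid) for cid in contract_ids}

    treated_mean = statistics.mean(bidders[c] for c in subdivided)
    control_mean = statistics.mean(bidders[c] for c in contract_ids if c not in subdivided)
    print(f"Average bidders, subdivided lots: {treated_mean:.2f}")
    print(f"Average bidders, standard lots:   {control_mean:.2f}")
    print(f"Estimated effect of subdividing:  {treated_mean - control_mean:.2f}")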

  8. Walk along a decision tree for public procurement…
  • In Oklahoma, publication of engineering costs lowered procurement costs by 4.6%
  • In Andhra Pradesh, e-procurement increased bidder participation by 25%

  9. What is impact evaluation? Counterfactual analysis isolates the causal effect of an intervention on an outcome
  • Effect of contract size on the number of bidders
  • Effect of a renegotiation option on the bidding price
  • Effect of information on price
  • Ideally we would compare the same individual with & without the option, information, etc. at the same point in time to measure the effect
  • This is impossible
  • Impact evaluation instead uses large numbers (of individuals, communities) to estimate the effect

  10. How is this done?
  • Select one group to receive treatment (contract alternative, information…)
  • Find a comparison group to serve as counterfactual (other alternative, no information…)
  • Use these counterfactual criteria:
    • Treated & comparison groups have identical initial average characteristics (observed and unobserved)
    • The only difference is the treatment
    • Therefore the only reason for the difference in outcomes is due to the treatment (or differential treatment)
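
  A minimal sketch of this counterfactual logic with made-up data: after random assignment the two groups have the same average baseline characteristics, so the difference in average outcomes estimates the treatment effect. The "+10" treatment effect and the baseline distribution are assumptions for illustration only:

    import random
    import statistics

    random.seed(1)

    units = [{"baseline": random.gauss(100, 15)} for _ in range(200)]
    random.shuffle(units)
    treated, comparison = units[:100], units[100:]

    # 1. Balance check: baseline averages should be close (identical in expectation).
    print("Baseline means:",
          round(statistics.mean(u["baseline"] for u in treated), 1),
          round(statistics.mean(u["baseline"] for u in comparison), 1))

    # 2. Outcomes: treated units get an assumed effect of +10; both groups get noise.
    for u in treated:
        u["outcome"] = u["baseline"] + 10 + random.gauss(0, 5)
    for u in comparison:
        u["outcome"] = u["baseline"] + random.gauss(0, 5)

    # 3. Impact estimate = difference in average outcomes between the two groups.
    impact = (statistics.mean(u["outcome"] for u in treated)
              - statistics.mean(u["outcome"] for u in comparison))
    print(f"Estimated impact: {impact:.1f}")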

  11. How is monitoring different from impact evaluation?
  [Chart: transfers as % of budget allocation, before (1996) and after (2001): 20% before, 80% after, change = 60]
  The monitoring story:
  • In 1996, transfers to schools were 20% of the budget allocation
  • The government started publishing school allocations in newspapers
  • By 2001, transfers had jumped to 80% of the budget allocation, a gain of 60 (80-20)

  12. How is monitoring different from impact evaluation?
  The impact evaluation story:
  • In 1996, a low-awareness year, transfers to schools were 20% of the budget allocation
  • After the 1996 PETS was published, there was much public discussion and budgets were published in local newspapers
  • By 2001, transfers to schools close to newspaper outlets had increased to 80%
  • Transfers to schools far from newspaper outlets increased to 36%
  • Newspaper information increased transfers by 44 (80-36)
  [Chart: transfers as % of budget allocation, 1996 vs. 2001, for schools near newspaper outlets (20% to 80%) and far from them (20% to 36%); impact of information = 44, the rest reflects other factors]
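
  The arithmetic behind slides 11 and 12, using the figures reported there: monitoring compares before and after for the same schools, while impact evaluation compares schools with and without newspaper access at the same point in time.

    before = 20           # 1996: % of budget allocation reaching schools
    after_near = 80       # 2001: schools close to newspaper outlets
    after_far = 36        # 2001: schools far from newspaper outlets

    monitoring_change = after_near - before   # 60: information plus all other factors
    impact_estimate = after_near - after_far  # 44: effect attributable to information

    print(monitoring_change, impact_estimate)  # 60 44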

  13. How to use Monitoring & Impact Evaluation?
  • Use monitoring to track implementation efficiency (input to output)
  • Use impact evaluation to measure effectiveness (output to outcome)
  [Diagram: results chain $$$ → inputs → outputs → outcomes; monitoring covers efficiency (inputs to outputs), impact evaluation covers effectiveness and behavior (outputs to outcomes)]

  14. Question types and methods
  • Monitoring and process evaluation (descriptive analysis):
    • Is the program being implemented efficiently?
    • Is the program targeting the right population?
    • Are outcomes moving in the right direction?
  • Impact evaluation (causal analysis):
    • What was the effect of the program on outcomes?
    • How would outcomes change under alternative program designs?
    • Is the program cost-effective?

  15. When would you use M&E and when IE?
  • Are transfers to localities being delivered as planned? – M&E
  • Does information reduce capture? – IE
  • What are the trends in public procurement? – M&E
  • Do contractual incentives increase timeliness in delivery of services? – IE

  16. Why evaluate: babies & bath water. Uganda Community-Based Nutrition
  • A failed project: in year 3, communities stopped receiving funds
  • Parliament reacted negatively and the intervention was stopped
  …but…
  • Strong impact evaluation results in years 1-2: children in treatment communities scored half a standard deviation better than children in the control group
  • Recently, the Presidency asked to take a second look at the evaluation: saving the baby?

  17. Why evaluate?
  • Improve quality of programs
    • Separate institutional performance from quality of intervention
    • Test alternatives and inform design in real time
  • Increase program effectiveness
    • Answer the “so what” questions
  • Build government institutions for evidence-based policy-making
    • Plan for implementation of options, not solutions
    • Find out what alternatives work best
    • Adopt a better way of doing business and taking decisions

  18. Institutional framework
  [Diagram: how evidence feeds each actor in the accountability chain]
  • PM/Presidency: communicates to constituencies (campaign promises); uses evidence on the effects of government programs for accountability
  • Treasury/Finance: allocates the budget; uses evidence on the cost-effectiveness of different programs
  • Line ministries: deliver programs and negotiate the budget (service delivery); use evidence on the cost-effectiveness of alternatives and the effects of sector programs

  19. Shifting the program paradigm
  From:
  • A program is a set of activities designed to deliver expected results
  • The program will either deliver or not
  To:
  • A program is a menu of alternatives with a learning strategy to find out which work best
  • Change programs over time to deliver more results

  20. Shifting the evaluation paradigm
  From retrospective, external, independent evaluation:
  • Top down
  • Determine whether the program worked or not
  To prospective, internal, operationally driven impact evaluation (externally validated):
  • Set the program learning agenda bottom up
  • Consider plausible implementation alternatives
  • Test scientifically and adopt the best
  • Just-in-time advice to improve the effectiveness of the program over time

  21. Operational questions: managing for results
  • Question the design choices of the program: institutional arrangements, delivery mechanisms, packages, pricing/incentive schemes
  • Use randomized trials to test alternatives
  • Focus on short-term outcomes: take-up rates, use, adoption
  • Follow-up data collection and analysis 3-6-12 months after exposure
  • Measure the impact of alternative treatments on short-term outcomes and identify the “best”
  • Change the program to adopt the best alternative
  • Start over
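
  A sketch of the loop this slide describes, with made-up data: test several program alternatives on a short-term outcome (here, a take-up rate), identify the best-performing arm, and adopt it. The arm names and take-up probabilities are illustrative assumptions, not program results.

    import random
    import statistics

    random.seed(2)

    # Hypothetical design alternatives and their assumed true take-up probabilities.
    arms = {"flat subsidy": 0.30, "matched subsidy": 0.45, "information only": 0.20}

    def run_trial(arm_probability, n=500):
        """Simulate whether each of n randomly assigned participants takes up the program."""
        return [1 if random.random() < arm_probability else 0 for _ in range(n)]

    take_up = {arm: statistics.mean(run_trial(p)) for arm, p in arms.items()}
    best = max(take_up, key=take_up.get)

    for arm, rate in take_up.items():
        print(f"{arm}: take-up {rate:.0%}")
    print(f"Adopt the best-performing alternative: {best}")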

  22. Policy questions: accountability
  • How much does the program deliver? Is it cost-effective?
  • Use the most rigorous method of evaluation possible
  • Focus on higher-level outcomes: educational achievement, health status, income
  • Measure the impact of the operation on its stated objectives and on a metric of common outcomes
  • One, two, three year horizon
  • Compare with results from other programs
  • Inform the budget process and allocations

  23. Is impact evaluation a one-shot analytical product?
  • It is a technical assistance product to change the way decisions are taken
  • It is about building a relationship
  • It adds results-based decision tools to complement existing sector skills
  • The relationship delivers not one but a series of analytical products
  • It must provide useful (actionable) information at each step of the impact evaluation

  24. DIME: Development Impact Evaluation
  • Objective: build know-how in implementing agencies and work with Bank operations to generate operational knowledge
  • Programmatic activities: annual workshops for skill development; a community of practice for South-to-South learning; a technical advisory group to assure the quality of analytical work
  • In-country activities: technical support and field coordination through the project cycle

  25. DIME programs

  26. Thank you. Email: alegovini@worldbank.org | Web: www.worldbank.org/dime
