
Integrating Quantitative & Qualitative Methods; and Putting It All Together




  1. Integrating Quantitative & Qualitative Methods; and Putting It All Together. Prepared for the course on Evaluating the Impact of Projects and Programs, April 10-14, 2006, Beijing, China. Contact: Ywang2@worldbank.org

  2. Outline This session links together what we have covered so far [leaving the applications and case studies for Module 5] • Integrating quantitative and qualitative methods • What if we do not have baseline data? • Other approaches to impact evaluation • Selecting a methodology: best practices • A review of the topics we have covered so far • Homework for the team presentation tomorrow

  3. Integrating Quantitative and Qualitative Methods • No single method is perfect: qualitative methods, too, have several drawbacks • Growing acceptance of an integrated approach • Benefits of integrating both: • Consistency checks • Different perspectives • Analysis can be conducted at different levels • Feedback that helps interpret the findings of the quantitative analysis

  4. An example: the Nicaragua School Autonomy Reform case • Quantitative: a quasi-experimental design was used • to determine the relationship between decentralized management and learning; and • to generalize results across different types of schools • Qualitative: a series of key-informant interviews and focus-group discussions with different school-based staff and parents were used • to analyze the context in which the reform was introduced, • to examine the decision-making dynamics in each school, and • to assess the perspectives of different school-community actors on the autonomy process.

  5. What if we do not have baseline data? • Use time-lag comparison groups: bring control groups into the program at a later stage, once the evaluation has been designed and initiated. Random selection determines when each eligible group joins. • Use quasi-experimental designs such as "propensity score matching," in which the comparison group is matched to the treatment group on a set of observed characteristics, or by using the "propensity score" (see below). • Use other quasi-experimental designs. • Use non-experimental designs – other approaches.

  6. Propensity Score Matching • Aims to find, from a sample of non-participants, the comparison group closest to the sample of participants. • Needs a representative sample survey of eligible non-participants and one of participants. • Pool the two samples and estimate a logit model of program participation. • Compute the predicted probability of participation from the logit regression; these predictions are called "propensity scores." • Exclude non-participants with the lowest scores, which fall outside the range observed for participants. • For each participant, find the 5 non-participants with the closest propensity scores. These are called the nearest neighbors. • The estimated project gain for individual i is the difference between his/her outcome value (e.g. income) and the mean outcome of the nearest neighbors. • Calculate the mean of these individual gains to obtain the average overall gain.
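The matching steps above can be sketched on simulated data. This is a minimal illustration, not the course's implementation: the variable names, sample size, and the true gain of 1.5 are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative): one observed characteristic x,
# participation d, and outcome y with a true program gain of 1.5.
n = 400
x = rng.normal(size=n)
d = (rng.random(n) < 1.0 / (1.0 + np.exp(-x))).astype(float)
y = 2.0 * x + 1.5 * d + rng.normal(size=n)

# Pool the samples and fit a logit of participation, then predict
# each unit's "propensity score".
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(1000):                       # plain gradient ascent
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (d - p) / n
scores = 1.0 / (1.0 + np.exp(-X @ beta))

# Drop non-participants whose scores fall below the lowest
# participant score (the common-support restriction).
part = np.flatnonzero(d == 1)
nonpart = np.flatnonzero(d == 0)
nonpart = nonpart[scores[nonpart] >= scores[part].min()]

# Match each participant to its 5 nearest neighbors by score,
# take the outcome gap, and average the individual gains.
gains = []
for i in part:
    nearest = nonpart[np.argsort(np.abs(scores[nonpart] - scores[i]))[:5]]
    gains.append(y[i] - y[nearest].mean())
avg_gain = float(np.mean(gains))            # should land near the true 1.5
```

In practice the logit would include many observed characteristics, not one, and standard errors would need to account for the matching step.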

  7. Non-experimental designs – other approaches to IE • Evaluating structural adjustment programs where there is no counterfactual… • Approaches with no counterfactual: • Qualitative studies • "Before and After," which compares the performance of key variables after a program with their values prior to the program • Approaches that generate a counterfactual using multiple assumptions: • Computable General Equilibrium models (CGEs), which attempt to contrast outcomes in treatment and comparison groups through simulations • With-and-without comparisons • Statistical controls, i.e. regressions that control for differences in initial conditions and in the policies undertaken in program and non-program countries • Theory-based evaluation.
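A minimal sketch of the "Before and After" comparison and of why the lack of a counterfactual matters; the data are simulated and the effect sizes (+5 program, +3 trend) are assumptions for the example, not course figures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical key variable (say, household income) measured on the
# same 200 units before and after a program; the simulated change
# mixes a program effect (+5) with a secular trend (+3).
before = rng.normal(100.0, 10.0, size=200)
after = before + 5.0 + 3.0 + rng.normal(0.0, 2.0, size=200)

before_after_estimate = after.mean() - before.mean()
# With no counterfactual, the +3 trend is wrongly attributed to the
# program: the before-and-after estimate is ~8, not the true 5.
```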

  8. Cost-Benefit or Cost-Effectiveness Analysis • This type of analysis does not measure impact; it measures program efficiency. • It attempts to measure the economic efficiency of program costs versus benefits in monetary terms, which may not be possible for programs in the social sectors. • Identify all program/project costs and benefits and compute a cost-effectiveness ratio R = cost / unit of benefit, which can be compared across interventions. • Caveats: quantifying costs and benefits; choosing appropriate indicators; keeping methods and assumptions consistent across ratios.
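The ratio R = cost / unit of benefit can be illustrated with two hypothetical interventions; the names and figures below are invented for the example and must be measured in the same benefit unit for the comparison to be valid.

```python
# Hypothetical cost and benefit figures (not from the course);
# benefit is measured in the same unit for both interventions,
# e.g. additional test-score points achieved.
interventions = {
    "textbook_program": {"cost": 50_000.0, "benefit_units": 2_000.0},
    "tutoring_program": {"cost": 120_000.0, "benefit_units": 3_000.0},
}

# R = cost / unit of benefit: a lower ratio means one unit of
# benefit is bought more cheaply.
ratios = {name: v["cost"] / v["benefit_units"]
          for name, v in interventions.items()}
most_cost_effective = min(ratios, key=ratios.get)
```

Here the textbook program costs 25 per unit of benefit versus 40 for tutoring, so it is the more cost-effective of the two, even though the tutoring program delivers more total benefit.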

  9. Selecting a Methodology Evidence from best-practice evaluations shows that the choices of IE methods are not mutuallyexclusive; combining quantitative and qualitative methods is ideal. Features of best-practice evaluations: • An estimate of the counterfactual has been made, by randomization or by other methods such as matching to create a comparison group • Relevant data are collected at baseline and follow-up • The treatment and comparison groups are of sufficient size to support statistical inference, with minimal attrition • Cost-benefit or cost-effectiveness analysis is included to measure project efficiency • Qualitative techniques are incorporated to allow triangulation of findings.

  10. Putting it all together • The "why" and "what" of IE (sessions 1-3) • The log frame and its role (session 4) • Evaluation questions, key steps (session 5) • Designing the evaluation and indicators (sessions 6-7) • Quantitative (experimental and quasi-experimental) methods (session 8) • Survey design and sampling (sessions 9-10) • Qualitative and other non-experimental methods (sessions 11-13) • Data analysis and reporting (sessions 14-17) • Participant team work session [last session]

  11. Why conduct an impact evaluation? (sessions 1-3) • What is the project log frame? (session 4) • What are the evaluation questions of interest? (session 5) • How do we obtain answers to those questions? (sessions 6, 8, 10, 11, 12) • What indicators and instruments are used for measurement? (session 7) • Where and how are the data collected? (sessions 8, 9, 12) • How are the collected data analyzed and interpreted? (sessions 10, 12, 14-17) • How are the evaluation results reported and applied? (sessions 14-17) • Summary of the course content

  12. The overall benefits of evaluation • Confidentiality and anonymity principles in evaluation • The burdens and harms an evaluation may impose on its participants • Handling the errors and crises discovered during an evaluation • The professional ethics of evaluation

  13. Design Matrix. World Bank Institute Evaluation Group (WBIEG)
