Impact Evaluation in Education

Presentation Transcript


  1. Impact Evaluation in Education Introduction to Monitoring and Evaluation Andrew Jenkins 23/03/14

  2. The essence of theory of change – linking activities to intended outcomes. "I am building a temple." "I am cutting rocks." http://img359.imageshack.us/img359/7104/picture420bt2.jpg

  3. Theory of change: "the process through which it is expected that inputs will be converted to expected outputs, outcome and impact" (DfID, Further Business Case Guidance "Theory of Change")

  4. Theory of change Start with a RESULTS CHAIN

  5. The results chain: tips

     | Activities             | Outputs                    | Outcomes            |
     |------------------------|----------------------------|---------------------|
     | We produce             | Influence                  | Contribute to       |
     | We control             | We control                 | Clients control     |
     | 100% attribution       | Some attribution           | Partial attribution |
     | We are accountable for | We expect                  | Should occur        |
     | Delivered annually     | By end of program          | Long-term           |
     | Readily changed        | Less flexibility to change | Long term           |

  6. Monitoring – activities and outputs

  7. Personal Monitoring Tools

  8. No monitoring - blind and deaf

  9. Monitoring and Evaluation
     Monitoring measures efficiency: how productively inputs (money, time, personnel, equipment) are being used in the creation of outputs (products, results). An efficient organisation is one that achieves its objectives with the least expenditure of resources.
     Evaluation measures effectiveness: the degree to which results / objectives have been achieved. An effective organisation is one that achieves its results and objectives.

  10. Monitoring is focused on the project process (per individual project); evaluation is focused on the effectiveness of the project process (across many projects).
      • Inputs (monitoring covers all): resources – staff, funds, facilities, supplies, training.
      • Outputs (monitoring covers most): project deliverables achieved – a "count" (quantified) of what has been done.
      • Outcomes (monitoring covers some): short and intermediate effects; long-term effects and changes.

  11. Resist temptation, there must be a better way! • Clear objectives • Few key indicators • Quick, simple methods • Existing data sources • Participatory methods • Short feedback loops • Act on results!

  12. Monitoring/Evaluation objectives must be SMART • Specific • Measurable • Achievable • Realistic • Timed (see 10 Easy Mistakes, page 5)

  13. Evaluation: who evaluates whom? The value of a joint approach

  14. The Logical Chain
      1. Define Objectives (and Methodology)
      2. Supply Inputs
      3. Achieve Outputs
      4. Generate Outcome
      5. Identify and Measure Indicators
      6. Evaluate by comparing Objectives with Indicators
      7. Redefine Objectives (and Methodology)

  15. Impact Evaluation • An assessment of the causal effect of a project, program or policy on beneficiaries. Uses a counterfactual: Impact = Outcomes – what would have happened anyway.
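A minimal numeric sketch of this counterfactual logic (all numbers below are invented purely for illustration, not real program results):

```python
# Impact = Outcomes - what would have happened anyway (the counterfactual).
from statistics import mean

observed = [62, 70, 58, 75, 66]        # outcomes for program beneficiaries
counterfactual = [55, 64, 54, 69, 60]  # estimated outcomes without the program

impact = mean(observed) - mean(counterfactual)
print(f"Estimated impact: {impact:.1f}")  # 66.2 - 60.4 = 5.8
```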

  16. When to use Impact Evaluation? Evaluate impact when the project is: • Innovative • Replicable / scalable • Strategically relevant for reducing poverty • Likely to fill a knowledge gap • Likely to have substantial policy impact. Use evaluation within a program to test alternatives and improve the program.

  17. Impact Evaluation Answers • What was the effect of the program on outcomes? • How much better off are the beneficiaries because of the program or policy? • How would outcomes change under a different program design? • Is the program cost-effective?

  18. Different methods to measure impact

  19. Randomization • The "gold standard" in evaluating the effects of interventions • Allows us to form "treatment" and "control" groups with identical characteristics that differ only by the intervention. Counterfactual: the randomized-out group.
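A minimal sketch of random assignment, assuming a simple list of study units (the unit IDs and the 50/50 split are illustrative):

```python
# Randomly assign units (e.g. students or schools) to treatment and control.
# With enough units, randomization balances observed and unobserved
# characteristics, so the groups differ only by the intervention.
import random

random.seed(42)             # fix the seed so the assignment is reproducible
units = list(range(200))    # illustrative unit IDs
random.shuffle(units)

treatment = units[:100]     # receives the intervention
control = units[100:]       # randomized-out group: the counterfactual
```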

  20. Matching • Uses large data sets and heavy statistical techniques to construct the best possible artificial comparison group for a given treatment group • Selected on the basis of similarities in observed characteristics • Assumes no selection bias based on unobservable characteristics. Counterfactual: the matched comparison group.
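A toy nearest-neighbour matching sketch on two observed characteristics (age and baseline score); real applications match on many covariates or a propensity score, and all data here are invented:

```python
# Match each treated unit to its closest untreated unit on observed
# characteristics; the matched units form the comparison group.
def sq_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

treated = [(25, 60), (30, 55)]                       # (age, baseline score)
untreated = [(24, 61), (40, 50), (31, 54), (29, 70)]

matched = [min(untreated, key=lambda u: sq_distance(t, u)) for t in treated]
print(matched)  # [(24, 61), (31, 54)] -- the matched comparison group
```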

  21. Difference-in-differences • Compares the change in outcomes over time between the treatment group and the comparison group • Controls for factors that are constant over time in both groups • Assumes 'parallel trends' in the two groups in the absence of the program. Counterfactual: changes over time for the non-participants.
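A minimal difference-in-differences sketch using made-up group means, under the parallel-trends assumption described above:

```python
# DiD = (treatment change over time) - (comparison change over time).
# The comparison group's change stands in for what would have happened
# to the treatment group anyway (all numbers are illustrative).
treat_pre, treat_post = 50.0, 65.0   # treatment group mean outcome
comp_pre, comp_post = 52.0, 58.0     # comparison group mean outcome

did = (treat_post - treat_pre) - (comp_post - comp_pre)
print(f"Difference-in-differences estimate: {did:.1f}")  # (15) - (6) = 9.0
```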

  22. Uses of Different Designs

  23. Qualitative and Quantitative Methods Qualitative methods focus on how results were achieved (or not). They can be very helpful for process evaluation. It is often very useful to conduct a quick qualitative study before planning an experimental (RCT) study.

  24. Thank you!
