
MeE for Learning Organizations



Presentation Transcript


  1. MeE for Learning Organizations
     Lant Pritchett and Salimah Samji
     A Cutting Edge in Development Thinking, Harvard Executive Education, May 13, 2010

  2. http://www.youtube.com/watch?v=ZrzC_KLI8KM

  3. “Evaluation” as an innovation/movement/advocacy position to improve “development”
     Successful movements have:
     • A clearly articulated vision
     • A politically feasible coalition
     • “Career” trajectories
     • A patina of “normal science”
     …but can be ineffective:
     • Insularity – not open to questioning fundamental premises
     • Lock-in of movement-specific “human capital” makes them politically defensive
     • They take too long to shift if they prove ineffective

  4. How does evaluation fit in “development”?
     • “Development” is a coalition of narrower sub-movements, both objective-specific (e.g. education, health, gender, environment) and instrument-specific (e.g. micro-credit, irrigation)
     • Evaluation can help make “successful” movements also effective
     • Eventually it can weed out the successful-but-ineffective sub-movements (but this is hard, and unlikely to be the result of Big E evaluation)

  5. Overview of session
     • Defining terms: what are “M” and “E”?
     • Introducing “e”: the missing middle
     • “e” as a learning tool: the 7-step process
     • Aggregating up from organizational learning to system learning

  6. Why you should care …
     • To identify whether the investments made yielded any benefits:
       – Were objectives met?
       – What factors explain the result?
       – How can the program be improved?
     • To compare alternative models and get the biggest bang for your buck
     • To inform next-generation projects
     • To support evidence-based policy making – a demonstration effect for government

  7. What are “M” and “E”?
     Monitoring (“M”): the regular collection and reporting of information to show what progress has been made in implementing programs. Focuses on inputs and (sometimes) outputs.
     Evaluation (“E”): measuring changes in outcomes and the impact of specific interventions on those outcomes. Focuses on “with and without” comparisons (which require a “control” group) and identifies causal impacts.
     There is a difference between M and E!

  8. Complementary roles for M and E
     Monitoring:
     • Routine collection of information
     • Tracking implementation progress
     • Focus on inputs and sometimes outputs
     “Is the project doing things right?”
     Evaluation:
     • Ex-post assessment of effectiveness and impact
     • Confirming project expectations
     • Measuring impacts
     “Is the project doing the right things?”

  9. What do the poor say? “Is this information you are gathering from us just to help you write your report, or can you really be helpful to us?” – Woman in South Sudan

  10. Introducing “e”: the missing middle
     • “e” = experiential learning
     • “e” lies in between M and E:
       – Analyzing existing information (baseline data, monitoring data)
       – Drawing intermediate lessons
       – Serving as a feedback loop into project design
     • You don’t always have to do an Impact Evaluation
     • “e” uses within-project design variations to identify differentials in the efficacy of the project on inputs and outputs, for real-time feedback into project/program implementation

  11. The problem in pictures
     Let’s begin with the project timeline:
     • Lots of “M” – passing data unto God for whatever use …
     • Findings of “E” come too late to be of much assistance to implementers
     • Lost opportunity: no timely “e” to help the project!

  12. “e” as a learning tool: The 7-step process

  13. Step 1: Reverse engineer from goals back to instruments
     • Begin with a clear definition of the problem you are trying to solve. Then state the goal as well as the magnitude of the desired impact.
     • Reverse engineer your goal back to program/policy/project instruments:
       – Clear objectives for the project (what is the problem?)
       – A clear idea of how you will achieve the objectives (causal chain or storyline)
       – Outcome focused: what visible changes in behavior can be expected among end users as a result of the project, validating the causal chain/theory of change?

  14. Magnitude Matters
     • An ex ante threshold justifies the cost.
     • If you’re hunting for hippos, don’t look under the grass.

  15. Using a storyline to structure a design concept
     [Diagram: Present Unsatisfactory Situation → River of Uncertainties → Future Vision of Success (Results)]
     You need a complete, coherent causal chain from proposed action to desired outcome for “how” the “what” will happen.

  16. A dysfunctional storyline fails to deliver results
     [Diagram: Present Unsatisfactory Situation, River of Uncertainties, Future Satisfactory Situation – here the causal chain fails to connect them]

  17. Example: Storyline for an education project
     [Diagram] Make your theory of change explicit.

  18. Step 2: Design a project
     • Design a project (P1) that will help you achieve your goals.
     • Specify the timing, magnitude, and gain from the project for each link in the chain.
     • Determine the indicators (input, output, and outcome) that you will collect to test whether your theory of change works – see the sketch below.
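
A minimal sketch of this bookkeeping, assuming nothing beyond the slides: one way to record a project variant together with the input, output, and outcome indicators chosen to test its theory of change. The class, field names, and example indicators are hypothetical illustrations, not from the presentation.

```python
# Hypothetical bookkeeping for Step 2: a project variant plus the indicators
# that will test each link in its causal chain. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Project:
    name: str                 # e.g. "P1"
    design: dict              # the chosen design parameters
    input_indicators: list = field(default_factory=list)
    output_indicators: list = field(default_factory=list)
    outcome_indicators: list = field(default_factory=list)

p1 = Project(
    name="P1",
    design={"location": "central", "content": "subject matter",
            "follow_up": "semi-annual"},
    input_indicators=["trainers hired", "budget disbursed"],
    output_indicators=["teachers trained"],
    outcome_indicators=["student test scores"],
)
```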

  19. Review: Log Frame, Results Framework, Theory of Change
     • Impacts – longer-term benefits
     • Outcomes – effectiveness
     • Outputs – results/deliverables
     • Activities – efficiency
     • Inputs – procurement & disbursements

  20. Example: Indicators for an education project

  21. Steps 1 and 2 are standard operating procedure
     • But they are not rigorous enough, and there is no “E”valuation of outcomes.
     • In theory if not in practice (cost-benefit analysis is done for only 20% of bank projects).
     • Learning is haphazard/unstructured – ad hoc responses at the mid-term review.
     • So what are the next 5 steps?

  22. Step 3: Admit we do not know what will work … and we certainly do not know what will work best
     • Acknowledge that implicit choices were made in designing project P1.
     • Admit that there might be differentials in magnitude that depend on the selection of design elements/parameters.
     The mythical “alternatives considered”

  23. Step 4: Identify the design space and design two more project variants
     • Articulate your design space. Specify the key parameters/elements within it.
     • Specify the timing, potential magnitude, and uncertainty of the gain for each possible project variant.
     • Select two (or more) new projects based on the highest uncertainty and upside potential.
     • Repeat step 2(c) for each of the new projects (i.e. determine indicators for P2 and P3).

  24. 4a. Articulate your design space
     Using our education example of teacher training, assume 3 design parameters with 2 options each (enumerated in the sketch below):
     • Location: centrally (A) or in school (B)
     • Content: subject matter (α) or pedagogy (β)
     • Follow-up: semi-annually (I) or annually (II)
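
To make 4a concrete, here is a minimal Python sketch (purely illustrative; the dictionary keys and option labels are our own shorthand for the slide’s parameters) that enumerates every variant in this 2 × 2 × 2 design space:

```python
# Enumerate the 2 x 2 x 2 teacher-training design space from the slide.
from itertools import product

DESIGN_SPACE = {
    "location":  ["central (A)", "in school (B)"],
    "content":   ["subject matter (alpha)", "pedagogy (beta)"],
    "follow_up": ["semi-annual (I)", "annual (II)"],
}

def enumerate_variants(space):
    """Yield every project variant as a {parameter: chosen option} dict."""
    names = list(space)
    for combo in product(*(space[name] for name in names)):
        yield dict(zip(names, combo))

variants = list(enumerate_variants(DESIGN_SPACE))
print(len(variants))  # 2 * 2 * 2 = 8 possible variants
```

With 3 binary parameters the full space has 2 × 2 × 2 = 8 variants; adding parameters or options grows it multiplicatively, which is why Step 4 selects only a few variants to pilot.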

  25. 4b. Specify the potential magnitude and level of uncertainty of impact for all project variants
     [Table: magnitude and uncertainty estimates for each variant, P1 among them]
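
A hypothetical continuation of 4b (every number below is a made-up placeholder, not an estimate from the slides): score each variant by expected magnitude of impact and by uncertainty, then select the two with the highest “uncertainty times upside” index as P2 and P3:

```python
from itertools import product

# Each 3-letter code is one variant: location (A/B), content (a/b = subject
# matter/pedagogy), follow-up (1/2 = semi-annual/annual).
variants = ["".join(v) for v in product("AB", "ab", "12")]

# Hypothetical (magnitude, uncertainty) guesses per variant; in practice
# these come from prior evidence and expert judgment, not a formula.
scores = {v: (0.15 + 0.02 * i, 0.50 - 0.03 * i) for i, v in enumerate(variants)}

def upside(variant):
    magnitude, uncertainty = scores[variant]
    return magnitude * uncertainty  # crude "uncertainty times upside" index

p2, p3 = sorted(variants, key=upside, reverse=True)[:2]
print("Pilot alongside P1:", p2, p3)
```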

  26. Step 5: Strategically crawl your design space
     • Pilot projects P1, P2, and P3 for the duration you determined.
     • P2 and P3 can serve as an “internal counterfactual” for P1, if randomly assigned (see the sketch below).
     • Collect all input and output indicators for all three projects.
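
A hedged sketch of the random assignment this step relies on, with hypothetical school names as the implementation units:

```python
import random

def assign_variants(units, variants, seed=0):
    """Shuffle units, then deal them round-robin across the variants."""
    rng = random.Random(seed)   # fixed seed keeps the assignment reproducible
    shuffled = list(units)
    rng.shuffle(shuffled)
    return {unit: variants[i % len(variants)] for i, unit in enumerate(shuffled)}

schools = [f"school_{n:02d}" for n in range(30)]         # hypothetical units
assignment = assign_variants(schools, ["P1", "P2", "P3"])
print(sum(1 for v in assignment.values() if v == "P1"))  # 10 schools per arm
```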

  27. Step 6: “e” feeds back into a pre-specified sequential design process
     • Analyze the data you collected for P1, P2, and P3.
     • Based on the analysis, crawl to the next most promising component of the design space and repeat Step 4 (a minimal version of this loop is sketched below).
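
A minimal sketch of the feedback loop, with made-up indicator values: compare the piloted variants on a measured indicator and crawl toward the best-performing neighborhood of the design space:

```python
# Made-up output indicators for the three piloted variants, e.g. the
# measured gain in student test scores under each.
results = {"P1": 0.12, "P2": 0.21, "P3": 0.08}

best = max(results, key=results.get)
print(f"Crawl toward the design neighborhood of {best}")
# Next iteration: hold the winning parameters fixed, vary the remaining ones
# (repeat Step 4), and pilot the new variants against the incumbent best.
```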

  28. Example of a sequential design process: Electricity provision to slums (Source: Anand & Garcia 2010)

  29. The problem in pictures – revisited: “e” feeds back into the design process, helping implementers learn

  30. Feedback loop between Steps 6 and 4

  31. Advantages of “e” over “E” evaluation
     • Project implementers feel part of the process, see the benefits, and are bought in; knowledge is co-produced
     • No collection of data on a “no program” group is required – the comparisons are across “within program/project” variants
     • Can handle truly universal programs, where a control group is simply impossible
     • You can learn, or generate hypotheses, you did not anticipate
     • You can explore the interactions of the policy or policies with all kinds of background variables
     Big E evaluation, by contrast:
     • often cannot usefully distinguish causes of failure – many projects simply fail to be implemented
     • can explore only a tiny part of the design space (even 5 design parameters with 2 options each give 2⁵ = 32 variants, and complementarities blow up the dimensionality further)
     • cannot generalize beyond places where the specific distribution of all variables that can influence the outcome is precisely the same as in the original study location

  32. Step 7: Go back to the authorizing environment
     • How does evaluation fit into the Ministry of Finance, the Planning Ministry, and/or the office of a country’s Chief Economist?
     • “e” helps sectors come back with the best possible project.
     • “e” creates legitimate space for organizational failure.

  33. Organizational portfolio of MeE

  34. The achievements of the best of aid look like the conditions of the worst of aid
     “Every Hollywood movie has the same plot: a sympathetic character overcomes increasingly difficult obstacles to achieve their final objective.” – William Goldman
     The problem wasn’t that Rocky had the same plot as all other Hollywood movies; it was the inauthentic repetition.
