
Capacity development & learning in evaluation


Presentation Transcript


  1. Capacity development & learning in evaluation Uganda Evaluation Week 19th to 23rd May 2014

  2. Contents • What do we mean by evaluation capacity? • Links to managing for results • Evidence from organisations that have introduced results management systems • The challenge of evaluation culture • The experience of DFID • Results system or results culture? • Conclusions

  3. What do we mean by evaluation capacity? A process of good governance? Building blocks? Both?

  4. Individual Level Source: Adapted from Segone, 2010, Moving from policies to results by developing national capacities for country-led monitoring and evaluation systems.

  5. Institutional Level Source: Adapted from Segone, 2010, Moving from policies to results by developing national capacities for country-led monitoring and evaluation systems.

  6. Enabling Environment Source: Adapted from Segone, 2010, Moving from policies to results by developing national capacities for country-led monitoring and evaluation systems.

  7. All or nothing? Neglect of… means that…

  8. Results management framework
  • Strategic Results Framework: objectives, indicators & strategy; roles & responsibilities.
  • Programme Results Framework: results chain & theory of change; alignment with the strategic framework; performance indicators.
  • Credible performance reporting: relevant, timely & reliable reporting.
  • Use results to improve performance: adjust the programme; develop lessons & good practices.
  • Credible measurement & analysis: measure & assess results; assess contribution to strategic objectives.
  Source: Itad Ltd, adapted from ‘Managing for Development Results Handbook’
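The framework above is a management structure rather than software, but the alignment idea can be made concrete in code. The sketch below is purely illustrative and assumes a hypothetical data model (names such as ProgrammeResultsFramework, Result and Indicator are invented here, not taken from the Managing for Development Results Handbook): each result in a programme's results chain carries indicators and points to a strategic objective, so indicator progress and misalignment with the strategic framework can be checked mechanically.

```python
# Illustrative sketch only: a minimal, hypothetical data model for a results
# framework. Not from the source slides or the MfDR Handbook.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Indicator:
    name: str
    baseline: float
    target: float
    latest: float = 0.0

    def progress(self) -> float:
        """Share of the baseline-to-target distance achieved so far."""
        span = self.target - self.baseline
        return 0.0 if span == 0 else (self.latest - self.baseline) / span


@dataclass
class Result:
    statement: str                      # an output or outcome statement
    strategic_objective: str            # the strategic objective it serves
    indicators: List[Indicator] = field(default_factory=list)


@dataclass
class ProgrammeResultsFramework:
    programme: str
    results_chain: List[Result]         # ordered: outputs -> outcomes -> impact

    def unaligned_results(self, strategic_objectives: List[str]) -> List[str]:
        """Results that do not map to any strategic objective (alignment check)."""
        return [r.statement for r in self.results_chain
                if r.strategic_objective not in strategic_objectives]


# Hypothetical usage: one aligned result with an indicator, one unaligned result.
prf = ProgrammeResultsFramework(
    programme="Example programme",
    results_chain=[
        Result("District M&E officers trained", "Strengthen national M&E capacity",
               [Indicator("Officers trained", baseline=0, target=200, latest=120)]),
        Result("Evaluation findings used in budget decisions", "Improve use of evidence"),
    ],
)
print(prf.unaligned_results(["Strengthen national M&E capacity"]))
# -> ['Evaluation findings used in budget decisions']
```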

  9. Expectations for managers Mayne, John (2008) Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative Brief 20. CGIAR 4p

  10. Evaluations of RBM
  • UNDP (2007): Significant progress was made on sensitising staff to results and on creating the tools to enable a fast and efficient flow of information. Managing for results has proved harder to achieve: a strong emphasis on resource mobilisation and delivery, a culture fostering a low level of risk-taking, weak information systems at country level, the lack of clear lines of accountability and the lack of a staff incentive structure all work against building a strong culture of results.
  • Finland (2010): Tools and procedures are comprehensive and well established, but good standards of project design are not consistently applied. Managers give low priority to monitoring, reporting and evaluation; most monitoring reports were activity-based or financial, and there was little reporting against logframes. Managing for results depends not only on technical methodology but also on the way the development cooperation programme is organised and managed. Finland’s approach is characterised as risk-averse, with few examples of results being used to inform policy.

  11. ‘Can we demonstrate the difference that Norwegian aid makes?’ Overall conclusion:
  • Although there are some elements of good foundations for better results measurement, current arrangements lack the strength of leadership, depth of guidance and coherence of procedures necessary for effective evaluation of Norwegian aid.
  • Because of a lack of incentives, poor processes for planning and monitoring grants, and weaknesses in the procedures for evaluations, the difference that Norwegian aid makes cannot currently be demonstrated.
  Source: ITAD Ltd (2014) Can we demonstrate the difference that Norwegian aid makes? Evaluation of results measurement and how this can be improved. Available at: http://www.norad.no/en/tools-and-publications/publications/evaluations/publication?key=412342

  12. What is an evaluative culture? An organization with a strong evaluative culture: … Source: Mayne, John (2008) Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative Brief 20. CGIAR 4p

  13. UK Department for International Development. A 2009 study of DFID’s evaluation reports found:
  • Weaknesses were systemic in nature, linked to top management, and required a significant change in culture.
  • A key overarching problem was an unduly defensive attitude to the findings from evaluation.
  • Other detailed recommendations called for: evaluability issues to be considered at the planning stage; training of staff; strengthening of the evidence base that underpins evaluations; and a requirement for managers to make a formal response to evaluations.
  Source: Roger C Riddell (2009) The Quality of DFID’s Evaluation Reports and Assurance Systems. IACDI (The Independent Advisory Committee on Development Impact)

  14. DFID – Highly rated for evaluability. Five features of DFID’s approach combine to justify high ratings:
  • Continuity of guidance from planning a project Business Case, quality assurance arrangements, evaluation policy and evaluation training materials, with some cross-referencing.
  • Recognition that a clear logic model and results based on prior evidence strengthen the quality of project design, rather than being a formality to complete a project proposal.
  • Evaluability is assessed from several perspectives: expected impact and outcomes; strength of the evidence base; theory of change; and what arrangements are needed to measure, monitor and evaluate progress and results.
  • Documentation includes detailed descriptions, training or self-briefing materials and examples for staff to follow.
  • There is consistency of message across planning guidance, appraisal and approval, with a detailed checklist for quality assurance.
  Source: ITAD Ltd (2014) Can we demonstrate the difference that Norwegian aid makes? An evaluation of results measurement and how this can be improved. Annex 5 (available on www.norad.no/evaluation)

  15. DFID – Embedding: Business Cases and evaluation advisers. Since 2011: 37 advisers in an evaluation role; 150 staff accredited in evaluation; and 700 people given basic training.
  • The quantity of evaluations commissioned has increased significantly, from 12 per year prior to 2011 to an estimated 40 completed evaluations in 2013/14.
  • The embedding process has increased both actual and potential demand for evaluation.
  • The decision to evaluate is now made during the preparation of Business Cases. This is good for programme performance, but it lacks a broader strategic focus.
  • The depth of this capacity is less than required, with 81% of those accredited to date only at the foundation or competent level.
  • Gaps in capacity relate to: understanding why and when to commission evaluation; understanding the contexts of evaluations and engaging stakeholders appropriately; selecting and implementing appropriate evaluation approaches while ensuring reliability of data and validity of analysis; and reporting and presenting information in a useful and timely manner.
  • There is a need to strengthen evaluation governance and to develop a DFID evaluation strategy.
  Source: DFID (2014) Rapid Review of Embedding Evaluation in UK DFID

  16. Core quality model

  17. DFID Learning. DFID is the highest-performing main civil service department for ‘learning and development’ (Cabinet Office survey).
  • Evaluations are a key source of knowledge: 40 evaluations were completed in 2013-14, and 425 evaluations were either underway or planned as at July 2013.
  • Annual, mid-term and project completion reviews are an under-utilised resource.
  • Staff find it hard to identify what is important and what is irrelevant.
  • DFID’s ability to influence has been strengthened by its investment in knowledge.
  Issues:
  • Workload pressures restrict the time available to learn.
  • Staff often feel under pressure to be positive when assessing both current and future project performance.
  • Knowledge is sometimes used selectively to support decision-making.
  • Positive bias is linked to a culture in which staff have often felt afraid to discuss failure.
  • Many evaluations are not sufficiently concise or timely to affect decision-making.
  Source: Independent Commission for Aid Impact (2014) How DFID Learns

  18. UK – National Audit Office. £44m was spent on government evaluation in 2010-11, with an estimated 102 FTE staff working on evaluation across government.
  Findings:
  • Significant spend, but coverage is incomplete.
  • The rationale for what the government evaluates is unclear.
  • Evaluations are often not robust enough to reliably identify impact.
  • Learning is not used to improve impact and cost-effectiveness.
  Recommendations:
  • Plan evaluation when designing all new policies.
  • Design policy implementation to facilitate robust evaluation.
  • Departments should make data available to independent evaluators for research purposes.
  Source: NAO (2014) Evaluation in Government

  19. Results system or results culture? Results measurement systems and tools should not be mistaken for an evaluative culture. Indeed, on their own, they can become a burdensome system that does not help management at all. Source: Mayne, John (2008) Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative Brief 20. CGIAR 4p

  20. Measures to foster an evaluative culture Mayne, John (2008) Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative Brief 20. CGIAR

  21. Conclusions – taking a positive view
  • Evaluation is only one source of information, alongside research and implementation experience. ECD needs to inform how these work together.
  • Quality evaluation is built on quality planning. ECD needs to be linked to better planning systems.
  • Technical skills are necessary but not sufficient.
  • Effective evaluation will be determined by the culture and incentives in the organisation.
  • ECD is a journey, not a destination. Systems are not static; they need continual review, learning and revision. There is no simple solution; rather, systems need to be introduced, used, tested, reviewed and then updated in a rolling cycle.

  22. End
