
CHSSN Evaluation Workshop



  1. CHSSN Evaluation Workshop January 28, 2005 Natalie Kishchuk, PhD

  2. What does the word ‘evaluation’ evoke for you? • Write down one or two words that come to mind…

  3. Aims • Provide participants with knowledge about alternative models of evaluation, to assist in preparing the CHSSN evaluation • Tell jokes about evaluation

  4. Outline 1. Why evaluate? 2. Evaluation models: dominant models and alternative models 3. Choosing evaluation processes for CHSSN 4. Planning the evaluation

  5. 1. Why evaluate? • Example: Canada Prenatal Nutrition Program • prenatal nutrition supplements and support for high-risk women • reached nearly 30,000 women between 1996 and 2002 • Why would you evaluate this program?

  6. Why evaluate? • To learn, improve and share what you have learned • To grow and develop as an organization, as individuals • To ensure your actions are having the desired impact • Not a negative impact or no impact at all • To see if there are more effective ways of using your resources

  7. Why evaluate? • I.e. the main functions of evaluation are: • Accountability • Program improvement • Knowledge development

  8. Summative vs. formative evaluation • Summative evaluation question: What is the value of this program, activity, organization? • Impact, outcome, or results evaluation • Results-based performance measurement • Formative evaluation question: How can we make this better? • Scriven (1967)

  9. 2. Evaluation models • Dominant models • In health and social services • In government

  10. Dominant model in health and social services • Impact evaluation • Aims to assess the impacts, results, effects, outcomes of the program • Question is: Did it work? Did it produce the results we wanted? Did it achieve its objectives?

  11. Impact evaluation methods • Methods adopted from social science research • Use quantitative measures and statistical analyses • Became fully developed during the era of the ‘Experimenting Society’

  12. Impact evaluation designs • Randomized experiments or trials • Random assignment of participants (people, organizations, cities) to the program or a control • Quasi-experimental • Participants are compared to a ‘non-equivalent’ comparison group • Longitudinal • Participants are followed over time or pre-post

  13. Examples • Evaluation of an asthma education program • Pre-measure: right before an education session with a respirologist • Post: at three and six months after • Control group: people with asthma randomly selected from the population, matched on age and gender • Results: some impact on some aspects of asthma self-management

  14. Canada Prenatal Nutrition Program: High-risk women were compared to low-risk women • No significant impact of supplements and counseling on low birth weight

  15. Overall logic of impact evaluation • Pre-post with comparison group • [Diagram: the program group and an alternative (comparison) group each receive a pre-measure and a post-measure]
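As a rough illustration (not part of the original slides, and with invented numbers), the sketch below works through the arithmetic this design implies: the estimated program effect is the program group's pre-to-post change minus the comparison group's change, i.e. the change over and above what would likely have happened anyway.

```python
# Illustrative sketch only: the arithmetic behind a pre-post design with a
# comparison group. All numbers are hypothetical mean outcome scores.

program_group = {"pre": 62.0, "post": 74.0}      # program participants
comparison_group = {"pre": 61.0, "post": 66.0}   # non-equivalent comparison group

def change(group):
    """Pre-to-post change within one group."""
    return group["post"] - group["pre"]

# Naive before/after change for the program group alone...
program_change = change(program_group)            # 12.0

# ...and the estimated program effect once the comparison group's change
# is subtracted out.
estimated_effect = change(program_group) - change(comparison_group)   # 12.0 - 5.0 = 7.0

print(f"Program group change:     {program_change:+.1f}")
print(f"Comparison group change:  {change(comparison_group):+.1f}")
print(f"Estimated program effect: {estimated_effect:+.1f}")
```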

  16. Over the last 20 years, dissatisfaction has grown with this quantitative, positivist, black box, ‘elitist’ approach to evaluation • Results not used • Results not seen as telling the whole story • Other approaches have sprung up as alternatives

  17. Dominant model in government • Results-based performance management • Current approach to evaluation in the federal and some provincial governments emphasizes both ongoing performance monitoring and evaluation • Federal evaluation policy: • http://www.tbs-sct.gc.ca/pubs_pol/dcgpubs/TBM_161/ep-pe_e.html

  18. RMAFs • All federal government programs must have a ‘Results-Based Management and Accountability Framework’ (RMAF) • Results for Canadians (2000) • Intended to serve as a “blueprint for managers to help them focus on measuring and reporting on outcomes throughout the lifecycle of a policy, program or initiative” • Linked to annual departmental planning and reporting on performance: results of programs to be reported regularly

  19. Contains: • Program logic model • Performance measurement strategy • Evaluation strategy • Reporting strategy • Sometimes, a risk assessment (RBAF: risk-based audit framework)

  20. Performance measurement strategy • Strategy for regular collection of information for monitoring how a program, policy or initiative is doing at any point in time • Involves identifying and measuring performance indicators • Performance indicators are used to show whether the expected outputs and outcomes (from logic model) are being produced

  21. Performance measurement strategy • Usually presented as a table:
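The table itself is not reproduced in this transcript. As a purely hypothetical illustration of the kind of information such a table usually links together (an expected output or outcome from the logic model, its indicator, the data source, and the collection frequency), here is a small sketch; none of the entries are CHSSN's actual indicators.

```python
# Hypothetical illustration only: typical rows of a performance measurement
# table, linking logic-model results to indicators, data sources, and
# collection frequency. All content is invented for the example.

performance_measurement_strategy = [
    {
        "expected_result": "Community members are aware of available services",  # an outcome
        "indicator": "Proportion of survey respondents who can name at least one service",
        "data_source": "Annual community survey",
        "frequency": "Yearly",
    },
    {
        "expected_result": "Information sessions delivered",  # an output
        "indicator": "Number of sessions held; number of participants",
        "data_source": "Activity logs kept by coordinators",
        "frequency": "Ongoing (compiled quarterly)",
    },
]

# Printing the rows shows the usual column structure of such a table.
for row in performance_measurement_strategy:
    print(f"{row['expected_result']:<55} | {row['indicator']:<70} | "
          f"{row['data_source']:<35} | {row['frequency']}")
```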

  22. Evaluation strategy • Evaluation can be conducted at any time in a program cycle, but the Treasury Board emphasizes: • Mid-term review: formative evaluation, aiming for ongoing improvement • After the bugs are worked out: 2 years • Summative evaluation: aiming to assess achievement of objectives • After enough time to see results: 5 years (?) • Evaluation strategy includes issues and questions, indicators and data sources

  23. Evaluation questions • Treasury Board proposes three types of questions, about: • Relevance: does the program continue to be consistent with priorities, and does it address an actual need? • Success: is the program effective in meeting its intended outcomes, within budget and without unwanted outcomes? Is there progress toward final outcomes? • Cost-effectiveness: Are there more cost-effective means of getting the same results?

  24. Other questions

  25. Questions about: (each area pairs descriptive questions with evaluative questions)
  • Needs. Descriptive: What problems are targeted by the program? Why is the program needed? What are other pressing problems for the target group? Evaluative: Is the program relevant to and needed by its target population? What needs remain unmet, and what are the consequences of this?
  • Program process, design and structure. Descriptive: What are the targets and objectives of the program? What is the theory of change: how do the program development activities relate to its targets and objectives? Evaluative: Are the program activities logically related to the identified needs, and to the program targets and objectives? Are the inputs sufficient to accomplish the project objectives? Are there alternative activities that would be more effective in addressing the problem?
  • Implementation. Descriptive: What activities are actually being carried out in developing the program? Who is being reached by or participating in these activities? How are the internal and external environments influencing the implementation of the program? Evaluative: How closely does the program implementation correspond to the initial plans? Do the participants reached correspond to those who were targeted? How will the program’s implementation affect its effectiveness?
  • Outcomes and anticipated impacts. Descriptive: What are the likely or expected results or effects of the program? What are possible side effects, and have any of these occurred? Evaluative: How adequate are the outputs produced? To what extent have the program’s objectives been achieved? How likely is the program to produce meaningful impacts?
  • Productivity. Descriptive: What are the direct and indirect costs of the program in relation to its benefits? How efficient has development of the program been? Evaluative: Do the expected benefits justify the costs? Could the same objectives be achieved through other programs with lower costs?

  26. Evaluation strategy • Usually presented as a table:

  27. Program logic model • A picture of how program resources end up as results • Is useful in many ways: • Allows stakeholders to articulate their underlying program theory: • Develop a common vision OR identify areas of divergence • Helps identify unrealistic program objectives given resources • Helps point out where activities are not related to intended outcomes

  28. Program logic model • The components of the logic model are: • Inputs: the program’s human, financial and material resources, and their organization • Activities: the programs, services, products, transactions that transform the inputs into outputs • Outputs: the direct products of the program, or the transformation of resources into something that is delivered to clients or users • Outcomes: in the short term, the results of the program and their effects on target groups, stakeholders and their environment, and in the medium to long term their impacts on the overall goals of the program • …and the relationships among these components

  29. [Logic model diagram] OBJECTIVES → INPUTS → ACTIVITIES → OUTPUTS → SHORT-TERM OUTCOMES → INTERMEDIATE OUTCOMES → LONG-TERM IMPACTS
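As a minimal sketch (not from the original slides), the structure below represents a hypothetical logic model in code, following the chain in the diagram above and loosely modelled on the asthma education example mentioned earlier in the workshop; every entry is illustrative only.

```python
# A hypothetical program logic model expressed as a simple data structure.
# Components follow slide 28; all entries are invented for illustration.

logic_model = {
    "objectives": ["Improve asthma self-management among adult patients"],
    "inputs": ["Respirologist and nurse-educator time", "Teaching materials", "Clinic space"],
    "activities": ["Group education sessions", "Individual action-plan counselling"],
    "outputs": ["Number of sessions delivered", "Number of patients reached"],
    "short_term_outcomes": ["Improved knowledge of triggers and medication use"],
    "intermediate_outcomes": ["Better day-to-day symptom control"],
    "long_term_impacts": ["Fewer emergency visits and hospitalizations"],
}

# Reading the model in order makes the assumed chain from resources to results explicit.
for component in ["objectives", "inputs", "activities", "outputs",
                  "short_term_outcomes", "intermediate_outcomes", "long_term_impacts"]:
    print(f"{component.replace('_', ' ').upper()}: " + "; ".join(logic_model[component]))
```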

  30. Why the focus on outcomes or results? • Tendency for service providers to focus on outputs, assuming that they produce the desired results • E.g. number of surgeries performed (output) vs. number of patients who recover (outcome) • Shifts the management focus to results • But are we going too far???

  31. “A result is a non-existent piece of frozen process, an artifice to capture something that we can measure and that we call a result… The tragedy is that quite a few have actually become quite ill with this degenerative affliction (Michael Patton called it mad outcome disease)... the primary symptom being the insatiable need to develop indicators, especially performance indicators…” … from an electronic consultation undertaken by the Canadian Evaluation Society, June 2002.

  32. Alternative paradigms • Qualitative evaluation • Participatory evaluation • Theory of change • Responsive evaluation • Critical friend

  33. Qualitative evaluation Reflects a new paradigm in social research: • ‘What we observe is not Nature itself, but Nature exposed to our method of questioning’ (Heisenberg, 1958) • Rejection of the positivist paradigm and the notion of an objective observer

  34. Programs are seen as social constructions by multiple actors with competing interests • The evaluator is one actor among these • The meaning of the evaluation has to be negotiated among all actors

  35. Sub-paradigms • Constructivist: aims for understanding, reconstruction of actors’ perspectives • Types are: grounded theory, social constructivism, interpretive anthropology • Critical: aims are social transformation, restitution, emancipation • Types are: feminist, neomarxist, critical theory

  36. Useful for: • Understanding the rich detail of program processes: inside the ‘black box’ • Finding unanticipated effects • But often seen as soft, subjective methods, and so has lacked credibility

  37. Qualitative methods • Observation, participant observation • Unstructured or semi-structured interviews • Group interviews – focus groups • Other group techniques: nominal group, Delphi

  38. Examples • Qualitative evaluation of a program to keep pregnant teens and teen mothers in high school (PSAME) • Semi-structured interviews with girls, their mothers, teachers, health workers

  39. Participatory evaluation • Evolution from a focus on evaluation utilization → user involvement → stakeholder involvement → participatory evaluation • Key stakeholders design and manage the evaluation • Evaluator is a facilitator, not an outside expert

  40. Emergence of participatory evaluation • Groundswell of interest by the early 1990’s • Uptake often a reaction to previous approaches • Strong early momentum in international development

  41. Coincided with several other movements: • Participatory research • Action science (Argyris) • Critical social science (Habermas, Freire) • Feminist research • Naturalistic inquiry (Guba & Lincoln)

  42. Participatory evaluation • draws on local resources and capacities • recognizes the innate wisdom and knowledge of beneficiaries and end-users • beneficiaries are active participants, not just passive information sources • ensures that everyone (funders, managers, end-users) is part of the decision-making process • uses facilitators who act as catalysts to assist those involved in asking the key questions • builds capacity and commitment to making any necessary changes or corrective actions. Rietbergen-McCracken, J., & Narayan, D. (1997). Participatory monitoring and evaluation. World Bank: Social Policy and Resettlement Division

  43. Participatory evaluation methods • Can use any type of method in a participatory evaluation • Is more an attitude than a set of procedures • Usually an evaluation committee or steering group with representation of stakeholder groups ensures participation • E.g. managers, staff, clients • Committee’s role and level of involvement are negotiated

  44. Ensures stakeholder buy-in, reduces risk of findings being ignored • Democratizes and demystifies evaluation, builds capacity through participation • But is slow and sometimes painful!

  45. Example • Evaluation of COCo: Centre for Community Organizations
