
Program Evaluation

Presentation Transcript


  1. Program Evaluation

  2. Evaluation Defined • Green and Kreuter (1999) broad definition: “comparison of an object of interest against a standard of acceptability” • Weiss (1998) more targeted: “systematic assessment of the operation and/or the outcomes of a program or a policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy”.

  3. Fournier (2005) “Evaluation is an applied inquiry process for collecting and synthesizing evidence that culminates in conclusions about the state of affairs, value, merit, worth, significance, or quality of a program, product, person, policy, proposal, or plan.”

  4. Program evaluation • A tool for using science as a basis for: • Ensuring programs are rational and evidence-based • Needs-assessed • Theory-driven • Research-based • Ensuring programs are outcome-oriented • Forces goals and objectives to be defined at the outset • Ascertaining whether goals and objectives are being achieved • Performance measures established at the outset

  5. Program evaluation • A tool for using science as a basis for: • Informing program management about • Program processes – adjusted, improved • Program quality – effectiveness (see goals and objectives) • Program relevance • Decision-making and action • e.g. policy development based on program evaluations • Transparency and accountability • For funders, participants, and other stakeholders.

  6. Program evaluation • Not done consistently in programs • Often not well-integrated into the day-to-day management of most programs

  7. FROM Logic Model presentation: The accountability era • What gets measured gets done • If you don’t measure results, you can’t tell success from failure • If you can’t see success, you can’t reward it • If you can’t reward success, you’re probably rewarding failure • If you can’t see success, you can’t learn from it • If you can’t recognize failure, you can’t correct it • If you can demonstrate results, you can win public support. Reinventing Government, Osborne and Gaebler, 1992. Source: University of Wisconsin-Extension, Cooperative Extension

  8. Within an organization – evaluation... • Should be designed at the time of program planning • Should be a part of ongoing service design and policy decisions • Evidence that actions conform with strategic directions, community needs, etc. • Evidence that money is spent wisely • Framework should include components that are consistent across programs • In addition to indicators and methods tailor-made for specific programs and contexts • Extent of evaluation • related to the original goals • related to the complexity of the program

  9. When not to evaluate (Patton, 1997) • There are no questions about the program • Program has no clear direction • Stakeholders can’t agree on program objectives • Insufficient funds to evaluate properly

  10. Merit and Worth • Evaluation looks at the merit and worth of an evaluand (the project, program, or other entity being evaluated) • Merit is the absolute or relative quality of something, either intrinsically or in regard to a particular criterion • Worth is an outcome of an evaluation and refers to the evaluand’s value in a particular context; it is more extrinsic • Worth and merit are not dependent on each other.

  11. Merit and Worth • A medication review program has merit if it is proven to reduce known risk for falls • It also has value/worth if it saves the health system money • An older driver safety program has merit if it is shown to increase confidence among drivers over 80 years of age • Its value is minimal if it results in more unsafe drivers on the road and increases risk and cost to the community at large.

  12. Evaluation vs research • In evaluation, politics and science are inherently intertwined. • Evaluations are conducted on the merit and worth of programs in the public domain • which are themselves responses to prioritized needs that resulted in political decisions • Program evaluation is intertwined with political power and decision making about societal priorities and directions (Greene, 2000, p. 982).

  13. Formative evaluation Purpose: ensure a successful program. Includes: • Developmental Evaluation (pre-program) • Needs Assessment – match needs with appropriate focus and resources • Program Theory Evaluation / Evaluability Assessment – clarity on the theory of action, measurability, and against what criteria • Logic Model – ensures aims, processes, and evaluations are linked logically • Community/organization readiness • Identification of intended users and their needs • etc.

  14. [Slide diagram: Surveillance, Planning and Evaluating for Policy and Action – the PRECEDE-PROCEED model (Green & Kreuter, Health Promotion Planning, 4th ed., 2005). Planning phases: Phase 1 Social assessment; Phase 2 Epidemiological assessment; Phase 3 Behavioral & environmental assessment; Phase 4 Educational & ecological assessment; Phase 5 Administrative & policy assessment. Implementation and evaluation phases: Phase 6 Implementation; Phase 7 Process evaluation; Phase 8 Impact evaluation; Phase 9 Outcome evaluation. The diagram links predisposing, reinforcing, and enabling factors, health education, policy/regulation/organization, behavior, environment, health, and quality of life, and maps inputs, processes, and outputs to short-term impacts, longer-term health outcomes, and long-term social impact.]

  15. Formative evaluation Purpose: ensure a successful program • Process Evaluation – all activities that evaluate the program once it is running • Program Monitoring • Implemented as designed, or analysing/understanding why not • Efficient operations • Meeting performance targets (Outputs in the logic model)
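To make monitoring against output targets concrete, here is a minimal Python sketch; the indicator names, figures, and the 90% threshold are illustrative assumptions, not taken from the slides:

```python
# Hypothetical output targets from a logic model vs. monitored actuals.
# Indicator names, figures, and the 90% threshold are illustrative assumptions.
targets = {"workshops delivered": 24, "participants reached": 600, "home visits": 150}
actuals = {"workshops delivered": 20, "participants reached": 640, "home visits": 90}

for indicator, target in targets.items():
    attainment = actuals[indicator] / target * 100
    status = "on track" if attainment >= 90 else "review"
    print(f"{indicator}: {actuals[indicator]}/{target} ({attainment:.0f}%) - {status}")
```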

  16. Summative evaluation Purpose: determine program success along many different dimensions. Also called: • Effectiveness evaluation • Outcome/Impact evaluation • Examples • Policy evaluation • Replicability/exportability/transferability evaluation • Sustainability evaluation • Cost-effectiveness evaluation
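Cost-effectiveness evaluation, the last item above, reduces to comparing incremental costs with incremental effects. A minimal sketch, assuming simple aggregate figures; the numbers and the icer helper are hypothetical, not from the slides:

```python
def icer(cost_program: float, cost_comparator: float,
         effect_program: float, effect_comparator: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    delta_cost = cost_program - cost_comparator
    delta_effect = effect_program - effect_comparator
    if delta_effect == 0:
        raise ValueError("No incremental effect; the ICER is undefined.")
    return delta_cost / delta_effect

# Illustrative figures: a medication review program vs. usual care,
# with effect measured as falls prevented per year.
print(icer(cost_program=120_000, cost_comparator=80_000,
           effect_program=55, effect_comparator=30))  # 1600.0 per additional fall prevented
```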

  17. Evaluation Science • Social research methods • Match research methods to the particular evaluation questions • and specific situation • Quantitative data collection involves: • identifying which variables to measure • choosing or devising appropriate research instruments • reliable and valid • administering the instruments in accordance with general methodological guidelines.
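Instrument reliability ("reliable and valid" above) can be checked empirically once item-level data are in hand. A minimal sketch, assuming NumPy and made-up Likert-style responses (all names and numbers are illustrative), computes Cronbach's alpha, a common internal-consistency estimate:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency of a scale; rows = respondents, columns = items."""
    k = items.shape[1]                              # number of items
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative responses from 5 respondents to a 4-item scale.
responses = np.array([[4, 5, 4, 4],
                      [2, 3, 2, 3],
                      [5, 5, 4, 5],
                      [3, 3, 3, 2],
                      [4, 4, 5, 4]])
print(round(cronbach_alpha(responses), 2))          # higher values indicate greater internal consistency
```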

  18. Experimental Design in Evaluation • Randomized controlled trial (RCT) • Robust science, internal validity • Pre/post-test with equivalent group: R O1 X O2 / R O1 O2 • Post-test only with equivalent group: R X O2 / R O2 • Problems with natural settings: • Randomization • Ethics • Implementation not controlled (staff, situation) • Participant demands • Perceived inequity between groups • etc.
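For the R O1 X O2 / R O1 O2 design, one straightforward analysis compares gain scores (O2 − O1) between the randomized groups. A minimal sketch, assuming SciPy is available and using made-up scores (an ANCOVA on post-test scores adjusting for pretest is a common alternative):

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post scores for randomized program (X) and control groups.
program_pre, program_post = np.array([10, 12, 9, 11, 13]), np.array([15, 16, 13, 15, 17])
control_pre, control_post = np.array([11, 10, 12, 9, 13]), np.array([12, 11, 12, 10, 13])

# Compare gains (O2 - O1) between groups with an independent-samples t-test.
result = stats.ttest_ind(program_post - program_pre, control_post - control_pre)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```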

  19. Experimental Design in Evaluation • Quasi-experimental design • Randomization not possible: • Ethics • Program already underway • No reasonable control group • One-group post-test only: X O2 • Weakest design, so use for exploratory, descriptive work • Case study; not for attribution • One-group pretest-posttest: O1 X O2 • Can measure change • Can’t attribute change to the program • Pre/post with non-equivalent (non-random) groups: N O1 X O2 / N O1 O2 • Good, but must construct similar comparators by (propensity) matching individuals or groups
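Constructing comparators for the non-equivalent-groups design can be done by propensity matching, as the slide notes. A minimal sketch, assuming scikit-learn and simulated covariates (variable names and data are illustrative): estimate each person's probability of participating from their covariates, then pair each participant with the non-participant whose propensity score is closest.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Simulated covariates (e.g., age, baseline risk) and a program-participation flag.
X = rng.normal(size=(200, 2))
participated = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# 1. Propensity scores: estimated P(participation | covariates).
propensity = LogisticRegression().fit(X, participated).predict_proba(X)[:, 1]

# 2. Match each participant to the non-participant with the nearest propensity score.
treated = np.where(participated == 1)[0]
controls = np.where(participated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(propensity[controls].reshape(-1, 1))
_, idx = nn.kneighbors(propensity[treated].reshape(-1, 1))
matched_controls = controls[idx.ravel()]

# The matched pairs would then be compared on outcomes, as in N O1 X O2 / N O1 O2.
print(f"{len(treated)} participants matched to {len(set(matched_controls))} distinct comparison cases")
```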

  20. Evaluation Methods (Clarke and Dawson, 1999) • Strict adherence to a method deemed to be ‘strong’ may result in the wrong problems becoming the focus of the evaluation • purely because the right problems are not amenable to analysis by the preferred method • Rarely is only one method used • A range of methods is required to ensure the depth and detail from which conclusions can be drawn

  21. Experimental Design in Evaluation • Criticism of experimental design in evaluation • The program is a black box • ED measures causality (Positivist) • Does not capture the nature of causality (Realist) • Internal dynamics of the program not observed • How does the program work? • Theory helps explain • What are the characteristics of those in the program? • Participants need to choose to make a program work • The right conditions are needed to make this possible (Clarke and Dawson, 1999) • What are the unintended outcomes/effects of the program?

  22. Naturalistic Inquiry - Qualitative design • Quantitative (ED) offers little insight into the social processes which actually account for the changes observed • Can use naturalistic methods to supplement quantitative techniques (mixed methods) • Can use fully naturalistic paradigm • Less common

  23. Naturalistic Inquiry • Interpretive: • People mistake their own experiences for those of others. So… • Emphasis on understanding the lived experiences of (intended) program recipients • Constructivist: • Knowledge is constructed (not discovered by detached scientific observation). So… • The program can only be understood within its natural context • How it is being experienced by participants, staff, policy makers • Can’t construct the evaluation design entirely ahead of time: “you don’t know what you don’t know” • Theory is constructed from (grounded in) the data

  24. Evaluation Data • Quantitative • Qualitative • Mixed • Primary • Secondary • One-off surveys, data pulls • Routine monitoring • Structured • Unstructured (open-ended)

  25. Data Collection for Evaluation • Questionnaires • sent to the right targets • carefully constructed: capture the needed information, wording, length, appearance, etc. • analysable • Interviews (structured, semi-, un-) • Individuals • Focus groups • Useful at the planning, formative, and summative stages of a program

  26. Data Collection for Evaluation • Observation • Systematic • Explicit procedures, therefore replicable • Collects primary qualitative data • Can provide new insights • by drawing attention to actions and behaviour normally taken for granted by those involved in program activities and therefore not commented upon in interviews • Useful in circumstances where it may not be possible to conduct interviews

  27. Data Collection for Evaluation • Documentary • Solicited e.g. journals/diaries • Unsolicited e.g. meeting minutes, emails, reports • Public e.g. organization’s reports, articles in newspaper/letters • Private e.g. emails, journals

  28. Evaluation in Logic Models • Look at the Logic Model Template (next slide) • What types of evaluation do you see? • What methods are implied? • What data could be used?

  29. From Logic Model presentation

  30. Evaluation in Business Case • Look at the handout: Ontario Ministry of Agriculture, Food, and Rural Affairs (OMAFRA) • Where do you see evaluation? • What methods are implied? • What data could be used?
