
ROLE OF EVALUATION IN POLICY DEVELOPMENT AND IMPLEMENTATION


Presentation Transcript


  1. ROLE OF EVALUATION IN POLICY DEVELOPMENT AND IMPLEMENTATION PRESENTATION BY ARDEN HANDLER, DrPH April 10, 2002

  2. RELATIONSHIP BETWEEN EVALUATION AND POLICY • Multiple program evaluations can lead to the development of policy • Programs are often thought of as the expressions of policy, so when we do program evaluation, we may in fact be evaluating a policy (e.g., Head Start = day care policy for low-income children)

  3. RELATIONSHIP BETWEEN EVALUATION AND POLICY • Population-based programs (e.g., Medicaid) are often thought of as a policy; when we are evaluating population-based programs we are usually using program evaluation methods to examine a policy

  4. PROGRAM EVALUATION VERSUS POLICY ANALYSIS • Program evaluation uses research designs with explicit designation of comparison groups to determine effectiveness

  5. PROGRAM EVALUATION VERSUS POLICY ANALYSIS • Policy analysis uses a variety of different frameworks to answer one or more questions about a policy: • HISTORICAL FRAMEWORK • VALUATIVE FRAMEWORK • FEASIBILITY FRAMEWORK

  6. PROGRAM EVALUATION VERSUS POLICY ANALYSIS • Policy analysis often relies on policy/program evaluations • The tools of program evaluation can be used to evaluate the effectiveness of policies; • However, this is not policy analysis

  7. Purposes of Evaluation/ Evaluation Questions • Produce information in order to enhance management decision-making • Improve program operations • Maximize benefits to clients: to what extent and how well was the policy/program implemented?

  8. Purposes of Evaluation/ Evaluation Questions • Assess systematically the impact of programs/policies on the problems they are designed to ameliorate • How well did the program/policy work? • Was the program worth its costs? • What is the impact of the program/policy on the community?

  9. Two Main Types Of Evaluation • Process or formative • Outcome or summative

  10. Process or Formative Evaluation • Did the program/policy meet its process objectives? • Was the program/policy implemented as planned? • What were the type and volume of services provided? • Who was served among the population at risk?

  11. Why Do We Do Process Evaluation? • Process evaluation describes the policy/program and the general environment in which it operates, including: • Which services are being delivered • Who delivers the services • Who are the persons served • The costs involved

  12. Why Do We Do Process Evaluation? • Process evaluation as program monitoring • Charts progress towards achievement of objectives • Systematically compares data generated by the program with targets set by the program in its objectives
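
A minimal sketch of the monitoring idea on slide 12: comparing data generated by the program against the targets set in its process objectives. The objective names and all figures below are hypothetical illustrations, not values from the presentation.

```python
# Compare program-generated service counts against process-objective targets
# (hypothetical objectives and numbers, for illustration only).

targets = {"prenatal_visits": 1200, "home_visits": 400, "clients_enrolled": 300}
actuals = {"prenatal_visits": 950, "home_visits": 410, "clients_enrolled": 275}

for objective, target in targets.items():
    actual = actuals[objective]
    pct_of_target = 100 * actual / target
    print(f"{objective}: {actual}/{target} ({pct_of_target:.0f}% of target)")
```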

  13. Why Do We Do Process Evaluation? • Process evaluation • Provides feedback to the administrator regarding the program • Allows others to replicate the program if program looks attractive • Provides info to the outcome evaluation about program implementation and helps explain findings

  14. Outcome or Summative Evaluation • Did the program/policy meet its outcome objectives/goals? • Did the program/policy make a difference?

  15. Outcome or Summative Evaluation • What change occurred in the population participating in or affected by the program/policy? • What are the intended and unintended consequences of this program/policy? • Requires a comparison group to judge success
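
As an illustration of why a comparison group is needed to judge success, the sketch below contrasts an outcome rate among program participants with the rate in a comparison group. All group sizes and counts are invented for illustration.

```python
# Judge program effect against a comparison group rather than against zero
# (hypothetical counts, for illustration only).

program_group = {"n": 500, "adverse_outcomes": 40}     # program participants
comparison_group = {"n": 520, "adverse_outcomes": 65}  # similar non-participants

rate_program = program_group["adverse_outcomes"] / program_group["n"]
rate_comparison = comparison_group["adverse_outcomes"] / comparison_group["n"]

print(f"Program group rate:    {rate_program:.1%}")
print(f"Comparison group rate: {rate_comparison:.1%}")
print(f"Absolute difference:   {rate_comparison - rate_program:.1%}")
```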

  16. Outcome or Summative Evaluation • What impact did the program/policy have on the target community? • Requires information about coverage

  17. Why Do We Do Outcome Evaluation? • We want to know if what we are doing works better than nothing at all • We want to know if something new we are doing works better than what we usually do

  18. Why Do We Do Outcome Evaluation? • Which of two or more programs/policies works better? • We want to know if we are doing what we are doing efficiently

  19. What Kind of Outcomes Should We Focus on? • Outcomes which can clearly be attributed to the program/policy • Outcomes which are sensitive to change and intervention • Outcomes which are realistic; can the outcomes be achieved in the time frame of the evaluation?

  20. Efficiency Analysis • Once outcomes have been selected and measured, an extension of outcome evaluation is efficiency analysis: • cost-efficiency • cost-effectiveness • cost-benefit
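
The arithmetic behind these efficiency analyses can be sketched briefly. The costs, outcome counts, and dollar value assigned per outcome below are hypothetical assumptions, not figures from the presentation.

```python
# Efficiency analysis sketch: incremental cost-effectiveness and net benefit
# (all dollar amounts and outcome counts are hypothetical).

program_cost, comparison_cost = 250_000, 100_000   # program vs. usual-care cost ($)
program_outcomes, comparison_outcomes = 120, 70    # e.g., adverse cases prevented
value_per_outcome = 4_000                          # assumed $ value of one case prevented

# Cost-effectiveness: extra cost per additional unit of outcome achieved
incremental_cost = program_cost - comparison_cost
incremental_effect = program_outcomes - comparison_outcomes
cost_effectiveness_ratio = incremental_cost / incremental_effect

# Cost-benefit: outcomes converted to dollars and weighed against the extra cost
net_benefit = incremental_effect * value_per_outcome - incremental_cost

print(f"Incremental cost-effectiveness: ${cost_effectiveness_ratio:,.0f} per case prevented")
print(f"Net benefit: ${net_benefit:,.0f}")
```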

  21. Evaluation Success Whether an evaluation will demonstrate a positive impact of a policy or program depends on other phases of the planning process as well as on adequate evaluation design and data collection

  22. Evaluation Success Whether an evaluation will show a positive effect of a policy/program depends on: • Adequate Program Theory • Adequate Program Implementation • Adequate Program Evaluation

  23. Reasons Why Evaluations May Demonstrate No Program Effect • Theory failure

  24. Program Theory • What is a program’s theory? • Plausible model of how program/policy works • Demonstrates cause and effect relationships • Shows links between a program’s/policy’s inputs, processes and outcomes

  25. Means-ends Hierarchy (M.Q. Patton) • Program theory links the program means to the program ends • Theory of Action

  26. Means-ends Hierarchy (M.Q. Patton) • Constructing a causal chain of events forces us to make explicit the assumptions of the program/policy • What series of activities must take place before we can expect that any impact will result?

  27. Theory Failure • Evaluations may fail to find a positive impact if program/policy theory is incorrect • The program/policy is not causally linked with the hypothesized outcomes (sometimes because true cause of problem not identified)

  28. Theory Failure • Evaluation may fail to find a positive impact if program/policy theory is not sufficiently detailed to allow for the development of a program plan adequate to activate the causal chain from intervention to outcomes

  29. Theory Failure • Evaluation may fail if the program/policy was not targeted at an appropriate population (theory about who will benefit is incorrect) • These three issues are usually under the control of those designing the program/policy

  30. Other Reasons Why Evaluations May Demonstrate No Policy/Program Effect • Program/policy failure

  31. Program/policy Failure • Program/policy goals and objectives were not fully specified during the planning process

  32. Other Program Reasons • Program/policy was not fully delivered • Program/policy delivery did not adhere to the specified protocol

  33. Other Program Reasons • Delivery of treatment deteriorated during program implementation • Program/policy resources were inadequate (may explain above)

  34. Other Program Reasons • Program/policy delivered under prior experimental conditions was not representative of the treatment that can be delivered in practice • e.g., translation from a university to a "real" setting or from a pilot to a full state program

  35. Non-Program Reasons Why Evaluations May Demonstrate No Program Effect • Evaluation Design And Plan • Are design and methods used appropriate for the questions being asked? • Is design free of bias? • Is measurement reliable and valid?

  36. Conducting an Outcome Evaluation How do we choose the appropriate evaluation design to assess system, service, program or policy effectiveness?

  37. Tools for Assessing Effectiveness • Multiple paradigms exist for examining system, service, program, and policy effectiveness • Each has unique rhetoric and analytic tools which ultimately provide the same answers

  38. Tools for Assessing Effectiveness • Epidemiology • e.g., Are cases less likely to have had exposure to the program than controls? • Health Services Research • e.g., Does differential utilization of services by enrollees and non-enrollees lead to differential outcomes?
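
The case-control logic in the epidemiology example can be illustrated with a simple two-by-two comparison of program exposure among cases and controls. The counts below are invented for illustration.

```python
# Case-control sketch: odds of program exposure among cases vs. controls
# (hypothetical counts, for illustration only).

cases_exposed, cases_unexposed = 30, 70        # cases: had the adverse outcome
controls_exposed, controls_unexposed = 55, 45  # controls: did not

odds_cases = cases_exposed / cases_unexposed
odds_controls = controls_exposed / controls_unexposed
odds_ratio = odds_cases / odds_controls

# An odds ratio below 1 suggests cases were less likely than controls to have
# been exposed to the program, i.e., exposure may be protective.
print(f"Odds ratio of exposure (cases vs. controls): {odds_ratio:.2f}")
```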

  39. Tools for Assessing Effectiveness • Evaluation Research • e.g., Are outcomes for individuals in the program (intervention) different than those in the comparison or control group?

  40. Mix and Match The evaluator uses a mix and match of methods/paradigms

  41. Depending on: 1. Whether program/service/policy and/or system change covers: • entire target population in state/city/county • entire target population in several counties/community areas

  42. Depending on: 2. Whether program/service/policy and/or system change includes an evaluation component at initiation 3. Whether adequate resources are available for evaluation 4. Whether it is ethical/possible to manipulate exposure to the intervention

  43. Outcome Evaluation Strategies • Questions to consider: Is service/program/policy and/or intervention population based or individually based?

  44. Outcome Evaluation Strategies • Population Based --e.g., Title V, Title X, Medicaid • from the point of view of evaluation, these programs/policies can be considered “universal” since all individuals of a certain eligibility status are entitled to receive services

  45. Outcome Evaluation Strategies • Population Based: issues • With coverage aimed at entire population, who is the comparison or the unexposed group? • What are the differences between eligibles served and not served which may affect outcomes? Between eligibles and ineligibles?

  46. Outcome Evaluation Strategies • Population Based: issues • How do we determine the extent of program exposure or coverage? (need population-based denominators and quality program data)
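
A coverage estimate of the kind described here pairs a program-based numerator (individuals served) with a population-based denominator (all eligible individuals). The figures in this sketch are hypothetical.

```python
# Coverage sketch: share of the eligible population reached by the program
# (hypothetical figures, for illustration only).

served = 4_200                 # numerator: from program records
eligible_population = 15_000   # denominator: from census, vital records, or eligibility files

coverage = served / eligible_population
print(f"Estimated program coverage: {coverage:.1%} of the eligible population")
```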

  47. Outcome Evaluation Strategies • Population Based: issues • Which measures to use? • Measures are typically derived from population data sets, e.g., Medicaid claims data, surveillance, vital records, census data

  48. Outcome Evaluation Strategies • Individually Based • e.g., AIDS/sex education program in two schools; smoking cessation program in two county clinics • Traditional evaluation strategies can be more readily used/designs are more straightforward

  49. Outcome Evaluation Strategies • Questions to Consider: • Is the evaluation Prospective or Retrospective? • Retrospective design limits options for measurement and for selection of comparison groups • Prospective design requires evaluation resources committed up front

  50. Outcome Evaluation Strategies • Questions to consider • Which design to choose? • Experimental, quasi-experimental, case-control, retrospective cohort? • What biases are inherent in one design versus another? • What are the trade-offs and costs?
