Introduction to Program Evaluation






Presentation Transcript


  1. Introduction to Program Evaluation Victor Balaban, PhD Program Evaluation Team (PET) Field Services and Evaluation Branch (FSEB) Division of Tuberculosis Elimination (DTBE) NCHHSTP/CDC

  2. Disclaimer • The contents and conclusions in this presentation have not been formally disseminated by CDC and should not be construed to represent any agency determination or policy.

  3. What is Evaluation?

  4. Evaluation • Evaluation is the systematic investigation of the merit, worth, or significance of an object. Assigning “value” to a program’s efforts therefore means addressing three interrelated domains: • Merit (or quality) • Worth (or value, i.e., cost-effectiveness) • Significance (or importance) Source: CDC Framework for Program Evaluation in Public Health: http://www.cdc.gov/eval/framework/index.htm

  5. Evaluation • Evaluation is: • An activity that assists in planning and measuring programs • A way of managing, improving, and being accountable for: • resources • activities • results • Evaluation answers the question: “Is the program doing what we intend it to do?”

  6. What Can Be Evaluated? • Direct service interventions • Community mobilization efforts • Research initiatives • Surveillance systems • Policy development activities • Outbreak investigations • Laboratory diagnostics • Communication campaigns • Infrastructure-building projects • Training and educational services • Administrative systems Source: MMWR, 1999, Framework for Program Evaluation in Public Health

  7. Why Do We Evaluate? • Effectiveness - to determine if a program achieved its objectives • Impact - to assess how well program(s) are working • Improvement - to modify programs that are not working according to plan, or to take advantage of something that is working exceptionally well • Accountability - to report to stakeholders • To help develop new efforts

  8. How Does Evaluation Differ from Surveillance? • Surveillance is the routine tracking of disease status or behavior over time • Surveillance is not necessarily conducted in relation to any specific program or intervention • Evaluation is conducted in relation to specific program(s) or intervention(s)

  9. How Does Evaluation Differ from Research? • The purpose of research is to produce knowledge about how the world works. • Evaluation studies are used to improve programs and inform decisions about future resource allocations. • The standards for evidence are higher in research, and the timelines for generating knowledge can be longer than for evaluation. Adapted from: Michael Patton as interviewed by Lisa Waldick (IDRC), 2002-02-08

  10. Why is Evaluation Important? • Improve knowledge of program design • Improve program implementation • Reporting • Ensure that a program reaches those who need it most • Give visibility to work • Demonstrate accountability • Share information • Enhance understanding of what works best and what does not work – and why

  11. Example: TB – Completion of Treatment • In a State, an organization received funds for a TB program. The program’s goal was to increase the proportion of newly diagnosed TB patients who complete treatment within 12 months to 93.0%. • Records showed that in the three years since the program was funded, 85.0% of patients completed treatment within 12 months. • Was the program successful?

  12. Example: TB – Completion of Treatment • The State felt that the target of 93.0% treatment completion within 12 months was not reached and therefore the program had failed. • The program staff, however, were confident that the program was a success because only 74.0% of patients had completed treatment within 12 months in the three years before the program was funded. • Who is correct?

  13. Example: TB – Completion of Treatment • Was the program a success or a failure? • What program management issues does this example present? • What information is needed to make management decisions for the way forward? • How could evaluation have helped in this case?
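The two verdicts disagree only about the comparison point, not about the data. A minimal sketch of the underlying arithmetic, in Python, using the figures from the example above (the variable names are purely illustrative):

```python
# Figures from the example: 74% completion before funding, 85% under the
# program, against a 93% target.
baseline_rate = 0.74   # completion within 12 months, before the program
observed_rate = 0.85   # completion within 12 months, under the program
target_rate   = 0.93   # the program's stated objective

met_target  = observed_rate >= target_rate    # the State's criterion
improvement = observed_rate - baseline_rate   # the program staff's criterion

print(f"Target met: {met_target}")                  # False: 85% < 93%
print(f"Change from baseline: {improvement:+.0%}")  # +11%
```

Both computations are correct; they answer different questions. That is why the evaluation question has to be fixed before the numbers can be judged.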

  14. Remember • The apparent success or failure of a program or activity must always be closely examined • What you measure will determine what you are able to find out • Evaluation can help us to do things differently and better understand the why and how of program/activity success

  15. Summary • Evaluation is an activity that helps in program management • Evaluation involves assessing a program or activity to find out: • What has been achieved • What progress has been made • What the successes and challenges are • What difference has been made by the program

  16. Types of Evaluation

  17. Types of Evaluation • Formative Evaluation: planning an effective activity • Process Evaluation: determining if the activity was implemented as intended • Outcome Evaluation: determining if the activity caused the outcomes • Impact Evaluation: determining broader impacts

  18. When to Evaluate? A public health program moves from conception to completion, and evaluation has a role at every stage: • Planning a NEW program • Assessing a DEVELOPING program • Assessing a STABLE, MATURE program • Assessing a program after it has ENDED

  19. Formative Evaluation • Collects data describing the needs of a system or population, including those needs to be addressed by a program or activity. • Answers questions such as: • How should the activity be designed or modified to address participants’ needs? • What can we learn from pilot-testing our approach? • Are the materials we are going to use appropriate?

  20. Process Evaluation • Collects more detailed data about the quality of the activity, factors that affected quality, and differences between intended and actual delivery of the activity. • Answers questions such as: • Was the activity implemented as intended? • Did the activity reach the intended audience? • Why were there differences between intentions and actual delivery?

  21. Outcome Evaluation • Collects data to determine if, and by how much, program activities or services achieved their intended outcomes among the targeted population (often with a comparison or control group). • Answers questions such as: • Did the activity result in the expected outcomes? • Can we attribute observed changes among the targeted population to the activity? • Can we indicate what might happen in the absence of the activity?

  22. Impact Evaluation • Collects data about a population or region over time to establish a causal association between programs and what they aimed to achieve beyond the outcomes on individuals targeted by the program(s) • Answers questions such as: • What long-term effect does the activity, combined with other initiatives, have?

  23. Selecting an Appropriate Evaluation Method

  24. Criteria for Selecting Evaluation Method • What evaluation question needs to be answered? • Who needs the data? • What resources are available for evaluation?

  25. What Information Is Needed? • Different stakeholders or users have different information needs based on how they will use the information. • Information needs also vary at the different stages of a program and with the type of evaluation being conducted. The logic model below lists each stage with examples, and the type of evaluation that examines it: • Input (Resources): staff, funds, materials, facilities, supplies [Process Evaluation] • Activities (Interventions, Services): trainings, services, education, treatments, interventions [Process Evaluation] • Output (Immediate Effects): # staff trained, # condoms distributed, # test kits distributed, # clients served, # tests conducted [Process Evaluation] • Outcomes (Intermediate Effects): provider behavior, risk behavior, service use, behavioral/clinical outcomes, quality of life [Outcome Evaluation] • Impact (Long-term Effects): TB incidence/prevalence, STI incidence/prevalence, AIDS morbidity/mortality, social norms, economic impact [Impact Evaluation]

  26. Levels of Evaluation Effort: the Monitoring and Evaluation Pipeline • As evaluation effort increases, the number of programs expected to undertake it decreases: • Input/Output Monitoring: ALL programs • Process Evaluation: MOST programs • Outcome Evaluation: SOME programs • Impact Evaluation: FEW programs • Adaptation of the Rehle/Rugg M&E Pipeline Model, FHI 2001

  27. What Information Is Most Important? • How do you prioritize your evaluation questions? • Identify the use for the information • Consider the feasibility of answering questions given the available resources • Determine what you “need to know” vs. what is “nice to know”

  28. Three Types of Questions • Descriptive Questions: “What is” • Describe a program or process • Normative Questions: compare “What is” to “What should be” • Measure against a stated standard • Cause-and-Effect Questions: what “effect” did the program have • Measure before and after, with and without comparisons

  29. Main Evaluation Question/Issue

  30. Indicators • A measurable piece of information that helps answer your evaluation question • Indicators are signposts, markers, or clues of change; they are intended to indicate whether objectives are being achieved • Provide a reference point for program or project planning, management, and reporting • Relate to the objectives of your evaluation • Can be related to processes or outcomes

  31. Indicators • Also referred to as performance measures in the NTIP • Can use existing indicators or develop ones tailored to a particular question • Allow you to assess trends and identify problems • Can act as early warning signals for corrective action • An indicator is not the actual result, nor the data collection method or tool

  32. Measures vs. Indicators • Measures are descriptions of program functioning, while indicators measure one aspect of a program or project that is usually directly related to particular objectives • Measures alone do not necessarily provide enough information to indicate how effective a program is in reaching its intended results • Anything can be measured; however, not every measure is an indicator of program functioning

  33. Example • You are buying a used car and want to know what condition the car is in. You can measure many things when you inspect the car: • Tire tread • How clean the oil is • Wear on brake pads • Rust on the body of the car • OR you can examine the number of miles the car has been driven

  34. Example • You are developing indicators to measure HIV testing within your TB program. You can measure many things: • # of people tested • # of people diagnosed • # of test kits purchased • OR you can examine the percent of program participants aged 15–49 receiving HIV test results in the past 12 months

  35. Key Elements of a Good Indicator • Specific: an indicator must be related to the conditions that the program/project wishes to change • Measurable: an indicator must be quantifiable and allow for analysis of the data • Appropriate: an indicator must be necessary and relevant to the information needs of the persons who will use it • Realistic: an indicator must be attainable at a reasonable cost using appropriate collection methods • Time-based: an indicator must have a clearly stated time period for collection

  36. Examples of Indicators (from NTIP) • Proportion of patients with newly diagnosed TB, for whom 12 months or less of treatment is indicated, who complete treatment within 12 months • Proportion of contacts to sputum AFB smear-positive TB patients with newly diagnosed latent TB infection (LTBI) who start treatment • Percent of cooperative agreement recipients that have a TB training focal point
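As an illustration of how the first indicator above could be computed from patient records, here is a minimal Python sketch. The record layout and field names are hypothetical, not the actual NTIP data format, and “12 months” is simplified to 365 days:

```python
from datetime import date

# Hypothetical patient records: months of treatment indicated, start date,
# and completion date (None if treatment was never completed).
patients = [
    {"id": "A1", "tx_indicated_months": 12, "start": date(2010, 1, 5),  "completed": date(2010, 11, 20)},
    {"id": "A2", "tx_indicated_months": 12, "start": date(2010, 2, 10), "completed": date(2011, 4, 2)},
    {"id": "A3", "tx_indicated_months": 18, "start": date(2010, 3, 1),  "completed": date(2011, 8, 15)},
    {"id": "A4", "tx_indicated_months": 12, "start": date(2010, 5, 9),  "completed": None},
]

# Denominator: patients for whom 12 months or less of treatment is indicated
# (A3 is excluded from the indicator entirely).
eligible = [p for p in patients if p["tx_indicated_months"] <= 12]

# Numerator: eligible patients who completed within 365 days of starting.
completed = [p for p in eligible
             if p["completed"] is not None
             and (p["completed"] - p["start"]).days <= 365]

indicator = len(completed) / len(eligible)
print(f"Completion within 12 months: {indicator:.1%}")  # 1 of 3 -> 33.3%
```

Note how the indicator is a single proportion with an explicit numerator and denominator: the individual records are measures, and the proportion is the indicator.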

  37. Targets • Reasonable expectations about what “success” means • Should create one for each indicator • Based on the current status of an activity • Consider program requirements
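Continuing the sketch above, a target check is just a comparison of each indicator's observed value against its target. All names and numbers here are made up for illustration:

```python
# Hypothetical targets and observed values, one pair per indicator.
targets  = {"completion_within_12_months": 0.93,
            "ltbi_contacts_starting_treatment": 0.88}
observed = {"completion_within_12_months": 0.85,
            "ltbi_contacts_starting_treatment": 0.90}

for name, target in targets.items():
    value = observed[name]
    status = "met" if value >= target else "not met"
    print(f"{name}: {value:.0%} vs. target {target:.0%} -> {status}")
```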

  38. Collecting Evaluation Data

  39. Why Use Data? • Data can help you evaluate your program’s effectiveness and keep the focus on program outcomes • Data can provide feedback to stakeholders about what is working, what needs to continue, and what can be reduced • Data can convince stakeholders of the need to change • Data can uncover problems that might otherwise remain invisible

  40. Types of Data • Quantitative Data • Numbers • More objective • Epidemiological data • Qualitative Data • Words and/or concepts • More subjective • Observations • Both can be used in evaluation

  41. Data Collection Methods • Quantitative Data Collection • Surveys/questionnaires • Secondary data • Surveillance data • Epidemiological data • Qualitative Data Collection • Focus groups • Interviews/case studies • Observations • Mixed Methods

  42. Comparison of Data Collection Methods

  43. Data Sources

  44. Data Sources • Two options: • Collect information from existing sources: surveys, program records, databases, documents, etc. • Collect new data

  45. Data Sources Where or from whom will you get data for each of your indicators to answer your evaluation questions?

  46. Existing vs. New Data • Be aware that gathering and analyzing new data can be expensive and time-consuming • Before making any plans to gather new data, check whether existing data sources already have the information you need • If no existing data sources provide the information you need, then you may need to consider collecting new data

  47. Data Needs and Sources • Needs • What data do we need to achieve our objectives? • For whom do we need the data? • Does the system do what it is supposed to do? • What is the timeframe for data use?

  48. Good Data Sources • Provide the necessary information to answer your evaluation questions • Are feasible to access given the available resources • Offer confidence in the quality of information gathered • Are relevant to the time period you are interested in

  49. Existing TB Data Sources • Routinely collected data: • Record forms at the health facility • Record and report forms at the city/county/state level • Record and report laboratory forms • Census / Vital statistics • Surveillance / BRFSS • NHANES / NHIS

  50. Existing TB Data Sources • Other data sources at various levels: • Work plans and budgets • Annual reports • Audits • Meeting reports • Planning documents • Procurement records • Storage facility stock cards
