
Evaluating Health Communications Programs


Presentation Transcript


  1. Evaluating Health Communications Programs Luann D’Ambrosio, MEd Associate Director, NWCPHP Clinical Instructor, Health Services, UW

  2. Measuring the Effects of Programs

  3. NYC soda ban would lead customers to consume more sugary drinks, study suggests http://www.cbsnews.com/8301-204_162-57579172/nyc-soda-ban-would-lead-customers-to-consume-more-sugary-drinks-study-suggests/

  4. Interesting Example of… Potential for unintended intervention effects; importance of study design. Simulation study with 100 undergraduates: exposed to different menu choices (classic, bundle, small only); asked what they “would” purchase; no mention of bans/restrictions on soda size. Wilson BM et al. PLOS ONE 2013;8(4):e61081

  5. Learning Objectives • Understand the basic components of program evaluation. • Understand the various study designs and methods useful in program evaluation. • Understand the concepts of measurement validity and reliability.

  6. What is Program Evaluation? “a process that attempts to determine as systematically and objectively as possible the relevance, effectiveness, and impact of activities” A Dictionary of Epidemiology, 2008

  7. [Diagram: evidence-based decision-making at the intersection of the best available research evidence; resources, including practitioner expertise; and population characteristics, needs, values, and preferences, all within the environment and organizational context.]

  8. Evaluation Question(s) What treatment for what population delivered by whom under what conditions for what outcome is most effective, and how did it come about? Paul G, as cited in Glanz et al. (eds), Health Behavior and Health Education (p. 493). 2008: Jossey-Bass; San Francisco, CA.

  9. Think about your project/program…

  10. The Big Questions… Why evaluate health communications programs? Why not? When should you start thinking about evaluation?

  11. Do You Have… A research/evaluation person on staff Time and other resources Staff to assist Partners with these resources From Mattessich, 2003

  12. Standards Utility Accuracy Propriety Feasibility

  13. Engaging Stakeholders Buy-in Value of evaluation activities Procedures you will use Permission Data collection & judgment Do not ask forgiveness later!

  14. Potential Stakeholders Who are the stakeholders for your programs? What are some of their key questions/ outcomes?

  15. Health Communications Programs can be Evaluated If… There are clear, measurable intended effects: Program delivery (process) Short-term outcomes (impact) Long-term outcomes (outcome)

  16. Health Promotion Programs can be Evaluated If… There are specific indicators of program success: Program delivery/ participation/uptake Health behavior change Health change

  17. Program Objectives → Evaluation

  18. Develop SMART Objectives Specific: concrete, detailed, and well defined, so that you know where you are going and what to expect when you arrive. Measurable: numbers and quantities provide means of measurement and comparison. Achievable: feasible and easy to put into action. Realistic: considers constraints such as resources, personnel, cost, and time frame. Time-Bound: a time frame helps to set boundaries around the objective.

  19. Think about your project/program

  20. Who Defines Program Success? Many definitions possible for any one program Evaluators need to know all of them! Often must prioritize questions to be answered

  21. Program Evaluation Designs Evidence Quality

  22. Define the Population Who? Community, organization, individuals, or a mix. How many? The number you want to serve; the number you can reach with your resources; the number needed for reliable results (see the sketch below).
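
The “number needed for reliable results” comes from a power calculation. Below is a minimal sketch in Python, assuming a two-group comparison of proportions (say, message recall in intervention vs. comparison communities); the 30% and 45% recall figures, the alpha of 0.05, and the power of 0.80 are illustrative assumptions, not values from the presentation.

```python
# Sketch: sample size for comparing two proportions (hypothetical values).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumed scenario: recall rises from 30% (comparison) to 45% (intervention).
effect_size = proportion_effectsize(0.45, 0.30)  # Cohen's h for two proportions

analysis = NormalIndPower()
n_per_group = analysis.solve_power(
    effect_size=effect_size,
    alpha=0.05,          # two-sided significance level
    power=0.80,          # probability of detecting the assumed effect
    alternative="two-sided",
)
print(f"Participants needed per group: {n_per_group:.0f}")
```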

  23. Evaluation Phases/Types Needs/resources assessment Formative research Process evaluation Impact evaluation Outcome evaluation

  24. Needs/Resources Assessment Phase: program design or selection. Critical questions: What are the health needs of the community? What resources are available to meet the needs? Example strategies: key informant interviews; secondary analyses.

  25. Formative Research Phase: program design and on an as-needed basis. Critical questions: What form should the intervention take? How can the intervention be improved? Example strategies: key informant interviews; focus groups.

  26. Evaluation Types Process: program (delivery, reach, awareness, satisfaction). Impact: behavior/cognition (knowledge gain, attitude change, behavior change, skill development). Outcome: health (mortality, morbidity, disability, quality of life). Adapted from Green et al., 1980

  27. Process Evaluation Phase: continuous. Critical questions: Is the intervention being implemented as planned? Are targets aware and participating? Example strategies: observe and document intervention activities; survey the target audience.

  28. Key Process/Fidelity Measures Target population: receipt; fit (relevance, satisfaction); common methods: tracking participation, surveys. Interventionists: training; delivery; common methods: observation, tracking forms.

  29. Impact Evaluation Phase: transition point and/or end of intervention. Critical questions: Were program goals met? Did health behaviors change? Were there any unintended effects? Example strategies: track policy/practice changes; record review; surveys; records of adverse events.

  30. Unraveling the “Black Box”

  31. Research Design Issues Attributing causality v. practicality Randomized Controlled Trials Quasi-experimental designs Non-experimental designs

  32. Example Randomized Controlled Trial Design [Flow diagram: population → eligibility screening (eligible vs. ineligible) → participation vs. no participation → randomization → intervention group and no-intervention group → outcome(s) measured in each group.]

  33. Randomized Controlled Trials The gold standard: maximizes internal validity (Does this intervention work?). Increasing controversy: challenges in applied settings; limited external validity (Would this intervention work anyplace else?).
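
To make the randomization step concrete, here is a minimal sketch using only the Python standard library; the participant IDs, even split, and fixed seed are hypothetical choices for illustration (real trials often use blocked or stratified randomization to keep arms balanced).

```python
# Sketch: simple 1:1 randomization of eligible participants.
import random

def randomize(participants, seed=42):
    """Shuffle participants and split them into intervention and control arms."""
    rng = random.Random(seed)      # fixed seed keeps the allocation reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {"intervention": shuffled[:midpoint], "control": shuffled[midpoint:]}

arms = randomize([f"P{i:03d}" for i in range(1, 21)])  # hypothetical IDs P001..P020
print(arms["intervention"])
print(arms["control"])
```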

  34. Non-experimental Design Example: Pre-test / Post-test, No Control Group Employees and visitors at JHMC, measured before and after a new smoke-free policy: cigarettes smoked per day, cigarette remnant counts per day, nicotine concentrations. Stillman et al. JAMA 1990;264:1565-1569
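
As a sketch of how such a pre-test/post-test analysis might be run, the following pairs before and after measurements from the same sites and applies a paired t-test; the remnant counts are invented for illustration and are not the Stillman et al. data.

```python
# Sketch: paired before-after comparison, no control group (hypothetical data).
from scipy import stats

# Daily cigarette-remnant counts at eight matched sites.
before = [34, 41, 29, 38, 45, 31, 36, 40]
after = [22, 30, 18, 25, 33, 20, 24, 28]

t_stat, p_value = stats.ttest_rel(before, after)  # same sites measured twice
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Even a significant drop here cannot rule out history or maturation effects, which is why the designs on the next slides add a comparison group.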

  35. Non-experimental Designs Most common (cross-sectional, before-after) and pose relatively few practical challenges, yet “least suitable” because of threats to internal validity: selection, history, maturation.

  36. Quasi-experimental Designs Before-after with comparison group: addresses some internal validity threats; a compromise regarding practicality and external validity (see the sketch below).
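
A minimal sketch of the before-after-with-comparison-group logic, often called difference-in-differences; all outcome means are hypothetical (e.g., percent of the population meeting a behavior target).

```python
# Sketch: difference-in-differences with hypothetical group means.
pre_intervention, post_intervention = 30.0, 45.0
pre_comparison, post_comparison = 31.0, 36.0

change_intervention = post_intervention - pre_intervention  # 15.0 points
change_comparison = post_comparison - pre_comparison        # 5.0 points

# Subtracting the comparison group's change nets out secular trends,
# the "history" threat a single-group design cannot address.
program_effect = change_intervention - change_comparison
print(f"Estimated program effect: {program_effect:.1f} percentage points")
```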

  37. Which design would you like to use? Which design is most feasible for you to use? Think about your project/program

  38. Outcome Evaluation Phase: end of intervention and/or follow-up. Critical questions: Did the target population’s health or related conditions improve? Example measures: mortality; morbidity; health care costs.

  39. Measurement Issues Qualitative and quantitative approaches: rich data from a few vs. simpler data from many; what’s the key question? Self-report or not? Is the behavior socially desirable? Are other approaches feasible (observation, records, structure and policies)?

  40. Measurement Issues Reliability Does the measure reflect true score or error? Validity Does it measure what you think it does? All valid measures are reliable, but not all reliable measures are valid!

  41. What is validity? Is the measure or design measuring exactly what is intended?

  42. What is reliability? Is the measurement consistent?

  43. Consistency [Diagram: two observers measure the same object or phenomenon on three dates (2/20/2009, 2/21/2009, 2/22/2009) and obtain the same measurement each time.]
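
One common way to quantify the consistency illustrated above is test-retest reliability: give the same instrument to the same respondents on two dates and correlate the scores. A minimal sketch with hypothetical scale scores:

```python
# Sketch: test-retest reliability as a correlation (hypothetical scores).
from scipy import stats

date1 = [12, 15, 9, 20, 17, 11, 14, 18]   # scale scores, first administration
date2 = [13, 14, 10, 19, 18, 11, 15, 17]  # same respondents, second administration

r, p = stats.pearsonr(date1, date2)
print(f"Test-retest correlation: r = {r:.2f}")  # high r suggests a consistent measure
```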

  44. Validity & Reliability Possibilities [Diagram of four targets: reliable but not valid; low validity and low reliability; neither reliable nor valid; both reliable and valid.] Experiment-Resources.com

  45. Finding Measures Literature/contacting researchers may show you accepted methods and measures Check out existing tools like BRFSS Beware of changing modes Evaluation instruments often need community vetting Participatory methods may prevent use of existing instruments/questions

  46. Describe 2 ways you could (or will) measure your main outcome Think about your project/program
