
Population Health. Engaging Consumers, Providers, and Community in Population Health Programs. Lecture c.



Presentation Transcript


  1. Population Health Engaging Consumers, Providers, and Community in Population Health Programs Lecture c This material (Comp 21 Unit 8) was developed by Johns Hopkins University, funded by the Department of Health and Human Services, Office of the National Coordinator for Health Information Technology under Award Number 90WT0005. This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/.

  2. Engaging Consumers, Providers, and Community in Population Health Programs Learning Objectives — Lecture c • Evaluate the designs of individual behavior change interventions. • Evaluate the designs of organizational behavior change interventions. • Evaluate the designs of community-level behavior change interventions.

  3. What Is Program Evaluation? • Program evaluation is a field of study designed to answer whether an intervention had the desired impact, whether a program is on the right track, and what might be done to improve it. • Program evaluation takes four forms, each with a separate purpose: formative, process, summative, and cost-effectiveness evaluations. • Practitioners of public health are coming to expect and demand evidence-based programming; evaluation yields the evidence with which to design interventions and assess their effectiveness.

  4. Key Questions Involved in Designing an Evaluation for a Behavior Change Program • How is the intervention expected to achieve the desired outcome? • Who is the target population for the intervention? • Does the evaluation focus on those enrolled in a particular program, or on all persons who fall within the definition of the target population? • What study design will be used to evaluate impact? • What are the measures of program success? • What are the available data for answering these questions?

  5. On Whom Should an Evaluation Focus? • There are many ways to focus an evaluation: • The smallest frame is the individual. • Other commonly used grouping frames include: 1) organizational members; 2) target populations based on specific factors; 3) communities defined by geographic or other features; or a variety of other modes. • Whether an evaluation should focus on those enrolled in a particular program (program-based evaluation) or on all persons who fall within the definition of the target population (population-based evaluation) depends on the objective of the program.

  6. Theories of Change • Strong programs tend to draw on one or more theories of change — either implicitly or explicitly — such as the Health Belief Model, the Social Learning Theory (modeling), the Theory of Reasoned Action, the Diffusion of Innovations Theory, and the Extended Parallel Process Model (fear management).

  7. Understanding the Sequence of Pathways • Fundamental to any evaluation is understanding the sequence of pathways (conceptual framework, logic model, program theory, and program impact pathway [PIP]) that link the program’s intervention(s) to the ultimate health outcome.

  8. The Social-Ecological Model • The most effective behavioral interventions often work at multiple levels – community, organization, family and individual. • The social-ecological model shows that individuals are far more likely to work toward changing their behavior if the social/physical environments not only encourage it, but make it easier.

  9. Structural Interventions • Structural interventions implement or change laws, policies, physical structures, social or organizational structures, or standard operating procedures to bring about environmental or societal change. The individual has little to do in these situations; such interventions operate independently of individual volition.

  10. Environmental Interventions • Environmental interventions aim to change behavior by facilitating or inhibiting behaviors through changes in the surroundings. • For example, installing fountains that make it easy to refill a water bottle, to promote healthier drinking habits (e.g., water instead of soda).

  11. Organizational Interventions • Organizational interventions involve policies that facilitate the adoption of healthy behaviors. • For example: a company installs a gym or salad bar on the premises, as Google has done.

  12. Interpersonal and Intrapersonal Interventions • Interpersonal interventions attempt to reach clusters who can then reinforce specific behaviors. • The classic example is Alcoholics Anonymous. • Intrapersonal interventions generally involve health education and counseling provided to one individual at a time.

  13. Experimental Design • The gold standard for measuring impact is the experimental design, used widely in clinical research to evaluate the effectiveness of a given drug or treatment regimen. • For example, in some studies one group of subjects receives the experimental drug while another receives a placebo.

  14. Randomized Controlled Trials • The randomized controlled trial design offers the strongest possible means of controlling for potential confounders (such as selection bias, testing-effect bias, maturation bias, placebo effects, and history bias). It is often considered the ‘gold standard’.
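
The logic can be made concrete with a minimal sketch in Python. The sample size, effect size, and choice of a two-sample t-test below are invented for illustration, not taken from any particular trial:

```python
# Minimal sketch of a two-arm randomized trial analysis (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Randomly assign 200 hypothetical subjects to treatment (1) or control (0);
# randomization is what balances unmeasured confounders in expectation.
n = 200
treatment = rng.permutation(np.repeat([0, 1], n // 2))

# Simulated outcome: treated subjects improve by 0.5 units on average.
outcome = rng.normal(loc=0.5 * treatment, scale=1.0)

# Compare arms with a two-sample t-test.
diff = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()
t_stat, p_value = stats.ttest_ind(outcome[treatment == 1], outcome[treatment == 0])
print(f"difference in means: {diff:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```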

  15. Nonexperimental Designs • Nonexperimental designs control for only some of the potential sources of bias but are widely used (e.g., a pre-test–post-test design with no control group) under the philosophy that some evaluation is better than none. They are also easier to implement in many situations, particularly when information is drawn from sources where no experiment was intended.
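
For contrast, here is a sketch of the pre-test–post-test design mentioned above, with invented scores. The paired t-test shows whether scores changed, but without a control group it cannot rule out history, maturation, or testing-effect biases:

```python
# Pre-test/post-test comparison with no control group (hypothetical data).
import numpy as np
from scipy import stats

pre = np.array([62, 70, 55, 68, 74, 60, 65, 58])    # baseline scores
post = np.array([68, 74, 61, 70, 80, 63, 71, 62])   # scores after the program

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean change: {np.mean(post - pre):.1f}, paired t = {t_stat:.2f}, p = {p_value:.4f}")
```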

  16. Quasi-experimental Designs • Quasi-experimental designs offer greater generalizability than tightly controlled experiments but control for some, not all, potential sources of bias; they are used when it is not possible to randomize subjects into treatment and control groups. Most evaluations in practice fall into this category, particularly large or national surveys where taking a census is not possible.

  17. Observational Studies • Observational studies (post-test only among the experimental population) can apply sophisticated analytic techniques (e.g., instrumental variables, propensity scoring, and structural equations) to model causal inferences.
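
As a sketch of one of these techniques, the following Python example applies propensity scoring with inverse-probability weighting to synthetic observational data; the confounder, coefficients, and sample size are all invented:

```python
# Propensity-score weighting on synthetic observational data (illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
age = rng.normal(50, 10, n)  # a confounder: affects both treatment and outcome

# Older subjects are more likely to receive the intervention...
p_treat = 1 / (1 + np.exp(-(age - 50) / 10))
treated = (rng.uniform(size=n) < p_treat).astype(int)

# ...and age also raises the outcome, so a naive comparison is biased.
outcome = 0.5 * treated + 0.05 * age + rng.normal(0, 1, n)

# Step 1: model the probability of treatment given the confounder.
ps = LogisticRegression().fit(age.reshape(-1, 1), treated).predict_proba(age.reshape(-1, 1))[:, 1]

# Step 2: inverse-probability weighting de-confounds the comparison.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()
ipw = (np.average(outcome[treated == 1], weights=1 / ps[treated == 1])
       - np.average(outcome[treated == 0], weights=1 / (1 - ps[treated == 0])))
print(f"naive difference: {naive:.2f}, IPW estimate: {ipw:.2f} (true effect: 0.5)")
```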

  18. Program Evaluation Often Focuses on Behavioral Outcome • Although causal links between behavioral changes and the health outcomes may be well known, program evaluation often focuses on the behavioral change (measured by self-report) rather than on the long-term health outcome that is biological in nature (measured by some type of biomarker such as the body-mass index). • Observation reduces the bias inherent in self-report, but it may introduce other biases (such as the Hawthorne effect, whereby participants perform better than under normal conditions precisely because they realize that they are being observed).
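
As a concrete note on the biomarker mentioned above, body-mass index is simply weight in kilograms divided by the square of height in meters:

```python
# Body-mass index: weight (kg) / height (m) squared.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

print(f"BMI: {bmi(70, 1.75):.1f}")  # 70 kg at 1.75 m -> ~22.9
```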

  19. Formative Evaluation • Formative evaluation is used to obtain qualitative information that will be useful in designing the intervention for maximum effect. • Target population information gathered could include: • Epidemiology of the disease or health condition. • Persons most affected. • Drivers of unhealthful behaviors. • Barriers to change. • Most credible sources of information on the topic. • Information channels. • Formative research can include both qualitative research (which is particularly useful in understanding the mindset of the target population, including their values, attitudes, beliefs, aspirations, and fears that strongly affect behavior) and quantitative research, especially where quantifying baseline levels is important.

  20. Process Evaluation • Process evaluation is used to assess how well the intervention is being implemented (fidelity to design), and includes the following: • Dose delivered • Reach • Level of exposure • Recruitment • Context
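
Each of these measures reduces to a simple ratio. A toy illustration in Python, with all counts invented for a hypothetical worksite program:

```python
# Toy process-evaluation metrics (all numbers hypothetical).
target_population = 5000   # eligible employees
enrolled = 1200            # recruited into the program
sessions_planned = 12
sessions_delivered = 10
attendance = [1100, 950, 900, 880, 870, 860, 800, 780, 760, 750]

reach = enrolled / target_population                           # share of target enrolled
dose_delivered = sessions_delivered / sessions_planned         # fidelity to the plan
exposure = sum(attendance) / (enrolled * sessions_delivered)   # average attendance rate

print(f"reach: {reach:.0%}, dose delivered: {dose_delivered:.0%}, exposure: {exposure:.0%}")
```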

  21. Summative Evaluation • Summative evaluation measures whether change occurred as a result of the intervention. • Ideally an intervention would be evaluated by its long-term effect on health status (i.e., mortality or morbidity). • Summative evaluation generally attempts either to establish causality or (with weaker designs) to tease out causal inferences.

  22. Outcome Evaluation and Impact Evaluation • Outcome evaluation refers to assessing changes in a given outcome without necessarily attributing them to an intervention. • Impact evaluation refers to a rigorous study design capable of demonstrating cause and effect, not just plausible attribution.

  23. Cost-Effectiveness Evaluation • Cost-effectiveness evaluation is a specialized form of impact assessment that extends beyond measuring the extent to which change occurred to quantifying the cost per unit of change. • It requires both careful tracking of the costs of the intervention and rigorous measurement of the change produced. • Despite its complexities, cost-effectiveness evaluation answers the question that decision makers most often want answered: “What is the ‘return on investment’?”
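
The underlying arithmetic is straightforward; a toy example with invented numbers:

```python
# Cost per unit of change (all figures hypothetical).
program_cost = 250_000.0   # total intervention cost, in dollars
baseline_quitters = 300    # e.g., smokers who quit without the program
program_quitters = 550     # quitters observed with the program

cost_per_additional_quit = program_cost / (program_quitters - baseline_quitters)
print(f"cost per additional quit: ${cost_per_additional_quit:,.0f}")  # $1,000
```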

  24. Program Evaluation Example: Increasing Hand Washing Compliance with a Simple Visual Cue to Action — 1 • Hand hygiene is the single most effective method of preventing the spread of infections associated with health care. Despite the established benefits, health-care workers tend to have suboptimal hand hygiene practices. • Recent data suggest that a multifaceted intervention, including the use of feedback, education, the introduction of alcohol-based hand wash, and visual reminders, may increase adherence to hand hygiene recommendations. • “Assess then revise, assess then revise” was a process evaluation used to improve the ultimate effectiveness of the intervention. • The gold standard for testing an intervention is the experimental design, which would be the most rigorous type of summative evaluation. • The dearth of published information on formative evaluation of hand-hygiene interventions underscores the need for those designing such interventions to develop a clearer understanding of why hand hygiene behavior is not more prevalent in medical care delivery settings.

  25. Program Evaluation Example: Increasing Hand Washing Compliance with a Simple Visual Cue to Action — 2 • Results were published in the American Journal of Public Health. • The study also sought to establish and refine methodology for future research efforts in this area.

  26. Program Evaluation Example: Increasing Hand Washing Compliance with a Simple Visual Cue to Action — 3 • Study purpose: to assess the impact of a visual cue on hand washing compliance in public facilities. • Visual cue: the presentation of a towel by an automatic dispenser. • Hand washing compliance indicated by towel and soap usage.

  27. Program Evaluation Example: Increasing Hand Washing Compliance with a Simple Visual Cue to Action — 4 • Methodology • Eight bathrooms (four male and four female) with 16 enMotion™ towel dispensers and eight soap dispensers at the Bryan School of Business and Economics building at the University of North Carolina at Greensboro were used in the study. • Towel dispensers were set to either the “Towel Presented” or “Towel NOT Presented” condition on alternating weeks for 10 weeks. • Wireless infrared sensors were used to record traffic volume in the bathrooms. • Towel and soap usage was recorded each week to indicate rates of hand washing compliance (HWC).
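
A sketch of the compliance computation this design implies: towel and soap dispenses per recorded bathroom visit, compared across conditions. The counts below are invented; the published analysis may differ:

```python
# Hand washing compliance (HWC) rates by dispenser condition (invented counts).
conditions = [
    # (condition, towels_dispensed, soap_dispenses, visits)
    ("Towel Presented", 820, 640, 1000),
    ("Towel NOT Presented", 610, 480, 1000),
]

for name, towels, soap, visits in conditions:
    print(f"{name}: towel rate {towels / visits:.0%}, soap rate {soap / visits:.0%}")
```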

  28. Results — 1 • [Graph 8.09] Ford, E. W., Boyer, B. T., Menachemi, N., & Huerta, T. R. (2014, October). Increasing hand washing compliance with a simple visual cue. American Journal of Public Health, 104(10), 1851–1856.

  29. Results — 2 • Note: Asterisk above the “Towel” bar indicates a statistically significant difference at P = .05. • [Graph 8.10] Ford, E. W., Boyer, B. T., Menachemi, N., & Huerta, T. R. (2014, October). Increasing hand washing compliance with a simple visual cue. American Journal of Public Health, 104(10), 1851–1856.

  30. Results — 3 • Note: Asterisk above the “Towel” bar indicates a statistically significant difference at P = .003. • [Graph 8.11] Eric W. Ford, PhD, Department of Health Policy and Management, Bloomberg School of Public Health, Johns Hopkins University (2016).

  31. Engaging Consumers, Providers, and Community in Population Health Programs Summary — Lecture c • Evaluate the designs of individual behavior change interventions. • Evaluate the designs of organizational behavior change interventions. • Evaluate the designs of community-level behavior change interventions.

  32. Engaging Consumers, Providers, and Community in Population Health Programs References — Lecture c
  References
  Campbell, D. T., & Stanley, J. C. (1973). Experimental and quasi-experimental designs for research (10th ed.). Chicago: Rand McNally College Publishing Company.
  Cronbach, L. J., et al. (1980). Toward reform of program evaluation. San Francisco: Jossey-Bass.
  Valente, T. W. (2002). Evaluating health promotion programs. Oxford: Oxford University Press.
  Charts, Tables, Figures
  8.09 Graph: Ford, E. W., Boyer, B. T., Menachemi, N., & Huerta, T. R. (2014, October). Increasing hand washing compliance with a simple visual cue. American Journal of Public Health, 104(10), 1851–1856.
  8.10 Graph: Ford, E. W., Boyer, B. T., Menachemi, N., & Huerta, T. R. (2014, October). Increasing hand washing compliance with a simple visual cue. American Journal of Public Health, 104(10), 1851–1856.
  8.11 Graph: Eric W. Ford, PhD, Department of Health Policy and Management, Bloomberg School of Public Health, Johns Hopkins University (2016).

  33. Population Health Engaging Consumers, Providers, and Community in Population Health Programs Lecture c This material (Comp 21 Unit 8) was developed by Johns Hopkins University, funded by the Department of Health and Human Services, Office of the National Coordinator for Health Information Technology under Award Number 90WT0005.
