Practical Approaches to Evidence-Based Evaluation Practice in Public Health

Presentation Transcript


  1. Practical Approaches to Evidence-Based Evaluation Practice in Public Health. Joseph Telfair, DrPH, MSW/MPH, Professor, Department of Public Health, School of Health and Human Performance, University of North Carolina at Greensboro, Greensboro, NC (USA) ♦ j_telfai@uncg.edu ♦ (336) 334-3240

  2. OVERVIEW OF PRESENTATION • Best Practices/Evidence: MCHB Perspective • Setting the Stage: Why Important, Definitions and Key Concepts • Performance Measurement: Selecting and Constructing Measures • Process Monitoring: Developing a Monitoring System • Concluding Remarks • Questions and Discussion

  3. Tell me ... I forget. Show me ... I remember. Involve me ... I understand. (Chinese proverb)

  4. Of Relevance • The MCHB has developed key strategies: the broad, cross-cutting approaches the Bureau uses to reach its five-year (and beyond) goals in the Bureau Strategic Plan. Goal 4 of the Strategic Plan is: • “Improve the Health Infrastructure and Systems of Care.” • One key strategy used to support this goal is: • “Using the best available evidence, develop and promote guidelines and practices that improve services and systems of care.”

  5. Best Practices/Evidence: MCHB Perspective (http://mchb.hrsa.gov/about/stratplan03-07.htm)

  6. Best Practices/Evidence (1) • MCHB/AMCHP defines “best practices” as a continuum of practices, programs and policies ranging from promising to evidence-based to science-based • EVALUATION of best practices requires the identification and establishment of evidence

  7. Evaluating Evidence • Evidence can be evaluated in four categories: • Research • Expert Opinion • Field Lessons • Theoretical Rationale • All best practice approaches reported have a strong conceptual/theoretical rationale • However, the strength of evidence from research, expert opinion, and field lessons falls along a spectrum

  8. Strength of Evidence Spectrum • Promising Best Practice Approaches: Research +, Expert Opinion +, Field Lessons +, Theoretical Rationale +++ • Proven Best Practice Approaches: Research +++, Expert Opinion +++, Field Lessons +++, Theoretical Rationale +++

  9. Strength of Evidence Spectrum • Promising Best Practice Approaches: • Little research • Emerging agreement in expert opinion • Very few field lessons evaluating effectiveness • Proven Best Practice Approaches: • Supported by strong research • Extensive expert opinion from multiple authoritative sources • Solid field lessons evaluating effectiveness

  10. Grading Evidence (1) Research/Evaluation • + A few studies in public health reporting effectiveness (Promising) • ++ Descriptive review of scientific literature supporting effectiveness (Promising/Proven) • +++ Systematic review of scientific literature supporting effectiveness (Proven)

  11. Grading Evidence (2) Expert Opinion • + An expert group or general professional opinion supporting the practice (Promising) • ++ One authoritative source (such as a national organization or agency) supporting the practice (Promising/Proven) • +++ Multiple authoritative sources (including national organizations, agencies or initiatives) supporting the practice (Proven)

  12. Grading Evidence (3) Field Lessons/Promising Practices • + Successes in state practices reported without evaluation documenting effectiveness (Promising) • ++ Evaluation by a few states separately documenting effectiveness (Promising/Proven) • +++ Cluster evaluation of several states (group evaluation) documenting effectiveness (Proven)

  13. Grading Evidence (4) Practice-based Conceptual/Theoretical Rationale • +++ Only practices that are linked by strong causal reasoning to the desired outcome of improving the health and total well-being of priority populations will be reported (Proven)
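
The +/++/+++ grading described on slides 10-13 can be thought of as a small data structure. Below is a minimal, hypothetical sketch of how an evidence profile for one approach might be recorded and summarized; it is not part of the MCHB framework, and the rule of summarizing by the weakest graded dimension is an assumption made here only for illustration.

```python
from dataclasses import dataclass

# Spectrum labels corresponding to the +/++/+++ grades on slides 10-13.
GRADE_LABELS = {1: "Promising", 2: "Promising/Proven", 3: "Proven"}

@dataclass
class EvidenceGrades:
    """Hypothetical evidence profile for one approach (1='+', 2='++', 3='+++')."""
    research: int
    expert_opinion: int
    field_lessons: int
    theoretical_rationale: int = 3  # only strongly reasoned practices are reported

    def overall(self) -> str:
        # Assumption for illustration only: summarize by the weakest graded
        # dimension, excluding the rationale (which is always +++).
        weakest = min(self.research, self.expert_opinion, self.field_lessons)
        return GRADE_LABELS[weakest]

# Example: strong research and expert opinion, but very few field lessons
approach = EvidenceGrades(research=3, expert_opinion=3, field_lessons=1)
print(approach.overall())  # -> "Promising"
```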

  14. Best Practices/Evidence (2) • MCHB has established that family and community participation and engagement are key to the development of effective, quality health systems and services • Testing of best practices to build evidence cycles from deduction to verification to induction, and repeats • Requires a practical approach to evaluation

  15. Setting the Stage

  16. WHY? (1) • Four Primary reasons: • To develop and maintain an effective program and service delivery process at the state and local level • To enhance staff’s understanding of the factors that contribute to the extent to which, and in what ways, the specific aims, program service targets, and evaluation objectives are being followed

  17. WHY? (2) • Four Primary reasons (cont): • To assure staff and stakeholders by putting in place a process for determining whether or not the program and service delivery activities are succeeding as planned

  18. WHY? (3) • Four Primary reasons (cont): • To build best-practices data by assessing the application of ‘the best available evidence’ (MCHB modified) at four levels: • Existing Research/Evaluation • Expert Opinion • Field Lessons/Promising Practices • Practice-based Conceptual/Theoretical Rationale

  19. Definitions and Key Concepts (1) • Definition: “Evaluation or program measurement (PM) is a systematic process for staff and institutions to obtain information on the service delivery process, its outcomes, and the effectiveness of its work, so that they can improve the process and describe its accomplishments” (Mattessich, PW, 2003, p. 3) [modified]

  20. Definitions and Key Concepts (2) • Definition: Program monitoring is the process of assessing progress toward achievement of a service delivery process’s objectives to determine whether the process was implemented as planned (Peoples-Sheps & Telfair, 2005; see handout)

  21. Definitions and Key Concepts (3) • Evaluation or PM involves a comparison of the staff’s planned processes and outcomes with selected standards in order to assess accomplishments • Evaluation or PM involves the application of social science methods to determine whether assessed efforts are the cause of observed results

  22. Definitions and Key Concepts (4) • Evaluation or PM relies on both qualitative and quantitative methods, and often a triangulation of the two, to produce informative results

  23. Definitions and Key Concepts (5) • Program monitoring is carried out by assessing the extent to which a program is implemented as designed, which involves tracking progress toward achievement of the program’s objectives (Peoples-Sheps & Telfair, 2005) • It is a very traditional form of assessment that is generally considered an administrative function and integral to the ongoing operations of every program (Kettner et al., 1999).

  24. Definitions and Key Concepts (6) • Definition: A performance measure is a specific, quantitative or qualitative representation (measure) of a capacity, process, or outcome deemed relevant to the assessment of program performance (Peoples-Sheps & Telfair, 2005)

  25. Definitions and Key Concepts (7) • Both program monitoring and performance measures depend on strong, meaningful measures of program and service delivery process performance

  26. PRACTICE EXERCISE: Questions 1-4

  27. Performance Measurement

  28. Selecting or Constructing Measures (1) • Deciding what to measure is an essential first step • The aspects of the service delivery process that are measured attract attention and generate action (Hatry, 1999) • Conversely, aspects not measured may go unnoticed until a crisis brings them to the surface (e.g., discovery of inadequate data collection efforts that did not allow population or service targets to be met)

  29. Selecting or Constructing Measures (2) • If the staff takes the time to think through what is needed, they are much less likely to miss something important • To cover all of the bases, start with the specific aim(s) or hypothesis(es) of the monitoring and evaluation effort to identify the main program and service delivery efforts and expected outcomes

  30. Selecting or Constructing Measures (3) • To construct performance measures, three tasks must be undertaken: • identifying concepts to be measured • selecting or constructing measures • locating or developing data sources

  31. Selecting or Constructing Measures (4) • Performance measures can be formulated in many different ways. They may be: • numbers (number of TB deaths) • rates (TB mortality rate) • proportions or percentages (percentage of days missed at work among persons with TB)

  32. Selecting or Constructing Measures (5) • Performance measures can be formulated in many different ways. They may be (cont): • averages (average number of emergency department visits per person 18 to 44 years of age in a given year) • categories (e.g., team meetings held) • Numbers, percentages, and rates are the most frequently used in MCH
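
As a quick illustration of the quantitative formulations listed on the last two slides, the sketch below computes a number, a rate, a percentage, and an average from a handful of made-up records. All field names, values, and the population denominator are hypothetical and are not drawn from any MCHB data source.

```python
# Hypothetical record-level data for a small TB-related example.
records = [
    {"tb_death": 1, "days_missed": 4, "workdays": 250, "ed_visits": 2, "age": 30},
    {"tb_death": 0, "days_missed": 0, "workdays": 250, "ed_visits": 0, "age": 41},
    {"tb_death": 0, "days_missed": 9, "workdays": 250, "ed_visits": 1, "age": 25},
]
population = 120_000  # hypothetical catchment population

# Number: count of TB deaths
tb_deaths = sum(r["tb_death"] for r in records)

# Rate: TB mortality per 100,000 population
tb_mortality_rate = tb_deaths / population * 100_000

# Percentage: share of workdays missed among persons in the records
pct_days_missed = 100 * sum(r["days_missed"] for r in records) / sum(r["workdays"] for r in records)

# Average: emergency department visits per person aged 18 to 44 in the year
adults = [r for r in records if 18 <= r["age"] <= 44]
avg_ed_visits = sum(r["ed_visits"] for r in adults) / len(adults)

print(tb_deaths, round(tb_mortality_rate, 2), round(pct_days_missed, 2), round(avg_ed_visits, 2))
```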

  33. Selecting or Constructing Measures (6) • Numbers, percentages, and rates are the most frequently used in MCH • Least used, but often just as critical, are qualitative indicators such as consensus measures, aggregated (agreement/disagreement) statements, and archival text-based descriptors (e.g., policy statements and group opinions from advisory or consumer groups)

  34. Selecting or Constructing Measures (7) • It is often helpful to include numbers and qualitative indicators along with rates and percentages so that the latter measures can be understood in the context of the type of service focus from which they were derived • To select or develop high-quality performance measures, candidate measures are generally assessed according to criteria that represent both scientific rigor and practical relevance

  35. Selecting or Constructing Measures (8) • Responsive measures are able to detect a change • Measures need to be understandable to the audience to whom they will be presented • Regardless of how it is formulated, a measure should have very precise wording, a specific timeframe, and a clearly defined research population (e.g., persons with TB, for quantitative measures) or set of tasks (e.g., steps for securing the needed sample, for qualitative measures)

  36. Selecting or Constructing Measures (9) • A performance measure should be meaningful, valid, reliable, responsive, and understandable and should allow for risk adjustments (errors)

  37. Selecting or Constructing Measures (10) • A valid measure is one that measures what it intends to measure. • Validity, like all of the qualities in this list, is measured on a continuum, meaning that some measures have greater validity than others

  38. Selecting or Constructing Measures (11) • Reliable performance measures can be reproduced regardless of who collects the data or when they are collected (assuming the true results have not changed) • Like validity, reliability is viewed as a continuum
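
One simple way to put a number on reliability as described above is to compare results from two independent data collectors coding the same items. The sketch below uses plain percent agreement; this statistic and the example codes are illustrative choices, since the slides do not prescribe a particular reliability method.

```python
# Hypothetical codes assigned to the same six items by two data collectors.
rater_a = ["yes", "no", "yes", "yes", "no", "yes"]
rater_b = ["yes", "no", "no",  "yes", "no", "yes"]

# Percent agreement: share of items on which the two collectors agree.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = 100 * agreements / len(rater_a)
print(f"Percent agreement: {percent_agreement:.1f}%")  # -> 83.3%
```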

  39. Selecting or Constructing Measures (12) • The selection of measures is closely tied to the data or research project information available to construct them • Data or information sources should • Be of high quality, with standardized definitions (as defined and agreed upon by the research team) and data collection methods and • Have acceptable levels of validity and reliability on the items of interest

  40. Selecting or Constructing Measures (13) • Data or information sources should (cont): • Be available within the program service delivery timeframe (e.g., 3 years) • Have costs that conform to the budgetary constraints of the program • It is more efficient, but not essential, to construct measures from existing (secondary) data sources rather than to collect new data specifically for a given set of performance measures

  41. PROCESS MONITORING

  42. [Figure omitted] Source: Mattessich, PW (2003), p. 10

  43. Developing a Monitoring System (1) • Development of a monitoring system is an essential component of a program and service delivery process measurement plan • The monitoring process described in this presentation • identifies the program’s objectives, the base from which formulas to measure progress are developed

  44. Developing a Monitoring System (2) • The monitoring process described in this presentation (cont) • assigns relative strength or emphasis to measures as necessary • develops data collection plans • calculates achievement scores at predetermined intervals (a simple calculation is sketched below)
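
To make the last bullet concrete, the sketch below calculates achievement scores for one reporting interval by comparing observed values against targets and applying relative weights. The objectives, weights, and the observed-over-target formula are assumptions made for illustration, not values or formulas from the presentation or handout.

```python
# Hypothetical objectives: (description, target, observed this interval, relative weight)
objectives = [
    ("Infants receiving home visits", 500, 430, 2),
    ("Staff trained in data protocol", 40, 40, 1),
    ("Quarterly advisory meetings held", 4, 3, 1),
]

# Per-objective achievement: observed as a share of target, capped at 100%.
for name, target, observed, weight in objectives:
    achievement = min(observed / target, 1.0)
    print(f"{name}: {achievement:.0%} of target (weight {weight})")

# Weighted overall achievement score for the reporting interval.
total_weight = sum(w for *_, w in objectives)
overall = sum(min(obs / tgt, 1.0) * w for _, tgt, obs, w in objectives) / total_weight
print(f"Overall weighted achievement: {overall:.0%}")
```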

  45. Developing a Monitoring System (3) • Start with the Aim-linked Objectives • The objectives of a Specific Aim, each of which consists of a performance measure and a target, serve as the foundation for project monitoring • Fully developed, measurable objectives must correspond with the program or service purpose • Performance measures must be developed as the program is being planned

  46. Developing a Monitoring System (4) • Each objective should have an explicit date by which the target is to be achieved (see example next slide) • With objectives clearly and precisely stated, the next challenge is to develop a system through which progress towards meeting the program’s targets can be monitored

  47. Developing a Monitoring System (5) • The information derived from monitoring shows which program objectives need more attention in the future and whether any of them require less intensive work • If the process has fallen short on some objectives, this information should trigger an in-depth search for the reasons expected targets were not achieved

  48. Developing a Monitoring System (6) • The table on the next slide shows the components of a monitoring system • The first two columns are identical to those in the previous slide showing performance measures and targets

  49. Developing a Monitoring System (7) • The remaining three columns represent the basic elements of a monitoring system, as it builds on the program’s Specific Aims linked objectives • See Expanded Matrix (Handout)
