
Systematic Review Module 8: Assessing Applicability






Presentation Transcript


  1. Systematic Review Module 8: Assessing Applicability C. Michael White, PharmD, FCP, FCCP Professor and Director, University of Connecticut/Hartford Hospital Evidence-based Practice Center Speaker has no actual or potential conflicts of interest in relation to this activity

  2. Learning Objectives • The successful learner will be able to: • Describe applicability and substantiate its importance • Delineate a systematic approach to assessing applicability • Based on PICOTS domains • Apply a standard approach to discerning whether a study is evaluating efficacy or effectiveness

  3. Applicability of Studies

  4. Defining Applicability • Applicability definition • “Inferences about the extent to which a causal relationship holds over variations in persons, settings, treatments, and outcomes” • Applicable study results likely reflect expected outcomes in the real world • Other terms used synonymously with applicability include external validity, generalizability, and relevance ~Shadish and Cook, 2002 Shadish W, Cook T. Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin; 2002.

  5. Framing Applicability Issues • Frame issues of applicability with reference to specific clinical or policy questions the review is intended to inform • Applicability needs to be considered at the outset • When the scope of the review is determined • When key questions are identified Atkins D. Assessing Applicability. Methods Guide.

  6. Applicability Resources • Clinical experts and stakeholders can provide general information important in framing applicability issues • What the population of interest looks like • Mostly female, mostly elderly, mostly from particular ethnic groups • What types of care or procedures are routine or represent the standard of care • Whether certain subpopulations are characteristically different from others • Biologically, clinically Atkins D. Assessing Applicability. Methods Guide.

  7. Other Applicability Resources • Registry or epidemiological information, practice guidelines, consensus papers, book chapters, and general reviews can provide useful applicability information • Applicability issues do not have to be reviewed for each study • Used to place the available literature in context • Should be a factor in rating the strength of evidence Atkins D. Assessing Applicability. Methods Guide.

  8. General Considerations in Judging Applicability • Applicability judgments should be based on stepwise considerations of a number of specific issues • However, applicability is a general rather than absolute construct • No validated formulaic criteria Atkins D. Assessing Applicability. Methods Guide.

  9. General Considerations in Judging Applicability • Stepwise approach to applicability: • Consider applicability based on nature of interventions and outcomes • Identify a few factors that are most relevant to applicability • Summarize findings in a consistent way using PICOTS framework • Summarize reasoning behind judgments made about applicability to other populations or interventions Atkins D. Assessing Applicability. Methods Guide.

  10. Population and Applicability Atkins D. Assessing Applicability. Methods Guide. Gartlehner G. J Clin Epidemiol 2006;59:1040-8.

  11. Population and Applicability: Examples • In the FIT trial, only 4,000 of 54,000 women screened were enrolled. Women were younger, healthier, and more adherent than typical osteoporosis patients. • A trial of etanercept for juvenile rheumatoid arthritis excluded patients who had side effects during an active run-in period; the trial found a low incidence of adverse events. • Clinical trials used to inform Medicare decisions enrolled patients who were younger (60 vs. 75 years) and more often male (75% vs. 42%) than Medicare patients with cardiovascular disease. Atkins D. Assessing Applicability. Methods Guide.

  12. Intervention and Applicability Atkins D. Assessing Applicability. Methods Guide.

  13. Intervention and Applicability: Examples • Studies of behavioral modification to promote a healthy diet employ a larger number and longer duration of visits than are available to most community patients. • Antiretroviral trials’ use of pill counts does not always translate into effectiveness in real-world practice. • Combining iron and zinc attenuates the ability of iron to raise hemoglobin levels. • Trials of carotid endarterectomy selected surgeons with extensive experience and low complication rates who were not representative of average vascular surgeons. Atkins D. Assessing Applicability. Methods Guide.

  14. Comparator, Outcomes, and Applicability Atkins D. Assessing Applicability. Methods Guide.

  15. Comparator and Applicability: Examples • A fixed-dose study compared high-dose duloxetine (80 to 120 mg) with low-dose paroxetine (20 mg) • Many trials evaluating magnesium in acute myocardial infarction were conducted before thrombolytics, antiplatelets, beta-blockers, and primary percutaneous coronary intervention (PCI) were used • Only 1 of 23 trials comparing bypass surgery to PCI used drug-eluting stents Atkins D. Assessing Applicability. Methods Guide.

  16. Outcomes and Applicability: Examples • Trials of biologics for rheumatoid arthritis use radiographic progression rather than symptom evaluations • Trials comparing cyclooxygenase-2 inhibitors and nonsteroidal anti-inflammatory drugs use endoscopy-evaluated ulceration rather than symptomatic ulcers Atkins D. Assessing Applicability. Methods Guide.

  17. Timing, Setting, and Applicability Atkins D. Assessing Applicability. Methods Guide.

  18. Timing and Applicability: Examples • Alzheimer’s disease trials evaluate surrogate end points (cognitive function scales) at 6 months, which may not reflect long-term outcomes (institutionalization rates) • Trials evaluate the QTc interval-prolonging effects of drugs using single dose, end-of-dosing interval evaluations rather than evaluations at maximum blood concentrations Atkins D. Assessing Applicability. Methods Guide.

  19. Setting and Applicability: Examples • Studies evaluating the benefits of breast self-exams were conducted in Shanghai and St. Petersburg, cities in countries that do not employ routine mammography screening as the US does • Would self-exam be as effective where routine mammography picks up cancer at earlier stages? • Studies of open surgical abdominal aortic aneurysm repair found an inverse relationship between hospital volume and short-term mortality Atkins D. Assessing Applicability. Methods Guide.

  20. Efficacy or Effectiveness • Seven criteria are used • Meeting 5 of 7 is indicative of an effectiveness trial • Enrolled primary care population • Less stringent eligibility criteria • Assessment of health-related outcomes • Long study duration and clinically relevant treatment modalities • Assessment of adverse events • Adequate sample size to assess a minimally important difference from the patient perspective • Intention-to-treat analysis Gartlehner G. Int J Technol Assess Health Care 2009;25:323-30. Gartlehner G. J Clin Epidemiol 2006;59:1040-8.
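The counting rule above can be sketched in a few lines of Python. This is an illustrative sketch of the "5 of 7" screen, not the validated Gartlehner instrument itself; the criterion names and the `classify_trial` function are assumptions chosen for readability.

```python
# Illustrative sketch (not the validated instrument): a trial meeting
# at least 5 of the 7 Gartlehner criteria is classified as an
# effectiveness trial; otherwise it is treated as an efficacy trial.

CRITERIA = [
    "primary_care_population",
    "less_stringent_eligibility",
    "health_related_outcomes",
    "long_duration_relevant_modalities",
    "adverse_events_assessed",
    "adequate_sample_for_mid",   # minimally important difference
    "intention_to_treat",
]

def classify_trial(met: dict) -> str:
    """Count criteria met; >= 5 of 7 suggests an effectiveness trial."""
    score = sum(bool(met.get(c, False)) for c in CRITERIA)
    return "effectiveness" if score >= 5 else "efficacy"

# Hypothetical trial meeting six of the seven criteria:
trial = {c: True for c in CRITERIA}
trial["primary_care_population"] = False
print(classify_trial(trial))  # effectiveness
```

The binary yes/no per criterion mirrors how the slide presents the tool; in practice each criterion requires a judgment call against the published definitions.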

  21. Assessment of Effectiveness Decision Tool • EPC directors reviewed 26 trials without using the scale • 20 were judged subjectively to be effectiveness trials and 6 to be efficacy trials • When the scale was then applied, 17 of the 20 effectiveness trials met at least five criteria, while only 1 of the 6 efficacy trials did Atkins D. Assessing Applicability. Methods Guide.
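Treating the directors' subjective judgments as the reference standard (an assumption for this worked example, not a claim from the slide), the validation numbers above translate into familiar test characteristics:

```python
# Worked arithmetic from the validation exercise: 17 of 20 subjectively
# judged effectiveness trials met >= 5 criteria, and only 1 of 6
# efficacy trials did (so 5 of 6 were correctly not flagged).
sensitivity = 17 / 20   # trials correctly flagged as effectiveness
specificity = 5 / 6     # efficacy trials correctly not flagged
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
# sensitivity 85%, specificity 83%
```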

  22. Guidance for Assessing Applicability (I) • Overarching principle • Be practical, focus on a limited number of features that are most important to the key questions and objectives of the review • Step 1: Report a priori factors affecting the applicability of questions being asked using PICOTS format • Considerations should be reflected in key questions, inclusion and exclusion criteria for the review Atkins D. Assessing Applicability. Methods Guide.

  23. Guidance for Assessing Applicability (II) • Step 1 Actions: • Identify general challenges and specific factors that may affect applicability • Factors chosen will vary based on nature of intervention, perspective (clinician, policymaker, patient), and outcome (benefit, harm) • Consult stakeholders, review background • Identify factors critical to determining if evidence is applicable to decisions they need to make • Understand current practice to subsequently assess extent to which studies reflect it • Extract specific information using PICOTS format Atkins D. Assessing Applicability. Methods Guide.

  24. Guidance for Assessing Applicability (III) • Step 2: Review and synthesize the evidence with explicit attention to crucial factors within the PICOTS format • Step 2 Actions: • Identify which of your trials are effectiveness or efficacy • If you have a mix, compare and contrast findings • Judge whether differences between the body of efficacy trials and the real world are important enough to limit its value in making health care decisions Atkins D. Assessing Applicability. Methods Guide.

  25. Guidance for Assessing Applicability (IV) • Step 2 Actions (cont.) • Examine observational studies with more representative populations to inform judgments about applicability of trial data • Population-based studies, pharmacoepidemiologic studies, registries • Assess applicability of aggregated evidence • Results of effectiveness trials should be highlighted • Identify important factors in trials that may impact applicability and the direction and magnitude of the bias Atkins D. Assessing Applicability. Methods Guide.

  26. Guidance for Assessing Applicability (V) • Step 2 Actions (cont.) • Consider subgroup analyses • Seek evidence for empirical relationship between characteristics and effect size • Trials done predominantly in males; subgroup analyses reporting results based on gender can inform the direction and magnitude of the bias • Comparison of event rates across studies can illustrate variation based on population characteristics Atkins D. Assessing Applicability. Methods Guide.

  27. Guidance for Assessing Applicability (VI) • Step 3: Summarize evidence; PICOTS format • Step 3 Actions: • Indicate for each domain (PICOTS) a judgment about whether characteristics of the evidence raise applicability concerns • Describe not only what study did (exclude patients with history of bleeds) but also the effect it had (low risk of bleeding) and extent this reduced applicability • Note when major questions of applicability are not addressed and the implications for applicability Atkins D. Assessing Applicability. Methods Guide.
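The Step 3 record described above (what the study did, the effect it had, and whether that raises an applicability concern, one judgment per PICOTS domain) can be sketched as a simple data structure. The field names and `DomainJudgment` class are illustrative assumptions, not a schema mandated by the Methods Guide.

```python
# Illustrative sketch of a Step-3 summary record: one applicability
# judgment per PICOTS domain, capturing what the studies did, the
# effect that had, and whether applicability is reduced.
from dataclasses import dataclass

@dataclass
class DomainJudgment:
    domain: str               # Population, Intervention, Comparator, ...
    what_studies_did: str     # e.g., an exclusion criterion
    effect_observed: str      # the consequence of that design choice
    applicability_concern: bool

summary = [
    DomainJudgment(
        domain="Population",
        what_studies_did="Excluded patients with a history of bleeds",
        effect_observed="Low observed risk of bleeding",
        applicability_concern=True,
    ),
]

for j in summary:
    flag = "concern" if j.applicability_concern else "no concern"
    print(f"{j.domain}: {flag}")  # Population: concern
```

Recording the design choice and its observed effect side by side makes the reasoning behind each judgment transparent, which is the point of the PICOTS summary table on the next slides.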

  28. Summary Applicability Table Atkins D. Assessing Applicability. Methods Guide.

  29. Summary Applicability Table Atkins D. Assessing Applicability. Methods Guide.

  30. Key Messages • Applicability is important and distinct from internal validity • The reviewer needs to evaluate applicability by comparing and contrasting the target population and the study population using the PICOTS format • There are discernible differences between efficacy and effectiveness studies • Effectiveness studies have high applicability • Transparency is an important aspect of the Effective Healthcare Program • A standard approach improves transparency
