Generalised Evidence Synthesis

Presentation Transcript


  1. Generalised Evidence Synthesis Keith Abrams, Cosetta Minelli, Nicola Cooper & Alex Sutton Medical Statistics Group Department of Health Sciences, University of Leicester, UK CHEBS Seminar ‘Focusing on the Key Challenges’ Nov 7, 2003

  2. Outline • Why Generalised Evidence Synthesis? • Bias in observational evidence • Example: Hormone Replacement Therapy (HRT) & Breast Cancer • Discussion

  3. Why Generalised Evidence Synthesis? • RCT evidence ‘gold standard’ for assessing efficacy (internal validity) • Generalisability of RCT evidence may be limited (external validity), e.g. CHD & women • Paucity of RCT evidence, e.g. adverse events • Difficult to conduct RCTs in some situations, e.g. policy changes • RCTs have yet to be conducted, but health policy decisions have to be made • Consider totality of evidence base – (G)ES beyond MA of RCTs

  4. Assessment of Bias in Observational Studies - 1 • Empirical evidence relating to potential extent of bias in observational evidence (Deeks et al. 2003) • Primary studies: • Sacks et al. (1982) & Benson et al. (2000) • Primary & Secondary studies (meta-analyses): • Britton et al. (1998) & MacLehose et al. (2000) • Secondary studies (meta-analyses): • Kunz et al. (1998,2000), Concato et al. (2000) & Ioannidis et al. (2001)

  5. Assessment of Bias in Observational Studies - 2 • Using a random effects meta-epidemiology model (Sterne et al. 2002) • Sacks et al. (1982) & Schulz et al. (1995) ~30% • Ioannidis et al. (2001) ~50% • MacLehose et al. (2000) ~100% • Deeks et al. (2003) simulation study: comparison of RCTs and historical/concurrent observational studies • Empirical assessment of bias – results similar to previous meta-epidemiological studies • Methods of case-mix adjustment, regression & propensity scores fail to account properly for bias
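
The meta-epidemiology model itself is not written out on the slide. A minimal sketch, in the spirit of Sterne et al. (2002), of how the discrepancy between study designs might be modelled across meta-analyses; the notation is illustrative, not taken from the slide:

```latex
% For meta-analysis m, let B_m be the estimated difference in log odds ratios
% between observational (or inadequately randomised) studies and RCTs,
% with within-meta-analysis variance v_m:
B_m \sim \mathrm{N}(\beta_m,\; v_m), \qquad
\beta_m \sim \mathrm{N}(\beta_0,\; \kappa^2)
% beta_0 is the mean bias (log ratio of odds ratios) across meta-analyses;
% kappa^2 describes how much the bias varies from one clinical topic to another.
```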

  6. Approaches to Evidence Synthesis • Treat sources separately, possibly ignoring/downweighting some implicitly • Bayesian approach: treat observational evidence as a prior for the RCT evidence, with explicit consideration of bias: • Power Transform Prior • Bias Allowance Model • Generalised Evidence Synthesis

  7. Example – HRT • HRT used for relief of menopausal symptoms • Prevention of fractures, especially in women with osteoporosis & low bone mineral density • BUT concerns have been raised over possible increased risk of Breast Cancer

  8. HRT & Breast Cancer – RCT Evidence before July 2002: OR 0.97 (95% CI 0.67 to 1.39). Source: Torgerson et al. (2002)

  9. HRT & Breast Cancer – Observational Evidence*: All Observational OR 1.18 (95% CI 1.10 to 1.26); RCTs OR 0.97 (95% CI 0.67 to 1.39). Source: Lancet (1997). *Adjusted for possible confounders
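
The Bayesian models that follow work with normal approximations to these summaries on the log odds ratio scale. A minimal sketch (not from the slides) of how the figures on slides 8–9 can be converted into a log-OR and standard error:

```python
import math

def log_or_and_se(or_est, ci_low, ci_high):
    """Normal approximation: log-OR and its SE recovered from a 95% CI."""
    log_or = math.log(or_est)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    return log_or, se

# Summary estimates quoted on slides 8-9
rct = log_or_and_se(0.97, 0.67, 1.39)   # RCT evidence before July 2002
obs = log_or_and_se(1.18, 1.10, 1.26)   # adjusted observational evidence

print("RCT log-OR %.3f (SE %.3f)" % rct)
print("Obs log-OR %.3f (SE %.3f)" % obs)
```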

  10. Use of Observational Evidence in Prior Distribution (schematic): Quasi-RCTs, Cohort & Case-Control studies → Synthesis → combined with Empirical Evidence on Bias → Prior → RCTs

  11. Power Transform Prior • Following Ibrahim & Chen (2000) • 0 ≤ α ≤ 1 is the degree of downweighting • α = 0 ⇒ total discounting • α = 1 ⇒ accept at ‘face value’ • Evaluate for a range of values of α
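
A sketch of the power prior of Ibrahim & Chen (2000) as it would apply here, with the observational likelihood raised to the power α before being used as the prior for the RCT analysis; the normal-approximation consequence is spelled out in the comments:

```latex
% Power (transform) prior: downweight the observational data D_obs by alpha.
\pi(\theta \mid D_{\mathrm{obs}}, \alpha)
  \;\propto\; L(\theta \mid D_{\mathrm{obs}})^{\alpha}\,\pi_0(\theta),
  \qquad 0 \le \alpha \le 1
% With a normal approximation L(theta | D_obs) = N(y_obs, s_obs^2), raising the
% likelihood to the power alpha simply inflates the prior variance:
\theta \sim \mathrm{N}\!\left(y_{\mathrm{obs}},\; s_{\mathrm{obs}}^{2}/\alpha\right)
% alpha = 0 discards the observational evidence; alpha = 1 accepts it at face value.
```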

  12. Power Transform Prior – Results 1

  13. Bias Allowance Model Following Spiegelhalter et al. (2003) • θ* is the unbiased true effect in observational studies • δ is the bias associated with the observational evidence • σδ² represents a priori beliefs regarding the possible extent of the bias
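
The equations on this slide did not survive the transcript. A minimal sketch of a bias allowance model of the kind described in Spiegelhalter et al. (2003), assuming the bias δ is given a normal prior centred at zero (the centring is an assumption, not stated on the slide):

```latex
% y_obs: pooled observational log odds ratio with variance s_obs^2.
% The observational data estimate theta* + delta, where theta* is the unbiased
% effect and delta the bias, with a priori uncertainty sigma_delta^2:
y_{\mathrm{obs}} \sim \mathrm{N}(\theta^{*} + \delta,\; s_{\mathrm{obs}}^{2}),
\qquad \delta \sim \mathrm{N}(0,\; \sigma_{\delta}^{2})
% Integrating out delta, the observational evidence supplies the prior
\theta^{*} \sim \mathrm{N}\!\left(y_{\mathrm{obs}},\; s_{\mathrm{obs}}^{2} + \sigma_{\delta}^{2}\right)
% for the effect that is then confronted with the RCT likelihood.
```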

  14. Bias Allowance Model - Results

  15. HRT & Breast Cancer: Evidence – July 2002 • HERS II (JAMA July 3) [Follow-up of HERS] • n = 2,321 & 29 Breast Cancers • OR 1.08 (95% CI: 0.52 to 2.25) • WHI (JAMA July 17) [Stopped early] • n = 16,608 & 290 Breast Cancers • OR 1.28 (95% CI: 1.01 to 1.62) • HERS II & WHI • OR 1.26 (95% CI: 1.01 to 1.58) • Revised Meta-Analysis of RCTs • WHI 68% weight • OR 1.20 (95% CI: 0.99 to 1.45)
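
The combined HERS II & WHI figure can be reproduced, to within rounding, by fixed-effect inverse-variance pooling on the log odds ratio scale; a minimal sketch under that assumption (the pooling method is not stated on the slide):

```python
import math

def to_logor(or_est, lo, hi):
    """Log-OR and its SE recovered from a point estimate and 95% CI."""
    return math.log(or_est), (math.log(hi) - math.log(lo)) / (2 * 1.96)

def pool_fixed(summaries):
    """Fixed-effect inverse-variance pooling on the log-OR scale."""
    weights = [1.0 / se ** 2 for _, se in summaries]
    pooled = sum(w * y for w, (y, _) in zip(weights, summaries)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * pooled_se),
            math.exp(pooled + 1.96 * pooled_se))

hers_ii = to_logor(1.08, 0.52, 2.25)   # HERS II, slide 15
whi = to_logor(1.28, 1.01, 1.62)       # WHI, slide 15
print(pool_fixed([hers_ii, whi]))      # roughly (1.26, 1.01, 1.58), as on the slide
```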

  16. Power Transform Prior – Results

  17. Generalised Evidence Synthesis • Modelling RCT & observational evidence (3 types) directly: • Hierarchical Models (Prevost et al., 2000; Sutton & Abrams, 2001) • Confidence Profiling (Eddy et al., 1990) • Avoids the question of whether the RCTs should form the likelihood & the observational studies the prior

  18. Generalised Evidence Synthesis (schematic): RCTs, Quasi-RCTs, Cohort & Case-Control studies, Routine data & Beliefs → Synthesis → together with Utilities & Costs → Decision Model
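
One common way to make the final arrow concrete, sketched here purely for illustration (none of the quantities below appear on the slides), is to propagate the posterior from the synthesis into an incremental net benefit:

```latex
% Incremental net (monetary) benefit at willingness-to-pay lambda:
\mathrm{INB}(\lambda) \;=\; \lambda\,\Delta E \;-\; \Delta C
% Delta E (effects, e.g. QALYs) and Delta C (costs) are functions of the synthesised
% treatment effect, so the posterior from the evidence synthesis induces a posterior
% for INB; the intervention is preferred when E[INB(lambda)] > 0.
```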

  19. Hierarchical Model
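
The model on this slide is not reproduced in the transcript. A sketch of the kind of three-level hierarchical model used for generalised synthesis in Prevost et al. (2000), with notation chosen here for illustration:

```latex
% Study i of type j (j = RCT, quasi-RCT, cohort, case-control):
y_{ij} \sim \mathrm{N}(\delta_{ij},\; s_{ij}^{2})      % observed log OR, variance assumed known
\delta_{ij} \sim \mathrm{N}(\mu_{j},\; \tau_{j}^{2})   % between-study variation within type
\mu_{j} \sim \mathrm{N}(\mu,\; \tau^{2})               % between-type variation
% mu is the overall effect; the RCT-specific mean borrows strength from the
% observational study types to a degree governed by tau^2.
```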

  20. HRT: Hierarchical Model - Results * Ignores study-type

  21. Hierarchical Model - Extensions • Inclusion of empirical assessment of (differential) bias, with uncertainty expressed as a distribution • Bias constraint, e.g. HRT
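
A sketch of how such extensions might sit within the hierarchical model sketched above; the specific distributions and the direction of the constraint are assumptions for illustration, not taken from the slide:

```latex
% Differential bias: shift the mean of each observational study type by a bias term
% whose distribution is informed by the empirical meta-epidemiological evidence.
\mu_{j} \;=\; \mu + \beta_{j}, \qquad
\beta_{j} \sim \mathrm{N}(b_{j},\; \sigma_{b_j}^{2}), \qquad
\beta_{\mathrm{RCT}} = 0
% Bias constraint: restrict the bias to one direction, e.g. beta_j >= 0, so that
% observational studies may overstate but not understate the association.
```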

  22. Discussion – 1 • Direct vs Indirect use of non-RCT evidence • Direct: intervention effect, e.g. RR • Indirect: other model parameters, e.g. correlation between time points • Allowing for bias/adjusting at study level • IPD rather than aggregate data if patient-level covariates are important, e.g. age, prognostic score • Quality – better instruments for non-RCTs & sensitivity of results to instruments

  23. Discussion – 2 • Subjective prior beliefs regarding relative credibility (bias or relevance) of sources of evidence • Elicitation • Bayesian methods provide a flexible framework to consider inclusion of all evidence, which is explicit & transparent, BUT they require careful & critical application

  24. References
  Deeks JJ et al. Evaluating non-randomised intervention studies. Health Technol Assess 2003;7(27).
  Eddy DM et al. A Bayesian method for synthesizing evidence: The Confidence Profile Method. Int J Technol Assess Health Care 1990;6(1):31-55.
  Ibrahim JG & Chen MH. Power prior distributions for regression models. Stat Sci 2000;15(1):46-60.
  Prevost TC et al. Hierarchical models in generalised synthesis of evidence: an example based on studies of breast cancer screening. Stat Med 2000;19:3359-76.
  Sterne JAC et al. Statistical methods for assessing the influence of study characteristics on treatment effects in ‘meta-epidemiological’ research. Stat Med 2002;21:1513-24.
  Spiegelhalter DJ, Abrams KR, Myles JP. Bayesian Approaches to Clinical Trials and Health-Care Evaluation. London: Wiley, 2003.
  Sutton AJ & Abrams KR. Bayesian methods in meta-analysis and evidence synthesis. Stat Methods Med Res 2001;10(4):277-303.
