
Applying Bayesian evidence synthesis in comparative effectiveness research


Presentation Transcript


  1. Applying Bayesian evidence synthesis in comparative effectiveness research David Ohlssen (Novartis Pharmaceuticals Corporation)

  2. Overview Part 1 Bayesian DIA CER sub-team Part 2 Overview of Bayesian evidence synthesis

  3. Part 1 Bayesian DIA CER sub-team

  4. Team Members • Chair: David Ohlssen • Co-chair: Haijun Ma •  Other team members: • Fanni Natanegara, George Quartey, Mark Boye, Ram Tiwari, Yu Chang

  5. Problem Statement • Comparative effectiveness research (CER) is designed to inform health-care decisions by providing evidence on the effectiveness, benefits, and harms of different treatment options • Timely research and dissemination of CER results allow clinicians, patients, policymakers, health plans, and other payers to make informed decisions at both the individual and population levels • Bayesian approaches provide a natural framework for combining information from a variety of sources in comparative effectiveness research • Rapid technical development, as evidenced by a recent flurry of publications • Limited understanding of how Bayesian techniques should be applied in practice

  6. Objectives • Encourage the appropriate application of Bayesian approaches to the problem of comparative effectiveness • Provide input into ongoing initiatives on comparative effectiveness within the medical product development setting through white papers/publications and sessions at future meetings

  7. Project Scope • Analysis of patient benefit-risk using existing data • Initially focused on: 1) the use of Bayesian evidence synthesis techniques such as mixed treatment comparisons; 2) joint modeling in benefit-risk assessment

  8. Current aims for 2012 • Literature review of Bayesian methods in CER – Q4 2012 •  To gain an understanding and appreciation of other CER working groups – Q4 2012 • Decide on the list of CER working groups to contact • Understand the objectives, status of each group

  9. Part 2 Overview of Bayesian evidence synthesis

  10. Introduction: Evidence synthesis in drug development • The ideas and principles behind evidence synthesis date back to the work of Eddy et al. (1992) • However, widespread application has been driven by the need for quantitative health technology assessment: • cost effectiveness • comparative effectiveness • Ideas often closely linked with Bayesian principles and methods: • Good decision making should ideally be based on all relevant information • MCMC computation

  11. Recent developments in comparative effectiveness • Health agencies have become increasingly interested in health technology assessment and the comparative effectiveness of various treatment options • Statistical approaches include extensions of standard meta-analysis models allowing multiple treatments to be compared • FDA Partnership in Applied Comparative Effectiveness Science (PACES), including projects on utilizing historical data in clinical trials and subgroup analysis

  12. Aims of this talk: Evidence synthesis • Introduce some basic concepts • Illustration through a series of applications: • Motivating public health example • Network meta-analysis • Using historical data in the design and analysis of clinical trials • Subgroup analysis • Focus on principles and understanding of critical assumptions rather than technical details

  13. Basic concepts: Framework and notation for evidence synthesis • Y1,…,YS: data from S sources • θ1,…,θS: source-specific parameters/effects of interest (e.g. a mean difference) • Question related to θ1,…,θS (e.g. average effect, or effect in a new study)
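  A minimal hierarchical sketch of this framework in standard random-effects notation (the normal sampling and population forms here are illustrative assumptions, not taken from the slide):
  \[
  Y_s \mid \theta_s \sim N(\theta_s, \sigma_s^2), \qquad \theta_s \mid \mu, \tau \sim N(\mu, \tau^2), \qquad s = 1, \dots, S,
  \]
  with questions then phrased through \(\mu\) (the average effect) or through the predictive distribution \(\theta_{S+1} \sim N(\mu, \tau^2)\) (the effect in a new study).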

  14. Strategies for HIV screening

  15. Ades and Cliffe (2002) • HIV: synthesizing evidence from multiple sources • Aim to compare strategies for screening for HIV in pre-natal clinics: • Universal screening of all women, • or targeted screening of current injecting drug users (IDU) or women born in sub-Saharan Africa (SSA) • Use synthesis to determine the optimal policy

  16. Key parameters – Ades and Cliffe (2002) • a: proportion of women born in sub-Saharan Africa (SSA) • b: proportion of women who are intravenous drug users (IDU) • c: HIV infection rate in SSA • d: HIV infection rate in IDU • e: HIV infection rate in non-SSA, non-IDU • f: proportion of HIV already diagnosed in SSA • g: proportion of HIV already diagnosed in IDU • h: proportion of HIV already diagnosed in non-SSA, non-IDU • No direct evidence concerning e and h!

  17. A subset of the data used in the synthesis – Ades and Cliffe (2002)
     Evidence item                                                         | Function of basic parameters    | Data
     HIV prevalence, women not born in SSA, 1997-8                         | [db + e(1 − a − b)]/(1 − a)     | 74 / 136139
     Overall HIV prevalence in pregnant women, 1999                        | ca + db + e(1 − a − b)          | 254 / 102287
     Diagnosed HIV in SSA women as a proportion of all diagnosed HIV, 1999 | fca/[fca + gdb + he(1 − a − b)] | 43 / 60
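  To make the synthesis concrete, a minimal sketch (in Python, not the authors' WinBUGS code) of how each data item above contributes a binomial likelihood on a function of the basic parameters a–h; the parameter values below are placeholders purely for illustration, not estimates from the paper:

    from scipy.stats import binom

    def composite_probs(a, b, c, d, e, f, g, h):
        """Functions of the basic parameters that the three data items inform directly."""
        prev_non_ssa = (d * b + e * (1 - a - b)) / (1 - a)   # prevalence, women not born in SSA
        prev_overall = c * a + d * b + e * (1 - a - b)       # overall prevalence in pregnant women
        diag_ssa_share = (f * c * a) / (f * c * a + g * d * b + h * e * (1 - a - b))
        return prev_non_ssa, prev_overall, diag_ssa_share

    # Placeholder values for the basic parameters (illustration only)
    theta = dict(a=0.02, b=0.01, c=0.02, d=0.01, e=0.0002, f=0.7, g=0.7, h=0.3)
    p1, p2, p3 = composite_probs(**theta)

    # Binomial log-likelihood contributions of the three data items shown above
    loglik = (binom.logpmf(74, 136139, p1)
              + binom.logpmf(254, 102287, p2)
              + binom.logpmf(43, 60, p3))
    print(f"log-likelihood at the placeholder parameters: {loglik:.1f}")

  In the full synthesis every data source enters in this way, so evidence on the composite quantities propagates back to the basic parameters, including e and h, for which there is no direct evidence.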

  18. Implementation of the evidence synthesis – Ades and Cliffe (2002) • The evidence was synthesized by placing all data sources within a single Bayesian model • Easy to code in WinBUGS • Key assumption: consistency of evidence across the different data sources • Can be checked by comparing direct and indirect evidence at various "nodes" in the graphical model (conflict p-value)

  19. Network meta-analysis

  20. Motivation for Network Meta-Analysis • There are often many treatments for health conditions • Published systematic reviews and meta-analyses typically focus on pair-wise comparisons • More than 20 separate Cochrane reviews for adult smoking cessation • More than 20 separate Cochrane reviews for chronic asthma in adults • An alternative approach would involve extending the standard meta-analysis techniques to accommodate multiple treatments • This emerging field has been described as both network meta-analysis and mixed treatment comparisons

  21. Network meta-analysis graphic (figure: a network of eight treatments, A–H)

  22. Network meta-analysis – key assumptions Three key assumptions (Song et al., 2009): • Homogeneity assumption – Studies in the network MA which compare the same treatments must be sufficiently similar. • Similarity assumption – When comparing A and C indirectly via B, the patient populations of the trial(s) investigating A vs B and those investigating B vs C must be sufficiently similar. • Consistency assumption – direct and indirect comparisons, when done separately, must be roughly in agreement.
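  As an illustration of the consistency assumption, a minimal sketch of a Bucher-style indirect comparison and a simple direct-versus-indirect check (a simplification, not the full network model used in the examples below; all estimates and standard errors are made-up numbers):

    import numpy as np
    from scipy.stats import norm

    # Made-up pairwise results on the log risk-ratio scale: estimate and standard error
    logrr_AB, se_AB = -0.20, 0.10          # A vs B, from A-vs-B trials
    logrr_BC, se_BC = -0.15, 0.12          # B vs C, from B-vs-C trials
    logrr_AC_dir, se_AC_dir = -0.40, 0.15  # A vs C, from head-to-head A-vs-C trials

    # Indirect estimate of A vs C via B: effects add on the log scale, and so do variances
    logrr_AC_ind = logrr_AB + logrr_BC
    se_AC_ind = np.sqrt(se_AB**2 + se_BC**2)

    # Consistency check: do the direct and indirect estimates disagree?
    diff = logrr_AC_dir - logrr_AC_ind
    se_diff = np.sqrt(se_AC_dir**2 + se_AC_ind**2)
    p_inconsistency = 2 * (1 - norm.cdf(abs(diff) / se_diff))

    print(f"indirect logRR(A vs C) = {logrr_AC_ind:.2f} (SE {se_AC_ind:.2f})")
    print(f"inconsistency p-value  = {p_inconsistency:.2f}")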

  23. Example 2: Network meta-analysis – Trelle et al (2011), Cardiovascular safety of non-steroidal anti-inflammatory drugs • Primary endpoint was myocardial infarction • Data synthesis: 31 trials in 116,429 patients with more than 115,000 patient-years of follow-up were included • A network random-effects meta-analysis was used in the analysis • Critical aspect: the assumptions regarding the consistency of evidence across the network • How reasonable is it to rank and compare treatments with this technique? (Network: placebo, naproxen, lumiracoxib, rofecoxib, ibuprofen, diclofenac, celecoxib, etoricoxib)

  24. Results from Trelle et al – Myocardial infarction analysis. Relative risk with 95% confidence interval compared to placebo. Authors' conclusion: Although uncertainty remains, little evidence exists to suggest that any of the investigated drugs are safe in cardiovascular terms. Naproxen seemed least harmful.

  25. Comments on Trelle et al • Drug doses could not be considered (data not available) • Average duration of exposure differed between trials; therefore, ranking of treatments relies on the strong assumption that the risk ratio is constant across time for all treatments • The authors conducted extensive sensitivity analyses, and the results appeared to be robust

  26. Additional example: using network meta-analysis for Phase IIIB – probability of success in a pricing trial (figure: network of placebo, treatments A, B, C, D, and a combination product)

  27. Use of Historical controls

  28. Introduction: Objective and problem statement • Design a study with a control arm / treatment arm(s) • Use historical control data in design and analysis • Ideally: a smaller trial comparable to a standard trial • Used in some of Novartis' phase I and II trials • Design options: • Standard design: "n vs. n" • New design: "n* + (n − n*) vs. n" with n* = "prior sample size" • How can the historical information be quantified? • How much is it worth?

  29. The Meta-Analytic-Predictive Approach: Framework and notation • Y1,…,YH: historical control data from H trials • θ1,…,θH: control "effects" (unknown) • 'Relationship/similarity' (unknown): ranging from no relation to the same effect in every trial • θ*: effect in the new trial (unknown) • Design objective: [θ* | Y1,…,YH] • Y*: data in the new study (yet to be observed)

  30. Example: meta-analytic-predictive approach to form priors • Application: a random-effects meta-analysis of the historical controls provides prior information for the control group in the new study, corresponding to a prior sample size n*
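  A minimal sketch of the meta-analytic-predictive idea, using a normal approximation on the logit scale with a fixed between-trial standard deviation (the historical counts and the value of tau are made-up illustrations; in practice the full hierarchical model would be fitted by MCMC, e.g. in WinBUGS or with the R package mentioned below):

    import numpy as np

    # Hypothetical historical control data: responders / patients in H = 5 trials (illustration only)
    events   = np.array([12, 15, 9, 20, 11])
    patients = np.array([100, 120, 80, 150, 95])

    # Approximate normal likelihood for each trial's control rate on the logit scale
    p_hat = events / patients
    y = np.log(p_hat / (1 - p_hat))               # observed logit response rates
    v = 1 / events + 1 / (patients - events)      # approximate sampling variances

    tau = 0.25    # assumed between-trial SD; a key sensitivity parameter

    # Normal-normal model with known tau and a flat prior on the population mean mu
    w = 1 / (v + tau**2)
    mu_hat = np.sum(w * y) / np.sum(w)
    var_mu = 1 / np.sum(w)

    # Meta-analytic-predictive (MAP) prior for the control logit in a NEW trial:
    #   theta_star | data  ~  N(mu_hat, var_mu + tau^2)
    pred_sd = np.sqrt(var_mu + tau**2)
    print(f"MAP prior on the logit scale: N({mu_hat:.2f}, {pred_sd:.2f}^2)")

    # Rough effective prior sample size n*: match the logit-scale variance 1/(n p (1 - p))
    p_map = 1 / (1 + np.exp(-mu_hat))
    n_star = 1 / (pred_sd**2 * p_map * (1 - p_map))
    print(f"approximate prior sample size n* = {n_star:.0f}")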

  31. Bayesian setup using historical control data (schematic) • Meta-analysis of historical data: observed control response rates from historical trials 1–8 are combined in a meta-analysis, giving a predictive distribution of the control response rate in a new study • This predictive distribution serves as the prior distribution of the control response rate; a prior distribution is also specified for the drug response rate • Study analysis: the priors are combined with the observed control and drug data in a Bayesian analysis, giving posterior distributions of the control and drug response rates and hence the posterior distribution of the difference in response

  32. Utilization in a quick-kill quick-win PoC design • Decision rules evaluated at the 1st interim, 2nd interim, and final analysis • Positive PoC based on P(d ≥ 0.2); negative PoC based on P(d < 0.2) • Stage-specific thresholds shown on the slide: ≥ 50%, ≥ 70%, ≥ 50% and ≥ 90%, ≥ 90%, > 50% • With N = 60, 2:1 active:placebo, interim analyses after 20 and 40 patients • With pPlacebo = 0.15, 10,000 simulation runs
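  A minimal sketch of how the posterior decision quantity P(d ≥ 0.2) could be computed at an interim look, using independent Beta posteriors and Monte Carlo (the response counts and the Beta(1, 1) priors are illustrative assumptions, not the design actually simulated on the slide, which would use the MAP prior for placebo):

    import numpy as np

    rng = np.random.default_rng(1)

    def prob_diff_exceeds(x_t, n_t, x_c, n_c, margin, prior=(1, 1), draws=20000):
        """Posterior P(p_treatment - p_control >= margin) under independent Beta priors."""
        a, b = prior
        p_t = rng.beta(a + x_t, b + n_t - x_t, draws)
        p_c = rng.beta(a + x_c, b + n_c - x_c, draws)
        return np.mean(p_t - p_c >= margin)

    # Illustrative 1st interim look: 20 patients, 2:1 active:placebo (hypothetical counts)
    x_active, n_active = 6, 13
    x_placebo, n_placebo = 1, 7

    p_win = prob_diff_exceeds(x_active, n_active, x_placebo, n_placebo, margin=0.2)
    print(f"P(d >= 0.2 | interim data) = {p_win:.2f}")
    # A quick-win rule declares positive PoC if this probability clears the stage-specific
    # threshold; a quick-kill rule uses P(d < 0.2) = 1 - p_win against its own threshold.
    # Repeating this over simulated trials (e.g. 10,000 runs) gives the design's operating
    # characteristics under assumptions such as pPlacebo = 0.15.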

  33. R package available for design investigation

  34. Subgroup Analysis

  35. Introduction to Subgroup analysis • For biological reasons treatments may be more effective in some populations of patients than others • Risk factors • Genetic factors • Demographic factors • This motivates interest in statistical methods that can explore and identify potential subgroups of interest

  36. Challenges with exploratory subgroup analysis: random high bias (Fleming 2010)
     Effects of 5-Fluorouracil Plus Levamisole on Patient Survival Presented Overall and Within Subgroups, by Sex and Age (values are hazard ratios for risk of mortality)
     Analysis     | North Central Treatment Group Study (n = 162) | Intergroup Study #0035 (n = 619)
     All patients | 0.72                                          | 0.67
     Female       | 0.57                                          | 0.85
     Male         | 0.91                                          | 0.50
     Young        | 0.60                                          | 0.77
     Old          | 0.87                                          | 0.59

  37. Assumptions to deal with extremes – Jones et al (2011) • Similar methods to those used when combining historical data • However, the focus is on the individual subgroup parameters g1,…,gG rather than the prediction of a new subgroup • Unrelated parameters (u): a different treatment effect in each subgroup • Equal parameters (c): g1 = … = gG, the same treatment effect in each subgroup • Compromise (r): effects are similar/related to a certain degree

  38. Comments on shrinkage estimation • This type of approach is sometimes called shrinkage estimation • Shrinkage estimation attempts to adjust for random high bias • When relating subgroups, it is often desirable and logical to use structures that allow greater similarity between some subgroups than others • A variety of possible subgroup structures can be examined to assess robustness
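  A minimal numerical sketch of normal-normal shrinkage across subgroups (the subgroup estimates, standard errors, and between-subgroup SD are made-up; this is the compromise idea in its simplest form, not the specific models of Jones et al):

    import numpy as np

    # Hypothetical subgroup treatment-effect estimates (e.g. log hazard ratios) and standard errors
    est = np.array([-0.55, -0.10, -0.50, -0.15])   # e.g. female, male, young, old
    se  = np.array([0.20, 0.22, 0.21, 0.20])

    tau = 0.15    # assumed between-subgroup SD; vary this to explore different structures

    # Overall mean under the hierarchical model (precision-weighted)
    w = 1 / (se**2 + tau**2)
    mu = np.sum(w * est) / np.sum(w)

    # Shrink each subgroup estimate towards the overall mean:
    # B close to 1 means heavy shrinkage (subgroup data weak relative to tau)
    B = se**2 / (se**2 + tau**2)
    shrunk = B * mu + (1 - B) * est

    for raw, s in zip(est, shrunk):
        print(f"raw {raw:+.2f}  ->  shrunk {s:+.2f}")

  Letting tau grow large recovers the unrelated-parameters analysis, while tau near zero forces all subgroup effects towards a common value, mirroring the (u), (c), and (r) structures above.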

  39. Subgroup analysis – extension to multiple studies Data summary from several studies • Subgroup analysis in a meta-analytic context • Efficacy comparison T vs. C • Data from 7 studies • 8 subgroups • defined by 3 binary baseline covariates A, B, C • A, B, C high (+) or low (-) • describing burden of disease (BOD) • Idea: patients with higher BOD at baseline might show better efficacy

  40. Graphical model: subgroup analysis involving several studies • Y1,…,YS: data from S studies • Study-specific parameters θ1,…,θS allow data to be combined from multiple studies • Subgroup parameters g1,…,gG are the main parameters of interest • Various modeling structures can be examined

  41. Extension to multiple studies – Example 3: sensitivity analyses across a range of subgroup structures • 8 subgroups • defined by 3 binary baseline covariates A, B, C • A, B, C high (+) or low (-) • describing burden of disease (BOD)

  42. Summary: Subgroup analysis • Important to distinguish between exploratory subgroup analysis and confirmatory subgroup analysis • Exploratory subgroup analysis can be misleading due to random high bias • Evidence synthesis techniques that account for similarity among subgroups will help adjust for random high bias • Examine a range of subgroup models to assess the robustness of any conclusions

  43. Conclusions • There is general agreement that good decision making should be based on all relevant information • However, this is not easy to do in a formal/quantitative way • Evidence synthesis • offers fairly well-developed methodologies • has many areas of application • is particularly useful for company-internal decision making (we have used and will increasingly use evidence synthesis in our phase I and II trials) • has become an important tool when making public health policy decisions

  44. References

  45. Evidence Synthesis/Meta-Analysis • DerSimonian, Laird (1986). Meta-analysis in clinical trials. Controlled Clinical Trials, 7: 177-188 • Gould (1991). Using prior findings to augment active-controlled trials and trials with small placebo groups. Drug Information Journal, 25: 369-380 • Normand (1999). Meta-analysis: formulating, evaluating, combining, and reporting (Tutorial in Biostatistics). Statistics in Medicine, 18: 321-359. See also Letters to the Editor by Carlin (2000) 19: 753-759, and Stijnen (2000) 19: 759-761 • Spiegelhalter et al. (2004); see main reference • Stangl, Berry (eds) (2000). Meta-analysis in Medicine and Health Policy. Marcel Dekker • Sutton, Abrams, Jones, Sheldon, Song (2000). Methods for Meta-analysis in Medical Research. John Wiley & Sons • Trelle et al. (2011). Cardiovascular safety of non-steroidal anti-inflammatory drugs: network meta-analysis. BMJ, 342: c7086

  46. Historical Controls • Ibrahim, Chen (2000). Power prior distributions for regression models. Statistical Science, 15: 46-60 • Neuenschwander, Branson, Spiegelhalter (2009). A note on the power prior. Statistics in Medicine, 28: 3562-3566 • Neuenschwander, Capkun-Niggli, Branson, Spiegelhalter (2010). Summarizing Historical Information on Controls in Clinical Trials. Clinical Trials, 7: 5-18 • Pocock (1976). The combination of randomized and historical controls in clinical trials. Journal of Chronic Diseases, 29: 175-188 • Spiegelhalter et al. (2004); see main reference • Thall, Simon (1990). Incorporating historical control data in planning phase II studies. Statistics in Medicine, 9: 215-228

  47. Subgroup Analyses • Berry, Berry (2004). Accounting for multiplicities in assessing drug safety: a three-level hierarchical mixture model. Biometrics, 60: 418-426 • Davis, Leffingwell (1990). Empirical Bayes estimates of subgroup effects in clinical trials. Controlled Clinical Trials, 11: 37-42 • Dixon, Simon (1991). Bayesian subgroup analysis. Biometrics, 47: 871-881 • Fleming (2010). Clinical Trials: Discerning Hype From Substance. Annals of Internal Medicine, 153: 400-406 • Hodges, Cui, Sargent, Carlin (2007). Smoothing balanced single-error-term analysis of variance. Technometrics, 49: 12-25 • Jones, Ohlssen, Neuenschwander, Racine, Branson (2011). Bayesian models for subgroup analysis in clinical trials. Clinical Trials, 8: 129-143 • Louis (1984). Estimating a population of parameter values using Bayes and empirical Bayes methods. JASA, 79: 393-398 • Pocock, Assman, Enos, Kasten (2002). Subgroup analysis, covariate adjustment and baseline comparisons in clinical trial reporting: current practice and problems. Statistics in Medicine, 21: 2917-2930 • Spiegelhalter et al. (2004); see main reference • Thall, Wathen, Bekele, Champlin, Baker, Benjamin (2003). Hierarchical Bayesian approaches to phase II trials in diseases with multiple subtypes. Statistics in Medicine, 22: 763-780

  48. Poisson network meta-analysis model • Model extension to K treatments: Lu, Ades (2004). Combination of direct and indirect evidence in mixed treatment comparisons. Statistics in Medicine, 23: 3105-3124 • Different choices for the study effects (µ's) and treatment effects (δ's): they can be common (over studies), fixed (unconstrained), or "random" • Note: random δ's imply a (K − 1)-dimensional random-effects distribution
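  A schematic version of such a model in standard Lu and Ades style notation (a sketch; the exact parameterisation on the slide may differ): with event count r_{sk} and exposure E_{sk} for treatment k in study s,
  \[
  r_{sk} \sim \mathrm{Poisson}(\lambda_{sk} E_{sk}), \qquad \log \lambda_{sk} = \mu_s + \delta_{sk}, \qquad \delta_{s1} = 0,
  \]
  \[
  \delta_{sk} \sim N(d_k, \sigma^2), \qquad k = 2, \dots, K, \qquad d_1 = 0,
  \]
  so the d_k are treatment contrasts versus a reference treatment, and random δ's across the K − 1 non-reference arms give the (K − 1)-dimensional random-effects distribution noted above.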

  49. Acknowledgements Stuart Bailey, Björn Bornkamp, Beat Neuenschwander, Heinz Schmidli, Min Wu, Andrew Wright
