Critical review of epi studies and presentation of study findings

Presentation Transcript


  1. Critical review of epi studies and presentation of study findings Lydia B. Zablotska, MD, PhD Associate Professor Department of Epidemiology and Biostatistics With thanks to Dr. M. Pai, McGill University

  2. Learning objectives • Public health implications of epi research findings • Synthesis of findings across studies (pooled and meta-analyses) • Generalizability of findings • Publication bias

  3. Genetic basis for depression. Risk of past-year depression at age 26 according to genotype and stressful life events (Dunedin Child-Development Study; Caspi et al. 2002, 2003). [Week 9 - Interaction]

  4. Comparing expected and observed joint effects
• What is the individual effect of exposure E in the absence of genotype G? RD(E,-) = 0.17 - 0.10 = 0.07; RR(E,-) = 0.17/0.10 = 1.7
• What is the individual effect of genotype G in the absence of exposure E? RD(-,G) = 0.10 - 0.10 = 0; RR(-,G) = 0.10/0.10 = 1.0
• What is the observed joint effect of E and G? RD(observed E,G) = 0.33 - 0.10 = 0.23; RR(observed E,G) = 0.33/0.10 = 3.3
• What is the expected joint effect of E and G in the absence of interaction? RD(expected E,G) = 0.07 + 0 = 0.07; RR(expected E,G) = 1.7 x 1.0 = 1.7
• Is the observed joint effect similar to the expected joint effect in the absence of interaction? RD(observed E,G) > RD(expected E,G), so additive interaction is present; RR(observed E,G) > RR(expected E,G), so multiplicative interaction is present
• What is the magnitude of the interaction? On the additive scale, the interaction contrast is RD(E when G is present) - RD(E when G is absent) = 0.23 - 0.07 = 0.16; on the multiplicative scale, RR(E when G is present) / RR(E when G is absent) = 3.3 / 1.7 = 1.9
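The arithmetic on this slide can be checked in a few lines. Below is a minimal sketch in Python; the four risks are taken from the slide, and the variable names are ours, not part of the original example.

```python
# Minimal sketch: additive vs. multiplicative interaction from the four risks
# shown on the slide (exposure E, genotype G; values from the Caspi example).
r_00 = 0.10  # risk with neither exposure nor genotype (E-, G-)
r_e0 = 0.17  # risk with exposure only (E+, G-)
r_0g = 0.10  # risk with genotype only (E-, G+)
r_eg = 0.33  # risk with both (E+, G+)

# Individual and joint effects on the risk-difference (additive) scale
rd_e = r_e0 - r_00            # 0.07
rd_g = r_0g - r_00            # 0.00
rd_obs = r_eg - r_00          # 0.23
rd_exp = rd_e + rd_g          # 0.07
interaction_contrast = rd_obs - rd_exp   # 0.16 -> additive interaction present

# Individual and joint effects on the risk-ratio (multiplicative) scale
rr_e = r_e0 / r_00            # 1.7
rr_g = r_0g / r_00            # 1.0
rr_obs = r_eg / r_00          # 3.3
rr_exp = rr_e * rr_g          # 1.7
rr_ratio = rr_obs / rr_exp    # ~1.9 -> multiplicative interaction present

print(interaction_contrast, rr_ratio)
```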

  5. “…It is critical that health practitioners and scientists in other disciplines recognize the importance of replication of such findings before they can serve as valid indicators of disease risk or have utility for translation into clinical and public health practice.”

  6. “The epicenter of translational science” “The new challenge for epidemiology is the integration of knowledge and effective interventions into various societal settings working with allied disciplines not necessarily in the biomedical domain to ensure that these interventions have their intended effects on individual and public health.” Hiatt RA. Am. J. Epidemiol. 2010;172:528-529

  7. Epidemiology and the phases of translational research T0, scientific discovery research; T1, translational research from discovery to candidate application; T2, translational research from candidate application to evidence-based recommendation or policy; T3, translational research from recommendation to practice and control programs; T4, translational research from practice to population health impact. Khoury M J et al. Am. J. Epidemiol. 2010;172:517-524

  8. KNOWLEDGE SYNTHESIS: AN ENGINE FOR TRANSLATIONAL EPIDEMIOLOGY • Knowledge synthesis is a systematic approach to reviewing the evidence on what we know and what we do not know, and how we know it. • Knowledge synthesis methods, such as meta-analysis, are becoming standard in developing evidence-based recommendations for practice (T2 research). • The Cochrane Collaboration • Other independent groups, such as the US Preventive Services Task Force The Emergence of Translational Epidemiology: From Scientific Discovery to Population Health Impact. Khoury M J et al. Am. J. Epidemiol. 2010;172:517-524

  9. Examples of knowledge synthesis • In human genomics (stage T1) - Human Genome Epidemiology Network (HuGENet, 1998) synthesizes information on gene-disease associations through human genome epidemiology (HuGE) reviews and meta-analyses • Publications reporting a discovery from genome-wide association studies are encouraged to include a meta-analysis of replication data sets • Candidate applications for clinical and public health practice (stage T2) - Evaluation of Genomic Applications in Practice and Prevention (EGAPP by CDC). An independent EGAPP Working Group selects topics, oversees systematic reviews of evidence, and makes evidence-based recommendations. Khoury M J et al. Am. J. Epidemiol. 2010;172:517-524

  10. The importance of research synthesis • Karl Pearson is probably the first medical researcher to use formal techniques to combine data from different studies (1904): • He synthesized data from several studies on efficacy of typhoid vaccination • His rationale for pooling data: • “Many of the groups… are far too small to allow of any definite opinion being formed at all, having regard to the size of the probable error involved.” Egger et al. Systematic reviews in health care. London: BMJ Publications, 2001.

  11. The importance of research synthesis • The Cochrane Collaboration is named in honor of Archibald Cochrane, a British researcher. • "It is surely a great criticism of our profession that we have not organized a critical summary, by specialty or subspecialty, adapted periodically, of all relevant randomized controlled trials”

  12. The Cochrane collaboration • Cochrane’s challenge led to the establishment during the 1980s of an international collaboration to develop the Oxford Database of Perinatal Trials. • His encouragement, and the endorsement of his views by others, led to the opening of the first Cochrane centre (in Oxford, UK) in 1992 and the founding of The Cochrane Collaboration in 1993. • It is an international, not-for-profit, independent organization that produces and disseminates systematic reviews of health-care interventions and promotes the search for evidence in the form of clinical trials and other studies of interventions.

  13. Meta-analyses and systematic reviews indexed in PubMed, 1990-2011

  14. Meta-analyses and systematic reviews indexed in PubMed, by language

  15. Are these the same or different? • Traditional, narrative review • Systematic review • Overview • Meta-analysis • Pooled analysis

  16. Pai M et al. 2004

  17. Pai M et al. 2004

  18. Definitions • Traditional, narrative reviews, usually written by experts in the field, are qualitative, narrative summaries of evidence on a given topic. Typically, they involve informal and subjective methods to collect and interpret information. • “A systematic review (systematic overview) is a review in which there is a comprehensive search for relevant studies on a specific topic, and those identified are then appraised and synthesized according to a predetermined and explicit method.” • “A meta-analysis is the statistical combination of at least 2 studies to produce a single estimate of the effect of the healthcare intervention under consideration.” • Individual patient data meta-analyses (pooled analyses) involve obtaining raw data on all patients from each of the trials directly and then re-analyzing them. Klassen et al. 2004

  19. What is a systematic review? • A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria in order to answer a specific research question.  It  uses explicit, systematic methods that are selected with a view to minimizing bias, thus providing more reliable findings from which conclusions can be drawn and decisions made (Antman 1992, Oxman 1993). • The key characteristics of a systematic review are: • a clearly stated set of objectives with pre-defined eligibility criteria for studies; • an explicit, reproducible methodology; • a systematic search that attempts to identify all studies that would meet the eligibility criteria; • an assessment of the validity of the findings of the included studies, for example through the assessment of risk of bias; and • a systematic presentation, and synthesis, of the characteristics and findings of the included studies. Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions Version 5.0.2. The Cochrane Collaboration, 2009.

  20. All systematic reviews are not systematic! • 50 review articles published in 4 major general medical journals (Annals of Internal Med; Archives of Internal Med; JAMA; New Engl J Med) • 80% addressed a focused review question • 2% described the method of locating evidence • 2% used explicit criteria for selecting studies for inclusion • 2% assessed the quality of the primary studies • 6% performed a quantitative analysis • Mulrow 1987

  21. All systematic reviews are not systematic!
Mulrow 1987: 50 review articles published in 4 major general medical journals (Annals of Internal Med; Archives of Internal Med; JAMA; New Engl J Med)
• 80% addressed a focused review question
• 2% described the method of locating evidence
• 2% used explicit criteria for selecting studies for inclusion
• 2% assessed the quality of the primary studies
• 6% performed a quantitative analysis
McAlister et al. 1999: 158 reviews published in 6 major general medical journals (Annals of Internal Med; JAMA; New Engl J Med; BMJ; Am J Med; J of Int Med)
• 34% addressed a focused review question
• 28% described the method of locating evidence
• 14% used explicit criteria for selecting studies for inclusion
• 9% assessed the quality of the primary studies
• 21% performed a quantitative analysis

  22. All systematic reviews are not systematic! What makes reviews systematic? • Careful description of retrieval methodologies • Assessment of consistency of findings across studies (plots) • Assessment for publication and other reporting biases (assessment of heterogeneity)

  23. Narrative vs. Systematic Reviews Pai M et al. 2004

  24. Meta-analysis • Meta-analysis is the use of statistical methods to summarize the results of independent studies (Glass 1976) • Originally intended for experimental studies only • Meta-analyses of observational studies present particular challenges because of inherent biases and differences in study designs (Stroup et al. 2008)

  25. All systematic reviews are not meta-analyses • Many systematic reviews contain meta-analyses. • “…it is always appropriate and desirable to systematically review a body of data, but it may sometimes be inappropriate, or even misleading, to statistically pool results from separate studies. Indeed, it is our impression that reviewers often find it hard to resist the temptation of combining studies even when such meta-analysis is questionable or clearly inappropriate.” • Egger et al. 2001

  26. Pooled analysis • Focuses on treatment groups rather than on studies • Does not consider the validity of the comparisons • Subject to “Simpson’s paradox” in probability • An extreme example of confounding in which a confounder reverses the effect first observed • Could happen: • When the validity of the comparisons is ignored • When there is a large imbalance of a factor at the different levels of the variable of interest • Different risks • Diseases or disease stages are different • Patients are recruited from different settings • Variable follow-up between studies Lievre et al. 2002
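Simpson’s paradox is easy to reproduce with invented counts. The sketch below (Python, hypothetical numbers chosen only for illustration) shows stratum-specific risk ratios favouring treatment, while the naively pooled comparison reverses direction because treated patients are concentrated in the high-risk stratum.

```python
# Minimal numeric sketch of Simpson's paradox (invented counts): within each
# risk stratum the treated group does better than control, but pooling the
# treatment groups across strata reverses the direction of the effect.
def risk(events, n):
    return events / n

# (treated events, treated n, control events, control n)
high_risk = (240, 800, 35, 100)   # most treated patients are high risk
low_risk  = (5, 100, 48, 800)     # most control patients are low risk

for name, (te, tn, ce, cn) in [("high risk", high_risk), ("low risk", low_risk)]:
    print(f"{name}: RR = {risk(te, tn) / risk(ce, cn):.2f}")   # both < 1

# Naive pooling of the treatment groups ignores the stratum imbalance
te = high_risk[0] + low_risk[0]; tn = high_risk[1] + low_risk[1]
ce = high_risk[2] + low_risk[2]; cn = high_risk[3] + low_risk[3]
print(f"pooled:    RR = {risk(te, tn) / risk(ce, cn):.2f}")    # > 1 (reversed)
```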

  27. A new drug is compared to a placebo in 4 relatively small trials in patients at high risk for a certain adverse event, and to an active reference drug in 2 larger trials of patients at low risk for the event. Lievre et al. 2002

  28. Potential pitfalls of systematic reviews and meta-analyses • When a meta-analysis is done outside of a systematic review • When poor quality studies are included or when quality issues are ignored • When small and inconclusive studies are included • When inadequate attention is given to heterogeneity • When reporting biases are a problem

  29. Assessment of heterogeneity of findings • Heterogeneity could be due to differences in: • Patient populations studied • Interventions used • Co-interventions • Outcomes measured • Study design features (e.g., length of follow-up) • Study quality • Random error

  30. Meta-analyses: How to look for heterogeneity

  31. Strategies for addressing heterogeneity
1. Check again that the data are correct. Severe heterogeneity can indicate that data have been incorrectly extracted or entered.
2. Do not do a meta-analysis.
3. Explore heterogeneity. It is clearly of interest to determine the causes of heterogeneity among the results of studies. Heterogeneity may be explored by conducting subgroup analyses. Ideally, investigations of study characteristics that may be associated with heterogeneity should be pre-specified in the protocol of a review. Explorations of heterogeneity that are devised after heterogeneity is identified can at best lead to the generation of hypotheses; they should be interpreted with even more caution and should generally not be listed among the conclusions of a review. Also, investigations of heterogeneity when there are very few studies are of questionable value.
4. Ignore heterogeneity. Fixed-effect meta-analyses ignore heterogeneity. The existence of heterogeneity suggests that there may not be a single intervention effect but a distribution of intervention effects; the pooled fixed-effect estimate may then be an intervention effect that does not actually exist in any population, and may have a confidence interval that is meaningless as well as too narrow. The P value obtained from a fixed-effect meta-analysis does, however, provide a meaningful test of the null hypothesis that there is no effect in every study.
5. Perform a random-effects meta-analysis. This is intended primarily for heterogeneity that cannot be explained.
6. Change the effect measure. Heterogeneity may be an artificial consequence of an inappropriate choice of effect measure. When control group risks vary, homogeneous odds ratios or risk ratios will necessarily lead to heterogeneous risk differences, and vice versa.
7. Exclude studies. In general it is unwise to exclude studies from a meta-analysis on the basis of their results, as this may introduce bias. However, if an obvious reason for the outlying result is apparent, the study might be removed with more confidence.

  32. Recommendations for reporting the results of meta-analyses • Graphical summaries of study estimates and a combined estimate • Tables listing descriptive information for each study • Results of sensitivity testing • Results of sub-group analyses • Discussion of statistical uncertainty of findings • Efficient ways of visually presenting summary results • Forest plots: • Do confidence intervals of studies overlap with each other and the summary effect? • Present the point estimate and CI of each study • Also present the overall, summary estimate • Allow visual appraisal of heterogeneity • Other graphs: • Cumulative meta-analysis • Sensitivity analysis • Funnel plot and trim-and-fill plot for publication bias • Galbraith, L’Abbe plots, etc. [rarely used]; L’Abbe plot: are the studies spread around a central diagonal line indicating identical risks in intervention and control groups?
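For illustration, a forest plot of the kind described above can be drawn with a few lines of matplotlib. The study names, estimates, and confidence intervals below are hypothetical, and the plot is only a sketch of the layout (point estimates, CIs, line of no effect, log scale), not a substitute for dedicated meta-analysis software.

```python
# A minimal forest-plot sketch (hypothetical studies and risk ratios).
import matplotlib.pyplot as plt
import numpy as np

studies = ["Study A", "Study B", "Study C", "Pooled"]
rr = np.array([0.80, 1.10, 0.70, 0.85])   # point estimates
lo = np.array([0.55, 0.80, 0.45, 0.70])   # lower 95% CI limits
hi = np.array([1.15, 1.50, 1.10, 1.03])   # upper 95% CI limits

y = np.arange(len(studies))[::-1]                  # list studies top to bottom
plt.errorbar(rr, y, xerr=[rr - lo, hi - rr], fmt="s", color="black", capsize=3)
plt.axvline(1.0, linestyle="--", color="grey")     # line of no effect
plt.xscale("log")                                  # ratio measures on a log axis
plt.yticks(y, studies)
plt.xlabel("Risk ratio (log scale)")
plt.title("Forest plot: do the study CIs overlap the summary effect?")
plt.tight_layout()
plt.show()
```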

  33. Ried 2006

  34. Ried 2006

  35. Cumulative meta-analysis • A meta-analysis in which studies are added one at a time in a specified order (e.g., according to date of publication or quality) and the results are summarized as each new study is added Hackshaw et al. 1997
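A minimal sketch of the idea, assuming hypothetical log risk ratios and standard errors and a simple inverse-variance fixed-effect pool recomputed as each study is added in order of publication year:

```python
# Cumulative meta-analysis sketch: add studies one at a time and re-pool.
import numpy as np

# (year, log risk ratio, standard error) -- illustrative values only
studies = [(1995, -0.30, 0.25), (1998, -0.10, 0.20),
           (2001, -0.25, 0.15), (2005, -0.20, 0.10)]

logrrs, ses = [], []
for year, logrr, s in sorted(studies):
    logrrs.append(logrr)
    ses.append(s)
    w = 1 / np.array(ses) ** 2                     # inverse-variance weights
    pooled = np.sum(w * np.array(logrrs)) / np.sum(w)
    pooled_se = np.sqrt(1 / np.sum(w))
    print(f"up to {year}: RR = {np.exp(pooled):.2f} "
          f"(95% CI {np.exp(pooled - 1.96 * pooled_se):.2f}"
          f"-{np.exp(pooled + 1.96 * pooled_se):.2f})")
```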

  36. Fergusson et al. 2005

  37. Sensitivity analysis • A meta-analysis in which studies are omitted one at a time in a specified order (e.g., according to number of subjects or quality) and the results are summarized as each new study is omitted. • Example: IV magnesium for acute myocardial infarction. The ISIS-4 trial had >50,000 patients! It showed no survival benefit from the addition of IV magnesium.
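A leave-one-out version of the same calculation, again with hypothetical inputs; the last entry mimics a single very large, null trial (in the spirit of ISIS-4) that dominates the pooled estimate.

```python
# Leave-one-out sensitivity sketch: omit each study in turn and re-pool.
import numpy as np

names = ["Trial A", "Trial B", "Trial C", "Very large trial"]
logrr = np.array([-0.40, -0.35, -0.30, 0.01])   # hypothetical log risk ratios
se    = np.array([0.30, 0.25, 0.28, 0.03])      # hypothetical standard errors

w = 1 / se ** 2                                  # inverse-variance weights
for i, name in enumerate(names):
    keep = np.arange(len(names)) != i
    pooled = np.sum(w[keep] * logrr[keep]) / np.sum(w[keep])
    print(f"omitting {name}: pooled RR = {np.exp(pooled):.2f}")
```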

  38. Hypothetical funnel plots Panel A: symmetrical plot in the absence of bias. Panel B: asymmetrical plot in the presence of reporting bias. Panel C: asymmetrical plot in the presence of bias because some smaller studies (open circles) are of lower methodological quality and therefore produce exaggerated intervention effect estimates.
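A funnel plot of the kind sketched in these panels is simply each study’s effect estimate plotted against its standard error. Below is a minimal matplotlib sketch with hypothetical values; asymmetry around the pooled (dashed) line is what panels B and C illustrate.

```python
# Minimal funnel-plot sketch (hypothetical log risk ratios and standard errors).
import matplotlib.pyplot as plt
import numpy as np

logrr = np.array([-0.05, 0.10, -0.20, 0.30, 0.45, -0.10, 0.55])
se    = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35])

plt.scatter(logrr, se, color="black")
plt.axvline(np.average(logrr, weights=1 / se**2),   # fixed-effect pooled estimate
            linestyle="--", color="grey")
plt.gca().invert_yaxis()          # larger (more precise) studies at the top
plt.xlabel("log risk ratio")
plt.ylabel("standard error")
plt.title("Funnel plot (hypothetical data)")
plt.show()
```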

  39. Meta-analyses: How to look for heterogeneity • Statistical tests: • Chi-square test for heterogeneity (Cochran Q test) • Tests whether the individual effects are farther away from the common effect, beyond what is expected by chance • Has poor power • P-value < 0.10 indicates significant heterogeneity • I-squared (I², Higgins et al. 2002) • % of total variability in the effect measure that is attributable to heterogeneity (i.e., not to chance) • Ranges between 0% and 100% • Values of I-squared of 25%, 50%, and 75% represent low, moderate, and high heterogeneity, respectively.
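Both statistics are straightforward to compute from study-level estimates. The sketch below uses hypothetical log risk ratios and standard errors and assumes numpy and scipy are available; Q is compared against a chi-square distribution with k-1 degrees of freedom, and I-squared is derived from Q.

```python
# Cochran's Q and Higgins' I-squared from study estimates (hypothetical data).
import numpy as np
from scipy import stats

logrr = np.array([-0.30, -0.10, -0.45, 0.05])   # study log risk ratios
se    = np.array([0.15, 0.12, 0.20, 0.10])      # their standard errors

w = 1 / se ** 2                                  # fixed-effect weights
pooled = np.sum(w * logrr) / np.sum(w)
Q = np.sum(w * (logrr - pooled) ** 2)            # Cochran's Q
df = len(logrr) - 1
p_het = stats.chi2.sf(Q, df)                     # heterogeneity P-value (< 0.10?)
I2 = max(0.0, (Q - df) / Q) * 100                # % variability beyond chance

print(f"Q = {Q:.2f}, P = {p_het:.3f}, I-squared = {I2:.0f}%")
```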

  40. Statistical models for combining data • All methods compute weighted averages • The weighting factor is often related to study size • Models for dichotomous outcomes: • Fixed effects model • Inverse-variance, Peto method, M-H method • This choice of weight minimizes the imprecision (uncertainty) of the pooled effect estimate • Random effects model • DerSimonian & Laird method • The amount of variation, and hence the adjustment, can be estimated from the intervention effects and standard errors of the studies included in the meta-analysis • Models for continuous outcomes: • Inverse-variance fixed-effect method • Inverse-variance random-effects method • The methods will give exactly the same answers when there is no heterogeneity. • Where there is heterogeneity, confidence intervals for the average intervention effect will be wider if the random-effects method is used rather than a fixed-effect method, and corresponding P values will be less significant. • An underlying assumption is that the outcomes have a normal distribution in each intervention arm in each study • The summary statistics used for meta-analysis of continuous data are the mean difference (MD) and the standardized mean difference (SMD) Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions Version 5.0.2 [updated September 2009].
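The fixed-effect (inverse-variance) and DerSimonian-Laird random-effects calculations can be written out directly. The sketch below uses the same hypothetical inputs as the heterogeneity example above; note how adding the between-study variance tau-squared to each study’s variance widens the pooled confidence interval and evens out the weights.

```python
# Inverse-variance fixed-effect pooling and DerSimonian-Laird random effects
# (hypothetical log risk ratios and standard errors).
import numpy as np

logrr = np.array([-0.30, -0.10, -0.45, 0.05])
se    = np.array([0.15, 0.12, 0.20, 0.10])

# Fixed effect: weights are the inverse of the within-study variances
w = 1 / se ** 2
fixed = np.sum(w * logrr) / np.sum(w)
fixed_se = np.sqrt(1 / np.sum(w))

# DerSimonian-Laird estimate of the between-study variance tau^2
Q = np.sum(w * (logrr - fixed) ** 2)
df = len(logrr) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random effects: tau^2 is added to each study's variance, which pulls the
# weights closer together (smaller studies count relatively more)
w_re = 1 / (se ** 2 + tau2)
random_eff = np.sum(w_re * logrr) / np.sum(w_re)
random_se = np.sqrt(1 / np.sum(w_re))

print(f"fixed  RR = {np.exp(fixed):.2f} (SE {fixed_se:.2f})")
print(f"random RR = {np.exp(random_eff):.2f} (SE {random_se:.2f})")
```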

  41. Fixed vs. random effects models A fixed effect model concentrates solely on the selected studies included in the meta-analysis, whereas a random effects model takes into account that there might be other studies (unpublished, overlooked in the systematic literature search, or yet to be undertaken) that were not included in the meta-analysis at hand. When the research question is whether treatment has produced an effect in the set of homogeneous studies analyzed, the fixed effects model is the appropriate one. If binary outcome variables are used, fixed and random effects models can give different results. In the case of continuous variables, the results of meta-analyses using fixed or random models are often identical. In the presence of heterogeneity, a random-effects meta-analysis weights the studies relatively more equally than a fixed-effect analysis.

  42. Fixed effects model • Based on the assumption that a single common (or 'fixed') effect underlies every study in the meta-analysis • For example, if we were doing a meta-analysis of ORs, we would assume that every study is estimating the same OR. • Under this assumption, if every study were infinitely large, every study would yield an identical result. • Same as assuming there is no statistical heterogeneity among the studies

  43. Random effects model • Makes the assumption that individual studies are estimating different true effects • we assume they have a distribution with some central value and some degree of variability • the idea of a random effects MA is to learn about this distribution of effects across different studies • Allows for random error plus inter-study variability • Results in wider confidence intervals (conservative) • Studies tend to be weighted more equally (relatively more weight is given to smaller studies) • Can be unpredictable (i.e. not stable)
