
Evidence Based Dentistry: Statistics 2




Presentation Transcript


  1. Evidence Based Dentistry: Statistics 2 Critical Appraisal Skills depend upon some understanding of the scientific method and the role statistics plays Al Best, PhD Director of Faculty Research Development Perkinson 3100B ALBest@VCU.edu

  2. Review • To assess validity • In the study, What’s the question? • Where did the data come from? Sampling and measurement • What would you expect if “nothing is going on”? • Is the observed data different than that? • Today: …

  3. Stats 2: Outline • Challenges in scientific inference • Defining the research question • Causation vs. association • Sources of error in clinical research • Multiple outcomes • Some solutions • Experimental design • Randomization • Masking/blinding • Numeric outcomes • Means, correlations, …

  4. Recall Barasch’s paper • “In conclusion, this case-control study supports a causal link between bisphosphonates and ONJ.”

  5. Causation vs. Association • What do we mean by “Exposure A causes Disease B”? • One approach: Logical consideration of evidence favoring causality • Consistent association across studies and designs • Strong association (big differences, strong trends) • Dose response relationship • Biologic plausibility • Note: investigators can come up with a plausible mechanism for virtually any association

  6. Causation vs. Association • Example: A marketing claim backed by “experts”

  7. Causation vs Association • What do we mean by “Exposure A causes Disease B”? • Empirical approach: http://xkcd.com/943/

  8. Causation vs Association • What do we mean by “Exposure A causes Disease B”? • Empirical approach: Ruling out spurious associations • Statistical association • Tendency of characteristics to vary together, so information on one allows guessing or predicting the other more accurately • BUT: there are lots of ways we may be fooled

  9. Associations are common, and most are worthless

  10. How we measure statistical associations • Associations are what we observe, as • Differences or ratios of: • means or medians • proportions • odds • rates • Slopes of trends in statistical models • Correlation, regression coefficients • Causation → association, but not the other way • No measure of association, in itself, implies causation
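As a minimal sketch, the difference- and ratio-based measures listed above can be computed directly from a 2×2 exposure-by-disease table (the counts below are purely illustrative, not from any study in this lecture):

```python
# Hypothetical 2x2 table (illustrative counts only):
#             disease   no disease
# exposed        30         70
# unexposed      10         90
exposed = {"disease": 30, "no_disease": 70}
unexposed = {"disease": 10, "no_disease": 90}

risk_exp = exposed["disease"] / (exposed["disease"] + exposed["no_disease"])          # 30/100 = 0.30
risk_unexp = unexposed["disease"] / (unexposed["disease"] + unexposed["no_disease"])  # 10/100 = 0.10

risk_difference = risk_exp - risk_unexp   # difference of proportions: 0.20
risk_ratio = risk_exp / risk_unexp        # ratio of proportions: 3.0

# Odds ratio: cross-product of the table cells
odds_ratio = (exposed["disease"] * unexposed["no_disease"]) / (
    exposed["no_disease"] * unexposed["disease"])  # (30*90)/(70*10), about 3.86
```

Note that all three are measures of association only; none of them, by itself, says anything about causation.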

  11. Causal relationships? • We think: • influenza deaths less common in immunized groups • Stroke probability ↑ with ↑ SBP, ↑ LDL • ↑ exercise, ↓ DBP • ↑ sugar consumption, ↑ DMFS • ↑ cigarette smoking, ↑ alveolar bone loss • We are not sure: • Antioxidant supplementation can lower US cancer risk • Calcium supplements reduce demineralization after 65 • Treating chronic periodontitis reduces CVD risk • Scaling and root debridement in pregnant women will reduce preterm births

  12. Clinical research is a search for causes and predictors • Comparison (control) is critical! • Comparisons are needed to distinguish real from bogus causes • Comparisons between groups are needed to escape the particular and possibly idiosyncratic • Statistical associations are how one summarizes comparisons among groups • Causal pathways imply statistical associations

  13. Clinical research is a search for causes and predictors • Clinical research is about finding and distinguishing causal (or consistently prognostic) associations from those due to • Reverse causation • Chance • Measurement bias • Selection bias • Confounding

  14. Reverse causation? Does the clinical variable proceed from the outcome, rather than cause it? • We observe: Prevalence of a disease is higher in those from whom a microbe is successfully cultured than in those from whom it is not • Is the microbe statistically associated with the disease • because the microbe causes the disease? • or because the disease makes the site more hospitable to colonization or infection by the microbe?

  15. Measurement (information) bias • Definition, Bias: • Systematic distortion of the estimated intervention effect away from the “truth,” caused by inadequacies in the design, conduct, or analysis of a trial • Definition, Measurement bias: • Systematic or non-uniform failure of a measurement process to accurately represent the measurement target • Examples …

  16. Measurement (information) bias • Examples, Measurement bias: • different approaches to questioning, when determining past exposures in a case-control study • more complete medical history and physical examination of subjects who have been exposed to an agent suspected of causing a disease than of those who have not been exposed to the agent

  17. Measurement bias? • NHANES III: “We estimate that at least 35% of the dentate US adults aged 30 to 90 have periodontitis” [1] • That estimate used a partial-mouth protocol: mesial and buccal surfaces, two randomly selected quadrants, CAL ≥ 3 mm • With full-mouth examination, the prevalence estimate is 65% [2] [1] JM Albandar, JA Brunelle, A Kingman (1999) "Destructive periodontal disease in adults 30 years of age and older in the United States, 1988-1994". Journal of Periodontology 70 (1): 13–29. [2] A Kingman & JM Albandar (2002) “Methodological aspects of epidemiological studies of periodontal diseases.” Periodontology 2000 29, 11–30.

  18. Recall bias • Is frequent sugar consumption related to childhood caries? Two analyses of the British National Diet and Nutrition Survey (1995) reached opposite conclusions: • Parental questionnaire: “overall, the benefits of frequent brushing of teeth did not outweigh the damaging effect of frequent sugar consumption” (Hinds & Gregory (1995) National diet and nutrition survey: Children ages 1.5 to 4.5 years. London: HMSO.) • Dietary record: “for children who brushed their teeth twice a day or more, consumption of sugars and sugary foods did not appear to be associated with caries” (Gibson & Williams (1999) Dental caries in pre-school children. Caries Research 33, 101–113.) • See Harris, et al. (2004) Risk factors for dental caries in young children: a systematic review. Community Dental Health, 21(S), 71–85

  19. Selection bias • Definition: Bias from the use of a non-representative group as the basis of generalization to a broader population • Can distort: disease frequency measures, exposure-disease associations • Example: Estimate prognosis from patients newly diagnosed and infer to patients hospitalized with the disease • Newly diagnosed patients have a much broader spectrum of outcomes “Selection bias” is also used to describe the systematic error that can occur when allocating patients to intervention groups.

  20. Confounding • Informal definition: distortion of the true biologic relation between an exposure and a disease outcome of interest • Usually due to a research design and analysis that fail to properly account for additional variables associated with both • Such variables are referred to as confounders or as lurking variables

  21. Confounding examples • Misidentified carcinogen • Prior to the discovery of HPV, HSV-2 was associated with cervical cancer • It is now well established that HPV is central to the pathogenesis of invasive cervical cancer. And HSV-2 appears to increase the risk Hawes & Kiviat (2002) Are Genital Infections and Inflammation Cofactors in the Pathogenesis of Invasive Cervical Cancer? JNCI 94(21): 1592-159

  22. Perio and CVD • Cigarette smoking is associated with adult perio and CVD • This produces an association between perio and CVD • Control for smoking to see the perio-CVD relationship clearly Scannapieco et al. (2003) Associations Between Periodontal Disease and Risk for Atherosclerosis, Cardiovascular Disease, and Stroke. A Systematic Review. Annals of Periodontology (8)38-53
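A toy simulation can make the smoking example concrete. This is not the Scannapieco data; every probability below is invented for illustration. Smoking raises the probability of both periodontitis and CVD, while periodontitis has no direct effect on CVD, yet a crude comparison still shows a perio-CVD "association" that stratifying on smoking removes:

```python
import random

rng = random.Random(1)  # fixed seed so the illustration is reproducible
n = 100_000

# Toy cohort (all rates invented): smoking raises the probability of BOTH
# periodontitis and CVD; perio has no direct effect on CVD in this model.
people = []
for _ in range(n):
    smoker = rng.random() < 0.3
    perio = rng.random() < (0.6 if smoker else 0.2)
    cvd = rng.random() < (0.4 if smoker else 0.1)
    people.append((smoker, perio, cvd))

def cvd_risk(rows):
    """Proportion with CVD among the given rows."""
    return sum(c for _, _, c in rows) / len(rows)

# Crude comparison: CVD risk appears clearly higher with perio than without...
crude_diff = (cvd_risk([r for r in people if r[1]])
              - cvd_risk([r for r in people if not r[1]]))

# ...but among smokers the perio/no-perio risks are essentially equal:
# the crude association came entirely from the confounder.
smoker_diff = (cvd_risk([r for r in people if r[0] and r[1]])
               - cvd_risk([r for r in people if r[0] and not r[1]]))
```

The same near-zero difference appears within the non-smoker stratum, which is exactly what "control for smoking to see the perio-CVD relationship clearly" means.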

  23. Multiplicity effects http://xkcd.com/882/

  24. Multiplicity effects

  25. Multiplicity effects

  26. Multiplicity effects

  27. Controlling multiplicity effects • Multiple outcomes: The proliferation of possible comparisons in a trial. Common sources of multiplicity are: • multiple outcome measures, assessment at several time points, subgroup analyses, or multiple intervention groups • Multiple comparisons: Performance of multiple analyses on the same data. Multiple statistical comparisons increase the probability of a type I error: “finding” an association when there is none

  28. Controlling multiplicity effects • Analysis of multiple outcome variables that reflect different disease manifestations, any of which might signal a spurious association due purely to chance • Analysis of the same variable at multiple time points after treatment initiation • Periodic analysis of accumulating partial results to see how a study is progressing, and whether there’s reason to stop collecting data earlier than initially planned • Post hoc subgroup comparisons are especially likely not to be confirmed in following studies

  29. Multiple outcome examples • Caries • visual exam • x-ray interpretation • Fiberoptic Transillumination (FOTI) • Electrical Caries Meter (ECM) • DiagnoDent (DD) • Periodontology • alveolar bone loss • clinical attachment level • pocket depth

  30. Multiple comparisons • If we control the chance of a false positive in evaluating a single outcome by a conventional statistical method that permits a 5% chance of claiming a statistical association when none exists, • then by the time we’ve compared treatments on 4 outcomes our chance of at least one false positive is MUCH HIGHER (about 19% if the outcomes are independent) • We can’t stop asking questions, so three strategies are used to control this • Require stronger evidence to claim an effect • Hierarchy of outcomes • Composite outcome
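The "MUCH HIGHER" claim is easy to check: with k independent tests each run at α = 0.05, the probability of at least one false positive is 1 − 0.95^k, which grows quickly with k:

```python
# Family-wise error rate for k independent tests, each at alpha = 0.05
alpha = 0.05
fwer = {k: 1 - (1 - alpha) ** k for k in (1, 2, 4, 10, 20)}

for k, p in fwer.items():
    print(f"{k:2d} comparisons: P(at least one false positive) = {p:.2f}")
# 4 comparisons already push the error rate to about 19%;
# 20 comparisons push it past 60%.
```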

  31. Require stronger evidence to claim an effect • Example: When asking 5 questions • Require stronger evidence for each question, so that the chance of claiming a statistical association when none exists is only 1% per question instead of 5% • This ensures that, overall, the chance of any false positive is no more than 5% • Various methods guide how much stronger the evidence should be required to be in different circumstances
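The 1%-per-question rule above is the simplest such method, a Bonferroni-style split of α across the 5 questions; a quick sketch confirms the overall error bound:

```python
# Bonferroni-style adjustment: test each of k questions at alpha / k
alpha, k = 0.05, 5
per_test_alpha = alpha / k  # 0.01, matching the slide's "1% instead of 5%"

# Overall chance of any false positive (independent tests) stays under alpha
fwer_bound = 1 - (1 - per_test_alpha) ** k  # about 0.049, i.e. <= 5%
```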

  32. Outline • Challenges in scientific inference • Defining the research question • Causation vs. association • Sources of error in clinical research • Multiple outcomes • Some solutions • Experimental design • Randomization • Masking/blinding • Numeric outcomes • Means, correlations, …

  33. Randomization: What it is • Randomization: assignment of treatments to patients (equivalently, patients to treatments) based on chance • Can take many different forms, all acceptable • The simplest is a coin flip for each patient • Randomization can be purposely unbalanced • Example: 2/3 of patients on experimental therapy
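A minimal sketch of both schemes described above (arm names, group size, and seed are all arbitrary choices for illustration):

```python
import random

rng = random.Random(2024)  # fixed seed so the example is reproducible

# Simplest scheme: an independent coin flip for each patient
coin_flip = ["experimental" if rng.random() < 0.5 else "control"
             for _ in range(12)]

# Purposely unbalanced allocation: 2/3 of patients to the experimental arm
two_to_one = ["experimental" if rng.random() < 2 / 3 else "control"
              for _ in range(12)]
```

Note that simple coin-flip randomization does not guarantee equal group sizes in any one trial; that is one motivation for the blocked and stratified variants alluded to by "can take many different forms".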

  34. Randomization: Why? • What it accomplishes • Virtually eliminates opportunities for intentional or inadvertent skewing of patient allocation to favor a treatment • Eliminates other selection biases of all sorts affecting treatment comparisons, period! • Eliminates reverse causation, period! • Tends to protect against confounding

  35. Randomize, But • What it can’t do • Randomization can’t assure that the actual treatment groups in a particular trial are comparable at baseline. • Randomization’s benefits vanish if it is performed before patients enter the trial, or when patients are not committed to its results

  36. Randomize, But • What it can’t do • Randomization gives no help whatsoever with regard to • measurement bias • placebo effect • Hawthorne effect • regression to the mean • So other tools are needed to cope with these, and possibly to deal with imbalances/inequities between randomized groups

  37. Blinding, AKA: Masking • Masking and Blinding refer to concealment of the randomized intervention received by a patient. Who may be blind: • Case/patients/participants • Interventionists, those treating participants • Those measuring outcomes: Clinicians and technicians who do not treat case/patients, but are involved in evaluating their outcomes • Investigators involved in decision-making about policies during the trial, and about statistical analyses to interpret the resulting data

  38. Double? Blind • “Blinding is intended to prevent bias on the part of study personnel. • The most common application is double-blinding, in which participants, caregivers, and outcome assessors are blinded to intervention assignment.” Altman, et al. (2001) The revised CONSORT statement for reporting randomized trials: Explanation and elaboration. Annals of Internal Medicine, 134(8), 663-694.

  39. Does the paper say blinding occurred? • Needs to be explicit • Which of the trial participants were masked, and how treatment was concealed • Understand what the blinding accomplished • If not blind, why not? • Clearly, some types of masking can be impractical, inconvenient, expensive, ineffective, and/or even unethical depending upon the specific treatment and setting • However, exactly when such constraints apply is highly controversial

  40. Outcome measures blinded to group membership • Directly and totally protects against “diagnostic suspicion bias,” a measurement bias when evaluation is skewed by treatment-influenced expectations • Protects against indirect consequences of such bias from communication of skewed intermediate evaluations back to the study investigators, treating clinicians, and patients during conduct of the trial while they can influence • further treatment decisions, and • whether the trial continues

  41. Sidney Harris (2011) www.sciencecartoonsplus.com

  42. Outline • Challenges in scientific inference • Some solutions • Numeric outcomes • Describing quantitative responses • Comparison of Means (t-test, ANOVA) • Power and sample size • Correlation (trend)

  43. Case-Control Study • Consider numeric data. For instance:

  44. Recall: Data classification

  45. Data classification

  46. “a case… study … of the risk associated with BP and ONJ” • Design a measurement system • Age was calculated from the subject’s birth date • What type of data is this? • Discrete / qualitative or Continuous / quantitative / numeric? • Nominal or Ordinal or Interval or Ratio?

  47. Quantitative Data • Continuous Type • Age, duration of disease, roughness, level, color change • Discrete Type (count data) • dmfs, dmft, # involved surfaces, # bleaching treatments

  48. Describe Quantitative Data We describe numeric data by: • Measures of Centrality • AKA: typical value, location • Mean, median • Measures of Spread • Standard deviation, range • Shape of distribution • Normal • Skewed

  49. Arithmetic Mean • “center of gravity” • DATA (weight in mg): 1, 1, 1, 2, 4, 6, 6 • n = Number of Observations (sample size) = 7 • Sum = 1+1+1+2+4+6+6 = 21 • Mean = Sum / n = 21 / 7 = 3.0 mg

  50. Median • If n is odd, median is center value of the data ranked from lowest to highest in numerical value; if n is even, average the center two observations. • For these data: 1, 1, 1, 2, 4, 6, 6 (weight in mg) median is 2 mg. • The median is little affected by extreme observations.
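Both summaries from the last two slides can be checked with Python's standard library, along with the claim that the median is little affected by extreme observations:

```python
import statistics

weights_mg = [1, 1, 1, 2, 4, 6, 6]         # the slide's example data

mean_wt = statistics.mean(weights_mg)       # Sum / n = 21 / 7 = 3.0
median_wt = statistics.median(weights_mg)   # center of the sorted values = 2

# Robustness check: replace the largest observation with an extreme value
extreme = [1, 1, 1, 2, 4, 6, 600]
mean_ext = statistics.mean(extreme)         # jumps to 615 / 7, about 87.9
median_ext = statistics.median(extreme)     # still 2
```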
