
Introduction to Biostatistics for Clinical and Translational Researchers


Presentation Transcript


  1. Introduction to Biostatistics for Clinical and Translational Researchers KUMC Departments of Biostatistics & Internal Medicine University of Kansas Cancer Center FRONTIERS: The Heartland Institute of Clinical and Translational Research

  2. Course Information • Jo A. Wick, PhD • Office Location: 5028 Robinson • Email: jwick@kumc.edu • Lectures are recorded and posted at http://biostatistics.kumc.edu under ‘Events & Lectures’

  3. Objectives • Understand the role of statistics in the scientific process and how it is a core component of evidence-based medicine • Understand features, strengths and limitations of descriptive, observational and experimental studies • Distinguish between association and causation • Understand roles of chance, bias and confounding in the evaluation of research

  4. Course Calendar • July 5: Introduction to Statistics: Core Concepts • July 12: Quality of Evidence: Considerations for Design of Experiments and Evaluation of Literature • July 19: Hypothesis Testing & Application of Concepts to Common Clinical Research Questions • July 26: (Cont.) Hypothesis Testing & Application of Concepts to Common Clinical Research Questions

  5. Why is there conflicting evidence? • Answer: There is no perfect research study. • Every study has limitations. • Every study has context. • Medicine (and research!) is an art as well as a science. • Unfortunately, the literature is full of poorly designed, poorly executed and improperly interpreted studies—it is up to you, the consumer, to critically evaluate its merit.

  6. Critical Evaluation: Validity and Relevance • Is the article from a peer-reviewed journal? • How does the location of the study reflect the larger context of the population? • Does the sample reflect the targeted population? • Is the study sponsored by an organization that may influence the study design or results? • Is the intervention feasible and available?

  7. Critical Evaluation: Intent • Therapy: testing the efficacy of drug treatments, surgical procedures, alternative methods of delivery, etc. (RCT) • Diagnosis: demonstrating whether a new diagnostic test is valid (Cross-sectional survey) • Screening: demonstrating the value of tests which can be applied to large populations and which pick up disease at a presymptomatic stage (Cross-sectional survey) • Prognosis: determining what is likely to happen to someone whose disease is picked up at an early stage (Longitudinal cohort)

  8. Critical Evaluation • Causation: determining whether a harmful agent is related to development of illness (Cohort or case-control)

  9. Critical Evaluation: Validity Based on Intent • What is the study design? Is it appropriate and optimal for the intent? • Are all participants who entered the trial accounted for in the conclusion? • What protections against bias were put into place? Blinding? Controls? Randomization? • If there were treatment groups, were the groups similar at the start of the trial? • Were the groups treated equally (aside from the actual intervention)?

  10. Critical Evaluation • If statistically significant, are the results clinically meaningful? • If negative, was the study adequately powered prior to execution? • Were there other factors not accounted for that could have affected the outcome? (Miser WF, Primary Care, 2006)

  11. Experimental Design • Statistical analysis, no matter how intricate, cannot rescue a poorly designed study. • No matter how efficient, statistical analysis cannot be done overnight. • A researcher should plan and state what they are going to do, do it, and then report those results. • Be transparent!

  12. Types of Samples • Random sample: each person has an equal chance of being selected. • Convenience sample: persons are selected because they are convenient or readily available. • Systematic sample: persons are selected based on a pattern. • Stratified sample: persons are selected from within subgroups.

  13. Random Sampling • For studies, it is optimal (but not always possible) for the sample providing the data to be representative of the population under study. • Simple random sampling (theoretically) provides a representative sample and protection against selection bias. • A sampling scheme in which every possible sub-sample of size n from the population is equally likely to be selected. • Assuming the sample is representative, the summary statistics (e.g., mean) should be ‘good’ estimates of the true quantities in the population. • The larger n is, the better the estimates will be.
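
A minimal sketch of simple random sampling in Python; the population values and sample size here are invented for illustration:

```python
import random

random.seed(42)  # for reproducibility

# Hypothetical population: systolic blood pressure (mmHg) for 10,000 people
population = [random.gauss(120, 15) for _ in range(10_000)]

# Simple random sample: every possible subset of size n is equally likely
n = 100
sample = random.sample(population, n)

sample_mean = sum(sample) / n
population_mean = sum(population) / len(population)
print(f"population mean: {population_mean:.1f}, sample estimate: {sample_mean:.1f}")
```

Re-running with a larger n shows the sample mean settling closer to the population mean, which is the sense in which larger samples give better estimates.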

  14. Random Samples • The Fundamental Rule of Using Data for Inference requires the use of random sampling or random assignment. • Random sampling or random assignment provides control over “nuisance” variables. • We can randomly select individuals so that the sample represents the population well. • Equal sampling of males and females • Equal sampling from a range of ages • Equal sampling from a range of BMI, weight, etc.

  15. Random Samples • Randomly assigning subjects to treatment levels ensures that the groups differ only by the treatment administered, balancing (on average) characteristics such as: • weights • ages • risk factors
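
As a sketch, random assignment to two arms can be as simple as shuffling the subject list and splitting it; real trials typically use blocked or stratified randomization, and the subject IDs here are hypothetical:

```python
import random

random.seed(7)

# e.g., the 40 adults in the diabetes trial on slide 28
subjects = [f"subj_{i:02d}" for i in range(1, 41)]

random.shuffle(subjects)  # every ordering is equally likely
treatment_arm, placebo_arm = subjects[:20], subjects[20:]

print("treatment:", treatment_arm)
print("placebo:  ", placebo_arm)
```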

  16. Nuisance Variation • Nuisance variation is any undesired sources of variation that affect the outcome. • Can systematically distort results in a particular direction—referred to as bias. • Can increase the variability of the outcome being measured—results in a less powerful test because of too much ‘noise’ in the data.

  17. Example: Albino Rats • It is hypothesized that exposing albino rats to microwave radiation will decrease their food consumption. • Intervention: exposure to radiation • Levels: exposure or non-exposure • Levels: 0, 20,000, 40,000, or 60,000 µW • Measurable outcome: amount of food consumed • Possible nuisance variables: sex, weight, temperature, previous feeding experiences

  18. Experimental Design • Types of data collected in a clinical trial: • Treatment – the patient’s assigned treatment and actual treatment received • Response – measures of the patient’s response to treatment including side-effects • Prognostic factors (covariates) – details of the patient’s initial condition and previous history upon entry into the trial

  19. Experimental Design • Three basic types of outcome data: • Qualitative – nominal or ordinal, e.g., success/failure, complete response (CR), partial response (PR), stable disease, progression of disease • Quantitative – interval or ratio, e.g., raw score, difference, ratio, % • Time to event – survival or disease-free time, etc.

  20. Experimental Design • Formulate statistical hypotheses that are germane to the scientific hypothesis. • Determine: • experimental conditions to be used (independent variable(s)) • measurements to be recorded • extraneous conditions to be controlled (nuisance variables)

  21. Experimental Design • Specify the number of subjects required and the population from which they will be sampled. • Power, Type I & II errors • Specify the procedure for assigning subjects to the experimental conditions. • Determine the statistical analysis that will be performed.
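
One common way to carry out the sample-size step is a power calculation for the planned test. This sketch uses statsmodels for a two-sample t-test; the effect size and error rates are illustrative choices, not values from the lecture:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for n per group given an assumed standardized effect size
# (Cohen's d), Type I error rate (alpha), and desired power.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # assumed moderate effect
                                   alpha=0.05,       # Type I error rate
                                   power=0.80)       # 1 - Type II error rate
print(f"required n per group: {n_per_group:.0f}")    # about 64
```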

  22. Experimental Design • Considerations: • Does the design permit the calculation of a valid estimate of treatment effect? • Does the data-collection procedure produce reliable results? • Does the design possess sufficient power to permit an adequate test of the hypotheses?

  23. Experimental Design • Considerations: • Does the design provide maximum efficiency within the constraints imposed by the experimental situation? • Does the experimental procedure conform to accepted practices and procedures used in the research area? • Facilitates comparison of findings with the results of other investigations

  24. Types of Studies • Purpose of research • To explore • To describe or classify • To establish relationships • To establish causality • Strategies for accomplishing these purposes, ordered from most ambiguity to most control: • Naturalistic observation • Case study • Survey • Quasi-experiment • Experiment

  25. Generating Evidence • [Figure: study designs arranged by increasing complexity and increasing confidence in the evidence generated]

  26. Observation versus Experiment • A designed experiment involves the investigator assigning (preferably randomly) some or all conditions to subjects. • An observational study includes conditions that are observed, not assigned.

  27. Example: Heart Study • Question: How does serum total cholesterol vary by age, gender, education, and use of blood pressure medication? Does smoking affect any of the associations? • Recruit n = 3000 subjects over two years • Take blood samples and have subjects answer a CVD risk factor survey • Outcome: Serum total cholesterol • Factors: BP meds (observed, not assigned) • Confounders?

  28. Example: Diabetes • Question: Will a new treatment help overweight people with diabetes lose weight? • N = 40 obese adults with Type II (non-insulin dependent) diabetes (20 female/20 male) • Randomized, double-blind, placebo-controlled study of treatment versus placebo • Outcome: Weight loss • Factor: Treatment versus placebo

  29. Cross-Sectional Studies • Designed to assess the association between an independent variable (exposure?) and a dependent variable (disease?) • Selection of study subjects is based on both their exposure and outcome status, thus there is no direction of inquiry

  30. Cross-Sectional Studies

  31. Cross-Sectional Studies • Cannot determine causal relationships between exposure and outcome • Cannot determine temporal relationship between exposure and outcome • The data support “Exposure is associated with Disease” but not “Exposure causes Disease” or “Disease follows Exposure”

  32. Analysis of Cross-Sectional Data • Prevalence of disease compared in exposed versus non-exposed groups: prevalence ratio = (prevalence of disease in exposed) / (prevalence of disease in non-exposed) • Prevalence of exposure compared in diseased versus non-diseased groups: (prevalence of exposure in diseased) / (prevalence of exposure in non-diseased)
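
A sketch of both comparisons from a generic 2×2 table; the cell counts are invented for illustration:

```python
#                Disease+   Disease-
# Exposed            a          b
# Unexposed          c          d
a, b, c, d = 40, 160, 20, 180  # made-up counts

# Prevalence of disease, exposed vs. non-exposed
prev_ratio = (a / (a + b)) / (c / (c + d))

# Prevalence of exposure, diseased vs. non-diseased
prev_exp_diseased = a / (a + c)
prev_exp_nondiseased = b / (b + d)

print(f"prevalence ratio (disease): {prev_ratio:.2f}")
print(f"exposure prevalence: {prev_exp_diseased:.2f} in diseased, "
      f"{prev_exp_nondiseased:.2f} in non-diseased")
```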

  33. Case-Control Studies • Designed to assess the association between disease and past exposures • Selection of study subjects is based on their disease status • Direction of inquiry is backward

  34. Case-Control Studies • [Figure: timeline with the direction of inquiry running backward from present disease status to past exposure]

  35. Analysis of Case-Control Data • Odds ratio = (odds of exposure in cases) / (odds of exposure in controls)
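
With a 2×2 layout (rows exposed/unexposed, columns cases/controls), the odds ratio reduces to the cross-product ad/bc. The sketch below cross-checks against scipy's Fisher exact test; the counts are invented:

```python
from scipy import stats

# a = exposed cases, b = exposed controls,
# c = unexposed cases, d = unexposed controls
a, b, c, d = 30, 70, 10, 90  # made-up counts

odds_ratio = (a * d) / (b * c)  # (a/c) / (b/d) simplifies to ad/bc

or_scipy, p_value = stats.fisher_exact([[a, b], [c, d]])
print(f"odds ratio: {odds_ratio:.2f} (scipy: {or_scipy:.2f}), p = {p_value:.4f}")
```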

  36. Cohort Studies • Designed to assess the association between exposures and disease occurrence • Selection of study subjects is based on their exposure status • Direction of inquiry is forward

  37. Cohort Studies • [Figure: timeline with the direction of inquiry running forward from exposure to disease occurrence]

  38. Cohort Studies • Limitations: • Attrition or loss to follow-up • Time and money! • Inefficient for very rare outcomes • Bias • Outcome ascertainment • Information bias • Non-response bias

  39. Analysis of Cohort Data • Relative risk = (risk of disease in exposed) / (risk of disease in unexposed)
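
A sketch of the relative risk with the standard large-sample (log-scale) 95% confidence interval; the counts are invented:

```python
import math

# Cohort 2x2: a = exposed with disease, b = exposed without,
#             c = unexposed with disease, d = unexposed without
a, b, c, d = 25, 75, 10, 90  # made-up counts

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
rr = risk_exposed / risk_unexposed

# Katz log method for the confidence interval
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
```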

  40. Randomized Controlled Trials • Designed to test the association between exposures and disease • Selection of study subjects is based on their assigned exposure status • Direction of inquiry is forward

  41. Randomized Controlled Trials • [Figure: timeline with the direction of inquiry running forward from randomized exposure assignment to disease outcome]

  42. Why do we randomize? • Suppose we wish to compare surgery for CAD to a drug used to treat CAD. We know that such major heart surgery is invasive and complex—some people die during surgery. We may assign the patients with less severe CAD (on purpose or not) to the surgery group. • If we see a difference in patient survival, is it due to surgery versus drugs or to less severe disease versus more severe disease? • Such a study would be inconclusive and a waste of time, money and patients.

  43. How could we fix it? • Randomize! • Randomization is critical because there is no way for a researcher to be aware of all possible confounders. • Observational studies have little to no formal control for any confounders—thus we cannot conclude cause and effect based on their results. • Randomization forms the basis of inference.

  44. Other Protections Against Bias • Blinding • Single (patient only), double (patient and evaluator), and triple (patient, evaluator, statistician) blinding is possible • Eliminates biases that can arise from knowledge of treatment • Control • Null (no treatment), placebo (no active treatment), active (current standard of care) controls are used • Eliminates biases that can arise from the natural progression of disease (null control) or simply from the act of being treated (placebo)

  45. Analysis of RCT Data • What kind of outcome do you have? • Continuous? Categorical? • How many samples (groups) do you have? • Are they related or independent?

  46. Types of Tests • Parametric methods: make assumptions about the distribution of the data (e.g., normally distributed) and are suited for sample sizes large enough to assess whether the distributional assumption is met • Nonparametric methods: make no assumptions about the distribution of the data and are suitable for small sample sizes or large samples where parametric assumptions are violated • Use ranks of the data values rather than actual data values themselves • Loss of power when parametric test is appropriate

  47. Analysis of RCT Data • Two independent percentages? Fisher’s Exact test, chi-square test, logistic regression • Two independent means? Mann-Whitney, two-sample t-test, analysis of variance, linear regression • Two independent time-to-event outcomes? Log-rank test, Wilcoxon test, Cox regression • Any adjustments for other prognostic factors can be accomplished with the appropriate regression model (e.g., logistic for yes/no outcomes, linear for continuous, Cox for time-to-event)
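
A sketch of the two-independent-means case run both ways (parametric t-test and nonparametric Mann-Whitney) with scipy; the outcome values are simulated stand-ins, loosely modeled on the weight-loss example of slide 28:

```python
import random
from scipy import stats

random.seed(1)
# Simulated weight loss (kg) in two arms of a hypothetical trial
treatment = [random.gauss(5.0, 2.0) for _ in range(20)]
placebo = [random.gauss(3.5, 2.0) for _ in range(20)]

t_stat, p_t = stats.ttest_ind(treatment, placebo)          # parametric
u_stat, p_u = stats.mannwhitneyu(treatment, placebo,
                                 alternative="two-sided")  # nonparametric
print(f"t-test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}")
```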

  48. Threats to Valid Inference • Statistical Conclusion Validity • Low statistical power - failing to reject a false hypothesis because of inadequate sample size, irrelevant sources of variation that are not controlled, or the use of inefficient test statistics. • Violated assumptions - test statistics have been derived conditioned on the truth of certain assumptions. If their tenability is questionable, incorrect inferences may result. • Many methods are based on approximations to a normal distribution or another probability distribution that becomes more accurate as sample size increases—using these methods for small sample sizes may produce unreliable results.
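
Low power is easy to see by simulation. This sketch estimates the rejection rate of a two-sample t-test when a true effect exists but the per-group n is small; all parameters are illustrative:

```python
import random
from scipy import stats

random.seed(2024)

def rejection_rate(n, delta=0.5, sims=2000, alpha=0.05):
    """Fraction of simulated trials in which a true effect of
    size `delta` SDs is detected at significance level `alpha`."""
    hits = 0
    for _ in range(sims):
        x = [random.gauss(0.0, 1.0) for _ in range(n)]
        y = [random.gauss(delta, 1.0) for _ in range(n)]
        if stats.ttest_ind(x, y).pvalue < alpha:
            hits += 1
    return hits / sims

print(f"estimated power, n=10 per group: {rejection_rate(10):.2f}")  # ~0.19
print(f"estimated power, n=64 per group: {rejection_rate(64):.2f}")  # ~0.80
```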

  49. Threats to Valid Inference • Statistical Conclusion Validity • Reliability of measures and treatment implementation. • Random variation in the experimental setting and/or subjects. • Inflation of variability may result in not rejecting a false hypothesis (loss of power).
