
EBCP


Presentation Transcript


  1. EBCP (Evidence-Based Clinical Practice)

  2. Random vs Systematic error • Random error: errors in measurement that lead to measured values being inconsistent when repeated measures are taken, i.e. low precision • Systematic error: predictable error that occurs in the same direction every time, e.g. forgetting to zero a scale, i.e. low accuracy

  3. Bias: systematic error due to flawed methodology

  4. Type 1 vs Type 2 error • Type 1 error: false positive (+ve), generally due to bias • Type 2 error: false negative (-ve), due to insufficient statistical power (i.e. the confidence interval is too wide because the sample size is too small) or to bias

  5. Confidence intervals • Statistical significance: the confidence interval excludes the null value (e.g. RR = 1 or ARR = 0) • Clinical significance: the effect is large enough to matter to patients, whatever the p-value
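
As a rough illustration of how a confidence interval links sample size to statistical significance, here is a short Python sketch. The numbers and the simple Wald-interval approach are assumptions for the example, not from the slides.

```python
import math

def arr_ci(events_ctrl, n_ctrl, events_tx, n_tx, z=1.96):
    """Wald 95% CI for an absolute risk reduction (illustrative only)."""
    x = events_ctrl / n_ctrl  # risk in the control group
    y = events_tx / n_tx      # risk in the treatment group
    arr = x - y
    se = math.sqrt(x * (1 - x) / n_ctrl + y * (1 - y) / n_tx)
    return arr, (arr - z * se, arr + z * se)

# Identical 10% vs 5% risks: the small trial's CI crosses 0 (not statistically
# significant), the large trial's does not, even though the ARR is the same.
print(arr_ci(10, 100, 5, 100))      # ARR 0.05, CI roughly (-0.02, 0.12)
print(arr_ci(100, 1000, 50, 1000))  # ARR 0.05, CI roughly (0.03, 0.07)
```

Whether a 5-percentage-point reduction is clinically significant is a separate judgement from whether the interval excludes zero.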

  6. Causation • Exposure must precede outcome • Dose-dependent gradient • Dechallenge-rechallenge: take away the exposure and the outcome decreases/disappears, then reappears when the exposure is reintroduced • Also: is the association consistent with other studies and does it make biological sense?

  7. Measuring Outcomes • Relative risk (RR): the probability of an event in the active treatment group (Y) divided by the probability of an event in the control group (X): RR = Y/X. A relative risk of 1 is the null value, i.e. no difference. • Absolute risk reduction (ARR): the risk in the control group minus that in the intervention group: ARR = X - Y • Relative risk reduction (RRR) = 1 - RR
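
A minimal Python sketch of these definitions, using made-up event counts (X = control-group risk, Y = treatment-group risk, as on the slide):

```python
def risk_measures(events_ctrl, n_ctrl, events_tx, n_tx):
    """RR, ARR and RRR from raw event counts (hypothetical numbers below)."""
    x = events_ctrl / n_ctrl  # X: risk of the event in the control group
    y = events_tx / n_tx      # Y: risk of the event in the treatment group
    rr = y / x                # RR = Y/X; 1 means no difference
    arr = x - y               # ARR = X - Y
    rrr = 1 - rr              # RRR = 1 - RR
    return rr, arr, rrr

# e.g. 20/100 events in the control group vs 10/100 on treatment
print(risk_measures(20, 100, 10, 100))  # RR 0.5, ARR 0.10, RRR 0.5
```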

  8. Measuring Outcomes • Odds ratios: used for case-control studies, where the risk of developing the disease has no meaning because participants are selected on whether they already have it or not. • Odds ratio = odds of exposure in the cases / odds of exposure in the controls: OR = (a/c) ÷ (b/d), where a = exposed cases, c = unexposed cases, b = exposed controls, d = unexposed controls
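
Continuing the same sketch, the odds ratio from a 2x2 case-control table (the counts are invented):

```python
def odds_ratio(a, b, c, d):
    """OR = (a/c) / (b/d) = (a*d) / (b*c), with
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

# hypothetical counts: 30/100 cases exposed vs 15/100 controls exposed
print(odds_ratio(30, 15, 70, 85))  # about 2.43
```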

  9. Measuring Outcomes • Number needed to treat (NNT): the number of patients you need to treat to prevent one additional bad outcome. The NNT is the reciprocal of the absolute risk reduction (NNT = 1/ARR). • Number needed to harm (NNH): the number of people who need to be subjected to the exposure for one additional person to develop the harmful outcome (NNH = 1/absolute risk increase in a study measuring harm)
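
And the reciprocal relationship for NNT/NNH, again with assumed numbers; rounding up to a whole patient is a common convention rather than something stated on the slide:

```python
import math

def nnt(absolute_risk_difference):
    """NNT (or NNH) is the reciprocal of the absolute risk difference."""
    return math.ceil(1 / absolute_risk_difference)

print(nnt(0.10))  # ARR of 10 percentage points -> treat 10 to prevent 1 event
print(nnt(0.04))  # a 4-point absolute risk increase -> NNH of 25
```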

  10. Diagnostic Tests • A SeNsitive test with a Negative result helps rule OUT a diagnosis: SNOUT • A SPecific test with a Positive result helps rule IN the diagnosis: SPIN • Sensitivity: the probability that a person with the disease tests positive (true-positive rate) • Specificity: the probability that a person without the disease tests negative (true-negative rate)
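
A small sketch of sensitivity and specificity from a 2x2 table of test result vs disease status; the counts are made up:

```python
def sens_spec(tp, fp, fn, tn):
    """Sensitivity and specificity from true/false positives and negatives."""
    sensitivity = tp / (tp + fn)  # positive in those WITH the disease
    specificity = tn / (tn + fp)  # negative in those WITHOUT the disease
    return sensitivity, specificity

print(sens_spec(tp=90, fp=20, fn=10, tn=80))  # (0.9, 0.8)
```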

  11. Diagnostic Tests • Pre-test probability: the chance your patient has the diagnosis before the test, essentially the prevalence of the disease among similar patients presenting with the same symptoms. • Likelihood ratios (LR+ / LR-): how much a positive or negative result modifies the probability of the disease. • A ratio of 1 does not change the probability • Ratios greater than 1 increase the probability • Ratios less than 1 decrease the probability • LR+ = sensitivity / (1 - specificity), i.e. true-positive rate / false-positive rate • LR- = (1 - sensitivity) / specificity, i.e. false-negative rate / true-negative rate
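
The likelihood-ratio formulas from the slide, applied to the hypothetical 90%-sensitive, 80%-specific test above:

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

print(likelihood_ratios(0.9, 0.8))  # LR+ = 4.5, LR- = 0.125
```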

  12. Nomograms • A Fagan nomogram converts a pre-test probability and a likelihood ratio into a post-test probability: draw a straight line from the pre-test probability through the LR and read off the post-test probability.
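
The slide does not spell out the arithmetic, but the calculation a Fagan nomogram performs graphically is pre-test odds multiplied by the likelihood ratio, giving post-test odds; a sketch with assumed numbers:

```python
def post_test_probability(pre_test_prob, lr):
    """Convert pre-test probability to post-test probability via odds."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# pre-test probability 30%, using the LRs from the previous sketch
print(post_test_probability(0.30, 4.5))    # positive test -> about 0.66
print(post_test_probability(0.30, 0.125))  # negative test -> about 0.05
```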

  13. Prognostic Studies • Usually done via observational studies such as case-control or, more commonly, cohort studies. • The cohort should all be at a similar point in the course of the disease. • Results can be shown as an "x"-year survival rate or a survival curve.
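
Survival curves of this kind are usually Kaplan-Meier estimates; the slide does not name the method, so this is only an illustrative sketch with toy follow-up data:

```python
def kaplan_meier(times, events):
    """Minimal Kaplan-Meier estimator.
    times: follow-up times; events: 1 = outcome occurred, 0 = censored.
    Returns (time, survival probability) steps."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)   # events at time t
        n_at_t = sum(1 for tt, _ in data if tt == t)   # all leaving risk set at t
        surv *= (at_risk - deaths) / at_risk
        curve.append((t, surv))
        at_risk -= n_at_t
        i += n_at_t
    return curve

# toy cohort: follow-up in months, 1 = event, 0 = censored
print(kaplan_meier([2, 3, 3, 5, 8, 8, 12], [1, 1, 0, 1, 0, 1, 0]))
```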

  14. Systematic Reviews • Sometimes the method of selecting articles for the systematic review is biased (e.g. publication bias). If the selection process is unbiased, the funnel plot should look like a symmetrical inverted funnel.

  15. Systematic Reviews • Forest plots: Combine the results of the studies into one graph.

  16. Systematic Reviews • Forest plots and heterogeneity: assess whether any of the studies differ significantly from the others. If heterogeneity is too high, the results of the studies are too different to pool together in a meta-analysis.
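
Heterogeneity is commonly summarised with Cochran's Q and the I² statistic; the slide does not show the formulas, so this is a sketch with invented per-study effects (e.g. log relative risks) and standard errors, pooled with inverse-variance fixed-effect weights:

```python
def heterogeneity(effects, std_errors):
    """Fixed-effect pooled estimate, Cochran's Q and I^2 (in %)."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, q, i_squared

# three made-up studies reporting log relative risks
print(heterogeneity([-0.22, -0.35, -0.10], [0.10, 0.15, 0.12]))
```

High I² (conventionally above roughly 50-75%) suggests the studies may be too different to pool.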
