
Lecture 4 Study design and bias in screening and diagnostic tests



Presentation Transcript


  1. Lecture 4: Study design and bias in screening and diagnostic tests • Sources of bias: • spectrum effects/subgroup analyses • verification/work-up bias • information (review) bias • Critical assessment of studies: • e.g., STARD criteria

  2. Bias • What is it? • Bias in a measurement vs bias in the result of a study • Selection vs information bias • What does it mean in studies of screening and diagnostic tests? • Difference between bias and effect modification?

  3. Reducing bias • Studies of diagnostic tests give variable results • Biased studies generally overestimate sensitivity/specificity • STARD criteria proposed to improve the quality of these studies

  4. Spectrum effect: bias or effect modification? • Sensitivity and specificity are not innate characteristics of a test, but vary by study population • e.g., by age, sex, comorbidity • e.g., exercise stress testing: worse performance in women than in men • Study population should be representative of the population in which the test will be used
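To make the spectrum point concrete: sensitivity and specificity are simply proportions within the diseased and non-diseased groups, so they can be recomputed within any subgroup. Below is a minimal Python sketch with made-up data (column names and values are illustrative only, not from any study cited here):

```python
# Minimal sketch (hypothetical data): sensitivity and specificity computed
# overall and within subgroups to look for spectrum effects.
import pandas as pd

# Hypothetical stress-test data: test = 1 if positive, disease = 1 if CAD on angiography
df = pd.DataFrame({
    "sex":     ["F", "F", "F", "F", "M", "M", "M", "M"] * 25,
    "test":    [1, 0, 1, 0, 1, 1, 0, 0] * 25,
    "disease": [1, 0, 0, 1, 1, 0, 0, 1] * 25,
})

def se_sp(g):
    tp = ((g.test == 1) & (g.disease == 1)).sum()
    fn = ((g.test == 0) & (g.disease == 1)).sum()
    tn = ((g.test == 0) & (g.disease == 0)).sum()
    fp = ((g.test == 1) & (g.disease == 0)).sum()
    return pd.Series({"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp)})

print(se_sp(df))                       # overall performance
print(df.groupby("sex").apply(se_sp))  # performance within each subgroup
```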

  5. Design implications • Investigate test performance in sub-groups • Report characteristics of study population

  6. Verification/work-up bias • Results of test affect intensity of subsequent investigation • e.g., risky or expensive follow-up • Selection or information bias? • e.g., exercise stress test and angiography • effects? • solutions?

  7. Example of verification/work-up bias • V/Q (ventilation/perfusion) scanning to detect pulmonary embolism • Positive scan -> angiography • Studies with selective referral of patients: sensitivity = 58% • Study (PIOPED) with prospective investigation of all patients: sensitivity = 41%
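One way to see the direction of work-up bias is a small simulation: if the gold standard (angiography) is applied to nearly all test-positives but only a fraction of test-negatives, the diseased patients missed by the test tend to go unverified and the apparent sensitivity rises. The sketch below uses assumed parameters, not the PIOPED data:

```python
# Minimal simulation (assumed parameters, not the PIOPED data) of how
# verifying mainly test-positive patients inflates apparent sensitivity.
import numpy as np

rng = np.random.default_rng(0)
n, prevalence, true_se, true_sp = 100_000, 0.3, 0.41, 0.90

disease = rng.random(n) < prevalence
test_pos = np.where(disease, rng.random(n) < true_se, rng.random(n) > true_sp)

# Work-up bias: everyone with a positive scan gets angiography,
# but only 10% of test-negatives are referred for verification.
verified = test_pos | (rng.random(n) < 0.10)

apparent_se = (test_pos & disease & verified).sum() / (disease & verified).sum()
print(f"true sensitivity     = {true_se:.2f}")
print(f"apparent sensitivity = {apparent_se:.2f}")  # biased upward
```

With these assumed numbers the apparent sensitivity comes out near 0.87 even though the true value is 0.41, the same direction of distortion seen in the selective-referral studies above.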

  8. Information/review bias • Examples: • Diagnosis is not blind to test result • Diagnosis is made with access to other clinical information • Knowledge of results of follow-up used in interpretation of screening test • Effects? • Solutions? • (NB: raw test performance vs “real-world” situation)

  9. Other sources of bias • Indeterminate test results: • How do they affect results? • Solutions? • Context: • Interpretation varies with changes in disease prevalence • Criteria for positivity • Technical advances, operator experience
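The prevalence point can be illustrated with Bayes' theorem: for a fixed sensitivity and specificity (the 0.90/0.90 values below are assumed purely for illustration), the positive and negative predictive values shift substantially as disease prevalence changes.

```python
# Minimal sketch: predictive values from Bayes' theorem for fixed
# sensitivity/specificity, at several prevalences (values assumed).
def predictive_values(se, sp, prev):
    ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))
    npv = sp * (1 - prev) / (sp * (1 - prev) + (1 - se) * prev)
    return ppv, npv

for prev in (0.01, 0.10, 0.50):
    ppv, npv = predictive_values(se=0.90, sp=0.90, prev=prev)
    print(f"prevalence {prev:4.2f}: PPV = {ppv:.2f}, NPV = {npv:.2f}")
```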

  10. Optimal design • Cohort vs case-control? • Prospective cohort with blind evaluation • Case-control: • Sources of bias?

  11. Example for discussion • Seniors in the emergency department (ED): • High risk of functional decline, death, etc. • Needs usually not recognized at the ED visit • Objective: development and validation of a tool to identify “high-risk” seniors in the ED (who need more careful assessment and follow-up) • Methods?

  12. RESULTS: ISAR development • Adverse health outcome defined as any of the following during the 6 months after the ED visit: • >10% ADL decline • Death • Institutionalization

  13. Scale development • Selection of items that predicted all adverse health events • Multiple logistic regression - “best subsets” analysis • Review of candidate scales with clinicians to select clinically relevant scale
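A sketch of what a “best subsets” logistic-regression search might look like is shown below; the variable names and the use of statsmodels are assumptions for illustration, not the authors' actual code. Each candidate subset of items is fit against the composite outcome and ranked (here by AIC) before clinicians review the short list:

```python
# Sketch of a "best subsets" search for a predictive scale
# (hypothetical column names; statsmodels assumed available).
from itertools import combinations
import statsmodels.api as sm

def best_subsets(df, outcome, items, max_size=6, keep=10):
    results = []
    for k in range(1, max_size + 1):
        for subset in combinations(items, k):
            X = sm.add_constant(df[list(subset)])
            fit = sm.Logit(df[outcome], X).fit(disp=0)
            results.append((fit.aic, subset))
    return sorted(results)[:keep]   # best candidate scales for clinical review

# Usage (columns are illustrative):
# candidates = best_subsets(data, "adverse_outcome",
#                           ["prior_help", "more_help", "hospitalized",
#                            "vision", "memory", "polypharmacy"])
```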

  14. Identification of Seniors At Risk (ISAR) 1. Before the illness or injury that brought you to the Emergency, did you need someone to help you on a regular basis? (yes) 2. Since the illness or injury that brought you to the Emergency, have you needed more help than usual to take care of yourself? (yes) 3. Have you been hospitalized for one or more nights during the past 6 months (excluding a stay in the Emergency Department)? (yes) 4. In general, do you see well? (no) 5. In general, do you have serious problems with your memory? (yes) 6. Do you take more than three different medications every day? (yes) Scoring: 0–6 (positive response shown in parentheses)
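Since the scale is simply a count of high-risk answers, scoring can be written in a few lines. The sketch below is an illustration (the key names are made up); it just encodes that items 1, 2, 3, 5 and 6 score on “yes” and item 4 scores on “no”:

```python
# Minimal sketch of ISAR scoring: one point per positive response
# (items 1-3, 5, 6 score on "yes"; item 4 scores on "no").
def isar_score(answers):
    """answers: dict of six booleans keyed as below (True = 'yes')."""
    return sum([
        answers["needed_regular_help_before"],      # item 1 (yes)
        answers["more_help_since_illness"],         # item 2 (yes)
        answers["hospitalized_past_6_months"],      # item 3 (yes)
        not answers["sees_well"],                   # item 4 (no)
        answers["serious_memory_problems"],         # item 5 (yes)
        answers["more_than_three_medications"],     # item 6 (yes)
    ])

# Example: this patient scores 3 of a possible 6.
print(isar_score({
    "needed_regular_help_before": True,
    "more_help_since_illness": True,
    "hospitalized_past_6_months": False,
    "sees_well": True,
    "serious_memory_problems": False,
    "more_than_three_medications": True,
}))
```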

  15. Predictive validity of ISAR scale • AUC and 95% CI • Overall (n=1673): 0.71 (0.68 – 0.74) • Admitted to hospital (n=509): 0.66 (0.61 – 0.71) • Discharged (n=1159): 0.70 (0.66 – 0.74) • Similar results by informant (patient vs proxy) • Next steps?
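For reporting an AUC with a 95% CI as above, one common generic approach is a bootstrap over patients. The sketch below (scikit-learn and NumPy, with hypothetical inputs) shows that approach; it is not a reconstruction of the original analysis:

```python
# Sketch: AUC with a bootstrap 95% CI (inputs hypothetical).
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, score, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    y_true, score = np.asarray(y_true), np.asarray(score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))   # resample patients
        if y_true[idx].min() == y_true[idx].max():        # resample lacks both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], score[idx]))
    return roc_auc_score(y_true, score), np.percentile(aucs, [2.5, 97.5])
```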

  16. Second study • Multi-site randomized controlled trial of a 2-step intervention using ISAR + nurse assessment/referral • Study 2 population had a lower proportion of ISAR-positive patients than the study 1 population • implications for sensitivity, specificity, AUC, LR, DOR?
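As a starting point for that discussion question: the likelihood ratios and diagnostic odds ratio are derived from sensitivity and specificity alone, so a change in the proportion of ISAR-positive patients does not alter them mechanically the way it alters predictive values; whether they actually shift depends on spectrum differences between the two populations. A small sketch with assumed numbers:

```python
# Sketch: LR+ , LR- and DOR from sensitivity and specificity
# (0.80 / 0.60 are assumed values for illustration only).
def lr_dor(se, sp):
    lr_pos = se / (1 - sp)            # likelihood ratio of a positive result
    lr_neg = (1 - se) / sp            # likelihood ratio of a negative result
    dor = lr_pos / lr_neg             # diagnostic odds ratio
    return lr_pos, lr_neg, dor

print(lr_dor(se=0.80, sp=0.60))       # -> (2.0, 0.33..., 6.0)
```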

  17. Other predictive measures in elderly • Pra screening tool (widely used in US HMOs): • AUC values of 0.61 - 0.71 for prediction of hospital utilization or functional decline (Coleman, 1998) • Hospital Admission Risk Profile (HARP) • AUC of 0.65 for prediction of nursing home admission (Sager, 1996) • Comorbidity indices (diagnosis and medication-based measures from administrative data): • AUC values of 0.58-0.60 for emergency hospitalization (Schneeweiss, 2001)
