
Reporting of performance indicators: some statistical aspects


Presentation Transcript


  1. Reporting of performance indicators: some statistical aspects Utrecht, May 5 2008 Ewout Steyerberg, Hester Lingsma Dept of Public Health, Erasmus MC, Rotterdam

  2. Performance estimates unreliable with small samples

  3. Differences unreliable with small samples (100 centers, 20 patients each)

  4. Differences less unreliable with larger samples (100 centers, 200 patients each)
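  A minimal simulation sketch (not from the slides) of the point in slides 2-4: 100 centers share the same true event rate, so any visible spread in observed rates is pure chance, wide with 20 patients per center and much narrower with 200. The 20% rate and all names are illustrative assumptions; R is used because the slides refer to R later on.

    # Chance variation across 100 identical centers (illustrative only)
    set.seed(1)
    n_centers <- 100
    p_true    <- 0.20                                    # assumed common true rate
    rates_20  <- rbinom(n_centers,  20, p_true) /  20    # 20 patients per center
    rates_200 <- rbinom(n_centers, 200, p_true) / 200    # 200 patients per center
    range(rates_20)    # wide spread, purely by chance
    range(rates_200)   # much narrower spread around 0.20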

  5. Overview 1. Basic statistical concerns 2. Conceptual model for differences 3. Comparison of Dutch hospitals • Dealing with uncertainty • Fixed and random effect methods • Illustration: Decubitus ulcers prevalence 4. Some reflections on HSMR 5. Conclusions and discussion

  6. Differences by performance indicators

  7. Interpretation of differences in performance indicators • Assume all centres have equal quality; the evidence has to be strongly against this null hypothesis (p < 0.05, or < 0.001?) • Any differences may be attributable to • Different applications of insufficiently clear definitions • Case-mix: residual confounding • Chance (apparent differences always exceed true differences) • Quality of care • External comparisons are riskier than internal ones

  8. IGZ indicators • Hospital wide • High risk wards • Disease-specific • Other

  9. Analysis • Fixed effect methods • Forest plot • Funnel plot • Rank plot • Random effect methods • Rankability: tau2 / total variance • Estimation of chance-corrected differences • Expected ranks

  10. Formulas fixed effect • Forest plot if y=0 • Funnel plot: exact CI • Rank plot: bootstrap resampling, 1000 times
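  A sketch of how the funnel plot with exact confidence limits on slide 10 can be drawn: per-center event counts y and sample sizes n are assumed to be available, and exact binomial control limits are plotted around the pooled rate. Illustrative only, not the authors' code.

    # Funnel plot with exact binomial control limits around the pooled rate
    funnel_plot <- function(y, n, level = 0.95) {
      p0    <- sum(y) / sum(n)                            # pooled (null) rate
      alpha <- (1 - level) / 2
      ns    <- round(seq(min(n), max(n), length.out = 200))
      lower <- qbinom(alpha,     ns, p0) / ns             # exact binomial limits
      upper <- qbinom(1 - alpha, ns, p0) / ns
      plot(n, y / n, xlab = "Patients per center", ylab = "Observed proportion")
      abline(h = p0, lty = 2)
      lines(ns, lower)
      lines(ns, upper)
    }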

  11. Formulas random effects: θᵢ ~ N(μ, τ²) 1. Rankability: τ² / total variance; • in practice: τ² / (τ² + median(sᵢ²)) 2. Estimation of θᵢ: 1. Empirical Bayes 2. Random effects model in R: lmer in package ‘lme4’ 3. ERᵢ = 1 + ∑ Φ((θᵢ - θⱼ) / √(var(θᵢ) + var(θⱼ))), where the sum is over j ≠ i and Φ is the standard normal distribution function
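  A sketch of the random-effects quantities on slide 11 using lme4; in current versions the binomial random-intercept model is fitted with glmer() rather than lmer(). A data frame d with per-center events y (0 < y < n), totals n and a center identifier is assumed, and all names are illustrative.

    # Rankability and expected ranks from a binomial random-intercept model
    library(lme4)
    fit  <- glmer(cbind(y, n - y) ~ 1 + (1 | center), data = d, family = binomial)
    tau2 <- as.numeric(VarCorr(fit)$center)               # between-center variance
    s2   <- 1 / d$y + 1 / (d$n - d$y)                     # approx. within-center variance of the log odds
    rankability <- tau2 / (tau2 + median(s2))

    re    <- ranef(fit, condVar = TRUE)$center
    theta <- re[, 1]                                      # empirical Bayes center effects
    v     <- attr(re, "postVar")[1, 1, ]                  # their conditional variances
    ER <- sapply(seq_along(theta), function(i)            # expected ranks, ER_i
      1 + sum(pnorm((theta[i] - theta[-i]) / sqrt(v[i] + v[-i]))))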

  12. IGZ decubitus prevalence • Data 2005 • www.ziekenhuizentransparant.nl

  13. Forest plot

  14. Funnel plot

  15. Rank plot; rankability 31% (τ² / total variance = 0.26 / 0.84)

  16. Random effects estimates

  17. Expected Ranks

  18. Conclusions on reporting • Both fixed and random effects methods can clarify the role of chance in judging reported performance • Forest, funnel and rank plots show more extreme differences than random effect estimates or expected ranks • For some IGZ performance indicators differences are barely larger than expected by chance • Overinterpretation is a serious risk of current public reporting • The desired transparency in health care is at odds with the realism of science

  19. HSMR: the solution? • Definition: clear, but clinical policies may have some distorting effect • Case-mix: extensive efforts, but always a risk of residual confounding • Chance: relatively large sample size, but the high aggregation level implies we may miss detail

  20. Short reflections on HSMR questions • C statistic: irrelevant for indicating the quality of case-mix correction • 80% - 50%: as long as it is clear what we include and compare • Preventable death: preferably RCT evaluation • High vs low ranks: current Dutch league tables are very unreliable • Target level of HSMR: the lower the better; warning signals • HSMR face validity: problem? Good scientists explain things simply • Modify behaviour: preferably RCT evaluation • Coding and completeness: audits; simulations? • Quality of care: only one explanation for differences • Masking of poor wards by good wards within a hospital: yes; this is the price for more stability from higher numbers

  21. HSMR and Evidence-Based Medicine • EBM mainly relies on randomised controlled trials, often summarized in meta-analyses • Performance indicator comparisons have many problems, such as definition and registration issues, case-mix correction, and chance • Proposal: an RCT for the introduction of HSMR • Intervention: HSMR reports + quality improvement actions • Control: no HSMR reports (or blinding to HSMR for a more powerful comparison)

  22. Stroke

  23. Data: Netherlands Stroke Survey • Observational study aimed at assessing quality of care • 579 patients with acute ischemic stroke • 10 hospitals in the Netherlands

  24. Results: Netherlands Stroke Survey • N (poor outcome) = 268 (53%) • [Figure: proportion of poor outcomes per hospital]

  25. Variation between centers

                                     Fixed effect                   Random effect
      Unadjusted                     χ² = 48, 9 df, p < 0.0001      τ² = 0.38, χ² = 24, 1 df, p < 0.0001
      Age adjusted                   χ² = 40, 9 df, p < 0.0001      τ² = 0.31, χ² = 16, 1 df, p < 0.0001
      Age + sex + type of infarction χ² = 37, 9 df, p < 0.0001      τ² = 0.28, χ² = 14, 1 df, p = 0.0001
      12 confounders                 χ² = 24, 9 df, p = 0.0042      τ² = 0.18, χ² = 4, 1 df, p = 0.0275
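  A sketch of how the age-adjusted row of the table above could be obtained for the stroke data; a data frame stroke with a binary poor-outcome indicator, an age variable and a hospital identifier is assumed, and all names are illustrative.

    # Age-adjusted fixed- and random-effect tests for between-hospital variation
    library(lme4)

    # Fixed effect: likelihood-ratio chi-square for hospital (9 df with 10 hospitals)
    m0 <- glm(poor ~ age, family = binomial, data = stroke)
    m1 <- glm(poor ~ age + factor(hospital), family = binomial, data = stroke)
    anova(m0, m1, test = "Chisq")

    # Random effect: tau^2 for hospital and a 1-df likelihood-ratio test
    r1  <- glmer(poor ~ age + (1 | hospital), family = binomial, data = stroke)
    VarCorr(r1)                                           # between-hospital variance tau^2
    lrt <- as.numeric(2 * (logLik(r1) - logLik(m0)))      # compare to chi-square with 1 df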

  26. Center effect estimates

  27. Ranking; rankability = 0.55
