
Evidence-based Guidelines: Looking Back, and Looking Ahead

Presentation Transcript


  1. Evidence-based Guidelines: Looking Back, and Looking Ahead • David M Eddy MD PhD • December 10, 2012

  2. A change in topic • “Maximizing Guideline Effectiveness for Individual Patients”

  3. A change in topic • “Maximizing Guideline Effectiveness for Individual Patients” • See Eddy DM, Adler J, Patterson B, Lucas D, Smith KA, Morris M. Individualized guidelines: the potential for increasing quality and reducing costs. Ann Intern Med. 2011;154:627-34

  4. “Evidence-based Guidelines: Looking Back, and Looking Ahead” • Some observations from the last 40 years, • …and some recommendations for the next 20 • A personal perspective • The way I see it • Loaded with conflicts of interest

  5. Observations (early 1970s) • Medical decision making is fundamentally flawed • How were decisions being made? • Lousy evidence • Errors in reasoning • Gross oversimplifications • Wide variations in beliefs • We can’t trust “clinical judgment”, the “art of medicine”, expert opinion, or expert consensus • More formal methods are needed

  6. My first encounter with “lousy evidence”: Treatment of ocular hypertension • [Table: the reported chance of field defects or blindness, with all six comparisons labeled “Worse”]

  7. Wide variations in beliefs • Fifty-eight experts’ estimates of the chance of a spontaneous rupture of a silicone breast implant: 0%, 0.2%, 0.5%, 1%, 1%, 1%, 1.5%, 1.5%, 2%, 3%, 3%, 4%, 5%, 5%, 5%, 5%, 5%, 5%, 5%, 6%, 6%, 6%, 8%, 10%, 10%, 10%, 10%, 13%, 13%, 15%, 15%, 18%, 20%, 20%, 20%, 25%, 25%, 25%, 30%, 30%, 40%, 50%, 50%, 50%, 62%, 70%, 73%, 75%, 75%, 75%, 75%, 80%, 80%, 80%, 80%, 80%, 80%, 100%

  8. A proposed manifesto (1976-1980) • American Cancer Society recommendations for the cancer-related checkup • “In making these recommendations, the society has four main concerns: First, there must be good evidence that each test or procedure recommended is medically effective in reducing morbidity or mortality; second, the medical benefits must outweigh the risks; third, the cost of each test or procedure must be reasonable compared to its expected benefits; and finally, the recommended actions must be practical and feasible.” • Guidelines for the cancer-related checkup: recommendations and rationale. CA: A Cancer Journal for Clinicians. 1980;30:193-240.

  9. A progression of methods for guidelines (early 1980s) • [Diagram: Evidence → Analyze Evidence → Outcomes → Compare Benefits, Harms and Costs → Decision, with the evidence-analysis steps labeled Scientific Judgment and the comparison step labeled Preference Judgment]

  10. Global subjective judgment, or “consensus-based” • [The same diagram, indicating where consensus-based methods act in the progression]

  11. “Evidence-based” • [The same diagram, indicating where evidence-based methods act in the progression]

  12. “Outcomes-based” • [The same diagram, indicating where outcomes-based methods act in the progression]

  13. “Preference-based” • [The same diagram, indicating where preference-based methods act in the progression]

  14. Observations • Evidence-based guidelines are thriving • Carry on! • But evidence-based guidelines are not enough • They only ask whether there is evidence of effectiveness • They do not ask about the magnitude of effectiveness, or about harms or costs • They don’t enable comparison of benefits, harms and costs • They implicitly assume: • “If there is evidence of effectiveness, it should be done” • “All effective treatments are equally important” • E.g., all guidelines, all performance measures • “Increasing use of evidence-based treatments will lower costs”

  15. Observations • We need to move to outcomes-based guidelines • Estimate the magnitude of benefits and harms (and costs) • Use the information to make tradeoffs, and set priorities • We can’t accomplish outcomes-based guidelines with clinical trials alone • Too many questions • Trials require lots of money, time, and agreement to participate • Results of trials from one setting can’t (shouldn’t) be transferred to other settings • Results are rapidly outdated

  16. Trials from one setting can’t simply be transferred to other settings • [Figure]

  17. In fact, the rates of the outcomes vary widely in the different settings • The trials are statistically “homogeneous”: no significant differences across the different settings • [Figure]

  18. Rates of major coronary events in the different settings • [Figure]

  19. Effects of treatments in the different settings • [Figure]

  20. Observation • Never use a fixed-effects model in a meta-analysis • Unless it’s a multi-institutional trial using identical protocols • (A sketch contrasting fixed- and random-effects pooling follows the next slide)

  21. [Forest plot: the calculated effect of the treatment with its 95% confidence interval, plotted against the individual trial results] • The actual results vary widely, and 5 of the 10 trials are on or outside the 95% confidence limit • The Q statistic shouldn’t be used in meta-analysis; just look at the designs of the trials
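A minimal sketch of the point in slides 20 and 21, using hypothetical effect sizes (log relative risks) and standard errors from 10 trials: fixed-effects pooling assumes one true effect in every setting, so its confidence interval ignores between-trial variation; DerSimonian-Laird random-effects pooling estimates the between-trial variance tau² and widens the interval accordingly. The Q statistic is computed along the way only to show what it is; as the slide says, the designs of the trials are the better guide.

```python
import numpy as np

def fixed_effect(y, se):
    """Inverse-variance fixed-effects pooling: assumes one true effect."""
    w = 1.0 / se**2
    mu = np.sum(w * y) / np.sum(w)
    return mu, np.sqrt(1.0 / np.sum(w))

def dersimonian_laird(y, se):
    """Random-effects pooling: lets the true effect vary across settings."""
    w = 1.0 / se**2
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe)**2)             # Cochran's Q statistic
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / C)    # between-trial variance
    w_re = 1.0 / (se**2 + tau2)                # down-weight for heterogeneity
    mu = np.sum(w_re * y) / np.sum(w_re)
    return mu, np.sqrt(1.0 / np.sum(w_re)), Q, tau2

# Hypothetical log relative risks and standard errors from 10 trials
y  = np.array([-0.45, -0.10, -0.60,  0.05, -0.30, -0.75, -0.20,  0.10, -0.50, -0.05])
se = np.array([ 0.15,  0.12,  0.20,  0.18,  0.10,  0.25,  0.14,  0.16,  0.22,  0.11])

mu_f, se_f = fixed_effect(y, se)
mu_r, se_r, Q, tau2 = dersimonian_laird(y, se)
print(f"fixed effects:  {mu_f:+.3f} (95% CI +/- {1.96 * se_f:.3f})")
print(f"random effects: {mu_r:+.3f} (95% CI +/- {1.96 * se_r:.3f})  Q={Q:.1f}, tau2={tau2:.3f}")
```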

  22. Observations • Outcomes-based medicine requires mathematical models • Can’t depend on clinical judgment • Not feasible to do all the needed clinical trials • Have to use models (as in every other field) • Use existing evidence to build and validate models • Then use the models to answer “adjacent” questions • Similar populations, settings, interventions, outcomes,… • The mathematical methods, computing power, and software exist • Data are becoming increasingly available • It’s feasible

  23. Observations • Mathematical models have not yet proven themselves to be credible sources for decisions • Many models have structures and make assumptions that lack face validity • Few decision makers have the training needed to understand a model • Even those who do can’t run (test) a model in their heads • Different models give different answers to the same question • Very few models are validated against the real world • Validations themselves can be “dressed up” • Extremely difficult for decision makers (or anyone) to evaluate a model

  25. Different models give different results to the same question

  26. The effect of a treatment

  27. Sensitivity analysis doesn’t fix the problem

  28. Observations • We need to move beyond Markov models • The assumptions do not match clinical medicine • The patient is in one and only one state at any time • Patients jump from one state to another at intervals • The chance of jumping from one state to another does not depend on how long the patient has been in the state • Past history, duration of disease, don’t matter • Markov modelers develop ingenious methods to override the assumptions • A clue that we should be using a different model structure • Put the modeling talent into developing better structures
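To make slide 28’s criticism concrete, here is a minimal sketch of a standard Markov cohort model with hypothetical states and transition probabilities: the chance of leaving a state each cycle is a constant, so a patient who has been sick for ten years faces exactly the same probabilities as one who fell ill yesterday.

```python
import numpy as np

# Hypothetical 3-state annual-cycle cohort model: Well -> Sick -> Dead.
# Row i holds the probabilities of moving from state i to each state.
P = np.array([
    [0.90, 0.08, 0.02],   # Well
    [0.00, 0.75, 0.25],   # Sick: 25%/year mortality, regardless of duration
    [0.00, 0.00, 1.00],   # Dead (absorbing)
])

cohort = np.array([1.0, 0.0, 0.0])    # everyone starts Well
for year in range(1, 6):
    cohort = cohort @ P               # one Markov cycle
    print(f"year {year}: well={cohort[0]:.3f} sick={cohort[1]:.3f} dead={cohort[2]:.3f}")

# The memoryless assumption the slide objects to: these two probabilities
# are identical by construction, whatever the patient's history.
print("P(die next year | just became sick)  =", P[1, 2])
print("P(die next year | sick for 10 years) =", P[1, 2])
```

Duration dependence is usually bolted on with “tunnel states” that copy the Sick state once per year spent in it; the slide’s point is that needing such workarounds is a clue that the structure itself should change.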

  29. Observations • We need to improve healthcare models • Build models that more closely match physiology and clinical practice • Provide complete transparency: Make technical and non-technical reports accessible to all serious users • Help decision makers understand the descriptions • Validate, validate, validate. Show that the model can • Accurately calculate rates of important outcomes seen in the real world • Accurately calculate effects of interventions seen in the real world • Adjust for different settings (populations, care processes, behaviors) • Make working copies available for decision makers to explore
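A schematic sketch of the validation loop slide 29 calls for, with entirely hypothetical numbers: run the model on a population it was not fitted to, compare predicted event rates with the rates actually observed, and flag strata where the model drifts outside an acceptance band.

```python
# Observed vs model-predicted first-event rates (events per person-year),
# by age band. All numbers are hypothetical.
observed  = {"50-59": 0.004, "60-69": 0.011, "70-79": 0.026}
predicted = {"50-59": 0.005, "60-69": 0.010, "70-79": 0.031}

for band in observed:
    ratio = predicted[band] / observed[band]
    verdict = "ok" if 0.8 <= ratio <= 1.2 else "recalibrate"
    print(f"age {band}: obs={observed[band]:.4f} pred={predicted[band]:.4f} "
          f"pred/obs={ratio:.2f} -> {verdict}")
```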

  30. Example of validation of an outcome: First stroke, males, by age

  31. Example of outcome validation: Stroke in the placebo group of SHEP

  32. Observations: Costs • There has been almost no progress in explicit consideration of costs • The cost taboo • The “R” word • We need to attack the cost issue head on • Conduct lots of studies of “Rationing by patient choice” • JAMA 1991;265:105-108 • Confirm (or refute) that costs are important and that patients are willing to trade off cost and quality

  33. Observations: Cost/QALY • Stop using cost/QALY as the reference method for addressing costs • We can’t trust calculations of either QALYs (clinical outcomes) or costs • Cost/QALY calculations assume nothing changes in the future • E.g., no new science, technology, or evidence • Cost/QALY doesn’t correspond to anything important in real decisions • No one knows what the right level of cost/QALY is • Cost/QALY has good academic roots, but it is not helpful in reality

  34. Observations • Instead, calculate what decision makers really want to know • Clinical and economic outcomes they understand and care about • Time horizons pertinent to their decisions

  35. Observations: Guidelines • Replace current population-based guidelines with a single, integrated, individualized guideline (Annals, 2011) • Span all pertinent conditions • Use all important information about the patient • Preserve the continuous nature of biomarkers and risk factors; no sharp cut points • Calculate the risk of all important outcomes • Calculate the change in risk with all potential treatments • Present the information to the physician and patient • They decide • A patient-centered, preference-based guideline • Rank patients by expected benefit, and use the ranked list for outreach, incentives, and priority setting (see the sketch below)
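A minimal sketch of the individualized-guideline mechanics slide 35 describes, under stated assumptions: each patient’s baseline risk would come from a risk model (not shown), the treatment’s relative risk reduction is a hypothetical constant, and expected benefit is the resulting absolute risk reduction, which also yields the ranked outreach list.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    mi_risk_10yr: float   # baseline 10-year heart attack risk, from a risk model

RRR = 0.25                # hypothetical relative risk reduction of a treatment

def expected_benefit(p: Patient) -> float:
    """Absolute risk reduction = baseline risk x relative risk reduction.
    Continuous in the patient's risk: no sharp cut points."""
    return p.mi_risk_10yr * RRR

patients = [Patient("A", 0.22), Patient("B", 0.05), Patient("C", 0.14)]

# Rank patients by expected benefit; use the list for outreach and priorities.
for p in sorted(patients, key=expected_benefit, reverse=True):
    print(f"{p.name}: baseline {p.mi_risk_10yr:.0%} -> expected ARR {expected_benefit(p):.1%}")
```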

  36. Observations: Performance measures • Replace current population-based performance measures with a single, integrated, outcomes-based quality measure (Health Affairs, Nov 2012) • Based on the expected improvement in clinical outcomes, not on processes or treatment targets • Captures all pertinent conditions and outcomes • Includes everything a provider might do to improve the outcomes of interest • Leaves providers free to find the most efficient ways to improve outcomes • Can also include costs and calculate value

  37. A Global Outcomes Score • [Diagram: risk factors (cholesterol, smoking, blood pressure, bone mineral density) and treatments (Drug A, Drug B, combinations) feed into outcomes (heart attacks, strokes, lung cancer, hip fractures), which combine into a global outcome score]
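Reading the slide 37 diagram as arithmetic, a minimal sketch with hypothetical risk reductions and weights: the global outcomes score is a weighted sum of the expected improvements in the outcomes of interest, so any action that improves an outcome moves the score.

```python
# Hypothetical expected 10-year absolute risk reductions produced by a
# provider's actions for one patient, and illustrative outcome weights.
risk_reduction = {"heart attack": 0.040, "stroke": 0.020,
                  "lung cancer": 0.010, "hip fracture": 0.005}
weight = {"heart attack": 1.0, "stroke": 1.2,
          "lung cancer": 1.5, "hip fracture": 0.6}

# The score rewards expected outcome improvement, not processes performed.
score = sum(weight[o] * risk_reduction[o] for o in risk_reduction)
print(f"global outcomes score: {score:.4f}")
```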

  38. A swan song? • “Swan song” refers to an ancient Greek belief that the Mute Swan (Cygnus olor) is completely silent during its lifetime until the moment just before death, when it sings one beautiful song • In fact, Cygnus olor is “not mute but lacks bugling call, merely honking, grunting, and hissing on occasion”
