
Rewarding Provider Performance: Key Concepts, Available Evidence, Special Situations, and Future Directions. R. Adams Dudley, MD, MBA Institute for Health Policy Studies University of California, San Francisco


Presentation Transcript


  1. Rewarding Provider Performance: Key Concepts, Available Evidence, Special Situations, and Future Directions R. Adams Dudley, MD, MBA Institute for Health Policy Studies University of California, San Francisco Support: Agency for Healthcare Research and Quality, California HealthCare Foundation, Robert Wood Johnson Foundation

  2. Outline of Talk • Review of obstacles to using incentives (using the example of public reporting) • Summary of available data • Addressing the tough decisions • If we have time, consider the value of outcomes reports Dudley 2005

  3. Project Overview • Goals: • Describe employer hospital report cards • Explore what issues determine success • Qualitative study • 11 communities • 37 semi-structured interviews with hospital and employer coalition representatives • Coding and analysis using NVivo software • See Mehrotra, A, et al. Health Affairs 22(2):60. Dudley 2005

  4. 11 Communities Seattle Maine S Central Wisconsin Buffalo Detroit Cleveland Indianapolis E Tennessee Memphis N Alabama Orlando Dudley 2005

  5. Summary of Report Cards • Only 3 report cards were begun before 1998 • The majority use mortality and LOS outcomes; patient surveys are also common • The majority use billing data • In 4 of 11 communities there was no public release Dudley 2005

  6. 4 Issues Determining Success • Ambiguity of goals • Uncertainty on how to measure quality • Lack of consensus on how to use data • Relationships between local stakeholders Dudley 2005

  7. Ambiguity of Goals: Hospitals Skeptical of Employer Goals • Hospitals don’t trust employers and suspect their primary interest is still cost: “An organization that has been a negotiator of cost, first and foremost, that then declares it’s now focused on quality, is a hard sell.” “Ultimately, you’re going to contract with me or not contract with me on the basis of cost. Wholly. End of story.” Dudley 2005

  8. Uncertainty on How to Measure Quality: The Process vs. Outcome Debate • Clinicians: Process measures more useful • “We should have moved from outcomes to process measures. Process measures are much more useful to hospitals who want to improve.” • Employers: Outcomes better, process measures unnecessary • “People want longer lasting batteries. Duracell doesn’t stand there with their hands on their hips and say, ‘Tell us how to make longer-lasting batteries.’ That’s the job of Duracell.” Dudley 2005

  9. Uncertainty on How to Measure Quality: The Case-mix Adjustment Controversy • Clinicians: Forever skeptical that case-mix adjustment is good enough: “[The case-mix adjustment] still explained less than 30 percent of the differences that we saw…” • Employers: We cannot wait for perfect case-mix adjustment: “My usual answer to that is ‘OK, let’s make you guys in charge of perfect, I’m in charge of progress. We have to move on with what we have today. When you find perfect, come back, and we’ll change immediately.’ ” Dudley 2005
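
Because the controversy above turns on how case-mix adjustment actually works, a minimal sketch may help. It assumes the common indirect-standardization (observed/expected) approach; the risk model, field names, and numbers are illustrative assumptions, not the method behind the quotes.

```python
# A minimal sketch of case-mix adjustment via indirect standardization
# (observed deaths / expected deaths). The risk model and numbers are
# hypothetical, not the adjustment the interviewees were debating.

def risk_adjusted_rate(patients, overall_rate):
    """O/E-adjusted mortality: (observed / expected deaths) * overall norm."""
    observed = sum(p["died"] for p in patients)
    expected = sum(p["predicted_risk"] for p in patients)  # from a risk model
    return (observed / expected) * overall_rate

# A hypothetical hospital treating sicker-than-average patients:
patients = [
    {"died": 1, "predicted_risk": 0.30},
    {"died": 0, "predicted_risk": 0.25},
    {"died": 0, "predicted_risk": 0.25},
    {"died": 0, "predicted_risk": 0.20},
]
# Raw mortality is 1/4 = 25%, but the model also expected 1.0 death, so the
# adjusted rate comes back to the 10% norm: case mix, not poor care,
# explains the high raw rate.
print(risk_adjusted_rate(patients, overall_rate=0.10))  # ≈ 0.10
```

The clinicians’ complaint is that the risk model feeding the “expected” term rarely explains most of the observed variation, so the adjusted rate inherits that uncertainty.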

  10. Lack of Consensus on How to Use Quality Data: Is a Low Level of Public Interest a Positive Trend? • Low levels of consumer interest, at least initially • One interviewee felt slow growth is better: “Food labeling is the right metaphor. You want some model which gets to one and a half to three percent of the people to pay attention. This gives hospitals time to fix their problems without horrible penalties.… But if they ignore it for five years all of a sudden you’re looking at a three or four percent market share shift.” Dudley 2005

  11. Relationships Between Local Stakeholders: Market Factors • “Market power does not remain constant. Sometimes purchasers are in the ascendancy and at other times, providers are in the ascendancy, like when hospitals consolidate. And that can vary from community to community at a point in time, too.” Dudley 2005

  12. Key Elements of an Incentive Program • Measures acceptable to both clinicians and the stakeholders creating the incentives • Data available in a timely manner at reasonable cost • Reliable methods to collect and analyze the data • Incentives that matter to providers Dudley 2005

  13. CHART: California Hospital Assessment and Reporting Task Force A collaboration between California hospitals, clinicians, patients, health plans, and purchasers Supported by the California HealthCare Foundation

  14. Participants in CHART • All the stakeholders: • Hospitals: e.g., HASC, hospital systems, individual hospitals • Physicians: e.g., California Medical Association • Consumers/Labor: e.g., Consumers Union/California Labor Federation • Employers: e.g., PBGH, CalPERS • Health Plans: e.g., Blue Shield, Wellpoint, Kaiser • Regulators: e.g., JCAHO, OSHPD, NQF • Government Programs: CMS, Medi-Cal Dudley 2005

  15. How CHART Might Play Out Dudley 2005

  16. CHART Measures • For public reporting in 2005-6: • JCAHO core measures for MI, CHF, pneumonia, surgical infection from chart abstraction • Maternity measures from administrative data • Leapfrog data • Mortality rates for MI, pneumonia, and CABG Dudley 2005

  17. CHART Measures • For piloting in 2005-6: • ICU processes (e.g., stress ulcer prophylaxis), mortality, and LOS by chart abstraction • ICU nosocomial infection rates by infection control personnel • Decubitus ulcer rates and falls by annual survey Dudley 2005

  18. Tough Decisions: General Ideas and Our Experience in CHART • Offered not because we’ve done it correctly in CHART, but simply as a basis for discussion Dudley 2005

  19. Tough Decision #1: Collaboration vs. Competition? • Among health plans • Among providers • With legislators and regulators Dudley 2005

  20. Tough Decision #2: Same Incentives for Everyone? • Does it make sense to set up incentive programs that are the same for every provider? • This would be the norm in other industries if providers were your employees, but unusual if you were contracting with suppliers. Dudley 2005

  21. Tough Decision #2: Same Incentives for Everyone? • But providers differ in important ways • Baseline performance/potential to become top provider • Preferred rewards (more patients vs. more $) • Monopolies and safety net providers • But do you want the complexity? Dudley 2005

  22. Tough Decision #3: Encourage Investment? • Much of the difficulty we face in starting public reporting or P4P comes from the lack of flexible IT that can cheaply generate performance data. • Similarly, much QI is best achieved by creating new team approaches to care. • Should we explicitly pay for these changes, or make the value of these investments an implicit factor in our incentive programs? • Can be achieved by pay-for-participation, for instance. Dudley 2005

  23. Tough Decision #4: Moving Beyond HEDIS/JCAHO • No other measure sets are routinely collected and audited as part of the current cost of doing business • If you want public reporting or P4P on new measures, you must balance data collection and auditing costs against the information gained • Administrative data involves lower data collection costs but equal or higher auditing costs • Chart abstraction has much higher data collection costs but equal or lower auditing costs Dudley 2005

  24. Tough Decision #4: Moving Beyond HEDIS/JCAHO • If purchasers/policymakers drive the introduction of new quality measurement costs, who pays, and how? • So, who picks the measures? Dudley 2005

  25. Tough Decision #5: Use Only National Measures, or Local? • Well, this is easy: national, right? • Hmmm. Have you ever tried this? Is there any “there” there? Are there agreed-upon, non-proprietary data definitions and benchmarks? Even with the National Quality Forum? • Maybe local initiatives should be leading national ones? Dudley 2005

  26. An Example of Collaboration: C-Section Rates in CHART • Initial measure: total C-section rate (NQF) • Collaborate/advocate within CHART: • Some OB-GYNs convinced the group to develop an alternative: the C-section rate among nulliparous women with singleton, vertex, term (NSVT) presentations • Collaborate with hospitals: • NSVT status is not traditionally coded: need to train Medical Records personnel Dudley 2005
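
To make the NSVT denominator restriction concrete, here is a minimal sketch of how such a measure might be computed from discharge records. The record fields (parity, plurality, presentation, gestational age) are hypothetical illustrations, not CHART’s actual coding scheme, which, as the slide notes, had to be built by training Medical Records personnel.

```python
# A minimal sketch of the NSVT-restricted C-section rate described above.
# The field names are hypothetical, not CHART's actual coding scheme.

def nsvt_cesarean_rate(records):
    """C-section rate among nulliparous, singleton, vertex, term deliveries."""
    eligible = [
        r for r in records
        if r["parity"] == 0                     # nulliparous
        and r["plurality"] == 1                 # singleton
        and r["presentation"] == "vertex"       # vertex
        and r["gestational_age_weeks"] >= 37    # term
    ]
    if not eligible:
        return None  # no qualifying deliveries
    cesareans = sum(1 for r in eligible if r["delivery"] == "cesarean")
    return cesareans / len(eligible)

records = [
    {"parity": 0, "plurality": 1, "presentation": "vertex",
     "gestational_age_weeks": 39, "delivery": "cesarean"},
    {"parity": 0, "plurality": 1, "presentation": "vertex",
     "gestational_age_weeks": 40, "delivery": "vaginal"},
    {"parity": 2, "plurality": 1, "presentation": "breech",
     "gestational_age_weeks": 38, "delivery": "cesarean"},  # excluded
]
print(nsvt_cesarean_rate(records))  # 0.5: 1 C-section among 2 eligible
```

Restricting the denominator this way strips out clinically indicated C-sections (repeat C-sections, breech presentations, multiple gestations), which is why the OB-GYNs preferred it to the total rate.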

  27. Tough Decision #6: Use Outcomes Data? • Especially important issue as sample sizes get small… that is, when you try to move from groups to individual providers in “second generation” incentive programs. • If we can’t fix the sample size issue, we’ll be forced to use general measures only (e.g., patient experience measures). Dudley 2005
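
The sample-size problem can be made concrete with a back-of-the-envelope sketch (the numbers are assumptions, not from the talk): the confidence interval around an observed mortality rate widens roughly as 1/sqrt(n), so at individual-provider volumes the measure is mostly noise.

```python
# A back-of-the-envelope sketch (assumed numbers, not the talk's data) of
# why outcome measures degrade at small sample sizes: the 95% confidence
# interval around an observed mortality rate widens roughly as 1/sqrt(n).
import math

def mortality_ci(rate, n, z=1.96):
    """Approximate 95% CI for a mortality rate observed over n patients."""
    se = math.sqrt(rate * (1 - rate) / n)
    return max(0.0, rate - z * se), min(1.0, rate + z * se)

true_rate = 0.10  # assume a 10% underlying mortality rate
for n in (2000, 200, 30):  # e.g., a hospital, a service line, one physician
    low, high = mortality_ci(true_rate, n)
    print(f"n = {n:4d}: 95% CI {low:.1%} to {high:.1%}")
# n = 2000: 8.7% to 11.3%  -> real differences are detectable
# n =  200: 5.8% to 14.2%
# n =   30: 0.0% to 20.7%  -> chance swamps any real signal
```

This is why the slide anticipates falling back to general measures such as patient experience, which pool many more observations per provider.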

  28. Outcome Reports • Some providers are concerned about random events causing variation in reported outcomes that could: • Ruin reputations (if there is public reporting) • Cause financial harm (if direct financial incentives are based on outcomes) Dudley 2005

  29. An Analysis of MI Outcomes and Hospital “Grades” • From California hospital-level risk-adjusted MI mortality data: • Fairly consistent pattern over 8 years: 10% of hospitals labeled “worse than expected”, 10% “better”, 80% “as expected” • Processes of care for MI worse among those with higher mortality, better among those with lower mortality • From these data, calculate mortality rates for “worse”, “better”, and “as expected” groups Dudley 2005

  30. [Figure: risk-adjusted MI mortality rates for the “worse than expected,” “as expected,” and “better than expected” hospital groups] Dudley 2005

  31. 3 Groups of Hospitals with Repeated Measurements (3 Years) Dudley 2005

  32. Outcomes Reports and Random Variation: Conclusions • Random variation can have an important impact on any single measurement • Repeating measures reduces the impact of chance • Provider performance is more likely to align along a spectrum than to be lumped into two groups whose outcomes are quite similar • Providers on the superior end of the performance spectrum will almost never be labeled poor Dudley 2005
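
These conclusions can be illustrated with a small simulation (the parameters are assumptions, not the California data): give every hospital an identical 10% true mortality rate, grade the top and bottom deciles on observed mortality, and watch pooled multi-year data pull the chance-driven cutoffs back toward the true rate.

```python
# A small simulation (assumed parameters, not the California data): hospitals
# with identical true quality still land in "better"/"worse" deciles on a
# single year's observed mortality, but pooling three years of measurements
# pulls the decile cutoffs back toward the true 10% rate.
import random

random.seed(0)
TRUE_RATE, PATIENTS_PER_YEAR, N_HOSPITALS = 0.10, 300, 1000

def observed_rate(years):
    """Observed mortality for one hospital over `years` of admissions."""
    n = PATIENTS_PER_YEAR * years
    deaths = sum(1 for _ in range(n) if random.random() < TRUE_RATE)
    return deaths / n

for years in (1, 3):
    rates = sorted(observed_rate(years) for _ in range(N_HOSPITALS))
    better = rates[N_HOSPITALS // 10]      # 10th-percentile cutoff
    worse = rates[9 * N_HOSPITALS // 10]   # 90th-percentile cutoff
    print(f"{years} year(s): 'better' cutoff {better:.1%}, "
          f"'worse' cutoff {worse:.1%}, despite identical true quality")
# Typical output: 1 year  -> cutoffs near  7.8% and 12.2%
#                 3 years -> cutoffs near  8.7% and 11.3%
```

The last bullet also falls out of this picture: a hospital whose true rate sits well below the norm is many standard errors away from the “worse” cutoff, so repeated sampling almost never mislabels it as poor.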

  33. Conclusions • Many tough decisions ahead • Nonetheless, paralysis is undesirable • Collaborate on the choice of measures • Everyone is frustrated with the limited (JCAHO and HEDIS) measure sets…we need to figure out how to fund collecting and auditing new measures • Consider varying incentives across providers Dudley 2005
