

  1. Don’t These Demonstrations Ever Work? Mixed Evidence from the Four-Year Medicare Coordinated Care Demonstration
     AcademyHealth Annual Conference, June 9, 2008
     Debbie Peikes, Randy Brown, Arnold Chen, Jennifer Schore

  2. Random Assignment Study Design
  • Impact analysis (randomized, intent-to-treat design)
    • Effects on Medicare service use and cost
    • Effects on quality of care: patient satisfaction, physician satisfaction, processes of care, outcomes
    • Synthesis: what works and for whom?
  • Implementation analysis
    • Detailed description of enrollment and interventions
    • Site visits, phone calls, program MIS data

  3. Impacts on Hospitalizations and Costs Over the First Four Years of Operations

  4. Roadmap
  • Methods to Measure Impacts
  • Research Sample
  • Impacts
    • Hospitalizations
    • Traditional Part A and B costs
    • Total costs (with program fees)
  • The Challenge

  5. Methodology
  • Data: Medicare EDB and SAF claims through June 2006
  • Study patients: 18,000 enrollees from programs’ start dates in 2002 through June 2005
  • Follow-up observed:
    • Maximum (for early enrollees): 46 to 51 months
    • Average: 29 months (range across programs: 19 to 36)
  • Impacts regression-adjusted for demographics, prior service use and cost, and presence of 10 chronic conditions
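As an illustration of the regression adjustment described above, here is a minimal sketch using Python and statsmodels. The file name, column names, and covariate list are hypothetical stand-ins, not the evaluation's actual specification.

```python
# Sketch of a regression-adjusted, intent-to-treat impact estimate:
# regress the outcome on a treatment indicator plus baseline covariates.
# All column names and the input file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_file.csv")  # hypothetical analytic file

# Outcome: monthly Medicare Part A + B cost during follow-up.
# Covariates: demographics, prior-year use and cost, condition flags
# (a subset stands in for the 10 chronic-condition indicators).
model = smf.ols(
    "monthly_cost ~ treatment + age + female + prior_cost + prior_admits"
    " + chf + cad + copd + diabetes",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

# The coefficient on `treatment` is the adjusted treatment-control
# difference; expressing it as a % of the control mean matches the
# way impacts are reported in the tables that follow.
impact = model.params["treatment"]
control_mean = df.loc[df["treatment"] == 0, "monthly_cost"].mean()
print(f"Impact: {impact:.0f} ({impact / control_mean:.1%} of control mean)")
```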

  6. Programs Enrolled High-Cost Patients
  • Patients were high-cost, and their costs were driven by hospitalizations
  • Average monthly Medicare expenditures for control group patients during year 1:
    • 5 programs: $655 to $999
    • 5 programs: $1,000 to $1,999
    • 5 programs: $2,000 to $3,999
  • (The national average was $570)

  7. The Punch Line: Care coordination is not a panacea. Although 3 of the 15 programs appeared to be cost neutral, none reduced costs.

  8. Small Overall Effects on Hospitalizations
  Overall, hospitalizations were down 4.5% (p=0.02), driven by sizable differences in 4 programs:
  • Large and statistically significant reductions in 2:
    • Mercy: -17% (p=0.02)
    • Georgetown: -24% (p=0.06)
  • Moderate but not statistically significant differences in 2:
    • Health Quality Partners (HQP): -14% (p=0.13)
    • QMed: -7% (p=0.38)

  9. Most Programs Had No Discernible Effects on Hospitalizations
  The remaining estimates were not statistically significant:
  • 2 had favorable differences but small samples
  • 3 had unfavorable differences of +4% to +12%
  • 6 had differences between -3% and +3%

  10. Three Programs Are Likely Cost Neutral
  Only 1 program had a statistically significant reduction in Part A and B costs, and none reduced total costs including fees.

  Impact as a % of the control group mean:

  Program      # in Treatment  Hospitalizations  Part A+B Costs  Total Costs (Part A+B
               Group                                             Savings vs. Fee Paid)
  HQP          739             -14               -14*            +0.3 (-$100 vs. $102)
  QMed         706             -7                -11             -0.2 (-$81 vs. $81)
  Mercy        463             -17*              -9              +11.3* (-$113 vs. $248)
  Georgetown   114             -24*              -13             -3.7 (-$335 vs. $242)

  * Indicates p<0.10. Cost neutral = total costs (regular Medicare costs plus program fees) of the treatment group are statistically comparable to the regular Medicare costs of the control group.

  11. Many Programs Increased Total Costs

  12. No Favorable Effects on Total Costs
  • Pooled total costs were 11 percent higher for the treatment group
  • Results were the same when we trimmed outliers
  • Savings did not emerge over time

  13. Why Doesn’t Care Coordination Control Costs Better? An Illustration of the Funnel Effect
  • Best-case scenario for a voluntary (opt-in) model: an average of 1 hospitalization per year × 50% theoretically preventable × 30% of those actually prevented = 15% of hospitalizations avoided (a worked sketch follows the next slide)

  14. Funnel Effect Illustration for 1,000 Enrollees
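The original slide presented this illustration as a chart. As a stand-in, here is a minimal sketch of the funnel arithmetic for 1,000 enrollees, using the illustrative best-case rates from the previous slide (1 hospitalization per enrollee-year, 50% theoretically preventable, 30% of those actually prevented).

```python
# Funnel-effect sketch for 1,000 enrollees, using the illustrative
# best-case rates from the previous slide (not measured results).

ENROLLEES = 1_000
ADMITS_PER_ENROLLEE_YEAR = 1.0  # average hospitalizations per year
SHARE_PREVENTABLE = 0.50        # share theoretically preventable
SHARE_PREVENTED = 0.30          # share of preventable admits actually prevented

admits = ENROLLEES * ADMITS_PER_ENROLLEE_YEAR      # 1,000 admissions/year
preventable = admits * SHARE_PREVENTABLE           # 500 preventable
prevented = preventable * SHARE_PREVENTED          # 150 prevented

print(f"Admissions per year:       {admits:,.0f}")
print(f"Theoretically preventable: {preventable:,.0f}")
print(f"Actually prevented:        {prevented:,.0f}")
print(f"Overall reduction:         {prevented / admits:.0%}")  # 15%
```

Even in this best case, the funnel narrows a 50% theoretical opportunity to a 15% realized reduction in hospitalizations.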

  15. Context for Findings
  • Consistent with results from other CMS demonstrations
  • The arithmetic is much harder for population-based programs. Say only 25% of the population engages. Cost-neutral fees would be (see the sketch below):
    • $35 pmpm if the decrease in admits is 15%
    • $10 pmpm if the decrease in admits is 4.5%
  • Fees paid were double the average monthly Medicare payment for regular office visits ($70)
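A minimal sketch of the break-even arithmetic behind those fee figures. The 25% engagement rate and the admit reductions come from the slide; the baseline hospitalization spending of roughly $930 per member per month is a back-solved assumption (it reproduces both fee figures), not a reported demonstration statistic.

```python
# Break-even (cost-neutral) fee for a population-based program:
# savings accrue only among the engaged share of members, but the
# fee is paid per member per month (pmpm) for everyone.
# HOSP_COST_PMPM is an assumption back-solved from the slide's
# $35/$10 figures, not a number reported by the demonstration.

ENGAGEMENT = 0.25        # share of the population that engages (slide)
HOSP_COST_PMPM = 933.0   # assumed baseline hospitalization spending, $/member/month

def break_even_fee(admit_reduction: float) -> float:
    """Fee pmpm at which total program fees equal expected savings."""
    return ENGAGEMENT * admit_reduction * HOSP_COST_PMPM

print(f"15% admit reduction:  ${break_even_fee(0.15):.0f} pmpm")   # ~$35
print(f"4.5% admit reduction: ${break_even_fee(0.045):.0f} pmpm")  # ~$10
```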

  16. Impacts on Quality of Care

  17. Two Main Types of Measures
  • Measures for impact estimation
    • Both treatment and control groups
  • Descriptive measures
    • Treatment group only
    • Perceptions of:
      • Treatment group patients
      • Physicians of treatment group patients

  18. Perceptions of Treatment Group Patients
  • Patients generally liked the programs:
    • Support/monitoring
    • Service arrangement
    • Care coordinators’ general education skills
    • Adherence assistance
  • The same 2 or 3 programs tended to be above average across measures

  19. Perceptions of Patients’ Physicians
  • Physicians generally liked the programs:
    • Effects on medical practice
    • Patient self-management
    • Care coordination
    • Physician-patient relations
    • Care coordinators’ clinical competence
    • Patients’ outcomes
    • Would recommend to colleagues and patients
  • The same 1 or 2 programs tended to be above average across measures

  20. T-C Comparisons: Process-of-Care Measures
  Receipt of:
  • Program services -- Patient survey
  • Health education -- Patient survey
  • Recommended clinical services (for example, hemoglobin A1c testing) -- Medicare claims

  21. T-C Comparisons: Outcome Measures
  • Patient knowledge -- Survey
  • Patient adherence -- Survey
  • Unmet needs -- Survey
  • Functioning -- Survey
  • Health-related quality of life -- Survey
  • Satisfaction with care -- Survey
  • Mortality -- EDB
  • Potentially avoidable hospitalizations -- Claims

  22. Methodology
  • Multiple measures and demonstration sites create a high potential for Type I errors
  • Sought patterns within or across programs:
    • Does a program show differences in multiple measures?
    • Do multiple programs show differences in similar measures?
    • Do the significant differences run in one direction (favorable vs. unfavorable)?
    • Is the magnitude of the estimated effect meaningful?
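To make the Type I concern concrete, here is a minimal sketch of the expected number of chance "significant" findings when many tests are run at α = 0.10, the threshold used in the cost tables. The counts of programs and measures are round illustrative assumptions, not the study's exact totals.

```python
# Expected false positives under multiple testing: with many outcome
# measures tested across many programs, some results will clear the
# significance threshold by chance even if no true effects exist.
# The measure count is an illustrative assumption.

ALPHA = 0.10      # significance threshold used in the demonstration tables
N_PROGRAMS = 15
N_MEASURES = 20   # assumed number of measures tested per program

n_tests = N_PROGRAMS * N_MEASURES
expected_false_positives = ALPHA * n_tests  # if all true effects were zero

print(f"Tests run: {n_tests}")
print(f"Expected chance 'significant' results: {expected_false_positives:.0f}")
# Roughly 30 spurious findings, which is why the evaluation looked for
# consistent patterns rather than isolated significant estimates.
```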

  23. Summary: Some Impacts on Process Measures
  • Patient awareness of programs -- Large impacts
  • Reports of receiving education -- Large impacts
  • Preventive services -- Scattered effects

  24. Summary: Minimal Impacts on Outcome Measures
  • Self-reported adherence -- 0
  • Unmet needs -- 0
  • Function -- 0
  • Health-related quality of life -- 0
  • Patient satisfaction -- Scattered effects
  • Mortality -- 0
  • Potentially preventable hospitalizations -- Scattered effects

  25. Now What?
  • No substantial, broad quality impacts
  • Recall: programs could be cost-saving or cost neutral and improve quality
  • Go back and examine the quality results for the potentially cost-neutral programs:
    • HQP, QMed, Mercy (at a lower fee), Georgetown*
  * Georgetown dropped out before the demonstration ended and is not considered viable due to small enrollment

  26. Favorable Impacts on Process Measures for the 3 Selected Programs

  27. Favorable Impacts on Outcome and T-Only Measures for 3 Selected Programs

  28. What Features Distinguish Successful Programs?

  29. No Structural Distinctions
  * The 9 programs exclude 3 that were unable to enroll enough patients over the 4 years to be considered viable.

  30. No Distinguishing Patient Characteristics
  CAD = Coronary artery disease

  31. No Distinguishing Interventions

  32. Programs Excel in Different Domains
  Note: 1 = top quintile (3 programs); 5 = bottom quintile. Shaded cells are the top 2 quintiles.

  33. Programs Report Varied Reasons for Success

  34. What Does it All Mean? What’s Next?

  35. So What Did We Learn?
  • The value of DM/care coordination is still unclear:
    • A few programs show promise, if replicable
    • Some proven models weren’t tested here
  • No single necessary or best approach
  • More in-person contacts → better outcomes
  • The best target population may be medium severity

  36. Ongoing Work
  • Three programs to be extended:
    • HQP, QMed, Mercy (at a reduced fee)
    • Very different models and challenges
    • CMS evaluation required
  • Two follow-up studies under way:
    • Extend time frame and depth (HCFO)
    • Test effects of intervention changes and identify best practices (MCCPRN)

  37. Extending Time Frame and Depth: HCFO Study Tasks
  • Collect detailed on-site information on the 3 cost-neutral interventions
  • Add data for 7/06-12/07 (up to 5 years of follow-up total)
  • Estimate effects on readmissions
  • Estimate effects for key subgroups
  • Examine effects of contamination and critical mass

  38. Testing Intervention Changes and Defining Best Practices: MCCPRN
  • Includes 8 MCCD sites
  • Test sites’ pre-specified hypotheses about differing effects over time and across subgroups
  • Develop consensus best practices
  • Design a demonstration to test the best-practice model
  • Goal: use the existing sites as an ongoing laboratory for rapid testing

  39. For More Information
  • http://www.mathematica-mpr.com/health/bestprac.asp
  • Email: rbrown@mathematica-mpr.com
