
Medicaid Underreporting in the CPS: Results from a Record Check Study


Presentation Transcript


  1. Medicaid Underreporting in the CPS: Results from a Record Check Study Joanne Pascale Marc Roemer Dean Resnick US Census Bureau DCAAPOR August 21, 2007

  2. Medicaid Undercount • Records show higher Medicaid enrollment levels than survey estimates (~10-30%) • The undercount affects many different surveys of health insurance • Error sources other than respondent reporting also contribute • Under-reporting is the largest contributor to the undercount

  3. Current Population Survey • Focus is on under-reporting in CPS • Produces most widely-cited estimates on health insurance and uninsured • Other surveys gauge estimates against CPS; mimic CPS design • CPS = monthly survey on labor force and poverty; health insurance questions asked in annual supplement

  4. CPS Health Insurance Questions: ‘Type by Type’ Structure • Job-based • Directly-purchased • Someone outside HH • Medicare • Medicaid • SCHIP • Military • Other

  5. CPS Health Insurance Questions: Calendar Year Reference Period • Survey is conducted in March • Questions ask about coverage during previous calendar year • “At any time during 2000, was anyone in this household covered by [plan type]?”

  6. CPS Health Insurance Questions: Household-level Design • Multi-person household: • At any time during 2000 was anyone in this household covered by [plan type]? • [if yes] Who was that? • Single-person household: • At any time during 2000 were you covered by [plan type]?

  7. CPS Cognitive Testing • Three main sources of misreporting: • Type-by-type structure: Rs ‘pre-report’ and try to ‘fit’ coverage into the earliest question • 12-month reference period: some respondents focus on current coverage or a single ‘spell’ • Household size and complexity

  8. More on HH Size and Complexity • Rs forgot about certain HH members • Rs did not know enough detail about other HH members’ plan type • Neither problem was related to ‘closeness’ between R and referent; it affected not only housemates and distant relatives but also parents, siblings, and live-in partners

  9. Shared Coverage Hypothesis • Health insurance is administered in ‘units’ • Private and military coverage: nuclear family • Medicaid and (usually) SCHIP: parent and children • Medicare: individual • Any given HH may have a mix of units • E.g.: R on his union plan; mother on Medicare; sister and child on Medicaid; live-in partner and her child on her job plan • R may be able to report more accurately for HH members who are in the same unit (i.e., share the same coverage type); see the sketch below
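As a toy illustration of the hypothesis, the sketch below (all names and data hypothetical, not from the study) encodes the unit scope of each plan type and a crude "shared coverage" test:

```python
# Coverage "units" by plan type, per the hypothesis above (illustrative only)
UNIT_SCOPE = {
    "job-based": "nuclear family",
    "military": "nuclear family",
    "medicaid": "parent and children",
    "schip": "parent and children",
    "medicare": "individual",
}

def shares_coverage(respondent_plans, referent_plans):
    """True if R and the referent hold a coverage type in common --
    a rough proxy for being in the same administrative unit."""
    return bool(set(respondent_plans) & set(referent_plans))

# Example household from the slide: R on a union plan, mother on Medicare
print(UNIT_SCOPE["medicare"])                        # individual
print(shares_coverage({"job-based"}, {"medicare"}))  # False -> riskier proxy report
print(shares_coverage({"medicaid"}, {"medicaid"}))   # True  -> same unit
```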

  10. Methods • Linked CPS survey data and Medicaid Statistical Information System (MSIS) record data for year 2000 • Analysis dataset: CPS sample members… • known to be on Medicaid according to records • for whom a direct response to ‘Medicaid’ was reported in CPS (not edited or imputed) • Several items fed into the ‘Medicaid’ indicator (Medicaid, SCHIP, other government plan, other) • n = 19,345 • Dependent var = whether Medicaid was reported for the known enrollees
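A rough pandas sketch of the dataset construction just described; the file name and column names are invented stand-ins, not the actual CPS/MSIS variables:

```python
import pandas as pd

# Hypothetical linked CPS-MSIS file; all names below are illustrative
linked = pd.read_csv("cps_msis_linked_2000.csv")

analysis = linked[
    (linked["msis_enrolled_2000"] == 1)             # on Medicaid per records
    & (linked["cps_medicaid_direct"].isin([0, 1]))  # direct response, not edited/imputed
]
# Dependent variable: was Medicaid reported in CPS for a known enrollee?
analysis = analysis.assign(reported=analysis["cps_medicaid_direct"])
print(len(analysis))  # the study's analysis file had n = 19,345
```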

  11. Shared Coverage Variable • Referent (person reported on) is R (self-report) • in single-person hh • in multi-person hh • Referent is not R (proxy report) • But both are on same Medicaid case • But both are on Medicaid (different cases) • Referent is on Medicaid; R is not

  12. Logistic Regression Model • Dependent var = Medicaid status reported in CPS • Independent vars: • HH composition • Shared coverage var • Another HH member had Medicaid w/in year • Recency and intensity of coverage • Most recent month referent enrolled • Proportion of days covered from January till last month enrolled • Referent covered in survey month • Referent received Medicaid services w/in year • Demographics • Sex of R • Age and race/ethnicity of referent
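Under the same hypothetical column names as the previous sketch, the model on this slide could be fit roughly as follows (a sketch, not the authors' actual specification):

```python
import statsmodels.formula.api as smf

# Logistic regression of Medicaid reporting on the predictors listed above;
# variable names are the invented stand-ins from the previous sketch
model = smf.logit(
    "reported ~ C(shared_coverage) + other_hh_medicaid"
    " + last_month_enrolled + prop_days_covered + covered_survey_month"
    " + received_services + C(sex_r) + age_referent + C(race_eth_referent)",
    data=analysis,
).fit()
print(model.summary())
```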

  13. Results: Overview of Linked Dataset • Of 173,967 CPS hh members, 19,345 (11.1%) had Medicaid according to records • Medicaid was reported in CPS for only 12,351 (7.1%) hh members • => (19,345 − 12,351) / 19,345 = 36.2% under-reporting
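The rates on this slide can be verified directly:

```python
total, on_records, reported = 173_967, 19_345, 12_351
print(f"{on_records / total:.1%}")                    # 11.1% on Medicaid per records
print(f"{reported / total:.1%}")                      # 7.1% reported in CPS
print(f"{(on_records - reported) / on_records:.1%}")  # 36.2% under-reporting
```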

  14. Results: Overall Regression • Model is highly significant in explaining misreporting • Effect of each variable is significant and clearly discernible • Ranked each of the 9 independent vars according to its importance to the model

  15. Ranking of Independent Vars • Most recent month enrolled • Proportion of days covered from January • Received Medicaid services w/in year • Race/ethnicity of referent • Sex of respondent • Another HH member had coverage w/in year • Age of referent • Covered in survey month • Shared coverage var

  16. Categorization of Independent Vars • Recency and intensity of coverage: most recent month enrolled; proportion of days covered from January till last month enrolled; covered in survey month • Receipt of Medicaid services: received services within year • Demographics: race/ethnicity of referent (white non-Hispanic); sex of R; age of referent • HH composition: another HH member had coverage within year; shared coverage var

  17. Results: Shared Coverage Var • Expected ranking: A. Self-report in single-person HH; B. Self-report in multi-person HH; C. Proxy report, same case; D. Proxy report, different case; E. Proxy report, R does not have Medicaid • Actual ranking: A, C, D/B, D/B, E

  18. Summary • Recency and intensity of coverage • Receipt of Medicaid services • Shared coverage • All contribute to the salience of Medicaid to the respondent, which could translate into more accurate reporting • Rs in multi-person HHs forget to report their own coverage

  19. Conclusions • 1. Key components of wording are problematic: • “At any time during calendar year…” • “…was anyone in this household covered…” • Explore questionnaire design alternatives • 2. Reporting accuracy goes up if R and referent both have Medicaid • Explore questionnaire designs to exploit this • See if results apply to other coverage types

  20. Thoughts on Next Steps • 1. Reference period: start with questions about current status; ask when that coverage began; ‘walk’ back in time to beginning of calendar year • 2. Other hh members and shared coverage: start with R’s coverage; for each plan type reported, ask if other hh members are also covered; continue asking about other hh members by name

  21. THANK YOU!! • Joanne.Pascale@census.gov • Marc.I.Roemer@census.gov • Dean.Michael.Resnick@census.gov

  22. Finding low-income telephone households and people who do not have health insurance using auxiliary sample frame information for a random digit dial survey Tim Triplett, The Urban Institute David Dutwin, ICR Sharon Long, The Urban Institute DCAAPOR Seminar August 21, 2007

  23. Presentation Overview • Purpose: Obtain representative samples of adults without health insurance and adults in low-income (less than 300 percent of the federal poverty level, FPL) and medium-income (between 300 and 500 percent of FPL) families, while still being able to produce reliable estimates for the overall population. • Strategy: Telephone exchanges within Massachusetts were sorted in descending order by concentration of estimated household income and divided into three strata, and the low and middle income strata were oversampled. • Results: Oversampling the low and medium income strata did increase the number of interviews completed with adults without health insurance, as well as with adults living at or below 300 percent of FPL.

  24. About the Study • Telephone survey conducted in Massachusetts • Collected baseline data prior to implementation of the Massachusetts universal health care coverage plan • Started on October 16, 2006; ended on January 7, 2007 • 3,010 interviews with adults 18 to 64 • Key subgroups were low and middle income households and uninsured adults • Overall response rate 49% (AAPOR RR3 formula)

  25. Sample design features • RDD list+2 exchanges stratified by income and grouped into high, middle, and low income strata (see the sketch below) • Over-sampled the low-income strata (n=1,381) • Separate screening sample was used to increase the sample of uninsured (n=704) • More aggressive over-sampling of the low income strata in the screening sample • One adult interviewed per household • In households with both insured and uninsured adults, the uninsured adults had a higher chance of selection • No cell phone exchanges were sampled
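A minimal sketch of the stratification and release-ratio idea, assuming a hypothetical exchange-level file and equal-size strata (the slides do not state how the cut points were set); the 3:2:1 and 5:3:1 ratios are the ones reported on slide 27:

```python
import pandas as pd

# Hypothetical exchange-level frame with an estimated-income measure
exchanges = pd.read_csv("ma_exchanges.csv")  # columns: exchange, est_income

# Cut exchanges into three income strata (lowest income -> "low")
exchanges["stratum"] = pd.qcut(
    exchanges["est_income"], 3, labels=["low", "middle", "high"]
)

# Relative sample-release rates (low : middle : high)
main_ratio = {"low": 3, "middle": 2, "high": 1}      # main sample, 3:2:1
screener_ratio = {"low": 5, "middle": 3, "high": 1}  # screener sample, 5:3:1

base = 1_000  # hypothetical number of lines released at rate 1
for stratum, rate in main_ratio.items():
    print(stratum, base * rate)
```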

  26. Percentage of uninsured and low-income adults by income strata

  27. Alternate sampling strategies that could yield enough uninsured respondents without increasing survey costs • None – no oversampling of strata – simply increase the number of screening interviews • OS (2:2:1, 3:2:1) – release twice as much sample in the main study from the low and middle income strata and 3 times as much in the screener survey • OS (3:2:1, 5:3:1)* – the strategy we used • OS (5:3:1, 5:3:1) – same for main and screener • OS (5:3:1, 8:4:1) – heavy oversample in screener

  28. Simulation of sample sizes resulting from the various oversampling strategies

  29. Why not go for the largest sample • Design effects will increase as the sample becomes more clustered • Larger design effects mean smaller effective sample sizes • So when comparing different sampling strategies, you need to compare effective sample sizes (see the sketch below) • We can only calculate the design effect (and effective sample size) for the sample strategy we employed • Isolating the increase in the design effect due to the oversampling allows us to estimate the design effect for the other strategies
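One common way to make that comparison is Kish's weighting design effect, deff = n · Σw² / (Σw)², with effective sample size n_eff = n / deff. A small sketch with made-up weights:

```python
def design_effect(weights):
    """Kish's approximation: deff = n * sum(w^2) / (sum(w))^2."""
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

def effective_n(weights):
    return len(weights) / design_effect(weights)

# Oversampling a stratum at 3x gives its cases base weights ~1/3 relative
# to cases sampled at 1x (made-up mix of 500 and 600 cases)
weights = [1.0] * 500 + [1 / 3] * 600
print(round(design_effect(weights), 2))  # ~1.27
print(round(effective_n(weights)))       # ~865: 1,100 interviews "count" as ~865
```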

  30. Average Design Effects

  31. Simulation of effective sample sizes under various oversampling rules taking into consideration design effects

  32. Conclusions • Oversampling using exchange-level information worked well • Higher oversampling rate for the screener sample may not have been the best strategy • Exchanges still cluster enough to use auxiliary information • Except for the design we used, the estimates above are simulated

  33. Sampling in the next round • Consider increasing (slightly) the oversampling rate for the main sample and decreasing (slightly) the rate for the screener sample, or use the same rate • Need to sample cell phone exchanges • Health insurance coverage likely to be higher • Conduct Portuguese interviews

  34. Thank You • The survey was funded by the Blue Cross Blue Shield Foundation of Massachusetts, The Commonwealth Fund, and the Robert Wood Johnson Foundation. The analysis of the survey design was funded by the Urban Institute’s Statistical Methods Group.

  35. Switching From Retrospective to Current Year Data Collection in the Medical Expenditure Panel Survey-Insurance Component (MEPS-IC) Anne T. Kearney U.S. Census Bureau John P. Sommers Agency for Healthcare Research and Quality

  36. Important Terms • Retrospective Design: collects data for the year prior to the collection period • Current Year Design: collects data in effect at the time of collection • Survey Year: the year of data being collected in the field • Single-Unit (SU) Establishment vs. Multi-Unit (MU) Establishment: a standalone business location vs. one of several locations belonging to the same firm

  37. Outline • Background on MEPS-IC • Why Switch to Current?/Barriers to Switching • Impact on Frame and Reweighting Methodology • Details of Current Year Trial Methods • Results • Summary

  38. Background on MEPS-IC: General • Annual establishment survey that provides estimates of insurance availability and costs • Sample of 42,000 private establishments • National and state-level estimates • Retrospective design

  39. Background on MEPS-IC: Timing Example • Suppose a retrospective design for survey year 2002 • Create frame/sample in March 2003 using 2001 data from the business register (BR) • Create SU birth frame with 2002 data from the BR • In the field from roughly July-December 2003 • Reweighting in March-April 2004 using 2002 data from the BR • Estimation and publication in May-June 2004

  40. Why Switch to a Current Year Design? • Estimates published about 1 year sooner • Some establishments report current data already; current data is at their fingertips • Most survey estimates are conducive to current year design • Better coverage of businesses that closed after the survey year and before the field operation • Some data users in favor of going current

  41. Barriers to Switching to a Current Year Design • One year older data for frame building • One year older data for reweighting • These could make our estimates substantially different, which we believe would mean worse • Other data users believe a retrospective design is better for collecting certain items

  42. Impact on Frame • Example: using the 2002 survey year again (table)

  43. Impact on Reweighting: Nonresponse Adjustment • We use an iterative raking procedure • We do the NR adjustment using 3 sets of cells: • Sector Groups • SU/MU • State by Size Group

  44. Impact on Reweighting: Poststratification • We use an iterative raking procedure using 2 sets of cells: • State by Size Group and SU/MU • Under the retrospective design for the 2002 survey: (table)
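For readers unfamiliar with raking: it repeatedly scales weights so they hit known totals on each set of cells in turn, cycling until the weights stabilize. A toy sketch (cell labels and totals invented, not MEPS-IC values):

```python
import numpy as np

def rake(weights, cell_sets, targets, iters=50):
    """Iterative raking (iterative proportional fitting): cycle through each
    set of cells, scaling weights so each cell hits its target total."""
    w = np.asarray(weights, dtype=float).copy()
    for _ in range(iters):
        for cells, target in zip(cell_sets, targets):
            for cell, total in target.items():
                mask = cells == cell
                cell_sum = w[mask].sum()
                if cell_sum > 0:
                    w[mask] *= total / cell_sum
    return w

# Toy cells: two state-by-size groups crossed with SU/MU status
state_size = np.array(["A", "A", "B", "B"])
su_mu = np.array(["SU", "MU", "SU", "MU"])
w = rake(np.ones(4), [state_size, su_mu],
         [{"A": 10, "B": 6}, {"SU": 9, "MU": 7}])
print(w.round(2))  # weights now reproduce both sets of margins
```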

  45. Details of Trial Methods • One issue for frame: • What to do with the births • One issue for nonresponse adjustment: • What employment data to use for cell assignments • Three issues for poststratification: • What employment data to use for cell assignments • What employment data to use for total employment • What payroll data to use to create the list of establishments for total employment

  46.–50. Details of Trial Methods: 2002 Survey (series of tables)
