  1. Examining the Trade-Off between Sampling and Non-response Error in a Targeted Non-response Follow-up
     Sarah Tipping and Jennifer Sinibaldi, NatCen

  2. Background • Groves and Heeringa (2006) drew a second-phase sample as part of the responsive design for Cycle 6 of the NSFG. • They estimated that the second phase increased sampling variance by approximately 20%. • We want to introduce responsive design into NatCen surveys.

  3. Overview of Methods • Two months of data from the Health Survey for England 2009 were used to simulate a responsive design protocol. • A cut-off was created for Phase 1 and response propensities were modelled. • Three Phase 2 samples were drawn. • The designs were assessed by comparing response, bias, variance and mean square error.

  4. Objectives • If we implement responsive design… • Will we reduce bias? • Will the inflation in variance outweigh the gains in bias?

  5. Phase 1 • Phase 1 of the simulation ended after all cases had been called four times. • Data from Phase 1 were used to model response: a discrete hazard model fitted to call-level data. • The model included information about the calls, interviewer characteristics and area-level measures (census and other sources). • The predicted probabilities were saved and used at Phase 2 to draw the sample, as sketched below.
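To make the modelling step concrete, here is a minimal sketch of a discrete-time hazard model fitted as a logistic regression on stacked call-level records. Everything below is illustrative: the data are synthetic and the covariates (`evening`, `urban`) are placeholders for the interviewer and census measures the slides mention, not the presenters' actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic call-level data: one row per call attempt, up to four calls
# per case, standing in for the HSE 2009 Phase 1 call records.
n_cases, max_calls = 500, 4
call_no = np.tile(np.arange(1, max_calls + 1), n_cases)    # call 1..4
urban = np.repeat(rng.integers(0, 2, n_cases), max_calls)  # area-level covariate
evening = rng.integers(0, 2, n_cases * max_calls)          # call-level covariate

# Simulate a per-call response indicator (the discrete hazard outcome).
# In a real analysis, records after a case has responded would be censored.
logit = -2.0 + 0.3 * evening - 0.4 * urban + 0.1 * call_no
responded = rng.random(n_cases * max_calls) < 1 / (1 + np.exp(-logit))

# Discrete-time hazard model: logistic regression on the call-level records.
X = np.column_stack([call_no, evening, urban])
hazard_model = LogisticRegression().fit(X, responded)

# Per-call hazards -> case-level propensity of responding within four calls:
# p = 1 - prod(1 - h_j) over each case's calls.
h = hazard_model.predict_proba(X)[:, 1]
p_case = 1 - np.multiply.reduceat(1 - h, np.arange(0, h.size, max_calls))
print(p_case[:5])  # propensities that would feed the Phase 2 selection
```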

  6. Phase 2 • PSUs were selected for re-issue at Phase 2. • Three approaches: • Cost-effective sample • Pure bias reduction • Cost-effective bias reduction • All three designs selected PSUs with unequal selection probabilities, so selection weights were needed (see the sketch below).
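The slides do not spell out the selection mechanics for the three designs, so the following is only one plausible implementation: systematic probability-proportional-to-size (PPS) selection of PSUs, with a made-up PSU-level non-response score as the size measure and the weights taken as inverse inclusion probabilities.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical PSU-level size measure: mean predicted non-response
# propensity from the Phase 1 model (higher = more worth re-issuing).
n_psus, n_select = 40, 10
score = rng.uniform(0.2, 0.8, n_psus)

# Inclusion probabilities proportional to the score, capped at 1.
pi = np.minimum(n_select * score / score.sum(), 1.0)

# Systematic PPS draw: lay the probabilities end to end and take
# equally spaced points through them.
cum = np.cumsum(pi)
points = rng.uniform(0, 1) + np.arange(n_select)
selected = np.searchsorted(cum, points)

# Selection weights = inverse inclusion probabilities. PSUs drawn with
# small pi get large weights -- the concern raised on slide 18.
weights = 1.0 / pi[selected]
print(sorted(selected.tolist()), np.round(weights, 2))
```

The last two lines show why the weights need careful handling: a design that chases hard-to-get PSUs tends to include some with small inclusion probabilities, and their large weights inflate the variance of the final estimates.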

  7. Results • Evaluated the three Phase 2 sample designs by comparing response, bias, variance and mean square error. • The cost-effective design had the highest Phase 2 response rate, at 61% (n = 250). • Pure bias reduction: 51% (n = 227). • Cost-effective bias reduction: 53% (n = 236).

  8. Interviewer observations by sample type

  9. Household type

  10. Other demographics

  11. Other demographics

  12. Bias – survey estimates (women)

  13. Bias – difference in survey estimates (women)

  14. Variance of survey estimates (women)

  15. Mean Square Error • MSE was generated for a selection of key health estimates • MSE = Var + Bias²
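A worked example of the decomposition, with entirely hypothetical numbers (the slides' actual estimates live in the charts, not the transcript), treating the full-sample estimate as the benchmark against which bias is measured:

```python
# Hypothetical worked example of MSE = Var + Bias^2; all numbers are made up.
benchmark = 0.245                    # assumed full-sample estimate
estimate, variance = 0.232, 0.0004   # one Phase 2 design's estimate and Var

bias = estimate - benchmark
mse = variance + bias ** 2
print(f"bias = {bias:.4f}, MSE = {mse:.6f}")  # bias = -0.0130, MSE = 0.000569
```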

  16. MSE for survey estimates (women)

  17. Conclusions • Results are positive! • Focusing on cost effectiveness alone increases the bias of the estimates. • ‘Pure’ bias reduction does not perform much better than cost-effective bias reduction in terms of bias, variance inflation and MSE.

  18. Discussion points • Discrepancies between interviewer observations and actual data. • Selection weights need careful consideration; we want to avoid large weights.

  19. Actual data and interviewer observations
