
Presentation Transcript


  1. Responsive Design for Household Surveys: Illustration of Management Interventions Based on Survey Paradata
Robert M. Groves, Emilia Peytcheva, Nicole Kirgis, James Wagner, William Axinn, University of Michigan, USA; William Mosher, US National Center for Health Statistics
Research partially supported by contract with the US National Center for Health Statistics, Contract No. 200-2000-07001

  2. Definition: Responsive Design
Survey designs that:
• Preidentify a set of alternative features potentially affecting costs and errors of statistics
• Identify a set of indicators of the cost and error properties of those features
• Monitor indicators in initial stages of data collection
• Alter the active features of the survey based on cost/error tradeoff decision rules (a hypothetical decision-rule sketch follows below)
• Combine data from separate phases into a single estimator
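To make the "alter features based on decision rules" step concrete, here is a minimal sketch of a phase-switch rule driven by daily paradata. The indicator names and thresholds are hypothetical illustrations, not the rules used in the NSFG.

```python
# Hypothetical phase-switch decision rule for a responsive design.
# Indicator names and thresholds are illustrative, not the NSFG rules.

def should_switch_phase(indicators: dict) -> bool:
    """Return True when phase 1 effort has stopped paying off."""
    return (
        indicators["calls_per_interview"] > 8.0       # effort per interview is rising
        and indicators["response_rate_gain"] < 0.002  # daily response-rate gain has stalled
    )

# Example: yesterday's dashboard summary triggers a move to phase 2.
summary = {"calls_per_interview": 9.5, "response_rate_gain": 0.001}
if should_switch_phase(summary):
    print("Switch to phase 2: subsample remaining cases, change the protocol.")
```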

  3. The Why's of Responsive Designs
• One-off surveys are mounted with large uncertainties (e.g., eligibility of frame elements, effort required to contact, cooperation rate, length of interview)
• Most survey budgets are relatively fixed at the start of the project
• Some survey errors are functions of effort during production
Hence, quality is out of the researcher's control unless designs are permitted to change based on production experience

  4. The NSFG Dashboard
[Dashboard screenshot: daily paradata indicators grouped under four headings (Effort, Active Sample, Productivity, Data Set Balance), including interviewers working, hours, calls/day, calls/hour, % peak calls, screener/main calls, occupied, eligible, nonworked, noncontacts, mean calls, 8+ calls, locked buildings, resistant, hard appointments, interviews, cumulative interviews, % production, hours/interview, calls/interview, response rate, propensity, % with kids, % sexually active, group rates, and CV of group rates.]
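The dashboard entries above are simple aggregates of daily call-record paradata. A minimal sketch of a few of them, assuming a hypothetical call-record table (the column names and toy data below are illustrative, not the NSFG schema):

```python
import pandas as pd

# Hypothetical call-record paradata; columns and values are illustrative only.
calls = pd.DataFrame({
    "case_id":   [1, 1, 2, 2, 2, 3, 4, 4],
    "call_date": pd.to_datetime(["2024-01-02"] * 4 + ["2024-01-03"] * 4),
    "hours":     [0.2, 0.8, 0.3, 0.3, 1.1, 0.2, 0.4, 0.9],
    "result":    ["noncontact", "interview", "noncontact", "resistant",
                  "interview", "noncontact", "appointment", "interview"],
})

# Daily effort and productivity indicators of the kind shown on the dashboard.
daily = calls.groupby("call_date").agg(
    calls=("case_id", "size"),
    interviews=("result", lambda r: (r == "interview").sum()),
    hours=("hours", "sum"),
)
daily["calls_per_interview"] = daily["calls"] / daily["interviews"]
daily["hours_per_interview"] = daily["hours"] / daily["interviews"]
daily["cum_interviews"] = daily["interviews"].cumsum()
print(daily)
```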

  5. Evaluation of Two Interventions Based on Paradata
• Survey setting: face-to-face survey; a screening interview (3-5 min.) locates the 60% of households with a 15-44 year-old; one eligible person is sampled for the main interview (60-80 min.)
• Interventions:
  • Increasing relative effort on screening interviews versus main interviews with the selected respondent
  • Increasing relative effort on a small subset of cases with high selection weights and high propensities to respond

  6. Ratio of Screener Calls to Main Interview Calls by Day by Quarter
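The quantity plotted here is the daily ratio of screener calls to main-interview calls. A sketch of how it could be computed, assuming a hypothetical call_type flag on each call record (the schema is illustrative, not the NSFG files):

```python
import pandas as pd

# Hypothetical call records with a screener/main flag; schema is illustrative.
calls = pd.DataFrame({
    "call_date": pd.to_datetime(["2024-01-02"] * 5 + ["2024-01-03"] * 5),
    "call_type": ["screener", "screener", "main", "screener", "main",
                  "screener", "main", "main", "screener", "screener"],
})

# Daily screener-to-main call ratio, the quantity monitored for intervention 1.
counts = pd.crosstab(calls["call_date"], calls["call_type"])
ratio = counts["screener"] / counts["main"]
print(ratio)
```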

  7. Autoregressive Time Series Coefficients for Model Predicting Daily Number of Screener Calls (p-values for coefficients)

  8. Autoregressive Time Series Coefficients for Model Predicting Daily Number of Screener Interviews (p-values for coefficients)
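Slides 7 and 8 report autoregressive time-series models for daily screener calls and screener interviews. One way such a model could be fit is sketched below with statsmodels' AutoReg, using simulated data and an arbitrary lag order; the series, intervention indicator, and specification are illustrative assumptions, not the authors' model.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)

# Simulated daily screener-call counts with a level shift after an intervention.
n_days = 120
intervention = (np.arange(n_days) >= 80).astype(float)       # 0/1 indicator
calls = 50 + 10 * intervention + rng.normal(0, 5, n_days)    # toy series

# Autoregressive model with the intervention as an exogenous regressor.
model = AutoReg(calls, lags=3, exog=intervention.reshape(-1, 1))
fit = model.fit()
print(fit.params)    # AR coefficients plus the intervention effect
print(fit.pvalues)   # p-values, as reported on the slides
```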

  9. Second Intervention: Increased Emphasis on a Subset of Active Cases
• In the last weeks of Phase 1, a subsample of cases with high response propensities and high selection weights is identified (a selection sketch follows below)
• These cases are chosen to improve the balance of the respondent pool
• Interviewers are asked to give greater emphasis to these cases
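A sketch of how such a subset might be flagged, assuming a hypothetical active-case file with estimated response propensities and selection weights (the cutoffs and column names are illustrative):

```python
import pandas as pd

# Hypothetical active-case file; columns and cutoffs are illustrative only.
active = pd.DataFrame({
    "case_id":    [101, 102, 103, 104, 105],
    "propensity": [0.62, 0.15, 0.48, 0.71, 0.09],   # estimated P(response)
    "sel_weight": [3.2, 1.1, 4.0, 2.9, 0.8],        # selection weight
})

# Flag cases high on both dimensions for extra interviewer effort.
active["intervention_case"] = (
    (active["propensity"] >= active["propensity"].quantile(0.6))
    & (active["sel_weight"] >= active["sel_weight"].quantile(0.6))
)
print(active[active["intervention_case"]])
```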

  10. Mean Expected Probability to be Interviewed on Next Call, Screeners (Red) and Main (Green) by Day of Data Collection by Quarter

  11. Analytic Approach
• Not all interviewers' active workloads contain both "intervention" cases and "non-intervention" cases
• We limit the analysis to interviewers who have both types of cases (a comparison sketch follows below)
• We examine two indicators of success:
  • Mean number of calls (imperfect)
  • Response rate in the intervention period
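A sketch of the restriction and comparison described above, assuming a hypothetical case-level table keyed by interviewer (the data and column names are illustrative):

```python
import pandas as pd

# Hypothetical case-level data: one row per active case, keyed by interviewer.
cases = pd.DataFrame({
    "interviewer":  ["A", "A", "A", "B", "B", "C", "C", "C"],
    "intervention": [True, False, True, True, True, False, True, False],
    "n_calls":      [4, 6, 3, 5, 2, 7, 4, 5],
    "responded":    [1, 0, 1, 0, 1, 0, 1, 1],
})

# Keep only interviewers whose workloads contain both case types.
has_both = cases.groupby("interviewer")["intervention"].transform("nunique") == 2
both = cases[has_both]

# Indicators of success: mean calls and response rate by case type.
print(both.groupby("intervention").agg(mean_calls=("n_calls", "mean"),
                                       response_rate=("responded", "mean")))
```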

  12. Comparison of Mean Screener Calls During Intervention Period for Intervention and Nonintervention Cases
Little evidence of increased calling on intervention cases.

  13. Comparison of Screener Response Rate During Intervention Period for Intervention and Nonintervention Cases
Little evidence of increased response rate on intervention cases.

  14. Comparison of Mean Main Calls During Intervention Period for Intervention and Nonintervention Cases
Mixed evidence on higher calls on intervention cases.

  15. Comparison of Main Response Rate During Intervention Period for Intervention and Nonintervention Cases
General tendency to higher response rates for intervention cases.

  16. Logistic Regression For Likelihood of Main Interview
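Slide 16 reports a logistic regression for whether a main interview is obtained. A minimal sketch of the general form with statsmodels, using simulated data and stand-in predictors (an intervention flag and prior call count), not the covariates in the authors' model:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated screened cases; predictors are stand-ins, not the paper's covariates.
n = 500
df = pd.DataFrame({
    "intervention": rng.integers(0, 2, n),
    "prior_calls":  rng.poisson(4, n),
})
logit_p = -0.5 + 0.6 * df["intervention"] - 0.1 * df["prior_calls"]
df["main_interview"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression for the likelihood of completing the main interview.
X = sm.add_constant(df[["intervention", "prior_calls"]])
fit = sm.Logit(df["main_interview"], X).fit(disp=False)
print(fit.summary())
```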

  17. Conclusions
• Intervention 1: Management direction to focus on screeners vs. main interviews increases screener calls, sometimes dramatically; screener interviews follow
• Intervention 2: Effectiveness at focusing effort on individual cases is greater for main interviews than for screener interviews

  18. Next Steps on Responsive Design with Paradata
• Responsive design requires effective central management direction of interviewer behavior
• We're still learning how to communicate these directives consistently well
