

1. Missing Data Estimation in Longitudinal Research: It’s not Cheating! It’s Essential!
Todd D. Little, University of Kansas
Director, Quantitative Training Program
Director, Center for Research Methods and Data Analysis
Director, Undergraduate Social and Behavioral Sciences Methodology Minor
Member, Developmental Psychology Training Program
Workshop presented 03-29-2011 @ Society for Research in Child Development
crmda.KU.edu

2. Road Map
• Learn about the different types of missing data
• Learn about ways in which the missing data process can be recovered
• Understand why imputing missing data is not cheating
• Learn why NOT imputing missing data is more likely to lead to errors in generalization!
• Learn about intentionally missing designs
• Introduce a simple method for significance testing
• Discuss imputation with large longitudinal datasets
crmda.KU.edu

3. Key Considerations
• Recoverability
• Is it possible to recover what the sufficient statistics would have been had there been no missing data? (sufficient statistics = means, variances, and covariances)
• Is it possible to recover what the parameter estimates of a model would have been had there been no missing data?
• Bias
• Are the sufficient statistics/parameter estimates systematically different from what they would have been had there not been any missing data?
• Power
• Do we have the same or similar rates of power (1 - Type II error rate) as we would without missing data?
crmda.KU.edu

4. Types of Missing Data
• Missing Completely at Random (MCAR)
• No association with unobserved variables (no selective process) and no association with observed variables
• Missing at Random (MAR)
• No association with unobserved variables, but possibly related to observed variables
• “Random” in the statistical sense: the missingness is predictable from the observed data
• Non-random (Selective) Missing (NMAR, also written MNAR)
• Some association with unobserved variables, and possibly with observed variables as well
crmda.KU.edu
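The MAR case is the one that most often confuses people, so here is a minimal SAS sketch of a MAR process; the variables X and Y, the cutoff of 20, and the probabilities are all hypothetical. Y goes missing more often when the observed X is low, so the missingness is fully predictable from observed data even though it is not MCAR.

data mar_example;
   set complete;   /* hypothetical dataset with X and Y fully observed */
   /* Y is missing with probability .50 when X < 20 and .10 otherwise; */
   /* missingness depends only on the observed X, so Y is MAR          */
   if ranuni(123) < .10 + .40*(X < 20) then Y = .;
run;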

  5. Effects of imputing missing data crmda.KU.edu

  6. Effects of imputing missing data crmda.KU.edu

7. Effects of imputing missing data
Statistical Power: Will always be greater when missing data are imputed!
crmda.KU.edu

8. Modern Missing Data Analysis: MI or FIML
• Rubin proposed Multiple Imputation (MI) in 1978 and developed it more fully in 1987.
• An approach especially well suited for use with large public-use databases.
• MI primarily uses the Expectation Maximization (EM) algorithm and/or the Markov Chain Monte Carlo (MCMC) algorithm.
• Beginning in the 1980s, likelihood approaches were developed:
• Multiple-group SEM
• Full Information Maximum Likelihood (FIML), an approach well suited to more circumscribed models
crmda.KU.edu

9. Full Information Maximum Likelihood
• FIML maximizes the casewise likelihood of the available data, computing an individual mean vector and covariance matrix for every observation.
• Since each observation’s mean vector and covariance matrix is based on its own unique response pattern, there is no need to fill in the missing data.
• Each individual likelihood function is then summed to create a combined likelihood function for the whole data frame.
• Individual likelihood functions with greater amounts of missingness are given less weight in the final combined likelihood function than those with a more complete response pattern, thus controlling for the loss of information.
• Formally, the function that FIML maximizes is, in its standard form,
\log L(\mu,\Sigma) = \sum_{i=1}^{N} \log L_i, \quad \log L_i = K_i - \tfrac{1}{2}\log\lvert\Sigma_i\rvert - \tfrac{1}{2}(x_i-\mu_i)'\,\Sigma_i^{-1}(x_i-\mu_i),
where x_i contains the variables observed for case i, \mu_i and \Sigma_i are the corresponding subvector of \mu and submatrix of \Sigma, and K_i is a constant determined by the number of variables observed for case i.
crmda.KU.edu

10. Multiple Imputation
• Multiple imputation involves generating m imputed datasets (usually between 20 and 100), running the analysis model on each of these datasets, and combining the m sets of results to make inferences.
• By filling in m separate estimates for each missing value we can account for the uncertainty in that datum’s true population value.
• Datasets can be generated in a number of ways, but the two most common approaches are an MCMC simulation technique such as Tanner & Wong’s (1987) Data Augmentation algorithm, or bootstrapping likelihood estimates, such as the bootstrapped EM algorithm used by Amelia II.
• SAS uses data augmentation to pull random draws from a specified posterior distribution (i.e., the stationary distribution of EM estimates).
• After the m datasets have been created and the analysis model has been run on each separately, the resulting estimates are commonly combined with Rubin’s Rules (Rubin, 1987).
crmda.KU.edu
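As a worked reference for the combining step, Rubin’s Rules can be written out. With estimate \hat{Q}_j and squared standard error U_j from imputed dataset j = 1, ..., m:
\bar{Q} = \frac{1}{m}\sum_{j=1}^{m} \hat{Q}_j, \qquad \bar{U} = \frac{1}{m}\sum_{j=1}^{m} U_j, \qquad B = \frac{1}{m-1}\sum_{j=1}^{m} (\hat{Q}_j - \bar{Q})^2, \qquad T = \bar{U} + \Bigl(1 + \frac{1}{m}\Bigr) B.
The pooled estimate is \bar{Q}, and \sqrt{T} is its pooled standard error, combining the within-imputation variance \bar{U} with the between-imputation variance B.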

11. Fraction Missing
• Fraction Missing is a measure of the efficiency lost due to missing data: the extent to which parameter estimates have larger standard errors than they would have had if all data had been observed.
• It is a ratio of variances: the between-imputation variance, B, taken relative to the total parameter variance, T, estimated from the imputed data.
crmda.KU.edu

12. Fraction Missing
• Fraction of Missing Information (asymptotic formula, in its standard form):
\hat{\lambda} = \frac{\bigl(1 + \frac{1}{m}\bigr) B}{T}
• Varies by parameter in the model
• Is typically smaller for MCAR than MAR data
crmda.KU.edu

13. Estimate Missing Data With SAS

Obs  BADL0  BADL1  BADL3  BADL6  MMSE0  MMSE1  MMSE3  MMSE6
  1     65     95     95    100     23     25     25     27
  2     10     10     40     25     25     27     28     27
  3     95    100    100    100     27     29     29     28
  4     90    100    100    100     30     30     27     29
  5     30     80     90    100     23     29     29     30
  6     40     50      .      .     28     27      3      3
  7     40     70    100     95     29     29     30     30
  8     95    100    100    100     28     30     29     30
  9     50     80     75     85     26     29     27     25
 10     55    100    100    100     30     30     30     30
 11     50    100    100    100     30     27     30     24
 12     70     95    100    100     28     28     28     29
 13    100    100    100    100     30     30     30     30
 14     75     90    100    100     30     30     29     30
 15      0      5     10      .      3      3      3      .
crmda.KU.edu

14. PROC MI

proc mi data=sample out=outmi seed=37851 nimpute=100;
   em maxiter=1000;
   mcmc initial=em(maxiter=1000);
   var BADL0 BADL1 BADL3 BADL6 MMSE0 MMSE1 MMSE3 MMSE6;
run;

• out= designates the output file for the imputed data
• nimpute= is the number of imputed datasets (the default is 5)
• var lists the variables to use in the imputation
crmda.KU.edu
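The deck moves straight to the imputed data, but for completeness here is a minimal sketch of the pooling step in SAS; the regression model is hypothetical and stands in for whatever analysis model is of interest, while the BY _Imputation_ / PROC MIANALYZE pattern is the standard workflow.

proc reg data=outmi outest=regparms covout noprint;
   model MMSE6 = MMSE0 BADL0;   /* hypothetical analysis model */
   by _Imputation_;             /* fit the model once per imputed dataset */
run;

proc mianalyze data=regparms;   /* combine the 100 sets of estimates with Rubin's Rules */
   modeleffects Intercept MMSE0 BADL0;
run;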

15. PROC MI output: Imputed dataset

Obs  _Imputation_  BADL0  BADL1  BADL3  BADL6  MMSE0  MMSE1  MMSE3  MMSE6
  1             1     65     95     95    100     23     25     25     27
  2             1     10     10     40     25     25     27     28     27
  3             1     95    100    100    100     27     29     29     28
  4             1     90    100    100    100     30     30     27     29
  5             1     30     80     90    100     23     29     29     30
  6             1     40     50     21     12     28     27      3      3
  7             1     40     70    100     95     29     29     30     30
  8             1     95    100    100    100     28     30     29     30
  9             1     50     80     75     85     26     29     27     25
 10             1     55    100    100    100     30     30     30     30
 11             1     50    100    100    100     30     27     30     24
 12             1     70     95    100    100     28     28     28     29
 13             1    100    100    100    100     30     30     30     30
 14             1     75     90    100    100     30     30     29     30
 15             1      0      5     10      8      3      3      3      2
crmda.KU.edu

16. What to Say to Reviewers:
• “I pity the fool who does not impute” (Mr. T)
• “If you compute you must impute” (Johnnie Cochran)
• “Go forth and impute with impunity” (Todd Little)
• “If math is God’s poetry, then statistics are God’s elegantly reasoned prose” (Bill Bukowski)
crmda.KU.edu

17. Missing Data and Estimation: Missingness by Design
• Assess all persons, but not all variables at each time of measurement (McArdle; Graham)
• Have a core battery for all participants, but divide the sample into groups, each of which receives additional measures
• Control entry into the study, to estimate and control for retesting effects
• Randomly assign participants to their entry into a longitudinal study and to the occasions of assessment
• Likely to be key in providing unbiased estimates of growth or change
crmda.KU.edu

  18. 3-Form Intentionally Missing Design crmda.KU.edu
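The form layout itself was shown as a figure; as a sketch of the classic version (after Graham, Taylor, Olchowski, & Cumsille, 2006, cited on the slides that follow), every participant completes a common block X plus two of three rotating item blocks:

Form   Common block X   Block A    Block B    Block C
1      included         included   included   omitted
2      included         included   omitted    included
3      included         omitted    included   omitted? no: included

Form   Common block X   Block A    Block B    Block C
1      included         included   included   omitted
2      included         included   omitted    included
3      included         omitted    included   included

Because every pair of blocks is observed together on at least one form, all covariances are estimable, and the missingness is missing completely at random by design.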

  19. 3-Form Protocol II crmda.KU.edu

  20. Expansions of 3-Form Design • (Graham, Taylor, Olchowski, & Cumsille, 2006) crmda.KU.edu

  21. Expansions of 3-Form Design • (Graham, Taylor, Olchowski, & Cumsille, 2006) crmda.KU.edu

  22. 2-Method Planned Missing Design crmda.KU.edu

  23. Controlled Enrollment crmda.KU.edu

  24. Growth-Curve Design crmda.KU.edu

  25. Growth Curve Design II crmda.KU.edu

  26. Growth Curve Design II crmda.KU.edu

  27. Efficiency of Planned Missing Designs crmda.KU.edu

  28. Combined Elements crmda.KU.edu

  29. The Sequential Designs crmda.KU.edu

  30. Transforming to Accelerated Longitudinal crmda.KU.edu

  31. Transforming to Episodic Time crmda.KU.edu

32. Simple Significance Testing with MI
• Generate multiply imputed datasets (m).
• Calculate a single covariance matrix on all N*m observations.
• By combining information from all m datasets, this matrix should represent the best estimate of the population associations.
• Run the analysis model on this single covariance matrix and use the resulting estimates as the basis for inference and hypothesis testing.
• The fit function from this approach should be the best basis for making inferences about model fit and significance.
• Using a Monte Carlo simulation, we test the hypothesis that this approach is reasonable.
crmda.KU.edu
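A minimal SAS sketch of the first two steps, assuming the stacked imputations are in the outmi dataset from PROC MI above; PROC CALIS and its one-outcome path model are stand-ins for whatever SEM software and analysis model are actually used, and nobs=100 is a hypothetical original sample size.

proc corr data=outmi cov nocorr noprint outp=supercov(type=cov);
   /* one covariance matrix pooled over all N*m stacked rows */
   var BADL0 BADL1 BADL3 BADL6 MMSE0 MMSE1 MMSE3 MMSE6;
run;

proc calis data=supercov nobs=100;   /* nobs = the original N, not N*m */
   path MMSE6 <- MMSE0 BADL0;        /* hypothetical analysis model */
run;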

33. Population Model
[Path diagram: two latent factors, Factor A and Factor B, with variances fixed at 1* and a factor correlation of .52; indicators A1-A10 and B1-B10 with standardized loadings between .67 and .81 and residual variances between .35 and .55.]
Note: These are fully standardized parameter estimates.
RMSEA = .047, CFI = .967, TLI = .962, SRMR = .021
crmda.KU.edu

34. Change in Chi-squared Test: Correlation Matrix Technique
www.Quant.KU.edu

35. Imputing with Large Datasets
• Create a BLOCK of variables that contains as much information about the dataset as possible and has no missing data (see the sketch following this slide):
• Reduce the data by creating scale averages
• Reduce the data by estimating a set of principal components
• Or use both approaches
• Impute missingness in the block.
• Create product terms with key potential moderators, and powered terms; then reduce the data again.
• This block can serve as the auxiliary-variables block in FIML estimation.
• In a sequential set of steps, impute the item-level data in groups of similar types of items:
• Use the BLOCK of variables in each set of multiple imputations.
• Select the item-level data based on similarity of constructs.
• Use as many items as possible.
• Save, sort, and merge the imputed datasets.
• Use the super-matrix approach to analyze.
crmda.KU.edu
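A minimal SAS sketch of building the no-missing BLOCK; the item names, the scale composition, and the choice of three components are all hypothetical.

data scales;
   set items;                        /* hypothetical item-level dataset */
   /* mean() uses whichever items are present, so the scale scores are */
   /* complete whenever each person answered at least one item         */
   scale1 = mean(of item1-item10);
   scale2 = mean(of item11-item20);
   scale3 = mean(of item21-item30);
   scale4 = mean(of item31-item40);
   scale5 = mean(of item41-item50);
run;

proc princomp data=scales out=block n=3 noprint;
   /* out=block carries forward Prin1-Prin3 as the auxiliary BLOCK */
   var scale1-scale5;
run;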

36. Missing Data Estimation in Longitudinal Research: It’s not Cheating! It’s Essential!
Thanks for your attention! Questions?
Workshop presented 03-29-2011 @ Society for Research in Child Development
crmda.KU.edu

37. Update
Dr. Todd Little is currently at Texas Tech University
Director, Institute for Measurement, Methodology, Analysis and Policy (IMMAP)
Director, “Stats Camp”
Professor, Educational Psychology and Leadership
Email: yhat@ttu.edu
IMMAP (immap.educ.ttu.edu)
Stats Camp (Statscamp.org)
www.Quant.KU.edu
