Key Advances in Data Analysis Techniques: Insights by Michael Babyak, PhD
Michael Babyak, PhD, explores critical advancements in data analysis, offering insights into flawed techniques and innovative models. Key areas include the treatment of missing data, validation methodologies, and new approaches like generalized linear models and structural equation modeling. Babyak emphasizes the importance of rigorous statistical practices, such as multiple imputation for missing data and robust variable selection methods. His work highlights the significance of considering underlying constructs and improving reliability in measurements, ultimately enhancing research quality.
Presentation Transcript
Some key developments in data analysis
Michael Babyak, PhD
Areas of development
• Discarding flawed techniques
• New types of models
• Treatment of missing data
• Simulation and empirical tests
• Validation
Techniques largely discredited or highly suspect
• Categorization of continuous variables without good reason
• Automated variable selection without validation
• Overfitted or “cherry-picked” models
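The cost of needlessly categorizing a continuous variable can be seen in a small simulation. The sketch below (pure-stdlib Python; the data, sample size, and the 0.4 slope are illustrative choices, not Babyak's own example) generates a continuous predictor, dichotomizes it at the median, and compares the resulting correlations with the outcome — the split version reliably captures less of the association.

```python
import random
import statistics

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

random.seed(1)
n = 5000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.4 * xi + random.gauss(0, 1) for xi in x]  # true linear relation

# dichotomize the predictor at its median ("high" vs "low")
median_x = statistics.median(x)
x_split = [1.0 if xi > median_x else 0.0 for xi in x]

r_full = corr(x, y)        # correlation using the continuous predictor
r_split = corr(x_split, y)  # correlation after the median split
```

With normally distributed data a median split shrinks the observed correlation by roughly a fifth, which is equivalent to throwing away a substantial fraction of the sample.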
New types of models
• Regression family
• Clustered data
• Factor analysis family
The Generalized Linear Model family (diagram summarized)
• Normal outcome: general linear model / linear regression (ANOVA/t-test, ANCOVA)
• Binary/binomial outcome: logistic regression (in place of the older transformed chi-square approaches)
• Count, heavy skew, or lots of zeros: Poisson, zero-inflated Poisson (ZIP), negative binomial, gamma
• These models can also be applied to clustered data (e.g., repeated measures)
The factor analytic family (diagram summarized)
• Latent variables (common factor analysis)
• Principal components
• Multiple regression
• Structural equation models
• Partial least squares
You Use Latent Variables Every Day
• A single measurement is an indicator of an underlying phenomenon, e.g., mercury rising in a sphygmomanometer measures the underlying construct of “blood pressure.”
• How do you improve the reliability of blood pressure measurement? Measure more than once, perhaps even in different settings (e.g., ambulatory monitoring).
• A psychometric scale is likewise a collection of indicators of an underlying process, attempting to triangulate on an underlying construct through multiple items (indicators).
• A latent variable is a collection of indicators with the unshared/unreliable part of the indicators removed. So what’s the problem?
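The blood-pressure point — that averaging repeated indicators strips away unshared measurement error — can be demonstrated directly. In this toy sketch (my numbers: true BP ~ N(120, 10) with measurement error of SD 10; these are illustrative assumptions, not values from the talk), the mean of five readings tracks the underlying construct much more closely than any single reading does.

```python
import random
import statistics

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

random.seed(2)
n = 2000
true_bp = [random.gauss(120, 10) for _ in range(n)]  # the latent construct

def reading(t):
    # one clinic reading = true value plus independent measurement error
    return t + random.gauss(0, 10)

one_reading = [reading(t) for t in true_bp]
avg_of_five = [statistics.fmean([reading(t) for _ in range(5)]) for t in true_bp]

r1 = corr(one_reading, true_bp)  # validity of a single noisy reading
r5 = corr(avg_of_five, true_bp)  # averaging removes unshared error
```

This is the Spearman-Brown idea in miniature: a latent variable built from several indicators behaves like the five-reading average, correlating with the construct far better than any one indicator.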
Missing Data
• Imputation or related approaches are almost ALWAYS better than deleting incomplete cases
• Multiple Imputation
• Full Information Maximum Likelihood
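The mechanics of multiple imputation — fill in the missing values several times, analyze each completed dataset, then pool with Rubin's rules — can be sketched with a deliberately simple hot-deck filler (real multiple imputation draws from a model of the data; the random-donor fill here is a stand-in to keep the sketch stdlib-only, and all the numbers are invented).

```python
import random
import statistics

random.seed(3)
# a variable with ~30% of values missing completely at random
data = [random.gauss(50, 8) if random.random() > 0.3 else None for _ in range(200)]
observed = [v for v in data if v is not None]

m = 20  # number of imputed datasets
means, within_vars = [], []
for _ in range(m):
    # hot-deck stand-in: fill each hole with a randomly chosen observed value
    filled = [v if v is not None else random.choice(observed) for v in data]
    means.append(statistics.fmean(filled))
    within_vars.append(statistics.variance(filled) / len(filled))

# Rubin's rules: pool the estimates and combine the two sources of variance
pooled_mean = statistics.fmean(means)
within = statistics.fmean(within_vars)          # average sampling variance
between = statistics.variance(means)            # variance across imputations
total_var = within + (1 + 1 / m) * between      # total variance of pooled mean
```

The key feature is the `between` term: because each imputed dataset differs, the pooled variance honestly reflects the uncertainty added by the missing data, which single imputation (and mean substitution) understates.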
Approaches that grew out of missing-data work
• Propensity scoring: “matches” individuals on multiple dimensions to improve “baseline balance”
• Complier Average Causal Effect (CACE): generates an estimate of the effect of a treatment among all potential compliers, including those in the control arm
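A minimal sketch of why propensity-based matching helps: when treated subjects start out different from controls, the raw group difference is confounded, but matching each treated subject to the most similar control recovers something close to the true effect. To stay stdlib-only, this toy example matches directly on the single confounder `x` rather than on an estimated propensity score; the true treatment effect of 1.0 and all other numbers are my own illustrative choices.

```python
import math
import random

random.seed(4)
n = 3000
units = []
for _ in range(n):
    x = random.gauss(0, 1)
    p = 1 / (1 + math.exp(-x))   # true propensity: higher x -> more likely treated
    t = 1 if random.random() < p else 0
    y = 1.0 * t + 2.0 * x + random.gauss(0, 1)  # true treatment effect is 1.0
    units.append((x, t, y))

treated = [(x, y) for x, t, y in units if t == 1]
control = [(x, y) for x, t, y in units if t == 0]

# naive comparison: badly biased because treated units have higher x
naive = (sum(y for _, y in treated) / len(treated)
         - sum(y for _, y in control) / len(control))

# match each treated unit to the control nearest on the confounder
matched_diffs = []
for x_t, y_t in treated:
    x_c, y_c = min(control, key=lambda c: abs(c[0] - x_t))
    matched_diffs.append(y_t - y_c)
matched = sum(matched_diffs) / len(matched_diffs)
```

In practice the matching variable is a propensity score estimated from many covariates at once (e.g., by logistic regression), which is exactly what makes the approach useful when subjects differ on multiple dimensions.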
Simulation example (schematic summarized)
• Generate data from a known model, Y = 0.4X + error
• Draw repeated samples and estimate the coefficient in each (b1, b2, …, bk)
• Evaluate the resulting distribution of estimates
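The simulation schematic can be run directly: generate many samples from the known model Y = 0.4X + error, fit the slope in each, and inspect the collection of estimates b1…bk (sample sizes and the number of replications below are my own choices).

```python
import random
import statistics

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(5)
k = 500   # number of simulated samples
n = 200   # observations per sample
bs = []
for _ in range(k):
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [0.4 * xi + random.gauss(0, 1) for xi in x]  # the known truth
    bs.append(slope(x, y))

mean_b = statistics.fmean(bs)   # should sit near the true 0.4
sd_b = statistics.stdev(bs)     # empirical sampling variability
```

This is the "evaluate" step of the schematic: because the truth (0.4) is known, the distribution of the b's shows directly whether an estimator is unbiased and how variable it is — the same logic used to empirically test analytic techniques.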
Validation
• Split-half: better than nothing, but often too conservative
• Bootstrap
• Repeated splitting
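One reason the bootstrap is preferred over a single split-half is that it uses all the data while still correcting for overfitting, via Efron's optimism estimate: refit the model on each bootstrap resample and measure how much better it looks on its own resample than on the original data. A stdlib-only sketch on a simple regression (the data-generating model and sizes are illustrative assumptions):

```python
import random
import statistics

def fit(xs, ys):
    """OLS intercept and slope of y on x."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def r2(a, b, xs, ys):
    """R-squared of the line a + b*x evaluated on (xs, ys)."""
    my = statistics.fmean(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

random.seed(6)
n = 60
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.4 * xi + random.gauss(0, 1) for xi in x]

a, b = fit(x, y)
apparent = r2(a, b, x, y)   # in-sample fit, flattered by overfitting

B = 200
optimism = 0.0
for _ in range(B):
    idx = [random.randrange(n) for _ in range(n)]   # resample with replacement
    xb, yb = [x[i] for i in idx], [y[i] for i in idx]
    ab, bb = fit(xb, yb)
    # how much better the refit model does on its own resample than on the data
    optimism += r2(ab, bb, xb, yb) - r2(ab, bb, x, y)
optimism /= B

corrected = apparent - optimism   # optimism-corrected estimate of fit
```

Unlike a split-half, every observation contributes to both fitting and validation, which is why the bootstrap estimate is typically more precise for the same sample.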
Some Premises
• “Statistics” is a cumulative, evolving field
• Newer is not necessarily better, but should be entertained as regards the scientific question at hand
• Keeping up is hard to do
• There’s no substitute for thinking about the problem
• http://www.duke.edu/~mababyak
• michael.babyak@duke.edu
• http://symptomresearch.nih.gov/chapter_8/