
Structural equation modeling with Mplus


Presentation Transcript


  1. Structural equation modeling with Mplus E. Kevin Kelloway, Ph.D. Canada Research Chair in Occupational Health Psychology

  2. Overview • Day 1: Familiarization with the Mplus environment – varieties of regression • Day 2: Introduction to SEM – path modeling, CFA and latent variable analysis • Day 3: Advanced techniques – longitudinal data, multilevel SEM, etc.

  3. Today’s Agenda • 0900 – 1000 Introduction: The Mplus Environment • 1000 – 1015 Break • 1015 – 1100 Using Mplus: Regression • 1100 – 1200 Variations on a theme: Categorical, Censored and Count Outcomes • 1200 – 1300 Break • 1300 – 1400 Multilevel models: Some theory • 1400 – 1415 Break • 1415 – 1530 Estimating multilevel models in Mplus

  4. MPLUS • Statistical modeling program that allows for a wide variety of models and estimation techniques • Explicitly designed to “do everything” • Techniques for handling all kinds of data (continuous, categorical, zero-inflated, etc.) • Allows for multilevel and complex data • Allows the integration of all of these techniques

  5. The Mplus Framework • Observed variables: x – background variables (no model structure); y – continuous and censored outcome variables; u – categorical (dichotomous, ordinal, nominal) and count outcome variables • Latent variables: f – continuous variables (interactions among f’s allowed); c – categorical variables (multiple c’s allowed)

  6. Mplus Configurations • BASE MODEL – does regression and most versions of SEM • Mixture – adds mixture analysis (using categorical latent variables) • Multilevel add-on – adds the potential for multilevel analysis • Recommend the Combo Platter

  7. Some Characteristics of Mplus • Batch processor • Text commands (no graphical interface) and keywords • Commands can come in any order in the file • Three main tasks • GET THE DATA into MPLUS and DESCRIBE IT • ESTIMATE THE MODEL of INTEREST • REQUEST THE DESIRED OUTPUT

  8. The Mplus Language • 10 commands • TITLE – provides a title • DATA (required) – describes the dataset • VARIABLE (required) – names/identifies variables • DEFINE – computes/transforms variables • ANALYSIS – technical details of the analysis • MODEL – the model to be estimated • OUTPUT – specifies the output • SAVEDATA – saves the data • PLOT – graphical output • MONTECARLO – Monte Carlo analysis • Comments are denoted by ! and can appear anywhere in the file

  9. Some conventions • “is”, “are” and “=” can generally be used interchangeably • Variable: Names is Bob • Variable: Names = Bob • Variable: Names are Bob • “-” denotes a range • Variable: Names = Bob1-Bob5 • “:” follows each command name • “;” ends each statement

  10. Getting the data into Mplus (1) • Step 1: Move your data into a “.dat” (ASCII) file – SPSS or Excel will do this • Step 2: Create the command file with DATA and VARIABLE statements • Step 3 (optional): I always ask for the sample statistics so that I can check the accuracy of data reading • OPEN and RUN Day1 Example 1.inp

  11. Example 1 • TITLE: This is an example of how to read data into Mplus from an ASCII file • DATA: file is workshop1.dat; • VARIABLE: NAMES are sex age hours location TL PL GHQ Injury; • USEVARIABLES = tl-injury; • OUTPUT: Sampstat; • Exercise: include the demographic variables in the analysis
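Assembled into a single input file, Example 1 looks like this (a sketch built from the commands on the slide):

```
TITLE:    This is an example of how to read data into Mplus
          from an ASCII file
DATA:     FILE IS workshop1.dat;
VARIABLE: NAMES ARE sex age hours location tl pl ghq injury;
          USEVARIABLES = tl-injury;
OUTPUT:   SAMPSTAT;
```

Running it prints the sample statistics, which can be checked against the source data to confirm the file was read correctly.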

  12. Output: Three major divisions • Repeat the input instructions – check to see if proper N, K and number of groups • Describe the analysis – describes the analysis, check for accuracy • Report the results • Fit Statistics • Parameter Estimates • Requested information (sample statistics, standardized parameters etc) • NOTE: Not all output is relevant to your analysis

  13. Getting Data into MPLUS (2) • N2Mplus – freeware program that will read SPSS or Excel files • Will create the data file • Will write the Mplus syntax, which can be pasted into Mplus • Limit of 300 variables • Watch variable name lengths (SPSS allows more characters than does Mplus)

  14. Multiple Regression • General goal: to predict one variable (DV or criterion) from a set of other variables (IVs or predictors) • IVs may be (and usually are) intercorrelated • Minimize squared prediction error (least squares) and thereby maximize R

  15. Bivariate Regression • Correlation is r = ΣZxZy/N • The line of best fit (OLS regression line) is y = mx + b, where • b = Y intercept = mean(Y) – m·mean(X) • and m = slope = r(SDy/SDx)

  16. Multiple Regression • Extension of bivariate regression to the case of multiple predictors • Predictors may be (and usually are) intercorrelated, so we need to partial variance to determine the UNIQUE effect of each X on Y

  17. Regression • To specify a simple linear regression you simply add a MODEL line to the file • MODEL: DV ON IV1 IV2 IV3 … IVX; • You also want to request some specific forms of output to get the “normal” regression information • Useful options are • SAMPSTAT – sample statistics for the variables • STANDARDIZED – standardized parameters • SAVEDATA: SAVE = COOKS MAHALANOBIS; • What predicts GHQ?
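Put together, a regression predicting GHQ might look like the following sketch (the choice of predictors and the diagnostics file name are assumptions, not from the slides):

```
TITLE:    Regression of GHQ on the leadership variables
DATA:     FILE IS workshop1.dat;
VARIABLE: NAMES ARE sex age hours location tl pl ghq injury;
          USEVARIABLES = tl pl ghq;
MODEL:    ghq ON tl pl;              ! DV on IVs
OUTPUT:   SAMPSTAT STANDARDIZED;
SAVEDATA: FILE IS diagnostics.dat;   ! assumed file name
          SAVE = COOKS MAHALANOBIS;  ! outlier/influence statistics
```

The SAVEDATA command writes the case-level outlier statistics to the named file for inspection.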

  18. Categorical Outcomes

  19. LOGISTIC REGRESSION • Typically used with a dichotomous outcome (ordered logistic and probit models also exist) • Similar to regression – generates an overall test of goodness of fit • Generates parameters and tests of parameters • Odds ratios • When the split is 50/50, discriminant analysis and logistic regression should give the same result • When the split varies, logistic is preferred

  20. TESTS • Likelihood-ratio chi-square – baseline-to-model comparisons • Parameter test (B/SE) • Odds ratio – the increase/decrease in the odds of being in one outcome category if the predictor increases by 1 unit (the exponent of B: OR = exp(B))

  21. In Mplus • Specify one outcome as categorical (can be either binary or ordered) • The default estimator (WLSMV) gives you a probit analysis • Changing the estimator to ML (or MLR) gives you a logistic regression • RUN Day1Example3.inp • To dichotomize the outcome (from a multi-category or continuous measure): • define: cut injury (1);
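A minimal logistic-regression input file along these lines (the predictors are assumed, not from the slide):

```
DATA:     FILE IS workshop1.dat;
VARIABLE: NAMES ARE sex age hours location tl pl ghq injury;
          USEVARIABLES = tl pl injury;
          CATEGORICAL = injury;
DEFINE:   CUT injury (1);     ! dichotomize the outcome at 1
ANALYSIS: ESTIMATOR = ML;     ! logit link -> logistic regression
MODEL:    injury ON tl pl;
```

Removing the ANALYSIS line leaves the default estimator and yields a probit analysis instead.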

  22. Count Data

  23. Generic problem – a grossly distorted distribution of, or violated assumptions about, the criterion variable

  24. An Example • Data from a study of metro transit bus drivers (n = 174) • Data on workplace violence (extent to which one has been hit/kicked; attacked with a weapon; had something thrown at you): 1 = not at all, 4 = 3 or more times • Data cleaning suggests a highly skewed and kurtotic distribution • Descriptive statistics for violence (valid N = 170, listwise): Minimum = 1.00, Maximum = 3.00, Mean = 1.2353, SD = .37623, Skewness = 1.900 (SE = .186), Kurtosis = 3.677 (SE = .370)

  25. Scores pile up at 1 (Not at all)

  26. More Estimators • Negative binomial – can be thought of as the number of trials required to observe k successes; appropriate for variables with non-negative integer values. If a data value is non-integer, less than 0, or missing, the corresponding case is not used in the analysis. The fixed value of the negative binomial distribution’s ancillary parameter can be any number greater than or equal to 0; when the ancillary parameter is set to 0, this distribution is equivalent to the Poisson distribution • Normal – appropriate for scale variables whose values take a symmetric, bell-shaped distribution about a central (mean) value; the dependent variable must be numeric • Poisson – can be thought of as the number of occurrences of an event of interest in a fixed period of time; appropriate for variables with non-negative integer values. If a data value is non-integer, less than 0, or missing, the corresponding case is not used in the analysis

  27. Count Data • Data in which only non-negative integers can occur (0, 1, 2, 3, etc.)

  28. Some Observations on Count Data • Counts are discrete, not continuous • Counts are generated by a Poisson distribution (a discrete probability distribution) • Poisson distributions are typically problematic because they: are skewed (by definition non-normal); are non-negative (cannot have negative predicted values); have non-constant variance – variance increases as the mean increases • BUT… Poisson regressions also make some very restrictive assumptions about the data (i.e., the underlying rate of the DV is the same for all individuals in the population, or we have measured every possible influence on the DV)

  29. The Negative Binomial Distribution • Allows for more variance than does the Poisson model (less restrictive assumptions) • Can fit a Poisson model and calculate dispersion (deviance/df); dispersion close to 1 indicates no problem – if there is overdispersion, use the negative binomial • Poisson, but not negative binomial, is available in Mplus

  30. Zero Inflated Models • Zero-Inflated Poisson Regression (ZIP regression) • Zero-Inflated Negative Binomial Regression (ZINB regression) • Assumes two underlying processes: predict whether one scores 0 or not 0, and predict the count for those scoring > 0

  31. Day1 Example4 • Run to obtain a Poisson Regression • Outcome is specified as a count variable • To obtain a ZIP regression run Day1 Example5 • Note that one can specify different models for occurrence and frequency
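The count specification can be sketched as follows (the predictor and file names are hypothetical; dropping the (i) on the COUNT statement gives an ordinary Poisson regression, as in Example4):

```
DATA:     FILE IS drivers.dat;          ! hypothetical file name
VARIABLE: NAMES ARE tenure shift violence;
          USEVARIABLES = tenure shift violence;
          COUNT = violence (i);         ! (i) requests zero inflation (ZIP)
MODEL:    violence ON tenure shift;     ! frequency part: count for those > 0
          violence#1 ON tenure shift;   ! occurrence part: being a structural zero
```

The two MODEL lines are what lets you specify different predictors for occurrence and frequency.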

  32. MultiLevel Models In Mplus

  33. An Example • What is the correlation between X and Y? • Descriptive statistics: x – Mean = 8.0000, SD = 4.42396, N = 15; y – Mean = 8.0000, SD = 4.42396, N = 15 • Pearson correlation r(x, y) = .912 (p < .001, two-tailed; listwise N = 15)

  34. Split Sample by Group • Group 1 r = 0.0 Mean = 3 N=5 • Group 2 r = 0.0 Mean = 8 N=5 • Group 3 r = 0.0 Mean = 13 N=5

  35. Introduction • Multi-level data occur when responses are grouped (nested) within one or more higher-level units • E.g., employees nested within teams/groups • Longitudinal data – observations nested within individuals • Creates a series of problems that may not be accounted for in standard techniques (e.g., regression, SEM, etc.)

  36. Some Problems with MultiLevel Data • Individuals within each group are more alike than individuals from different groups (variance is distorted) – a violation of the assumption of independence • We may want to predict level 1 responses from level 2 characteristics (e.g., does company size predict individual job satisfaction?). If we analyse at the lowest level only, we under-estimate variance and hence standard errors, leading to inflated Type 1 errors – we find effects where they don’t exist • Aggregation to the highest level may distort the variables of interest (or may not be appropriate)

  37. Two Paradoxes • Simpson’s – completely erroneous conclusions may be drawn if grouped data, drawn from heterogeneous populations, are collapsed and analyzed as if drawn from a single population • Ecological – the mistake of assuming that the relationship between variables at the aggregated (higher) level will be the same at the disaggregated (lower) level

  38. What are multi-level models? • Essentially an extension of a regression model • Y = mx + b + error • Multilevel models allow for variation in the regression parameters (intercepts (b) and slopes (m)) across the groups comprising your sample • Also allow us to predict that variation – to ask why groups might vary in intercepts or slopes • Intercept differences imply mean differences across groups • Slope differences indicate different relationships (e.g., correlations) across groups

  39. The Multilevel model • Attempting to explain (partition) variance in the DV • Why don’t we all score the same on a given variable? • The simplest explanation is error – an individual’s score is the grand mean + error • If employees are in groups, then the variance of the level 1 units has at least 2 components – the variance of individuals around the group mean (within-group variance) and the variance of the group means around the grand mean (between-group variance) • This is known as the intercepts-only, “variance components” or “unconditional” model – it is a baseline that incorporates no predictors
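The variance-components model above translates into a two-level input file like this sketch (the cluster and outcome names are hypothetical); Mplus reports the ICC for it automatically:

```
TITLE:    Unconditional (variance components) model
DATA:     FILE IS teams.dat;    ! hypothetical file name
VARIABLE: NAMES ARE team satis;
          USEVARIABLES = satis;
          CLUSTER = team;       ! level 2 grouping variable
ANALYSIS: TYPE = TWOLEVEL;
MODEL:    %WITHIN%
          satis;                ! within-group variance
          %BETWEEN%
          satis;                ! between-group variance
```

Predictors can then be added under %WITHIN% (level 1) or %BETWEEN% (level 2) to build up the conditional models that follow.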

  40. The Multilevel model (cont’d) • Can introduce predictors either at level 1 or level 2 or both to further explain variance • Can allow the effects of level 1 predictors to vary across groups (random slopes) • Can examine interactions within and across levels • Can incorporate quadratic terms etc

  41. Techy Stuff – Getting the Data in Shape

  42. File Handling: Aggregation • To create level 2 observations we often need to aggregate variables to the higher level and merge the aggregated data with our level 1 data. To aggregate you need to specify [a] the variables to be aggregated, [b] the method of aggregation (sum, mean, etc.) and [c] the break variable (the definition of level 2) • SPSS allows you to aggregate and save group-level data to the current file using the aggregate command • Mplus allows you to do this within the Mplus run
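One way to do the within-run aggregation is the CLUSTER_MEAN function in DEFINE (available in recent Mplus versions; the variable names here are hypothetical):

```
VARIABLE: NAMES ARE team tl satis;
          USEVARIABLES = satis tl tl_grp;  ! variables created in DEFINE go at the end
          CLUSTER = team;
DEFINE:   tl_grp = CLUSTER_MEAN (tl);      ! group mean of tl, attached to each level 1 case
```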

  43. Notes on Aggregation • If you choose to aggregate, then there should be some empirical support (i.e., evidence of similar responses within groups). Some typical measures are: • ICC – the intraclass correlation: the extent to which variance is attributable to group differences. From ANOVA: (MSb – MSw) / (MSb + (c – 1)MSw), where c = average group size • ICC(2) – the reliability of the group means: (MSb – MSw) / MSb • Rwg (multiple variants) – indices of agreement • MPLUS calculates the ICC for random intercept models

  44. Centering Predictors • Centering a variable helps us to interpret the effect of predictors. In the simplest sense, centering involves subtracting the mean from each score (resulting in a distribution of deviation scores that has a mean of 0) • Centering (among other things) helps with convergence by imposing a common scale • GRAND MEAN centering – involves subtracting the sample mean from each score • GROUP MEAN centering – involves subtracting the group mean from each score – must be done manually
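Grand-mean centering can be requested directly in DEFINE; recent Mplus versions also offer a GROUPMEAN option, though older versions required doing it by hand as the slide notes (a sketch; the variable names are assumed):

```
DEFINE:   CENTER tl pl (GRANDMEAN);   ! subtract the overall sample mean
!         CENTER tl pl (GROUPMEAN);   ! or subtract the cluster mean (requires CLUSTER)
```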

  45. Centering (cont’d) • Grand mean – each score is measured as a deviation from the grand mean. The intercept is the score of an individual who is at the mean of all predictors – “the average person” • Group mean – each score is measured as a deviation from the group mean. The intercept is the score of an individual who is at the mean of all predictors in the group – “the average person in group X” • Grand mean is the same transformation for all cases – for fixed main effects and overall fit it will give the same results as raw data • Group mean is different for each group – different results

  46. Centering (cont’d) • Grand mean – helps model fitting, aids interpretation (meaningful 0), may reduce collinearity when testing interactions, between model parameters, or squared effects – but may reduce meaning if raw scores actually “mean something” • Group mean – helps model fitting; can remove collinearity if you are including both group (aggregate) and individual measures of the same construct in the model (the aggregate data explain between-group variance and the individual-level data explain within-group variance)

  47. A general recommendation • Grand mean – may be appropriate when the underlying model is either incremental (group effects add to individual-level effects) or mediational (group effects exert influence through the individual) • Group mean – may be more appropriate when testing cross-level interactions • Hofmann & Gavin (1998) – Journal of Management

  48. Power and Sample Size • How many subjects? = how long is a piece of string? • Calculations are complex and depend on intraclass correlations, sample size, effect size, etc. • In general, power at level 1 increases with the number of observations, and at level 2 with the number of groups • Hox (2002) recommends 30 observations in each of 30 groups; Heck & Thomas (2000) suggested 20 groups with 30 observations in each • Others suggest that even k = 50 is too small • Practical constraints likely rule • Better to have a large number of groups with fewer individuals in each group than a small number of groups with large group sizes

  49. Convergence • Occasionally (about 50% of the time) the program will not converge on a solution and will report a partial solution (i.e., not all parameters) • In my experience, lack of convergence is a direct function of sample size (small samples = convergence failures) • The easiest fix is to ensure that this is not a scaling issue – i.e., that all variables are measured on roughly the same metric (standardize) • The single most frustrating aspect of multi-level models

  50. A plan of analysis
