
Fun With Structural Equation Modelling in Psychological Research


Presentation Transcript


  1. Fun With Structural Equation Modelling in Psychological Research. Jeremy Miles, IBS, Derby University

  2. Structural Equation Modelling • Analysis of Moment Structures • Covariance Structure Analysis • Analysis of Linear Structural Relationships (LISREL) • Covariance Structure Models • Path Analysis

  3. Normal Statistics • Modelling process • What is the best model to describe a set of data? • Mean, SD, median, correlation, factor structure, t-value • (Diagram: Data → Model)

  4. SEM • Modelling process • Could this model have led to the data that I have? • (Diagram: Model → Data)

  5. Theory-driven process • Theory is specified as a model • Alternative theories can be tested • Specified as models • (Diagram: Theory A and Theory B both confronted with the Data)

  6. Ooohh, SEM Is Hard • It was. Now it's not • Jöreskog and Sörbom developed LISREL • Matrices: Λx, Θδ, Λy, Θε, Ψ, Φ, Β, Γ • Variables: X, Y, η, ξ, ζ • Intercepts: τ, κ

  7. The Joy of Path Diagrams • (Diagram: the symbols for a Variable, a Causal Arrow and a Correlational Arrow)

  8. Doing “Normal” Statistics: Correlation (diagram: x ↔ y)

  9. Doing “Normal” Statistics: t-Test (diagram: x → y)

  10. Doing “Normal” Statistics: One-way ANOVA, dummy coding (diagram: x1, x2, x3 → y)

  11. Doing “Normal” Statistics: Two-way ANOVA, dummy coding (diagram: x1, x2, x1*x2 → y)

  12. Doing “Normal” Statistics: Regression (diagram: several x predictors → y)

  13. Doing “Normal” Statistics: MANOVA (diagram: x1, x2 → y1, y2, y3)

  14. Doing “Normal” Statistics: ANCOVA (diagram: x and z → y)

  15. etc . . .
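All of these diagrams are the same underlying linear model drawn in different ways. As a quick illustration (a sketch of mine, not part of the original slides), the snippet below shows that an independent-samples t-test and a regression on a dummy-coded group variable produce identical test statistics:

```python
# Sketch: a t-test is a regression on a dummy-coded group variable,
# which is why all of these "normal" statistics can be drawn as path diagrams.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group = np.repeat([0, 1], 50)            # dummy-coded x: 0 = control, 1 = treatment
y = 0.5 * group + rng.normal(size=100)   # outcome with a true group difference

# Independent-samples t-test
t, p_t = stats.ttest_ind(y[group == 1], y[group == 0])

# Simple regression y = a + b*group (the path x -> y)
res = stats.linregress(group, y)

print(t, res.slope / res.stderr)   # identical t statistics
print(p_t, res.pvalue)             # identical p values
```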

  16. Identification • Often thought of as being a very sticky issue • Is a fairly sticky issue • The extent to which we are able to estimate everything we want to estimate

  17. x = 4 Unknown: x

  18. x = 4 y = 7 Unknown: x, y

  19. x + y = 4 x - y = 1 Unknown: x, y

  20. x + y = 4 Unknown: x, y

  21. Things We Know = Things We Want to Know • x = 4 • x + y = 4, x - y = 2 • Just identified • Can never be wrong • “Normal” statistics are just identified

  22. Things We Know < Things We Want to Know • x + y = 7 • Not identified • Can never be solved

  23. Things We Know > Things We Want to Know • x + y = 4, x - y = 2, 2x - y = 3 • Over-identified • Can be wrong • SEM models are over-identified
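A rough numerical sketch of the three cases, using nothing beyond the equations on these slides: numpy's least-squares solver finds the unique solution when the system is just identified, picks one of infinitely many when it is under-identified, and leaves a non-zero residual when it is over-identified.

```python
import numpy as np

def solve(A, b, label):
    # lstsq returns the least-squares solution; the residual array is empty
    # unless the system is over-determined with full rank
    x, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    print(label, "solution:", x, "residual:", residual)

# Just identified: x + y = 4, x - y = 2 -> unique solution (3, 1), can't be wrong
solve(np.array([[1.0, 1.0], [1.0, -1.0]]), np.array([4.0, 2.0]), "just:")

# Not identified: x + y = 7 -> infinitely many solutions, can never be solved
solve(np.array([[1.0, 1.0]]), np.array([7.0]), "under:")

# Over-identified: x + y = 4, x - y = 2, 2x - y = 3 -> no exact solution;
# the non-zero residual is the "wrongness" that SEM turns into a test statistic
solve(np.array([[1.0, 1.0], [1.0, -1.0], [2.0, -1.0]]),
      np.array([4.0, 2.0, 3.0]), "over:")
```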

  24. Identification • We have information • (Correlations, means, variances) • “Normal” statistics • Use all of the information to estimate the parameters of the model • Just identified • All parameters estimated • Model cannot be wrong

  25. Over-identification • SEM • Over-identified • The model can be wrong • If a model is a theory • Enables the testing of theories

  26. Parameter Identification • x - 2 = y, x + 2 = y • Should be identified according to our previous rules • It's not, though • There is model identification • There is not parameter identification

  27. Sampling Variation and χ² • Equations and numbers • Easy to determine if it's correct • Sample data may vary from the model • Even if the model is correct in the population • Use the χ² test to measure the difference between the data and the model • Some difference is OK • Too much difference is not OK

  28. Simple Over-identification • Estimate 1 parameter: just-identified (diagram: x ↔ y, correlation free) • Estimate 0 parameters: over-identified (diagram: x ↔ y, correlation fixed)

  29. Example 1 • r(a,b) = 0.3, N = 100 • Estimate = 0.3, SE = 0.105, C.R. = 2.859 • The correlation is significantly different from 0 (diagram: a ↔ b)

  30. (Diagram: a ↔ b) • Model • Tests the hypothesis that the correlation in the population is equal to zero • The sample value will never be exactly zero, because of sampling variation • The χ² tells us if the variation is significantly different from zero

  31. Example 2 • Test the model • Force the value to be zero • Input parameters = 1 • Parameters estimated = 0 • The model is now over-identified and can therefore be wrong (diagram: a ↔ b, correlation fixed to 0)

  32. The program gives a χ² statistic • The significance of the difference between the data and the model • Distributed with df = known parameters - estimated parameters • χ² = 9.337, df = 1 - 0 = 1, p = 0.002 • So what? A correlation of 0.3 is significant?
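A hedged reconstruction of where that number comes from: assuming the standard ML fit function F = ln|Σ| + tr(SΣ⁻¹) − ln|S| − p scaled by (N − 1), as LISREL/AMOS-style programs use, the slide's χ² of 9.337 falls straight out.

```python
# Sketch (not the author's code): chi-square for the model that fixes the
# correlation to zero, using the standard ML discrepancy function.
import numpy as np
from scipy import stats

def ml_chisq(S, Sigma, N):
    p = S.shape[0]
    F = (np.log(np.linalg.det(Sigma))
         + np.trace(S @ np.linalg.inv(Sigma))
         - np.log(np.linalg.det(S)) - p)
    return (N - 1) * F

S = np.array([[1.0, 0.3], [0.3, 1.0]])   # observed correlations, r = .3
Sigma0 = np.eye(2)                       # model: correlation fixed to 0

chi2 = ml_chisq(S, Sigma0, N=100)
df = 1                                    # 1 known correlation - 0 free parameters
print(chi2, stats.chi2.sf(chi2, df))      # 9.337, p ~= 0.002
```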

  33. Hardly a Revelation • No. We have tested a correlation for significance, something which is much more easily done in other ways • But • We have introduced a very flexible technique • Can be used in a range of other ways

  34. Testing Other Than Zero (Example 3) • Estimated parameters are usually tested against zero • Reasonable? • Model testing allows us to test against other values • Fix the a ↔ b correlation to 0.15: χ² = 2.3, n.s.
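The same machinery works with the correlation fixed at 0.15 instead of 0. A sketch is below; the slide's χ² of 2.3 may reflect a slightly different fit function or sample-size convention, so treat the exact value printed here as illustrative.

```python
import numpy as np
from scipy import stats

def ml_chisq(S, Sigma, N):
    p = S.shape[0]
    F = (np.log(np.linalg.det(Sigma))
         + np.trace(S @ np.linalg.inv(Sigma))
         - np.log(np.linalg.det(S)) - p)
    return (N - 1) * F

S = np.array([[1.0, 0.3], [0.3, 1.0]])
Sigma15 = np.array([[1.0, 0.15], [0.15, 1.0]])   # model: correlation fixed at .15

chi2 = ml_chisq(S, Sigma15, N=100)
print(chi2, stats.chi2.sf(chi2, df=1))           # ~2.5, n.s. at alpha = .05
```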

  35. Example 4: Comparing correlations • 4 variables • mothers' sensitivity • mothers' parental bonding • fathers' sensitivity • fathers' parental bonding • Does the correlation differ between mothers and fathers?

  36. (Path diagram: correlations among MS, MPB, FS and FPB; the values shown are 0.1, 0.2, 0.3, 0.5, 0.2 and 0.1)

  37. Example 4a • Analyse with all parameters free • 0 df, so the model cannot be wrong • Example 4b • Fix the FS-FPB and MS-MPB correlations to be equal • See if that model can account for the data (a sketch follows the next slide)

  38. (Path diagram: the MS-MPB and FS-FPB correlations both given the same label, “dave”) • χ² = 1.82, df = 1, p = 0.177 • dave = 0.41 (s.e. = 0.08)
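A heavily hedged sketch of Example 4b: estimate one common value ("dave") for the MS-MPB and FS-FPB correlations by minimising the ML fit function. The sample correlations and N below are placeholders, since the diagram on slide 36 did not survive transcription; trust the logic of the equality constraint, not the numbers.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ml_fit(S, Sigma):
    p = S.shape[0]
    return (np.log(np.linalg.det(Sigma))
            + np.trace(S @ np.linalg.inv(Sigma))
            - np.log(np.linalg.det(S)) - p)

# ASSUMED sample matrix, order: MS, MPB, FS, FPB (values are placeholders)
S = np.array([[1.0, 0.5, 0.2, 0.1],
              [0.5, 1.0, 0.1, 0.2],
              [0.2, 0.1, 1.0, 0.3],
              [0.1, 0.2, 0.3, 1.0]])
N = 100   # assumed; the slide does not give N

def constrained_chisq(dave):
    # Hold the other correlations at their sample values (an approximation
    # to leaving them free) and force the two focal correlations equal.
    Sigma = S.copy()
    Sigma[0, 1] = Sigma[1, 0] = dave   # MS-MPB forced equal...
    Sigma[2, 3] = Sigma[3, 2] = dave   # ...to FS-FPB
    return (N - 1) * ml_fit(S, Sigma)

res = minimize_scalar(constrained_chisq, bounds=(-0.99, 0.99), method="bounded")
print(res.x, res.fun)   # common estimate ("dave") and chi-square on df = 1
```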

  39. Latent Variables • The true power of SEM comes from latent variable modelling • Variables in psychology are rarely (never?) measured directly • the effects of the variable are measured • Intelligence, self-esteem, depression • Reaction time, diagnostic skill

  40. Measuring a Latent Variable (diagram: Latent → Measured) • Latent variables are drawn as ellipses • Hypothesised causal relationship with measured variables • A measured variable has two causes • the latent variable • “other stuff”: random error

  41. x = t + e (diagram: True Score → Measured ← Error) • Reliability is the proportion of variance in x accounted for by the true score • Its square root is the correlation between x and t
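A quick simulation (mine, not from the slides) makes the relationship concrete:

```python
# Simulate x = t + e with equal true-score and error variances:
# reliability should come out at ~0.5, and corr(x, t)^2 should match it.
import numpy as np

rng = np.random.default_rng(1)
t = rng.normal(0, 1, 100_000)   # true scores, variance 1
e = rng.normal(0, 1, 100_000)   # random error, variance 1
x = t + e                        # measured = true score + error

reliability = np.var(t) / np.var(x)                 # ~0.5
print(reliability, np.corrcoef(x, t)[0, 1] ** 2)    # corr(x, t)^2 ~= reliability
```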

  42. Identification and Latent Variables • 1 measured variable • not (even close to) identified • 4 measured variables • 6 known, 4 estimated • model is identified

  43. Need four measured variables to identify the model • Need to identify the variance of the latent variable • fix to 1
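The counting argument from slides 42-43 in a few lines of Python, using the slides' conventions (off-diagonal correlations as the knowns, latent variance fixed to 1, one loading per indicator):

```python
def factor_df(k: int) -> int:
    known = k * (k - 1) // 2   # off-diagonal correlations
    estimated = k              # one loading per indicator
    return known - estimated

for k in (1, 2, 3, 4):
    print(k, factor_df(k))
# 1 -> -1 (not even close to identified); 4 -> 6 known, 4 estimated, df = 2,
# so the model has spare information and can be tested
```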

  44. Why oh why oh why? • Why bother with all these tricky latent variables? • 2 reasons • unidimensional scale construction • attenuation correction

  45. Unidimensionality • Correlation matrix:
  1.00
  0.68 1.00
  0.73 0.63 1.00
  0.68 0.63 0.69 1.00
  • χ² = 3.65, df = 2, p = 0.16
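A sketch of fitting the one-factor model to this matrix by maximum likelihood with scipy. The slide does not give N, so the χ² printed here is illustrative rather than a reproduction of 3.65 (the df = 6 correlations − 4 loadings = 2 either way).

```python
import numpy as np
from scipy.optimize import minimize

S = np.array([[1.00, 0.68, 0.73, 0.68],
              [0.68, 1.00, 0.63, 0.63],
              [0.73, 0.63, 1.00, 0.69],
              [0.68, 0.63, 0.69, 1.00]])
N = 100   # assumed for illustration; not given on the slide

def ml_fit(lam):
    # Standardized one-factor model: Sigma = lam lam' + diag(1 - lam^2)
    Sigma = np.outer(lam, lam)
    np.fill_diagonal(Sigma, 1.0)
    det = np.linalg.det(Sigma)
    if det <= 0:
        return np.inf          # keep the search inside the admissible region
    return (np.log(det) + np.trace(S @ np.linalg.inv(Sigma))
            - np.log(np.linalg.det(S)) - 4)

res = minimize(ml_fit, x0=np.full(4, 0.8), method="Nelder-Mead")
print(res.x)                 # estimated loadings
print((N - 1) * res.fun)     # chi-square on df = 2
```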

  46. Attenuation Correction • Why bother? • Gets an accurate measure of the correlation between true scores • Why not bother? • theories in psychology are ordinal • attenuation can only lower relationships, never reverse them
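The classical version of this is Spearman's correction for attenuation (a standard formula, not taken from the slide): divide the observed correlation by the square root of the product of the two reliabilities.

```python
def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    # Spearman's correction: estimated true-score correlation
    return r_xy / (rel_x * rel_y) ** 0.5

# e.g. an observed r of .30 with reliabilities of .7 and .8 implies a
# true-score correlation of about .40
print(disattenuate(0.30, 0.7, 0.8))   # ~0.401
```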

  47. The Multivariate Case • Much more complex and unpredictable • (Path diagram: x1 and x2 → y1 and y2, with paths labelled a-e)

  48. Some More Models • Multiple Trait Multiple Method Models (MTMM) • Temporal Stability • Multiple Indicator Multiple Cause (MIMIC)

  49. MTMM • Multiple Trait: more than one trait measured • Multiple Method: each measured using more than one technique • Variance in a measured score comes from true score, random error variance, and systematic error variance associated with the shared methods

  50. What? • Example 6 (From Wothke, 1996) • Three traits • Getting along with others (G) • Dedication (D) • Apply learning (L) • Three methods • Peer nomination (PN) • Peer Checklist (PC) • Supervisor ratings (SC)
