Discriminant Function Analysis



  1. Discriminant Function Analysis: Mechanics

  2. Equations • To get our results we'll have to use the same SSCP (sums of squares and cross-products) matrices as we did with MANOVA

  3. Equations • The diagonal entries of these matrices are the sums of squared deviations about the means for each variable, while the off-diagonal entries contain the cross-products of those deviations for the pairs of variables involved
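
As a concrete illustration, here is a minimal NumPy sketch of building those SSCP matrices from raw data (the function and variable names are my own, not from the presentation):

    import numpy as np

    def sscp_matrices(X, groups):
        """Within-groups (W) and between-groups (B) SSCP matrices.

        X      : (n, p) array of scores on the p DVs
        groups : (n,) array of group labels
        """
        grand_mean = X.mean(axis=0)
        p = X.shape[1]
        W, B = np.zeros((p, p)), np.zeros((p, p))
        for g in np.unique(groups):
            Xg = X[groups == g]
            dev = Xg - Xg.mean(axis=0)            # deviations about the group means
            W += dev.T @ dev                      # squared deviations and cross-products
            diff = (Xg.mean(axis=0) - grand_mean).reshape(-1, 1)
            B += len(Xg) * (diff @ diff.T)        # between-groups piece, weighted by group size
        return W, B

The diagonal of W then holds the pooled sums of squared deviations about the group means, and its off-diagonals hold the corresponding cross-products, exactly as described above.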

  4. The eigenvalues and eigenvectors will again be found for the BW⁻¹ matrix, as in MANOVA • We will use the eigenvectors (vᵢ) to arrive at our eventual coefficients for the linear combination of DVs • The discriminant score for a given case represents the position of that case along the continuum (axis) defined by that function • In principle these new axes (dimensions, functions) could sit anywhere in the original variable space, but here they will have an origin coinciding with the grand centroid (the point where the means of all the DVs meet)
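
Continuing that sketch, the eigendecomposition might look as follows. The slides write the product as BW⁻¹; W⁻¹B has the same eigenvalues, and its eigenvectors give the raw weights vᵢ directly, so that is the form solved here:

    # Continuing from sscp_matrices() above; X and groups are assumed data.
    W, B = sscp_matrices(X, groups)
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(W) @ B)

    # W^-1 B is not symmetric, so eig() can return a complex dtype whose
    # imaginary parts are numerically zero; keep the real parts and sort
    # the functions by eigenvalue, largest first.
    order = np.argsort(eigvals.real)[::-1]
    eigvals = eigvals.real[order]
    v = eigvecs.real[:, order]    # columns are the raw coefficients v_i

Only the first min(p, k − 1) columns correspond to actual discriminant functions (k being the number of groups); the remaining eigenvalues are zero.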

  5. Equations • Our original equation, here in standardized form: a standardized discriminant function score (D) equals the sum of each standardized score (z) times its standardized discriminant function coefficient (dᵢ): D = d₁z₁ + d₂z₂ + … + dₚzₚ

  6. Note that we can label our coefficients in the following fashion:
  • Raw (vᵢ): taken directly from the eigenvectors; not really interpretable as coefficients, since they have no intrinsic meaning as far as scale is concerned
  • Unstandardized (uᵢ): scaled so that the resulting discriminant scores are in standard-score form (mean = 0, within-groups variance = 1); a discriminant score then represents distance, in standard deviation units, from the grand centroid
  • Standardized (dᵢ): the uᵢ for standardized data; these allow a determination of the relative importance of each predictor (see the sketch below)
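
A rough sketch of how the three forms relate, continuing the NumPy example above (the scaling convention, chosen so the scores have within-groups variance 1, is my assumption of the standard approach, not something the slides spell out):

    n, k = len(X), len(np.unique(groups))
    S_w = W / (n - k)                    # pooled within-groups covariance matrix

    # Unstandardized u_i: rescale each raw eigenvector so that u' S_w u = 1,
    # i.e. the discriminant scores have within-groups variance 1.
    u = v / np.sqrt(np.diag(v.T @ S_w @ v))

    # Scores: centering at the grand mean puts the origin at the grand
    # centroid, so each score is a distance from it in SD units.
    scores = (X - X.mean(axis=0)) @ u

    # Standardized d_i: weight u_i by each predictor's pooled within-groups
    # SD so relative importance can be compared across predictors.
    d = u * np.sqrt(np.diag(S_w)).reshape(-1, 1)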

  7. Classification • The classification score for group j is found by multiplying a case's raw score on each predictor (xᵢ) by its associated classification function coefficient (cⱼᵢ), summing over all predictors, and adding a constant, cⱼ₀: Cⱼ = cⱼ₀ + Σᵢ cⱼᵢxᵢ

  8. Equations • The coefficients are found by taking the inverse of the within-groups variance-covariance matrix W (just our usual within-groups SSCP matrix divided by the within-groups df, N − k) and multiplying it by the column vector of predictor means for group j: Cⱼ = W⁻¹Mⱼ • The intercept is found by cⱼ₀ = −(1/2)CⱼMⱼ, where Cⱼ is the row vector of coefficients; a 1 × q row vector times a q × 1 column vector of means yields a scalar (a single value)
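
Putting the last two slides together, the classification functions might be coded as follows (a sketch under the same assumed names as above):

    W_inv = np.linalg.inv(S_w)                 # inverse within-groups covariance

    coefs, intercepts = {}, {}
    for g in np.unique(groups):
        m_j = X[groups == g].mean(axis=0)      # vector of predictor means, M_j
        coefs[g] = W_inv @ m_j                 # C_j = W^-1 M_j
        intercepts[g] = -0.5 * coefs[g] @ m_j  # c_j0 = -(1/2) C_j M_j (a scalar)

    def classify(x):
        """Assign a case to the group with the largest classification score."""
        scores = {g: intercepts[g] + coefs[g] @ x for g in coefs}
        return max(scores, key=scores.get)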

  9. Prior probability • The adjustment is made to the classification function by adding the natural logarithm of the prior probability for that group to the constant term, or, equivalently, by subtracting 2 × this value from the Mahalanobis distance • Doing so makes little difference when the groups are very distinct, but it can matter where there is more overlap • Note that this adjustment should only be made for theoretical reasons; if a strong one cannot be found, one is better off not messing with it
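
In code, the adjustment is one extra term per group. The priors below are equal-probability placeholders; they would be replaced by theoretically justified values:

    # Placeholder priors (equal across groups); substitute theory-driven values.
    priors = {g: 1.0 / len(coefs) for g in coefs}

    # Add ln(prior_j) to each group's constant, as described above.
    adj_intercepts = {g: c0 + np.log(priors[g]) for g, c0 in intercepts.items()}

    def classify_with_priors(x):
        scores = {g: adj_intercepts[g] + coefs[g] @ x for g in coefs}
        return max(scores, key=scores.get)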
