
3.8 Component Analysis and Discriminants



  1. Pattern Classification
  All materials in these slides were taken from Pattern Classification (2nd ed) by R. O. Duda, P. E. Hart and D. G. Stork, John Wiley & Sons, 2000, with the permission of the authors and the publisher.

  2. 3.8 Component Analysis and Discriminants
  • Combine features to reduce the dimensionality of the feature space
  • Linear combinations are simple to compute
  • Project high-dimensional data onto a lower-dimensional space
  • Two classical approaches for finding “optimal” linear transformations:
    • PCA (Principal Component Analysis): the projection that best represents the data in a least-squares sense
    • MDA (Multiple Discriminant Analysis): the projection that best separates the data in a least-squares sense
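The projection described here is purely linear: each d-dimensional sample x is mapped to y = Wᵀx for some d × k matrix W. The short Python sketch below (toy data and a random W, not from the book) only illustrates that mechanic; PCA and MDA, covered next, are the two classical ways of actually choosing W.

```python
# A minimal sketch of linear dimensionality reduction (toy data, arbitrary W):
# each d-dimensional sample x is mapped to y = W^T x in a k-dimensional space.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # 100 samples, d = 5 features
W = rng.normal(size=(5, 2))     # some d x k projection matrix, k = 2

Y = X @ W                       # projected samples, shape (100, 2)
print(Y.shape)
```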

  3. 3.8.1 Principal Component Analysis (PCA)
  • Finds the direction that best represents the data in the least-squares sense
  • Solving the least-squares optimization problem leads to the so-called scatter matrix, which is a constant times the covariance matrix
  • So the best directions are simply the eigenvectors corresponding to the largest eigenvalues of the covariance matrix
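A minimal PCA sketch of the recipe on this slide, assuming toy data and an arbitrary choice of k = 2 components: center the data, form the scatter matrix, and keep the eigenvectors with the largest eigenvalues as the projection directions.

```python
# PCA sketch: project onto the top-k eigenvectors of the scatter matrix
# (equivalently, of the covariance matrix, which differs by a constant factor).
import numpy as np

def pca(X, k):
    Xc = X - X.mean(axis=0)                # center the data
    scatter = Xc.T @ Xc                    # scatter matrix
    vals, vecs = np.linalg.eigh(scatter)   # eigh: scatter matrix is symmetric
    order = np.argsort(vals)[::-1]         # sort eigenvalues in descending order
    W = vecs[:, order[:k]]                 # top-k eigenvectors as columns
    return Xc @ W, W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))              # toy data: 200 samples, 5 features
Y, W = pca(X, k=2)
print(Y.shape, W.shape)                    # (200, 2) (5, 2)
```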

  4. 3.8.2 Fisher Linear Discriminant
  • Whereas PCA seeks directions that are efficient for representation, discriminant analysis seeks directions that are efficient for discrimination
  • This is the classical discriminant analysis
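A minimal two-class sketch of the Fisher direction, using the standard closed form w = S_W⁻¹(m₁ − m₂), where S_W is the within-class scatter matrix; the Gaussian data below is made up for illustration and is not from the book.

```python
# Fisher linear discriminant (two classes): the direction that separates the
# projected class means well relative to the within-class scatter.
import numpy as np

def fisher_direction(X1, X2):
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = (X1 - m1).T @ (X1 - m1)            # class-1 scatter
    S2 = (X2 - m2).T @ (X2 - m2)            # class-2 scatter
    Sw = S1 + S2                            # within-class scatter matrix
    w = np.linalg.solve(Sw, m1 - m2)        # w = Sw^{-1} (m1 - m2)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
X1 = rng.normal(loc=[0.0, 0.0], size=(50, 2))   # toy class 1
X2 = rng.normal(loc=[3.0, 1.0], size=(50, 2))   # toy class 2
w = fisher_direction(X1, X2)
print((X1 @ w).mean(), (X2 @ w).mean())         # well-separated 1-D projections
```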


  6. 3.8.3 Multiple Discriminant Analysis
  • Generalization of Fisher’s linear discriminant
  • Seeks the optimum subspace with the greatest separation of the projected class distributions
  • Defines within-class and between-class scatter matrices
  • Because we want small within-class scatter and large between-class scatter, the transformation maximizes the ratio of the between-class scatter to the within-class scatter
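A minimal MDA sketch under those definitions: build the within-class scatter S_W and between-class scatter S_B, then take the leading eigenvectors of S_W⁻¹S_B as the projection directions (for c classes there are at most c − 1 useful directions). The three-class toy data is illustrative only, not the book's example.

```python
# Multiple discriminant analysis: maximize between-class scatter relative to
# within-class scatter by taking the top eigenvectors of S_W^{-1} S_B.
import numpy as np

def mda(X, y, k):
    d = X.shape[1]
    m = X.mean(axis=0)                      # overall mean
    Sw = np.zeros((d, d))                   # within-class scatter
    Sb = np.zeros((d, d))                   # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - m).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))   # S_W^{-1} S_B
    order = np.argsort(vals.real)[::-1]
    return vecs.real[:, order[:k]]          # top-k discriminant directions

rng = np.random.default_rng(0)
means = ([0, 0, 0], [2, 0, 1], [0, 3, 1])   # three toy class means in 3-D
X = np.vstack([rng.normal(loc=mu, size=(30, 3)) for mu in means])
y = np.repeat([0, 1, 2], 30)
W = mda(X, y, k=2)                          # at most c - 1 = 2 directions
print((X @ W).shape)                        # (90, 2)
```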

