
13. Component Analysis (also Chap 3)


Presentation Transcript


  1. Pattern Classification
  All materials in these slides were taken from Pattern Classification (2nd ed.) by R. O. Duda, P. E. Hart and D. G. Stork, John Wiley & Sons, 2000, with the permission of the authors and the publisher.

  2. 13. Component Analysis (also Chap 3)
  • Combine features to reduce the dimension of the feature space
  • Linear combinations are simple to compute and tractable
  • Project high-dimensional data onto a lower-dimensional space
  • Two classical approaches for finding an "optimal" linear transformation:
    • PCA (Principal Component Analysis): the projection that best represents the data in a least-squares sense
    • MDA (Multiple Discriminant Analysis): the projection that best separates the data in a least-squares sense (a generalization of Fisher's Linear Discriminant for two classes)
  Pattern Classification, Chapter 10
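As a concrete illustration of the idea on this slide, here is a minimal NumPy sketch of reducing dimension with a linear map y = Wᵀx. The data and the projection matrix W are made-up placeholders; in practice the columns of W would be chosen by PCA or MDA as described on the following slides.

```python
import numpy as np

# Hypothetical data: 100 samples with d = 10 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))

# Placeholder projection matrix W (d x d'); PCA or MDA would supply
# columns that are "optimal" for representation or separation.
W = rng.normal(size=(10, 2))

Y = X @ W          # projected samples, now in a d' = 2 dimensional space
print(Y.shape)     # (100, 2)
```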

  3. PCA (Principal Component Analysis)
  The projection that best represents the data in a least-squares sense.
  • The scatter matrix of the cloud of samples is the same as the maximum-likelihood estimate of the covariance matrix
  • Unlike a covariance matrix, however, the scatter matrix includes samples from all classes!
  • The least-squares projection solution (maximum scatter) is simply the subspace defined by the d' < d eigenvectors of the covariance matrix that correspond to the largest d' eigenvalues of the matrix
  • Because the scatter matrix is real and symmetric, the eigenvectors are orthogonal
  Pattern Classification, Chapter 10
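A minimal NumPy sketch of the eigenvector construction described above, assuming samples are stored one per row; the function name pca_projection and the argument d_prime are illustrative, not from the book.

```python
import numpy as np

def pca_projection(X, d_prime):
    """Project the samples onto the d' < d scatter-matrix eigenvectors
    with the largest eigenvalues (the least-squares-optimal subspace)."""
    mean = X.mean(axis=0)
    Xc = X - mean                               # center the cloud of samples
    scatter = Xc.T @ Xc                         # scatter matrix (all classes pooled)
    eigvals, eigvecs = np.linalg.eigh(scatter)  # real symmetric -> orthogonal eigenvectors
    order = np.argsort(eigvals)[::-1]           # largest eigenvalues first
    W = eigvecs[:, order[:d_prime]]             # d x d' projection matrix
    return Xc @ W, W
```

np.linalg.eigh is used because the scatter matrix is real and symmetric, which is exactly why the slide can claim the eigenvectors are orthogonal.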

  4. Fisher Linear Discriminant
  • While PCA seeks directions that are efficient for representation, discriminant analysis seeks directions that are efficient for discrimination
  Pattern Classification, Chapter 10
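The slide only states the goal; as a reminder of the two-class solution from Chapter 3, here is a hedged NumPy sketch computing Fisher's direction w ∝ S_W⁻¹(m₁ − m₂). The function name fisher_direction is chosen for illustration.

```python
import numpy as np

def fisher_direction(X1, X2):
    """Fisher's linear discriminant for two classes:
    w proportional to S_W^{-1} (m1 - m2), the direction that maximizes
    between-class separation relative to within-class scatter."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = (X1 - m1).T @ (X1 - m1)          # scatter of class 1
    S2 = (X2 - m2).T @ (X2 - m2)          # scatter of class 2
    Sw = S1 + S2                          # within-class scatter matrix
    w = np.linalg.solve(Sw, m1 - m2)      # solve S_W w = m1 - m2
    return w / np.linalg.norm(w)
```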

  5. Multiple Discriminant Analysis
  • Generalization of Fisher's Linear Discriminant for more than two classes
  Pattern Classification, Chapter 10
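A minimal sketch of the multi-class generalization, assuming the standard construction from Chapter 3 (leading eigenvectors of S_W⁻¹S_B); names such as mda_projection and d_prime are illustrative. For c classes, at most c − 1 directions carry discriminative information.

```python
import numpy as np

def mda_projection(X, y, d_prime):
    """Multiple Discriminant Analysis: project onto the leading
    eigenvectors of S_W^{-1} S_B (at most c - 1 useful directions
    for c classes)."""
    d = X.shape[1]
    overall_mean = X.mean(axis=0)
    Sw = np.zeros((d, d))                     # within-class scatter
    Sb = np.zeros((d, d))                     # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # S_W^{-1} S_B is not symmetric in general, so use the general eigensolver
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(eigvals.real)[::-1]    # largest eigenvalues first
    W = eigvecs[:, order[:d_prime]].real      # d x d' projection matrix
    return X @ W, W
```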
