
Principal component analysis


Presentation Transcript


  1. Principal component analysis

  2. Hypothesis: Hebbian synaptic plasticity enables perceptrons to perform principal component analysis

  3. Outline • Variance and covariance • Principal components • Maximizing parallel variance • Minimizing perpendicular variance • Neural implementation • covariance rule

  4. Principal component • direction of maximum variance in the input space • principal eigenvector of the covariance matrix • goal: relate these two definitions

  5. Variance • A random variable fluctuating about its mean value. • Average of the square of the fluctuations.

  6. Covariance • Pair of random variables, each fluctuating about its mean value. • Average of product of fluctuations.
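The formulas on these two slides did not survive transcription; the standard definitions the bullets describe are:

```latex
% variance of a random variable x with mean \bar{x}
\sigma_x^2 = \left\langle (x - \bar{x})^2 \right\rangle
% covariance of a pair of random variables x and y
C_{xy} = \left\langle (x - \bar{x})\,(y - \bar{y}) \right\rangle
```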

  7. Covariance examples

  8. Covariance matrix • N random variables • N×N symmetric matrix • Diagonal elements are variances
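A minimal NumPy sketch of this definition; the variable names and synthetic data are illustrative, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2, 500))           # N=2 variables, 500 samples (columns)
X[1] += 0.8 * X[0]                      # introduce some correlation

Xc = X - X.mean(axis=1, keepdims=True)  # fluctuations about each mean
C = Xc @ Xc.T / X.shape[1]              # N x N covariance matrix
assert np.allclose(C, C.T)              # symmetric
print(np.diag(C))                       # diagonal elements are the variances
```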

  9. Principal components • Eigenvectors with the k largest eigenvalues • Now you can calculate them, but what do they mean?
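A sketch of the calculation the slide refers to, assuming a covariance matrix C like the one computed above; `np.linalg.eigh` is used because C is symmetric:

```python
import numpy as np

def principal_components(C, k):
    """Eigenvectors of the covariance matrix C with the k largest
    eigenvalues, returned one per column, largest first."""
    evals, evecs = np.linalg.eigh(C)     # eigh: eigenvalues in ascending order
    order = np.argsort(evals)[::-1][:k]  # indices of the k largest
    return evecs[:, order], evals[order]
```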

  10. Handwritten digits

  11. Principal component in 2d

  12. One-dimensional projection

  13. Covariance to variance • From the covariance matrix, the variance of any projection can be calculated. • Let w be a unit vector along the projection direction.
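The slide's equation was lost in transcription; reconstructed from the definitions above, with w a unit vector and C the covariance matrix:

```latex
\operatorname{var}(w \cdot x)
  = \left\langle \bigl( w^{\top} (x - \bar{x}) \bigr)^{2} \right\rangle
  = w^{\top} \left\langle (x - \bar{x})(x - \bar{x})^{\top} \right\rangle w
  = w^{\top} C\, w
```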

  14. Maximizing parallel variance • Maximized by the principal eigenvector of C • i.e., the eigenvector with the largest eigenvalue.
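A reconstruction of the argument, using a Lagrange multiplier for the unit-norm constraint:

```latex
\max_{\|w\| = 1} \; w^{\top} C\, w
\;\Rightarrow\;
\nabla_w \left( w^{\top} C\, w - \lambda\, w^{\top} w \right) = 0
\;\Rightarrow\;
C w = \lambda w,
\qquad
w^{\top} C\, w = \lambda
```

So the stationary points are eigenvectors of C, the objective at each equals its eigenvalue, and the maximum is attained by the eigenvector with the largest eigenvalue.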

  15. Orthogonal decomposition
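The decomposition the slide shows, for a unit vector w:

```latex
x = \underbrace{(w \cdot x)\, w}_{\text{parallel}}
  + \underbrace{x_{\perp}}_{\text{perpendicular}},
\qquad
w \cdot x_{\perp} = 0
```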

  16. Total variance is conserved • Maximizing parallel variance = Minimizing perpendicular variance
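Why the two problems are equivalent, assuming zero-mean data (as slide 19 does): by the Pythagorean theorem,

```latex
\left\langle \|x\|^{2} \right\rangle
  = \left\langle (w \cdot x)^{2} \right\rangle
  + \left\langle \|x_{\perp}\|^{2} \right\rangle
  = \operatorname{tr} C
```

The total on the right does not depend on w, so maximizing the parallel variance is the same as minimizing the perpendicular variance.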

  17. Rubber band computer

  18. Correlation/covariance rule • presynaptic activity x • postsynaptic activity y • Hebbian: weight change proportional to the product of pre- and postsynaptic activity
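The update rules the bullets name, in their standard form (η is a learning rate):

```latex
% correlation (plain Hebbian) rule
\Delta w_i = \eta\, x_i\, y
% covariance rule: correlate the fluctuations instead
\Delta w_i = \eta\, \left( x_i - \langle x_i \rangle \right) \left( y - \langle y \rangle \right)
```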

  19. Stochastic gradient ascent • Assume data has zero mean • Linear perceptron
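Reconstructing the step the slide compresses: for a linear perceptron and zero-mean data, the Hebbian update is stochastic gradient ascent on the output variance, which by slide 13 equals the parallel variance:

```latex
y = w^{\top} x,
\qquad
E = \tfrac{1}{2} \left\langle y^{2} \right\rangle = \tfrac{1}{2}\, w^{\top} C\, w,
\qquad
\Delta w = \eta\, \frac{\partial}{\partial w} \tfrac{1}{2} y^{2} = \eta\, y\, x
```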

  20. Preventing divergence • Bound constraints
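One way to see why a constraint is needed (a reconstruction, not from the slides): averaging the Hebbian update over zero-mean data gives

```latex
\left\langle \Delta w \right\rangle
  = \eta\, \left\langle y\, x \right\rangle
  = \eta\, C\, w
```

and since C is positive semidefinite, the weight vector grows without bound along the principal eigenvector. Bounding each weight to a fixed interval is one remedy; Oja's rule on the next slide is another.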

  21. Oja’s rule • Converges to the principal component • Weight vector normalized to a unit vector
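A runnable sketch of Oja's rule, Δw = η y (x − y w), on synthetic zero-mean data; all names and constants here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
X = rng.normal(size=(5000, 2)) @ A     # zero-mean data, one dominant direction

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x                          # linear perceptron output
    w += eta * y * (x - y * w)         # Oja's rule: Hebbian term with decay

C = X.T @ X / len(X)                   # sample covariance (zero-mean data)
v = np.linalg.eigh(C)[1][:, -1]        # principal eigenvector (eigh: ascending)
print(np.linalg.norm(w))               # ~1: w converges to a unit vector
print(abs(w @ v))                      # ~1: aligned with principal component
```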

  22. Multiple principal components • Project out principal component • Find principal component of remaining variance
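A sketch of the deflation step described here, continuing the previous snippet's names (X is the data matrix, w the learned unit vector):

```python
import numpy as np

def deflate(X, w):
    """Project out the direction w (assumed unit norm) from every row of X."""
    return X - np.outer(X @ w, w)

# X2 = deflate(X, w) -- running Oja's rule on X2 then converges to the
# principal component of the remaining variance, i.e. the second component
```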

  23. Clustering vs. PCA • Both Hebbian: weight update ∝ output × input • Clustering: binary output, sensitive to first-order statistics • PCA: linear output, sensitive to second-order statistics

  24. Data vectors • x^a means the a-th data vector • the a-th column of the matrix X • X_{ia} means the matrix element • the i-th component of x^a • x is a generic data vector • x_i means the i-th component of x

  25. Correlation matrix • Generalization of the second moment
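In the notation of slide 24, with m data vectors, the correlation matrix generalizes the scalar second moment ⟨x²⟩ to N variables:

```latex
R_{ij} = \left\langle x_i\, x_j \right\rangle
       = \frac{1}{m} \sum_{a=1}^{m} X_{ia}\, X_{ja},
\qquad
R = \frac{1}{m}\, X X^{\top}
```

For zero-mean data it coincides with the covariance matrix C.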
