
Lecture 13-14 Face Recognition – Subspace/Manifold Learning


Presentation Transcript


  1. EE4-62 MLCV Lecture 13-14: Face Recognition – Subspace/Manifold Learning. Tae-Kyun Kim

  2. EE4-62 MLCV Face Image Tagging and Retrieval • Face tagging at commercial weblogs • Key issues • User interaction for face tags • Representation of long-term accumulated data • Online and efficient learning • An active research area in the Face Recognition Test and in MPEG-7, for face image retrieval and automatic passport control • Our proposal was promoted to the MPEG-7 ISO/IEC standard

  3. Principal Component Analysis (PCA) • Maximum variance formulation of PCA • Minimum-error formulation of PCA • Probabilistic PCA

  4. Maximum Variance Formulation of PCA
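The equations for this slide are not in the transcript; a standard sketch of the maximum-variance derivation (the notation u_1 and S is assumed to match the recap slide later in the deck):

Given data x_n, n = 1, \dots, N, with mean \bar{x} and covariance S = \frac{1}{N}\sum_n (x_n - \bar{x})(x_n - \bar{x})^T, choose the first direction to maximise the projected variance,

\max_{u_1} \; u_1^T S u_1 \quad \text{s.t.} \quad u_1^T u_1 = 1 .

Introducing a Lagrange multiplier \lambda_1 for the constraint gives the stationarity condition

S u_1 = \lambda_1 u_1 ,

so u_1 is an eigenvector of S, and since the projected variance equals u_1^T S u_1 = \lambda_1, the maximiser is the eigenvector with the largest eigenvalue.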

  5. Minimum-error formulation of PCA
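These equations are likewise missing from the transcript; the standard minimum-error view, which arrives at the same basis:

Approximate each point in an M-dimensional subspace with orthonormal basis u_1, \dots, u_M,

\tilde{x}_n = \bar{x} + \sum_{i=1}^{M} z_{ni} u_i ,

and minimise the mean squared reconstruction error

J = \frac{1}{N} \sum_{n=1}^{N} \lVert x_n - \tilde{x}_n \rVert^2 .

The optimum again keeps the top-M eigenvectors of S, and the minimum error is the sum of the discarded eigenvalues, J = \sum_{i=M+1}^{D} \lambda_i.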

  6. Applications of PCA to Face Recognition

  7. EE4-62 MLCV (Recap) Geometrical interpretation of PCA • Principal components are the directions along which the projected samples have maximum variance. • For the given 2D data points, u1 and u2 are found as the PCs. • Each two-dimensional data point is transformed to a single variable z1, the projection of the data point onto the eigenvector u1. • The data points projected onto u1 have the maximum variance. • PCA infers the inherent structure of high-dimensional data. • The intrinsic dimensionality of the data is often much smaller than the ambient dimension.
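A minimal NumPy sketch of this recap (synthetic data; all names here are my own, not from the lecture code):

import numpy as np

# Synthetic, correlated 2D points (one point per row).
rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[3.0, 1.5], [1.5, 1.0]], size=200)

# Eigen-decomposition of the covariance of the centred data.
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / len(Xc)
eigvals, eigvecs = np.linalg.eigh(S)   # eigenvalues in ascending order
u1 = eigvecs[:, -1]                    # direction of maximum variance
u2 = eigvecs[:, -2]                    # orthogonal direction

# Each 2D point collapses to a single variable z1: its projection onto u1.
z1 = Xc @ u1
print(z1.var(), eigvals[-1])           # projected variance equals the top eigenvalue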

  8. Eigenfaces • Collect a set of face images • Normalise for scale and orientation (using eye locations) • Rasterise each w×h image into a vector of dimension D = wh • Construct the covariance matrix and obtain its eigenvectors
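A sketch of this construction, assuming the N training faces are already rasterised as the rows of an (N, D) array; the eigen-decomposition of the N×N Gram matrix is the usual "snapshot" trick for avoiding the huge D×D covariance (function and variable names are mine):

import numpy as np

def eigenfaces(faces, M):
    """faces: (N, D) array, one rasterised w*h face per row; returns the mean face and top-M basis."""
    mean = faces.mean(axis=0)
    A = faces - mean                        # centred data, shape (N, D)
    gram = A @ A.T / len(A)                 # N x N instead of D x D
    vals, vecs = np.linalg.eigh(gram)       # ascending eigenvalues
    vecs = vecs[:, ::-1][:, :M]             # keep the top-M eigenvectors
    U = A.T @ vecs                          # lift back to D dimensions
    U /= np.linalg.norm(U, axis=0)          # orthonormal columns: the eigenfaces
    return mean, U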

  9. EE4-62 MLCV Eigenfaces • Project data onto the subspace: z = U^T (x − x̄) • Reconstruction is obtained as x̃ = x̄ + U z • Use the distance to the subspace, ||x − x̃||, for face recognition
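Continuing the sketch above (same assumed mean and U):

import numpy as np

def project(x, mean, U):
    return U.T @ (x - mean)                 # subspace coefficients z

def reconstruct(z, mean, U):
    return mean + U @ z                     # reconstruction from the coefficients

def subspace_distance(x, mean, U):
    # Distance from x to the face subspace; small values indicate face-like images.
    return np.linalg.norm(x - reconstruct(project(x, mean, U), mean, U))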

  10. Matlab Demos – Face Recognition by PCA

  11. Face Images • Eigenvector and eigenvalue plots • Face image reconstruction • Projection coefficients (visualisation of high-dimensional data) • Face recognition

  12. EE4-62 MLCV Probabilistic PCA • A subspace is spanned by the orthonormal basis (the eigenvectors computed from the covariance matrix) • Each observation can be interpreted with a generative model • Estimate (approximately) the probability of generating each observation from a Gaussian distribution • PCA: uniform prior on the subspace; PPCA: Gaussian prior
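The generative model itself is not transcribed; the standard PPCA model (Tipping and Bishop) that this slide refers to: a latent variable z \sim \mathcal{N}(0, I_M) generates an observation through

x = W z + \mu + \epsilon , \qquad \epsilon \sim \mathcal{N}(0, \sigma^2 I_D) ,

so the marginal over observations is the Gaussian

p(x) = \mathcal{N}(x \mid \mu, \; W W^T + \sigma^2 I_D) .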

  13. Continuous Latent Variables

  14. EE4-62 MLCV Probabilistic PCA

  15. Maximum likelihood PCA
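The transcript omits this slide's content; the standard maximum-likelihood solution of the PPCA parameters is

W_{ML} = U_M (L_M - \sigma^2 I)^{1/2} R , \qquad \sigma^2_{ML} = \frac{1}{D-M} \sum_{i=M+1}^{D} \lambda_i ,

where U_M holds the top-M eigenvectors of the sample covariance S, L_M = \mathrm{diag}(\lambda_1, \dots, \lambda_M), and R is an arbitrary M \times M orthogonal matrix; maximum-likelihood PPCA therefore recovers the ordinary PCA subspace, with \sigma^2_{ML} the average discarded variance.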

  16. Limitations of PCA (motivating the comparisons on the following slides: linear only, unsupervised, Gaussian assumption)

  17. PCA vs LDA (Linear Discriminant Analysis) • PCA is unsupervised learning; LDA additionally uses class labels (supervised)
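The slide's equations are not transcribed; the comparison presumably rests on the standard Fisher criterion, in which LDA picks the projection w maximising between-class over within-class scatter,

J(w) = \frac{w^T S_B w}{w^T S_W w} ,

where S_B and S_W are the between-class and within-class scatter matrices; the maximiser solves the generalised eigenproblem S_B w = \lambda S_W w. PCA, by contrast, ignores class labels entirely, so its top directions need not separate the classes.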

  18. EE4-62 MLCV PCA vs Kernel PCA • A linear model spans a linear manifold, i.e. a subspace • Kernel PCA captures a nonlinear manifold
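A brief sketch of how kernel PCA reaches a nonlinear manifold (standard formulation, not from the slide): map the data through a feature map \phi and perform PCA in that space implicitly via a kernel k(x, x') = \phi(x)^T \phi(x'). With a centred N×N kernel matrix K, K_{nm} = k(x_n, x_m), the principal components follow from the eigenproblem

K a_i = \lambda_i N a_i ,

so only kernel evaluations are ever needed, never \phi itself; projections of a new point are likewise sums of kernel values.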

  19. PCA vs ICA (Independent Component Analysis) • PCA rests on a Gaussian distribution assumption [slide figure: principal axes PC1, PC2 vs independent components IC1, IC2 on the same data]
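For contrast, the standard ICA model (not in the transcript): observations are linear mixtures of statistically independent, non-Gaussian sources,

x = A s ,

and ICA estimates an unmixing matrix W with s \approx W x by maximising the independence (non-Gaussianity) of the recovered components. For Gaussian data the model is unidentifiable, which is why the figure's IC axes differ from the PC axes only when the data are non-Gaussian.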

  20. EE4-62 MLCV (face basis images can also be obtained by ICA)
