
Lecture 13-14 Face Recognition – Subspace/Manifold Learning






Presentation Transcript


  1. EE4-62 MLCV Lecture 13-14 Face Recognition – Subspace/Manifold Learning. Tae-Kyun Kim

  2. EE4-62 MLCV Face Recognition Applications • Applications include • Automatic face tagging on commercial weblogs • Face image retrieval in MPEG-7 (our solution is part of the MPEG-7 standard) • Automatic passport control • Feature-length film character summarisation • A key issue is the efficient representation of face images.

  3. Face Recognition vs Object Categorisation [Figure: a face image data set and an object categorisation data set, each showing intra-class variation within Class 1 and Class 2 and inter-class variation between the classes]

  4. Both problems are hard, because we need to minimise intra-class variations while maximising inter-class variations. Face image variations are subtle compared with those of generic object categories. Subspace/manifold techniques, rather than Bag of Words, are the primary tools for face analysis.

  5. Principal Component Analysis (PCA) • Maximum Variance Formulation of PCA • Minimum-error Formulation of PCA • Probabilistic PCA

  6. Maximum Variance Formulation of PCA
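The equations on this slide are not captured in the transcript. As a sketch of the standard maximum-variance formulation (following Bishop's treatment; the notation below is assumed rather than taken from the slide), the first principal component u1 is the unit vector that maximises the variance of the projected data:

```latex
\[
\max_{\mathbf{u}_1}\; \mathbf{u}_1^\top \mathbf{S}\,\mathbf{u}_1
\quad \text{s.t.}\quad \mathbf{u}_1^\top\mathbf{u}_1 = 1,
\qquad
\mathbf{S} = \frac{1}{N}\sum_{n=1}^{N} (\mathbf{x}_n - \bar{\mathbf{x}})(\mathbf{x}_n - \bar{\mathbf{x}})^\top .
\]
```

Introducing a Lagrange multiplier gives S u1 = λ1 u1, so u1 is the eigenvector of the covariance matrix S with the largest eigenvalue, and the projected variance equals λ1.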


  8. Minimum-error Formulation of PCA
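The equations for this formulation are not captured in the transcript. As a sketch of the standard minimum-error derivation (following Bishop; notation assumed), the data are expressed in a complete orthonormal basis and approximated with the first M directions:

```latex
\[
\mathbf{u}_i^\top \mathbf{u}_j = \delta_{ij} =
\begin{cases} 1, & i = j \\ 0, & \text{otherwise,} \end{cases}
\qquad
J = \frac{1}{N}\sum_{n=1}^{N}\lVert \mathbf{x}_n - \tilde{\mathbf{x}}_n \rVert^2
  = \sum_{i=M+1}^{D} \lambda_i .
\]
```

The error J is minimised by discarding the directions with the smallest eigenvalues, which recovers the same principal subspace as the maximum-variance formulation.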

  9. Applications of PCA to Face Recognition

  10. EE4-62 MLCV (Recap) Geometrical interpretation of PCA • Principal components are the vectors in the directions of maximum variance of the projected samples. • For the given 2D data points, u1 and u2 are found as the principal components. • Each two-dimensional data point is transformed into a single variable z1, the projection of the data point onto the eigenvector u1. • The data points projected onto u1 have the maximum variance. • PCA infers the inherent structure of high-dimensional data. • The intrinsic dimensionality of the data is often much smaller than the original dimension.
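As a minimal illustration of this projection, a NumPy sketch on made-up toy 2D data (the variable names are illustrative, not from the slides):

```python
import numpy as np

# Toy 2D data: 200 correlated points
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.5, 0.5]])

# Centre the data and form the covariance matrix S
X_c = X - X.mean(axis=0)
S = X_c.T @ X_c / len(X_c)

# Eigenvectors of S, sorted by decreasing eigenvalue: u1 is the first PC
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]
u1 = eigvecs[:, order[0]]

# Each 2D point becomes a single coordinate z1 = u1^T x
z1 = X_c @ u1
print(z1.var(), eigvals[order[0]])  # projected variance equals the top eigenvalue
```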

  11. Eigenfaces (how to train) • Collect a set of face images • Normalise for scale and orientation (using eye locations) • Construct the covariance matrix and obtain its eigenvectors (D = w × h is the dimension of a vectorised w-by-h face image; M is the number of eigenvectors kept)
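A minimal training sketch, assuming aligned grayscale face images of size h × w; the function and variable names are illustrative rather than taken from the course code:

```python
import numpy as np

def train_eigenfaces(images, M):
    """Eigenfaces training sketch.

    images: array of shape (N, h, w) of aligned grayscale faces
    M: number of eigenvectors (eigenfaces) to keep
    """
    N, h, w = images.shape
    X = images.reshape(N, h * w).astype(np.float64)   # each face as a D = w*h vector
    mean_face = X.mean(axis=0)
    X_c = X - mean_face                                # centre the data

    # The SVD of the centred data gives the eigenvectors of the covariance
    # matrix (1/N) X_c^T X_c without forming the D x D matrix explicitly.
    U, s, Vt = np.linalg.svd(X_c, full_matrices=False)
    eigenfaces = Vt[:M]                                # top-M eigenvectors, shape (M, D)
    eigenvalues = (s ** 2) / N

    return mean_face, eigenfaces, eigenvalues[:M]
```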

  12. EE4-62 MLCV Eigenfaces (how to use) • Project data onto the subspace to obtain the projection coefficients • The reconstruction is obtained from the mean face and the projection coefficients • Use the distance to the subspace (the reconstruction error) for face recognition
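A sketch of the projection, reconstruction, and distance-to-subspace steps, assuming the mean_face and eigenfaces produced by the training sketch above (the helper names are assumptions):

```python
import numpy as np

def project(x, mean_face, eigenfaces):
    """Projection coefficients of a vectorised face x onto the eigenface subspace."""
    return eigenfaces @ (x - mean_face)

def reconstruct(a, mean_face, eigenfaces):
    """Reconstruction of x from its projection coefficients a."""
    return mean_face + eigenfaces.T @ a

def distance_to_subspace(x, mean_face, eigenfaces):
    """Reconstruction error ||x - x_hat||, used as the recognition score."""
    a = project(x, mean_face, eigenfaces)
    return np.linalg.norm(x - reconstruct(a, mean_face, eigenfaces))
```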

  13. Eigenfaces (how to use) • Method 1: reconstruct the query image x with the subspace of each class c (c1, c2, c3, ...) and assign x to the class giving the smallest reconstruction error. • Method 2: project x onto the common subspace and assign it to the class whose mean projection is closest.
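A sketch of the two classification rules, reusing the project and distance_to_subspace helpers from the previous sketch; the per-class model dictionaries are assumptions for illustration:

```python
import numpy as np

def classify_method1(x, class_models):
    """Method 1: assign x to the class whose own subspace reconstructs it best.

    class_models: dict mapping class label -> (mean_face, eigenfaces) trained per class
    """
    errors = {c: distance_to_subspace(x, m, U) for c, (m, U) in class_models.items()}
    return min(errors, key=errors.get)

def classify_method2(x, mean_face, eigenfaces, class_mean_coeffs):
    """Method 2: project x onto the common subspace and assign the class
    whose mean projection coefficients are nearest.

    class_mean_coeffs: dict mapping class label -> mean coefficient vector of that class
    """
    a = project(x, mean_face, eigenfaces)
    dists = {c: np.linalg.norm(a - mu) for c, mu in class_mean_coeffs.items()}
    return min(dists, key=dists.get)
```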

  14. Matlab Demos – Face Recognition by PCA • Face images • Eigenvector and eigenvalue plots • Face image reconstruction • Projection coefficients (visualisation of high-dimensional data) • Face recognition

  15. EE4-62 MLCV Probabilistic PCA • A subspace is spanned by the orthonormal basis (the eigenvectors computed from the covariance matrix) • Each observation can be interpreted with a generative model • Estimate (approximately) the probability of generating each observation from a Gaussian distribution • PCA: uniform prior on the subspace; PPCA: Gaussian distribution
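The generative model on the slide is not captured in the transcript; as a sketch, the standard PPCA model (following Bishop, with assumed notation) is

```latex
\[
\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}), \qquad
\mathbf{x} = \mathbf{W}\mathbf{z} + \boldsymbol{\mu} + \boldsymbol{\epsilon}, \qquad
\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I}),
\]
\[
p(\mathbf{x}) = \mathcal{N}\!\left(\mathbf{x} \mid \boldsymbol{\mu},\; \mathbf{W}\mathbf{W}^\top + \sigma^2\mathbf{I}\right),
\]
```

so each observation is generated from a low-dimensional latent variable z plus isotropic Gaussian noise.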

  16. Continuous Latent Variables

  17. EE4-62 MLCV Probabilistic PCA

  18. Maximum likelihood PCA
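The derivation on this slide is not in the transcript; as a sketch of the standard maximum-likelihood solution (following Bishop, with U_M the top-M eigenvectors of the covariance matrix S, L_M the diagonal matrix of their eigenvalues, and R an arbitrary rotation; notation assumed):

```latex
\[
\mathbf{W}_{\mathrm{ML}} = \mathbf{U}_M\left(\mathbf{L}_M - \sigma^2\mathbf{I}\right)^{1/2}\mathbf{R},
\qquad
\sigma^2_{\mathrm{ML}} = \frac{1}{D-M}\sum_{i=M+1}^{D}\lambda_i .
\]
```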

  19. Limitations of PCA

  20. PCA vs LDA (limitation: unsupervised learning) PCA finds the direction of maximum variance of all the data, while LDA (Linear Discriminant Analysis) finds the direction that is optimal in terms of the inter-class/intra-class data variations. Refer to the textbook: C. M. Bishop, Pattern Recognition and Machine Learning, Springer.
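For contrast with the PCA variance criterion, the standard LDA objective (the Fisher criterion, following Bishop; the notation below is assumed) maximises the between-class scatter relative to the within-class scatter:

```latex
\[
J(\mathbf{w}) = \frac{\mathbf{w}^\top \mathbf{S}_B\, \mathbf{w}}{\mathbf{w}^\top \mathbf{S}_W\, \mathbf{w}},
\]
```

where S_B is the between-class scatter matrix and S_W the within-class scatter matrix; the optimal directions are the leading generalised eigenvectors of the pair (S_B, S_W).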

  21. EE4-62 MLCV PCA vs Kernel PCA (limitation: linear model) PCA is a linear projection method; it works well when the data are constrained to a hyperplane (a linear manifold, i.e. a subspace). When the data lie on a nonlinear manifold, PCA is extended to Kernel PCA by the kernel trick (Lectures 9-10). Refer to the textbook: C. M. Bishop, Pattern Recognition and Machine Learning, Springer.
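A minimal Kernel PCA sketch using the kernel trick; the RBF kernel choice, the gamma hyperparameter, and the function name are assumptions, not course code:

```python
import numpy as np

def kernel_pca(X, M, gamma=1.0):
    """Kernel PCA sketch with an RBF kernel.

    X: data matrix of shape (N, D)
    Returns the projections of the N samples onto the top-M nonlinear components.
    """
    N = X.shape[0]

    # RBF kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * sq_dists)

    # Centre the kernel matrix in feature space
    one_n = np.ones((N, N)) / N
    K_c = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # Eigendecomposition of the centred kernel matrix
    eigvals, eigvecs = np.linalg.eigh(K_c)
    order = np.argsort(eigvals)[::-1][:M]
    alphas = eigvecs[:, order]
    lambdas = eigvals[order]

    # Scale so the implicit feature-space eigenvectors have unit norm,
    # then project the training samples onto them.
    return K_c @ (alphas / np.sqrt(np.maximum(lambdas, 1e-12)))
```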

  22. PCA vs ICA (limitation: Gaussian assumption) PCA models the data with Gaussian (second-order) statistics, whereas ICA (Independent Component Analysis) captures higher-order statistics. [Figure: independent components IC1, IC2 versus principal components PC1, PC2 found on the same data] Refer to: A. Hyvarinen, J. Karhunen, E. Oja, Independent Component Analysis, John Wiley & Sons, Inc.

  23. EE4-62 MLCV PCA bases look holistic and less intuitive, whereas ICA or NMF (Non-negative Matrix Factorisation) yields bases that capture local facial components (similar parts-based bases are also obtained by ICA). Daniel D. Lee and H. Sebastian Seung (1999). "Learning the parts of objects by non-negative matrix factorization". Nature 401 (6755): 788–791.
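A sketch of the multiplicative update rules from the cited Lee & Seung (1999) paper; the function signature, random initialisation, and iteration count are assumptions for illustration:

```python
import numpy as np

def nmf(V, r, n_iters=200, eps=1e-9):
    """Non-negative Matrix Factorisation via multiplicative updates.

    V: non-negative data matrix of shape (D, N), e.g. vectorised face images as columns
    r: number of basis images (parts)
    Returns W (D x r basis images) and H (r x N encodings) such that V is approx. W @ H.
    """
    rng = np.random.default_rng(0)
    D, N = V.shape
    W = rng.random((D, r)) + eps
    H = rng.random((r, N)) + eps

    for _ in range(n_iters):
        # Multiplicative updates keep W and H non-negative throughout
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)

    return W, H
```

On face data, the columns of W tend to be localised parts (eyes, nose, mouth) rather than the holistic eigenface-like bases produced by PCA.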
