
Face Recognition From Video Part (II)

Presentation Transcript


  1. Face Recognition From Video Part (II) Advisor: Wei-Yang Lin Presenter: C.J. Yang & S.C. Liang

  2. Outline • Method (I): A Real-Time Face Recognition Approach from Video Sequence using Skin Color Model and Eigenface Method [1] • Method (II): An Automatic Face Detection and Recognition System for Video Streams [4] • Conclusion

  3. A Real-Time Face Recognition Approach from Video Sequence using Skin Color Model and Eigenface Method Islam, M.W.; Monwar, M.M.; Paul, P.P.; Rezaei, S.; IEEE Canadian Conference on Electrical and Computer Engineering, May 2006, pp. 2181-2185

  4. Introduction • Real time face recognition

  5. Method (I) • Pipeline: video sequences → real-time image acquisition (using the MATLAB Image Acquisition Toolbox 1.1) → face detection → face recognition → results

  6. Face Detection - Skin Color Model • Adaptable to people of different skin colors and to different lighting conditions • Skin colors of different people are very close, but they differ mainly in intensities

  7. Face Detection - Skin Color Model (cont.) • Figures: selected skin-color region; cluster in color space [2] • [2] R.S. Feris, T.E. de Campos, and R.M.C. Junior, "Detection and tracking of facial features in video sequences," Proceedings of the Mexican International Conference on Artificial Intelligence: Advances in Artificial Intelligence, pp. 127-135, 2000.

  8. Face Detection - Skin Color Model (cont.) • Chromatic colors (r, g) are defined by a normalization process: r = R/(R+G+B), g = G/(R+G+B) • The skin cluster in chromatic space is fitted with a Gaussian model N(m, C), where m = E{x} with x = (r, g)^T, and C = E{(x - m)(x - m)^T}

  9. Face Detection - Skin Color Model (cont.) • Obtain the likelihood of skin for any pixel of an image with the Gaussian fitted skin color model • Transform a color image into a grayscale image • Using threshold value to show skin regions
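A minimal sketch of the Gaussian skin-color step described on slides 8-9, assuming a set of hand-labelled skin pixels is available for fitting; the function names and the NumPy rendering are illustrative, not taken from the paper.

```python
import numpy as np

def chromatic(rgb):
    """Normalize RGB to chromatic (r, g) coordinates: r = R/(R+G+B), g = G/(R+G+B)."""
    rgb = rgb.astype(np.float64)
    s = rgb.sum(axis=-1, keepdims=True) + 1e-8        # avoid division by zero
    return np.concatenate([rgb[..., 0:1] / s, rgb[..., 1:2] / s], axis=-1)

def fit_gaussian(skin_pixels):
    """Fit N(m, C) to the chromatic coordinates of labelled skin pixels."""
    x = chromatic(skin_pixels)                        # shape (num_pixels, 2)
    m = x.mean(axis=0)                                # m = E{x}
    C = np.cov(x, rowvar=False)                       # C = E{(x - m)(x - m)^T}
    return m, C

def skin_likelihood(image, m, C):
    """Likelihood of skin for every pixel -> grayscale likelihood map."""
    x = chromatic(image).reshape(-1, 2) - m
    d2 = np.einsum('ni,ij,nj->n', x, np.linalg.inv(C), x)   # squared Mahalanobis distance
    return np.exp(-0.5 * d2).reshape(image.shape[:2])        # values in (0, 1]
```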

  10. Face Detection - Skin Region Segmentation • Segmentation and approximate face-location detection process • Figure: grayscale likelihood image, with skin-color ranges r = 0.41–0.50 and g = 0.21–0.30

  11. Face Detection - Skin Region Segmentation (cont.) Median filter

  12. Face Detection - Face Detection • Approximate face locations are detected using the typical height-to-width proportion of a face • Rough face locations are verified by an eye template-matching scheme
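The segmentation and verification steps of slides 10-12 could look roughly like the following; the binarization threshold, median-filter size, and height-to-width bounds are illustrative guesses rather than values from the paper, and the eye template-matching check is not shown.

```python
import numpy as np
from scipy.ndimage import median_filter, label, find_objects

def detect_face_regions(likelihood, thresh=0.5):
    """Threshold the likelihood map, clean it up, and keep face-shaped regions."""
    skin = likelihood > thresh                               # binary skin map
    skin = median_filter(skin.astype(np.uint8), size=5) > 0  # remove speckle noise
    labels, _ = label(skin)                                  # connected skin components
    boxes = []
    for sl in find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if w > 0 and 0.8 <= h / w <= 2.0:                    # rough face height-width proportion
            boxes.append(sl)                                 # candidate face bounding box
    return boxes
```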

  13. Face Recognition - Defining Eigenfaces • Main idea of the PCA method • Find the vectors that best account for the distribution of face images within the entire image space • These vectors are the eigenvectors of the covariance matrix of the original face images, and they look face-like (Eigenfaces) • The vectors define the subspace of face images, called the face space

  14. Face Recognition - Defining Eigenfaces

  15. Face Recognition - Defining Eigenfaces (cont.) • Calculate the Eigenfaces from the training set • Keep only the M Eigenfaces corresponding to the highest Eigenvalues; these M Eigenfaces define the face space • Calculate the corresponding location in the M-dimensional weight space for each known individual • Calculate a set of weights based on a new face image and the M Eigenfaces

  16. Face Recognition - Defining Eigenfaces (cont.) • Determine whether the image is a face • If it is a face, classify the weight pattern as either a known person or an unknown person [3] • [3] M. A. Turk and A. P. Pentland, "Face recognition using Eigenfaces," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586-591, June 1991.

  17. Face Recognition - Calculating Eigenfaces • Steps: • Obtain a set S = {Γ_1, ..., Γ_M} of M face images (N by N) • Obtain the mean image Ψ = (1/M) Σ Γ_i • Find the difference Φ_i = Γ_i - Ψ • Calculate the covariance matrix C = (1/M) Σ Φ_n Φ_n^T = A A^T, where A = [Φ_1 Φ_2 ... Φ_M]

  18. Face Recognition - Calculating Eigenfaces (cont.) • Finding the eigenvectors of C directly is a huge computational task (C is N² by N²) • Solution: find the eigenvectors v_i of the much smaller matrix A^T A first, then multiply by A to obtain the eigenvectors u_i = A v_i of C = A A^T; the eigenvalues of A^T A are also eigenvalues of C • The M Eigenvectors are sorted in order of descending Eigenvalues and chosen to represent the Eigenspace (see the sketch below)
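A short sketch of the A^T A trick from slides 17-18, assuming `faces` is an (M, N*N) array of vectorized training images; this is a generic NumPy rendering, not the authors' code.

```python
import numpy as np

def compute_eigenfaces(faces, k):
    """Return the mean face and the k leading eigenfaces of the training set."""
    mean = faces.mean(axis=0)                  # Psi: mean face
    A = (faces - mean).T                       # columns are Phi_i = Gamma_i - Psi
    # Eigenvectors of the small M x M matrix A^T A instead of the
    # huge (N*N) x (N*N) covariance C = A A^T.
    vals, V = np.linalg.eigh(A.T @ A)
    order = np.argsort(vals)[::-1][:k]         # sort by descending eigenvalue
    U = A @ V[:, order]                        # u_i = A v_i are eigenvectors of C
    U /= np.linalg.norm(U, axis=0)             # normalize each eigenface
    return mean, U                             # U has shape (N*N, k)
```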

  19. Face Recognition - Recognition Using Eigenfaces • Project each of the training images into the Eigenspace, giving a vector of weights that represents the contribution of each Eigenface • When a new face image is encountered, project it into the Eigenspace • Measure the Euclidean distance between its weight vector and those of the known faces • Acceptance or rejection is determined by applying a threshold
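The recognition step might then be sketched as follows, reusing `compute_eigenfaces` from the previous block; the distance threshold is an illustrative placeholder.

```python
import numpy as np

def project(face, mean, U):
    """Weight vector of a vectorized face image in the eigenspace."""
    return U.T @ (face - mean)

def recognize(probe, gallery_weights, labels, mean, U, threshold=1e4):
    """Return the label of the nearest known face, or 'unknown' beyond the threshold."""
    w = project(probe, mean, U)
    dists = np.linalg.norm(gallery_weights - w, axis=1)   # Euclidean distances
    best = int(np.argmin(dists))
    return labels[best] if dists[best] <= threshold else "unknown"
```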

  20. Method (I) - Result

  21. Method (I) - Conclusion • In this face recognition approach, • Skin color modeling approach is used for face detection • Eigenface algorithm is used for face recognition

  22. An Automatic Face Detection and Recognition System for Video Streams A. Pnevmatikakis and L. Polymenakos 2nd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms (MLMI), 2005 [4]

  23. Introduction • Authors present the AIT-FACER algorithm • The system is intended for meeting rooms • where background and illumination are fairly constant • As participants enter the meeting room, the system is expected to identify and recognize all of them in a natural and unobtrusive way • i.e., participants do not need to enter one-by-one and then pose still in front of a camera for the system to work

  24. AIT-FACER System • Four modules • Face Detector • Eye Locator • Frontal Face Verifier • Face Recognizer along with performance metrics • The goal of the first three modules • Detect possible face segments in video frames • Normalize them (in terms of shift, scale and rotation) • Assign to them a confidence level describing how frontal they are • Feed them to the face recognizer finally

  25. AIT-FACER System (cont.) • Pipeline: detect possible face segments → normalize face segments (to alleviate the effect of lighting variations and shadows) → decide if the face is frontal or not (to tell frontal faces and profile faces apart) • DFFS: Distance-From-Face-Space

  26. Foreground Estimation • Algorithm • Subtract the empty room image, which is utilized as the background • Sum the RGB channels and binarize the result • A median filtering operation on 8x8 pixel blocks is performed in order to produce solid foreground segments • Color normalization is used to minimize the effects of shadows on a frame level • The brightness of the foreground segment is set at 95% • The preferred and visibly better way is Gamma correction, but a faster solution is needed for the real-time system
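A hedged sketch of the foreground-estimation steps above (background subtraction, summing the RGB channels, binarization, 8x8 block median filtering); the subtraction threshold is a placeholder, and the frame-level color normalization is omitted.

```python
import numpy as np
from scipy.ndimage import median_filter

def foreground_mask(frame, background, thresh=40):
    """Binary foreground mask from an RGB frame and the empty-room background."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    score = diff.sum(axis=-1)                          # sum the RGB channels
    mask = score > thresh                              # binarize the result
    # Median filtering on 8x8 pixel blocks to produce solid foreground segments.
    return median_filter(mask.astype(np.uint8), size=8) > 0
```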

  27. Foreground Estimation (cont.)

  28. Skin Likelihood Segmentation • Color model based on the skin color and non-skin color histograms • Log-likelihood L(r,g,b) = log( (s[rgb]/Ts) / (n[rgb]/Tn) ) • s[rgb] is the pixel count contained in bin rgb of the skin histogram • n[rgb] is the equivalent count from the non-skin histogram • Ts and Tn are the total counts contained in the skin and non-skin histograms, respectively [7]

  29. Skin Likelihood Segmentation (cont.) • Algorithm • Obtain the likelihood map • The likelihood map L(r,g,b) is binarized: pixels take the value 1 (skin color) if L(r,g,b) > -0.75, and the remaining pixels take the value 0 • The different segments in the skin map are connected by using 8-way connectivity • The bounding boxes of the segments are identified, and boxes with small area (<0.2% of the frame area) are discarded because their resolution is too low for recognition • Choose segments with face-like elliptical aspect ratios: the eigenvalues obtained by performing PCA on the region are used to estimate its elliptical aspect ratio
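The histogram-based skin likelihood of [7], as used on slides 28-29, could be sketched as follows; the 32-bin RGB histograms are an assumption, since the slide does not state the bin size.

```python
import numpy as np

def skin_log_likelihood(image, skin_hist, nonskin_hist, bins=32):
    """L(r,g,b) = log((s[rgb]/Ts) / (n[rgb]/Tn)) for every pixel of an RGB image."""
    Ts, Tn = skin_hist.sum(), nonskin_hist.sum()
    idx = (image // (256 // bins)).reshape(-1, 3)      # quantize RGB into histogram bins
    s = skin_hist[idx[:, 0], idx[:, 1], idx[:, 2]]
    n = nonskin_hist[idx[:, 0], idx[:, 1], idx[:, 2]]
    eps = 1e-8                                         # avoid log(0) for empty bins
    L = np.log((s / Ts + eps) / (n / Tn + eps))
    return L.reshape(image.shape[:2])

def skin_map(image, skin_hist, nonskin_hist):
    """Binarized skin map: 1 where L(r,g,b) > -0.75, 0 elsewhere."""
    return skin_log_likelihood(image, skin_hist, nonskin_hist) > -0.75
```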

  30. Skin Likelihood Segmentation (cont.)

  31. Eye Detector • Thought • If we can identify the eyes and their location reliably, we can perform necessary normalizations in terms of shift, scale and rotation • Two stages • First, the eye zone (eyes and bridge of the nose area) is detected in the face candidate segments • As a second stage, we detect the eyes in the identified eye zone
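The slide names the two stages but not the underlying technique, so the following is only a generic stand-in for an eye locator, not the AIT-FACER implementation: the eye zone is taken as the darkest horizontal band in the upper half of the face, and the two largest dark blobs inside it are taken as the eyes.

```python
import numpy as np
from scipy.ndimage import label, center_of_mass

def locate_eyes(gray_face):
    """Return (row, col) of two eye candidates in a normalized grayscale face, or None."""
    h, _ = gray_face.shape
    # Stage 1: eye zone = darkest horizontal band in the upper half of the face.
    band = int(np.argmin(gray_face[: h // 2].mean(axis=1)))
    top = max(0, band - h // 10)
    zone = gray_face[top: band + h // 10]
    # Stage 2: the two largest dark blobs inside the eye zone.
    dark = zone < np.percentile(zone, 10)
    labels, n = label(dark)
    if n < 2:
        return None
    sizes = np.bincount(labels.ravel())[1:]            # blob sizes for labels 1..n
    ids = np.argsort(sizes)[::-1][:2] + 1
    centers = (center_of_mass(dark, labels, i) for i in ids)
    return [(r + top, c) for r, c in centers]          # back to face coordinates
```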

  32. Eye Detector (cont.)

  33. Frontal Face Verification • Problem • Skin segmentation heuristics define many areas that are not frontal faces • Further, the eye detector always defines two dark spots as eyes, even when the segment is not a frontal face • Solution • The first stage uses DFFS to compute the distance from a frontal face prototype • Segments with smaller DFFS values are considered frontal faces with larger confidence • A two-class LDA classifier is trained to discriminate frontal from non-frontal head views
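A minimal sketch of the DFFS stage, assuming an eigenspace (mean face plus orthonormal eigenfaces U) trained on frontal-face prototypes; the acceptance threshold is a placeholder, and the LDA frontal/non-frontal classifier is not shown.

```python
import numpy as np

def dffs(segment, mean, U):
    """Distance-From-Face-Space: reconstruction error after projecting onto the eigenspace."""
    phi = segment - mean
    reconstruction = U @ (U.T @ phi)          # projection onto the frontal-face space
    return np.linalg.norm(phi - reconstruction)

def is_frontal(segment, mean, U, threshold=2500.0):
    """Smaller DFFS means higher confidence that the segment is a frontal face."""
    return dffs(segment, mean, U) < threshold
```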

  34. Frontal Face Verification (cont.) • The 100 normalized segments in ascending DFFS order

  35. Face Recognition • All normalized segments are finally processed by an LDA classifier and an identity tag is attached to each one
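A minimal sketch of this final step using scikit-learn; the slide only states that an LDA classifier attaches an identity tag, so the PCA dimensionality reduction in front of the LDA is an assumption made to keep the example well-posed.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def train_recognizer(train_faces, train_ids, n_components=50):
    """Fit a PCA+LDA classifier on vectorized, normalized face segments."""
    clf = make_pipeline(PCA(n_components=n_components),
                        LinearDiscriminantAnalysis())
    clf.fit(train_faces, train_ids)
    return clf

# identity_tags = clf.predict(normalized_segments)   # one identity tag per segment
```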

  36. Result

  37. Video-Based Face Recognition Evaluation in the CHIL Project – Run 1 Ekenel, H.K.; Pnevmatikakis, A.; Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR’06), 2006 [5]

  38. Smart-Room

  39. Face Image

  40. [6]

  41. Reference • [1] M.W. Islam, M.M. Monwar, P.P. Paul, and S. Rezaei, "A Real-Time Face Recognition Approach from Video Sequence using Skin Color Model and Eigenface Method," IEEE Canadian Conference on Electrical and Computer Engineering, May 2006, pp. 2181-2185 • [2] R.S. Feris, T.E. de Campos, and R.M.C. Junior, "Detection and tracking of facial features in video sequences," Proceedings of the Mexican International Conference on Artificial Intelligence: Advances in Artificial Intelligence, pp. 127-135, 2000 • [3] M.A. Turk and A.P. Pentland, "Face recognition using Eigenfaces," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586-591, June 1991 • [4] A. Pnevmatikakis and L. Polymenakos, "An Automatic Face Detection and Recognition System for Video Streams," 2nd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms (MLMI), 2005 • [5] H.K. Ekenel and A. Pnevmatikakis, "Video-Based Face Recognition Evaluation in the CHIL Project – Run 1," Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR’06), 2006 • [6] CHIL, http://chil.server.de/servlet/is/2764/ • [7] M. Jones and J. Rehg, "Statistical color models with application to skin detection," Computer Vision and Pattern Recognition, pp. 274-280, 1999
