This paper explores innovative image representations derived from the Principal Component Analysis (PCA) transformation of color component images for face recognition tasks. Traditional RGB representations are evaluated alongside YIQ, HSV, and YCbCr spaces, focusing on their effectiveness in classification. The study introduces a Local Binary Pattern (LBP) feature extraction method to enhance recognition accuracy. Experimental results highlight the performance of new representations, achieving a maximum face verification rate of 83.41% at a 0.1% false acceptance rate, demonstrating the method's efficacy for diverse image datasets.
Face Recognition Using New Image Representations Zhiming Liu and Qingchuan Tao 2009 IEEE
Outline • Introduction • Motivation • New Image Representation Via PCA Transformation • Experiments • Conclusion
Introduction • While the commonly used gray-scale image is derived from a linear combination of the R, G, and B color component images, the new image representations are derived from a Principal Component Analysis (PCA) transformation applied to hybrid configurations of different color component images.
Introduction • We propose to encode the facial information in the new image representations using an effective Local Binary Pattern (LBP) feature extraction method that extracts and fuses multi-resolution LBP features.
Motivation • For color face image recognition, the RGB color space is commonly used in some methods. • Color spaces transformed from the RGB space, such as YIQ, HSV, and YCbCr, are also adopted to perform face recognition.
Motivation • First, we calculate the correlation coefficients between the individual components in the RGB, YIQ, and YCbCr color spaces.
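Motivation • A minimal sketch of this correlation measurement (the array layout and the random example data are illustrative assumptions, not from the paper):

```python
import numpy as np

def channel_correlations(images):
    """Correlation coefficients between the three color components,
    computed over all pixels of all training images.

    images: array of shape (num_images, height, width, 3).
    """
    # Flatten every image so each row is one pixel's three component values.
    pixels = images.reshape(-1, 3).astype(np.float64)
    # np.corrcoef expects one variable per row, hence the transpose.
    return np.corrcoef(pixels.T)  # 3 x 3 correlation matrix

# Example with random data standing in for RGB training images.
rgb_batch = np.random.rand(10, 64, 64, 3)
print(channel_correlations(rgb_batch))
```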
Motivation • Based on the within-class scatter matrix Sw and the between-class scatter matrix Sb of the training database, we can evaluate the class separability by using the Fisher criterion: J4 = tr(Sb) / tr(Sw).
Motivation • Sw: the within-class scatter matrix • Sb: the between-class scatter matrix (a short computational sketch of the criterion follows below)
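Motivation • A minimal sketch of this separability criterion, assuming the training samples are grouped by class label (the data layout is an illustrative assumption, not from the paper):

```python
import numpy as np

def fisher_criterion(class_samples):
    """Fisher criterion J = tr(Sb) / tr(Sw) over a labeled training set.

    class_samples: dict mapping class label -> array of shape (n_i, d).
    """
    all_samples = np.vstack(list(class_samples.values()))
    global_mean = all_samples.mean(axis=0)
    d = all_samples.shape[1]

    Sw = np.zeros((d, d))  # within-class scatter matrix
    Sb = np.zeros((d, d))  # between-class scatter matrix
    for samples in class_samples.values():
        class_mean = samples.mean(axis=0)
        centered = samples - class_mean
        Sw += centered.T @ centered
        diff = (class_mean - global_mean).reshape(-1, 1)
        Sb += samples.shape[0] * (diff @ diff.T)

    return np.trace(Sb) / np.trace(Sw)
```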
Motivation • Table II gives the calculation results, which indicate that the color components G and B have the weakest power of image classification, at least for the FRGC training database.
New Image Representation Via PCA • We assume that the three color component images are column vectors of dimension N, where N = m x n. • We can form a data matrix X using all the training images, where l is the number of training images.
New Image Representation Via PCA • The covariance matrix of X may be formulated as ΣX = E[(X − E(X))(X − E(X))^t], where E(·) is the expectation operator and t denotes the transpose operation.
New Image Representation Via PCA • The PCA of a random vector X factorizes the covariance matrix into the form ΣX = ΦΛΦ^t, where Φ is an orthonormal eigenvector matrix and Λ is a diagonal eigenvalue matrix with its diagonal elements in decreasing order.
New Image Representation Via PCA • Then a new image representation can be derived by projecting the three color component images of an image onto the eigenvector matrix Φ.
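New Image Representation Via PCA • A minimal sketch of this derivation, assuming every pixel's three color component values form one 3-dimensional sample and that the components are normalized beforehand (the data layout is an assumption for illustration, not the paper's exact formulation):

```python
import numpy as np

def pca_color_transform(train_components):
    """Learn a 3 x 3 PCA transform from three color component images.

    train_components: array of shape (num_images, height, width, 3)
    holding the three chosen color components of every training image.
    """
    # Treat every pixel as one 3-dimensional sample (illustrative assumption).
    samples = train_components.reshape(-1, 3).astype(np.float64)
    # Normalize each component to zero mean and unit variance.
    samples = (samples - samples.mean(axis=0)) / samples.std(axis=0)
    # Eigen-decomposition of the 3 x 3 covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(np.cov(samples.T))
    # Order the eigenvectors by decreasing eigenvalue.
    return eigvecs[:, np.argsort(eigvals)[::-1]]

def project_image(components, transform):
    """Project one image's three color components onto the learned directions,
    yielding three new image representations."""
    h, w, _ = components.shape
    return (components.reshape(-1, 3) @ transform).reshape(h, w, 3)
```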
Experiments • In particular, the training set contains 12,776 images that are either controlled or uncontrolled. • The target set has 16,028 controlled images and the query set has 8,014 uncontrolled images.
Experiments • A. Effectiveness of New Image Representations for Face Recognition • Some new image representations, such as URCrQ, URCbQ, and so on, can be generated by using the transformation derived from PCA. • Note that before the transformation, the color component images in (4) are normalized to have zero mean and unit variance, respectively.
Experiments • Table III shows the face verification rates (FVR) at 0.1% false accept rate (FAR); only image representations with FVR above 60% are listed, and R, Y, and URGB are also included for comparison.
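Experiments • A minimal sketch of how a verification rate at a fixed false accept rate can be obtained from genuine and impostor similarity scores (the thresholding details are an illustrative assumption; the paper follows the FRGC evaluation protocol):

```python
import numpy as np

def fvr_at_far(genuine_scores, impostor_scores, far=0.001):
    """Face verification rate at a given false accept rate.

    genuine_scores: similarity scores for same-identity pairs.
    impostor_scores: similarity scores for different-identity pairs.
    """
    impostor_scores = np.sort(np.asarray(impostor_scores))
    # Threshold chosen so that only a fraction `far` of impostor scores exceed it.
    idx = int(np.ceil((1.0 - far) * len(impostor_scores))) - 1
    threshold = impostor_scores[idx]
    return np.mean(np.asarray(genuine_scores) > threshold)
```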
Experiments • Fig. 1 shows some color component images and the resulting new image representations by using the transform coefficients.
Experiments • Table IV shows that UYCbQ is strongly decorrelated from UYCrQ and URCrQ.
Experiments • The fused classification results are detailed in Table V, which indicates that the best performance, 77.10%, can be reached by fusing UYCrQ and UYCbQ, as expected.
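Experiments • The slides do not state the fusion rule; a minimal sketch assuming z-score normalization followed by sum-rule fusion of the two sets of matching scores (a common choice, not necessarily the paper's):

```python
import numpy as np

def fuse_scores(scores_a, scores_b):
    """Fuse two sets of matching scores with z-score normalization and the
    sum rule (an illustrative assumption, not the paper's stated rule)."""
    def znorm(scores):
        scores = np.asarray(scores, dtype=np.float64)
        return (scores - scores.mean()) / scores.std()
    return znorm(scores_a) + znorm(scores_b)
```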
Experiments • B. LBP-based Face Recognition Using New Image Representation • In this section, we present an effective method of using LBP features for face recognition. • The basic LBP operator labels each pixel by thresholding its 3x3 neighborhood at the center pixel value: LBP = Σp s(gp − gc)·2^p, where s(x) = 1 if x ≥ 0 and 0 otherwise.
Experiments • After extension, the LBP operator can be expressed as LBP(P, R), where P denotes the number of sampling points on a circle of radius R. • A multi-resolution LBP feature fusion is proposed, as shown in Fig. 2.
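Experiments • A minimal sketch of extracting and concatenating LBP histograms at several (P, R) resolutions with scikit-image's uniform LBP; the (P, R) settings are illustrative, and the histograms are computed over the whole image for brevity rather than following the paper's exact configuration:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def multi_resolution_lbp(gray_image, settings=((8, 1), (8, 2), (16, 2))):
    """Concatenate uniform-LBP histograms computed at several (P, R) resolutions.

    gray_image: 2-D array, a single-channel image representation.
    settings: illustrative (P, R) pairs, not the paper's exact choice.
    """
    features = []
    for p, r in settings:
        lbp = local_binary_pattern(gray_image, P=p, R=r, method="uniform")
        # The 'uniform' mapping yields P + 2 distinct labels.
        hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
        features.append(hist)
    return np.concatenate(features)
```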
Experiments • The third set of experiments evaluates face recognition performance by using the proposed multi-resolution LBP feature fusion on new image representations.
Experiments • The proposed LBP method is applied to the UYCrQ, UYCbQ, R, and Y images, and the corresponding experimental results are shown in Table VI.
Experiments • The final results are given in Table VII, which indicates that the best FVR of 83.41% at 0.1% FAR is achieved by fusing the classification outputs of the UYCrQ and Y images.
Experiments • Fig. 3 shows the corresponding ROC curves for the best FVR obtained by our method.
Conclusion • The experiments show that satisfactory results have been achieved by using these new image representations and LBP features. • Future work will focus on seeking more reliable criteria for choosing the color component images, as well as new learning methods to derive the color transformation.