
Comparison and Combination of Ear and Face Images in Appearance-Based Biometrics, IEEE Trans on PAMI, VOL. 25, NO. 9, 2003, Kyong Chang, Kevin W. Bowyer, Sudeep Sarkar, Barnabas

Integrating Faces and Fingerprints for Personal Identification, IEEE Trans on PAMI, VOL. 20, NO. 12, 1998, Lin Hong and Anil Jain





  1. Comparison and Combination of Ear and Face Images in Appearance-Based Biometrics, IEEE Trans on PAMI, VOL. 25, NO. 9, 2003, Kyong Chang, Kevin W. Bowyer, Sudeep Sarkar, Barnabas; Integrating Faces and Fingerprints for Personal Identification, IEEE Trans on PAMI, VOL. 20, NO. 12, 1998, Lin Hong and Anil Jain. Presented by: Zhiming Liu. Instructor: Dr. Bebis

  2. Multimodal Biometrics • Individual biometric techniques (face, ear, fingerprint) have their own advantages and disadvantages, and each is admissible depending on the application domain. • Combining them can improve recognition performance.

  3. Face versus Ear • Normalization • Original face images are cropped to 768×1,024 pixels and original ear images are cropped to 400×500 pixels.

  4. Face versus Ear • The cropped images are normalized to 130×150 pixels. • Masks are applied to the face and ear images to remove the background. • The images are histogram-equalized.
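The normalization steps above can be sketched as follows (a minimal illustration; the nearest-neighbour resize, the masking convention, and all names are our assumptions, not the papers' code):

```python
import numpy as np

def normalize_image(img, mask, out_shape=(150, 130)):
    """Normalize a cropped grayscale image as on the slide: resize to a
    common 130x150 size, mask out the background, histogram-equalize."""
    # Nearest-neighbour resize to out_shape = (rows, cols).
    rows = np.linspace(0, img.shape[0] - 1, out_shape[0]).astype(int)
    cols = np.linspace(0, img.shape[1] - 1, out_shape[1]).astype(int)
    resized = img[np.ix_(rows, cols)]
    # Zero out pixels outside the mask (background removal).
    masked = np.where(mask, resized, 0)
    # Histogram equalization computed over the masked pixels only.
    vals = masked[mask]
    hist, bins = np.histogram(vals, bins=256, range=(0, 255))
    cdf = hist.cumsum()
    cdf = 255 * cdf / cdf[-1]          # normalized CDF -> new intensities
    eq = masked.astype(float).copy()
    eq[mask] = np.interp(vals, bins[:-1], cdf)
    return eq
```

Equalizing only inside the mask keeps the (zeroed) background from skewing the intensity mapping.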

  5. Face versus Ear • Eigen-faces and Eigen-ears • PCA computes the eigenvectors and eigenvalues. • Following the FERET approach, we use the eigenvectors corresponding to the first 60 percent of the largest eigenvalues and drop the first eigenvector, as it mainly represents illumination. • Another approach keeps eigenvectors accounting for a fixed percentage of the total energy.
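The eigen-face/eigen-ear computation can be sketched with a thin SVD (a sketch only: the count-based 60-percent cutoff is our reading of the slide, the energy-based cutoff being the stated alternative, and all names are ours):

```python
import numpy as np

def eigenfaces(train, keep_frac=0.60, drop_first=True):
    """PCA on vectorized training images.  Keeps the first keep_frac of
    the eigenvectors (largest eigenvalues first) and, following the
    slide, drops the first one, which mostly captures illumination."""
    X = train.reshape(len(train), -1).astype(float)
    mean = X.mean(axis=0)
    # Thin SVD of the centered data: rows of Vt are the eigenfaces,
    # ordered by decreasing singular value (eigenvalue = sigma^2).
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    k = max(1, int(keep_frac * len(s)))
    basis = Vt[1:k] if drop_first else Vt[:k]
    return mean, basis

def project(img, mean, basis):
    """Coefficients of an image in the eigenface subspace."""
    return basis @ (img.ravel().astype(float) - mean)
```

Recognition then reduces to nearest-neighbour search among the projected gallery coefficients.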

  6. Face versus Ear • Database • The training set consists of 197 subjects, each of whom has both a face image and an ear image. • There is a separate (gallery, probe) data set for three experiments: the day variation, the lighting variation and the pose variation.

  7. Face versus Ear • Experimental Results: Face versus Ear Face and ear recognition performance in the day variation experiment

  8. Face versus Ear Face and ear recognition performance in the lighting variation experiment

  9. Face versus Ear Face and ear recognition performance in the pose variation experiment: 22.5 degree rotation to the left between the gallery and probe images

  10. Face versus Ear • Experimental Results: Face versus Ear • Simple combination technique: the normalized, masked ear and face images of a subject are concatenated to form a combined face-plus-ear image. • Compute eigenvectors and eigenvalues using these combined images.
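The combination technique above is just image-level concatenation, e.g. (side-by-side layout is our assumption; the slide only specifies that the two images are concatenated):

```python
import numpy as np

def combine_face_ear(face, ear):
    """Concatenate a subject's normalized, masked face and ear images
    into one combined face-plus-ear image."""
    if face.shape[0] != ear.shape[0]:
        raise ValueError("face and ear images must have the same height")
    return np.hstack([face, ear])
```

The same PCA pipeline is then run on these combined images, yielding a single multimodal eigen-space.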

  11. Face versus Ear • Face combined with ear recognition performance in the day variation experiment. • Rank-one recognition rate: 90.9% for the combination versus 71.6% for the ear and 70.5% for the face

  12. Face versus Ear • Face combined with ear recognition performance in the lighting variation experiment. • Rank-one recognition rate: 87.4% for the combination versus 68.5% for the ear and 64.9% for the face

  13. Face versus Ear Face combined with ear recognition performance in the pose variation experiment

  14. Face versus Ear • Discussion • The results do not support a conclusion that an ear-based or a face-based biometric necessarily offers better performance than the other. • The results do support the conclusion that a multimodal biometric using both the ear and the face can outperform a biometric using either one alone. • Do different choices of eigenvectors affect the performance?

  15. Face versus Ear

  16. Face versus Fingerprint • Fingerprint Verification An alignment-based "elastic" matching algorithm: • Alignment stage: transformations such as translation, rotation, and scaling between an input and a template in the database are estimated; then the input minutiae are aligned with the template minutiae. • Matching stage: both the input minutiae and the template minutiae are converted to "strings" in the polar coordinate system, and an "elastic" string matching algorithm is used to match the resulting strings.

  17. Face versus Fingerprint • Let P denote the p minutiae in the template and Q denote the q minutiae in the input image. • After estimating the transformation parameters and aligning the two minutiae patterns, convert the template pattern and the input pattern into polar coordinate representations P* and Q*. • Match P* and Q* with a modified dynamic-programming algorithm. • The matching score, S, is computed from the number of matched minutiae pairs between P* and Q*.
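The polar-coordinate conversion in the matching stage can be sketched as follows (a minimal illustration; the symbol names are ours, and the paper's actual matcher additionally tolerates elastic deformation):

```python
import math

def to_polar(minutiae, ref):
    """Convert aligned minutiae (x, y, theta) into a polar representation
    relative to a reference minutia: radial distance r, radial angle e,
    and orientation phi measured against the reference direction."""
    xr, yr, tr = ref
    polar = []
    for x, y, theta in minutiae:
        dx, dy = x - xr, y - yr
        r = math.hypot(dx, dy)
        e = (math.atan2(dy, dx) - tr) % (2 * math.pi)
        phi = (theta - tr) % (2 * math.pi)
        polar.append((r, e, phi))
    # Sorting by radial angle turns the 2D pattern into the 1D "string"
    # that the dynamic-programming matcher consumes.
    polar.sort(key=lambda m: m[1])
    return polar
```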

  18. Face versus Fingerprint • Decision Fusion • Abstract level: the output from each module is only a set of possible labels without any confidence associated with the labels; in this case, the simple majority rule may be employed to reach a more reliable decision. • Rank level: the output from each module is a set of possible labels ranked by decreasing confidence values, but the confidence values themselves are not specified. • Measurement level: the output from each module is a set of possible labels with associated confidence values; in this case, a more accurate decision can be made by integrating the different confidence measures into a single, more informative one.
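At the abstract level, the majority rule mentioned above fits in a few lines (a minimal illustration; the function name and tie-break are our choices):

```python
from collections import Counter

def majority_rule(labels):
    """Abstract-level fusion: each module contributes only a label, so
    the fused decision is the label proposed most often.  Ties resolve
    to the earliest-seen label."""
    return Counter(labels).most_common(1)[0][0]
```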

  19. Face versus Fingerprint • We need to define a measure that indicates the confidence of the decision criterion, and a decision fusion criterion. • The confidence of a given decision criterion may be characterized by its FAR (false acceptance rate). • To estimate the FAR, the impostor distribution needs to be computed.

  20. Face versus Fingerprint • Impostor Distribution for Fingerprint Verification • The region of interest of both the input fingerprint and the template is of the same size, W×W. • Let the size of a cell be w×w; there are a total of (W×W)/(w×w) = Nc different cells in the region of interest of a fingerprint. • Assume that each fingerprint has the same number of minutiae, Nm (<= Nc), which are distributed randomly over the cells, with each cell containing at most one minutia. • Each minutia is directed toward one of D possible orientations with equal probability.

  21. Face versus Fingerprint • For a given cell, the probability that the cell is empty, with no minutia present, is Pempty = (Nc − Nm)/Nc, and the probability that the cell has a minutia directed toward a specific orientation is P = (1 − Pempty)/D = (Nm/Nc)/D. • A pair of corresponding minutiae between a template and an input is considered identical if and only if they lie in cells at the same position and are directed in the same direction.

  22. Face versus Fingerprint • With the above simplifying assumptions, the number of corresponding minutiae pairs between any two randomly selected minutiae patterns is a random variable, Y, which has a binomial distribution with parameters Nm and P: P(Y = y) = C(Nm, y) · P^y · (1 − P)^(Nm − y). • The probability that the number of corresponding minutiae pairs between any two minutiae patterns is less than a given threshold value, y, is P(Y < y) = Σ_{i=0}^{y−1} C(Nm, i) · P^i · (1 − P)^(Nm − i).
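The binomial impostor model above can be evaluated directly (a sketch; the parameter values below are illustrative assumptions, not the paper's):

```python
from math import comb

def impostor_cdf(y, Nm, P):
    """P(Y < y) for Y ~ Binomial(Nm, P): the probability that fewer
    than y minutiae pairs match between two random minutiae patterns."""
    return sum(comb(Nm, k) * P**k * (1 - P)**(Nm - k) for k in range(y))

# Cell model from the slides: Nc cells, Nm minutiae, D orientations, so
# a given cell holds a minutia in a specific direction with probability
# P = (Nm / Nc) / D.  The numbers are illustrative only.
Nc, Nm, D = 400, 40, 8
P = (Nm / Nc) / D
far = 1 - impostor_cdf(12, Nm, P)   # FAR at matching threshold y = 12
```

The FAR at threshold y is the tail probability 1 − P(Y < y), which is how the decision criterion's confidence is characterized.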

  23. Face versus Fingerprint • Impostor Distribution for Face Recognition • The top n matches are obtained by calculating the DFFS (distance from face space) values and arranging them in increasing order. • The relative distances between consecutive DFFSs are invariant to a mean shift of the DFFSs. • The probability that a retrieved top-n match is incorrect differs across ranks. • Thus, the impostor distribution is a function of both the relative DFFS value, δ, and the rank order, i: F_i(δ) represents the probability that the consecutive DFFS difference between an impostor and the claimed individual at rank i is larger than a value δ, and P_order(i) is the probability that the retrieved match at rank i is an impostor.

  24. Face versus Fingerprint • Estimate P_order(i) • Let X denote the DFFS between an individual and his own template; it is a random variable with density function f(X). • Let X^β_1, X^β_2, …, X^β_{N−1} denote the DFFS values between an individual and the templates of the other N − 1 individuals in the database, with density function f^β(X). • For an individual, π, the rank, I, of X among X^β_1, X^β_2, …, X^β_{N−1} is a random variable; if p is the probability that a single impostor DFFS is smaller than X, then P(I = i) = C(N − 1, i − 1) · p^(i−1) · (1 − p)^(N−i), which, when p << 1 and N is very large, is well approximated by a Poisson distribution. • P(I = i) is the probability that the match at rank i is the genuine individual; therefore, P_order(i) = 1 − P(I = i).
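Since the rank distribution above depends on the fitted DFFS densities, a simple cross-check is a Monte Carlo sketch (the sampler arguments and all names are placeholders for whatever densities are actually fitted):

```python
import random

def rank_probs(genuine_draw, impostor_draw, N, trials=20000):
    """Monte Carlo estimate of P(I = i): the chance that the genuine
    DFFS X ranks i-th among itself and N-1 impostor DFFSs."""
    counts = [0] * N
    for _ in range(trials):
        x = genuine_draw()
        # Rank = 1 + number of impostor DFFSs smaller than the genuine one.
        rank = 1 + sum(impostor_draw() < x for _ in range(N - 1))
        counts[rank - 1] += 1
    return [c / trials for c in counts]
```

P_order(i) then follows as the probability that the rank-i match is not the genuine individual.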

  25. Face versus Fingerprint • Estimate F_i(δ) • Assume that, for a given individual, π, the values X^β_1, X^β_2, …, X^β_{N−1} are arranged in increasing order. • Define the non-negative distance between the (i+1)th and ith DFFS values as the ith DFFS distance, δ_i = X^β_{i+1} − X^β_i. • The distribution, f_i(δ_i), of the ith distance, δ_i, is obtained by marginalizing the joint distribution w_i(X^β, δ_i) of the ith DFFS value, X^β, and the ith distance, δ_i: f_i(δ_i) = ∫ w_i(X^β, δ_i) dX^β.

  26. Face versus Fingerprint • Estimate F_i(δ) (cont'd) • With the distribution, f_i(δ_i), of the ith distance defined, the probability that the DFFS distance of the impostor at rank i is larger than a threshold value, δ, is F_i(δ) = ∫_δ^∞ f_i(δ_i) dδ_i.

  27. Face versus Fingerprint • Decision Fusion • Each of the top n possible identities established by the face recognition module is verified by the fingerprint verification module: either rejects all the n possibilities or accepts only one of them as the genuine identity. • It is usually specified that the FAR of the system should be less than a given value. • The goal of decision fusion, in essence, is to derive a decision criterion which satisfies the FAR specification.

  28. Face versus Fingerprint • Decision Fusion (cont'd) • The composite impostor distribution at rank i combines F_i(δ) and P_order(i) from face recognition with the fingerprint impostor distribution; it defines the probability that an impostor is accepted at rank i with consecutive relative DFFS, δ, and fingerprint matching score, Y. • Let I1, I2, …, In denote the n possible identities established by face recognition, {X1, X2, …, Xn} the corresponding n DFFSs, {Y1, Y2, …, Yn} the corresponding n fingerprint matching scores, and FAR0 the specified value of the FAR.

  29. Face versus Fingerprint • Experimental Results • The fingerprint database contains 1,500 fingerprint images from 150 individuals, with 10 images per individual. • The face database contains 1,132 images of 86 individuals. • 640 fingerprints of 64 individuals were randomly selected as the training set.

  30. Face versus Fingerprint • Experimental Results (cont’d) Impostor distribution for fingerprint: the mean and standard deviation of the impostor distribution are estimated to be 0.7 and 0.64.

  31. Face versus Fingerprint • Experimental Results (cont'd) • 542 face images were used as training samples. • The first 64 eigenfaces were used for face recognition. • The top 5 impostor distributions were approximated. The impostor distribution for face recognition at rank 1, where the stars (*) represent empirical data and the solid curve the fitted distribution; the mean square error between the empirical and fitted distributions is 0.0014

  32. Face versus Fingerprint • Experimental Results (cont’d) • Randomly assign each of the remaining 86 individuals in the fingerprint database to an individual in the face database. • One fingerprint for each individual is randomly selected as the template for the individual. • Each of the remaining 590 faces was paired with a fingerprint to produce a test pair.

  33. Face versus Fingerprint • Experimental Results (cont’d)

  34. Face versus Fingerprint • Experimental Results (cont’d)

  35. Questions?
