
Deformation Modeling for Robust 3D Face Matching


Presentation Transcript


  1. Deformation Modeling for Robust 3D Face Matching Xiaoguang Lu and Anil K. Jain, Dept. of Computer Science & Engineering, Michigan State University

  2. Problem • Although 3D facial scans are largely insensitive to lighting and pose changes, nonrigid facial deformations caused by expression can hurt recognition • Collecting and storing multiple expression template scans for each subject is not practical • The same expression can also occur at differing intensities

  3. Proposed Scheme • A (hierarchical) geodesic sampling is used to quantify facial expression • Expression variations are learned from a small control group • These variations are used to create a deformable model from each gallery template • This deformable model is fitted to the target scan and the matching distance is computed

  4. Sampling • Landmarks are manually selected (nose tip, eye corners, mouth corners, and mouth contour) • Geodesic distances between certain feature pairs are computed (hierarchically in the latest work) • Each geodesic is split into L segments of equal length to generate L-1 new feature points (see the sketch below)
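The resampling step is straightforward once a geodesic has been extracted as an ordered polyline on the mesh. A minimal sketch, assuming the polyline representation; the function name is hypothetical, not from the paper:

```python
import numpy as np

def resample_geodesic(path, L):
    """Split a polyline geodesic into L equal-length segments and
    return the L-1 interior split points as new feature points.
    path: (N, 3) array of points along the geodesic, in order."""
    seg = np.diff(path, axis=0)                         # consecutive edge vectors
    edge_len = np.linalg.norm(seg, axis=1)              # edge lengths
    cum = np.concatenate([[0.0], np.cumsum(edge_len)])  # arc length at each vertex
    total = cum[-1]
    targets = total * np.arange(1, L) / L               # arc lengths of new points
    points = []
    for t in targets:
        i = np.clip(np.searchsorted(cum, t) - 1, 0, len(edge_len) - 1)
        frac = (t - cum[i]) / edge_len[i]               # fraction along that edge
        points.append(path[i] + frac * seg[i])
    return np.array(points)                             # (L-1, 3)
```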

  5. Deformation Transfer • Register the non-neutral scan with the neutral scan of the same face to estimate landmark displacements • Establish a mapping Φ from the neutral gallery face to the neutral target face • Use Φ to transfer the landmarks of the non-neutral gallery scan to the (synthesized) non-neutral target • Establish a mapping ψ from the neutral to the non-neutral target • Interpolate ψ using a thin-plate-spline mapping • Boundary constraints are included in the thin-plate-spline calculation as additional landmark points (the whole pipeline is sketched below)
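Put together, the transfer amounts to two thin-plate-spline fits and one evaluation. A sketch under the assumption that all landmark sets are (K, 3) arrays in known correspondence; tps_fit and tps_apply are the routines sketched under slide 7 below, and all names are hypothetical:

```python
def transfer_expression(g_neu, g_exp, t_neu):
    """Build the mapping psi that deforms the neutral target face into
    the (synthesized) expressive target face (slide 5, steps 2-5).
    g_neu, g_exp: neutral / non-neutral landmark sets of the gallery face
    t_neu:        neutral landmark set of the target face"""
    phi = tps_fit(g_neu, t_neu)      # Phi: neutral gallery -> neutral target
    t_exp = tps_apply(phi, g_exp)    # transferred expressive landmarks on target
    psi = tps_fit(t_neu, t_exp)      # psi: neutral target -> expressive target
    return psi                       # tps_apply(psi, vertices) deforms the surface
```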

  6. Registration • The neutral and non-neutral targets are aligned using features that move little with expression changes, such as the eye corners and nose tip • This separates the rigid transformation (pose) from the nonrigid deformation (expression)
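The slides do not name the alignment method; a standard least-squares rigid fit (the Kabsch/Procrustes solution) over the expression-stable landmarks is one plausible reading, sketched here:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.
    src, dst: (K, 3) corresponding expression-stable landmarks
    (e.g., eye corners and nose tip)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t                                    # aligned = src @ R.T + t
```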

  7. Thin-Plate Splines • Goal: find a mapping from landmark set U to landmark set V with known correspondences • Method: model the deformation as a thin metal sheet and find the function that minimizes its bending energy • Solution: F(u) = c + A·u + Wᵀ·s(u) • s(u) = (|u − u1|, |u − u2|, …)ᵀ • An analytical solution can be obtained for 3D points (a solve is sketched below)
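The solution above can be obtained from one dense linear system. A minimal sketch using the |u − ui| kernel given on the slide, with no regularization (which the paper's formulation may add):

```python
import numpy as np

def tps_fit(U, V):
    """Fit a 3D thin-plate spline F with F(U[i]) = V[i].
    U, V: (K, 3) corresponding landmark sets."""
    K = len(U)
    Kmat = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=-1)  # |ui - uj|
    P = np.hstack([np.ones((K, 1)), U])            # affine basis [1, u]
    A = np.zeros((K + 4, K + 4))
    A[:K, :K], A[:K, K:], A[K:, :K] = Kmat, P, P.T
    b = np.vstack([V, np.zeros((4, 3))])
    sol = np.linalg.solve(A, b)
    return U, sol[:K], sol[K:]                     # control points, W, [c; A]

def tps_apply(model, X):
    """Evaluate F(x) = c + A*x + W'*s(x) at each row of X ((N, 3))."""
    ctrl, W, affine = model
    S = np.linalg.norm(X[:, None, :] - ctrl[None, :, :], axis=-1)  # rows s(x)
    return np.hstack([np.ones((len(X), 1)), X]) @ affine + S @ W
```

The four extra rows of the system enforce the standard side conditions that the kernel weights W carry no residual affine component.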

  8. Deformable Model Construction • To generate a deformable model, each learned expression is simulated on a neutral gallery face • The face is represented as a combination of shape vectors: S = α1·S1 + α2·S2 + … + αM·SM • M is the number of synthesized templates, αi is the weight of each template • By adjusting the weights αi, various combinations of expressions can be generated • To reduce computational complexity, one deformable model is generated per expression
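With the templates stored as flattened shape vectors, generating a deformed face is a single weighted sum. A sketch; the array layout is an assumption:

```python
import numpy as np

def combine(templates, alphas):
    """S = alpha1*S1 + ... + alphaM*SM (slide 8).
    templates: (M, 3N) array, one flattened shape vector per template
    alphas:    (M,) template weights"""
    return alphas @ templates          # (3N,) combined shape vector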

  9. Matching • Coarse alignment is performed as during deformation transfer • The alignment is refined with the iterative closest point (ICP) algorithm • Associate each model point with its nearest neighbor on the test scan, compute the transform that minimizes the distance, repeat • Minimize a cost function of the form ||(R·S + T) − St||² by solving for the αi • R and T are the rotation and translation matrices, S is the deformable model, and St is the test scan • Use these αi to compute a new iterative-closest-point distance, and repeat the refinement until convergence (see the sketch below)
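A sketch of the alternating loop, reusing rigid_align from the slide 6 sketch and the template layout from slide 8. The unconstrained least-squares weight solve and the fixed iteration count are simplifications; the paper's actual optimizer and convergence test may differ:

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_deformable(templates, test_pts, alphas, iters=30):
    """Alternate ICP alignment with deformation-weight fitting (slide 9).
    templates: (M, N, 3) synthesized templates
    test_pts:  (P, 3) test scan points
    alphas:    (M,) initial weights"""
    M, N, _ = templates.shape
    T = templates.reshape(M, -1)                # (M, 3N) shape vectors
    tree = cKDTree(test_pts)
    for _ in range(iters):
        S = (alphas @ T).reshape(N, 3)          # current deformed model
        _, idx = tree.query(S)                  # nearest-neighbor correspondences
        matched = test_pts[idx]
        R, t = rigid_align(S, matched)          # rigid refinement step
        target = (matched - t) @ R              # correspondences in model coords
        alphas, *_ = np.linalg.lstsq(T.T, target.ravel(), rcond=None)
    S = (alphas @ T).reshape(N, 3)              # final deformed model
    d, _ = tree.query(S @ R.T + t)              # residual nearest-point distances
    return alphas, R, t, d.mean()               # mean distance = matching score
```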

  10. Experiment I • Self-collected database of 10 subjects at 3 different poses, with 7 different expressions, for 210 total scans and 10 gallery models • 5 subjects chosen at random as the control group, leaving 105 scans for recognition • Results:

  11. Experiment II • Control group: 10 subjects from Experiment I • Test group: 90 additional subjects, with 6 scans each at different viewpoints (in most cases) • 533 total test scans • Results:

  12. Experiment III • A subset of FRGC v2.0 dataset • Scans with the earliest timestamp and neutral expression are used as templates • 50 gallery scans, 150 test scans • 10 subjects in Experiment I used as control group • Latest results (after publication):

  13. Conclusions • One area for improvement (noted in the paper) is the dependence on manual landmark labeling • Also, I think geometric invariants might be applicable as a replacement for their registration step (which is subject to local minima)

  14. Questions?
