  1. Reconstruction of 3D Face Surface from Slices: A Literature Survey Mahmudul Hasan CPSC 601.20: Biometric Technologies Department of Computer Science, University of Calgary 2500 University Drive NW, Calgary, AB T2N 1N4, Canada mhasan@cpsc.ucalgary.ca

  2. Table of Contents

  3. Introduction • The main objective of this study was to perform a comparative analysis of existing 3D face surface reconstruction algorithms in terms of their basic methodologies and performance issues. • In addition, this study also examines some general 3D surface reconstruction algorithms that can contribute to the reconstruction of 3D faces. • The existing algorithms are categorized according to whether they require prior knowledge about the class of solutions. • A detailed comparative study is presented based on the advantages, limitations, and areas of application of the surveyed 3D face surface reconstruction techniques.

  4. Introduction (cont.) • Face recognition has recently received significant attention as one of the most successful applications of image analysis and understanding, especially during the past several years. At least two reasons account for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research [1]. • The problem of machine recognition of human faces continues to attract researchers from disciplines such as image processing, pattern recognition, neural networks, computer vision, computer graphics, and psychology [1]. • Even though current machine recognition systems have reached a certain level of maturity, their success is limited by the conditions imposed by many real applications. For example, recognition of face images acquired in an outdoor environment with changes in illumination and/or pose remains a largely unsolved problem [1].

  5. Introduction (cont.) • One of the major findings of the Face Recognition Vendor Test (FRVT) 2002 [2] was that three-dimensional morphable models and normalization increase the performance of face recognition. • 3D model-based methods [3, 4, 5, 6] provide potential solutions to pose-invariant face recognition. • 3D face models are usually derived from laser-scanned 3D heads (range data) or reconstructed using shape from shading [7].

  6. Introduction (cont.) • Realistic-looking facial modeling and animation is one of the most interesting and difficult problems in computer graphics [8]. • So far, in most of the popular commercially available tools, the 3-D facial models are obtained not directly from images but by laser scanning of people’s faces [8]. • These scanners are usually expensive, and a number of hours of work is required before the model can be animated [8]. • To avoid the shortcomings of laser-scan based face modeling, image-based face modeling methods have received significant attention in the past several years [8]. Some of these methods reconstruct the 3D face from one or more 2D face images (slices).

  7. Background: Existing Methods • In 1996, J.J. Atick et al. presented a technique for recovering 3D face shape from a single 2D image using only the shading information, i.e., solving the shape-from-shading problem [3]. • In 2004, D. Onofrio et al. proposed a method that determines correspondences between surface patches on different views of a face through a modeling of disparity maps based on Markov Random Fields (MRFs) [8]. • In 2004, V. Blanz et al. presented an algorithm based on a set of feature point locations which produces high-resolution shape estimates of the 3D face from a 2D face image [9, 10]. • In 2006, V. Blanz et al. presented an algorithm based on an analysis-by-synthesis technique that estimates shape and pose by fully reproducing the appearance of the face in the image [10]. • In 2006, Z. Zhang et al. proposed a minimum variance estimation framework for 3D face reconstruction from multiple views and a new 3D surface reconstruction algorithm based on a deformable subdivision mesh [11].

  8. Background: General 3D Surface Reconstruction Algorithms • In 1994, D. Shiwei et al. proposed a method where the range image is segmented into regions corresponding to the surface patches on objects. Then, algebraic surfaces are fitted to the range points in these regions by solving a generalized eigenvector problem [12]. • In 1996, G. Barequet et al. presented an algorithm which reconstructs a solid model given a series of planar cross-sections. The main contribution of this work was the use of knowledge obtained during the interpolation of neighboring layers while attempting to interpolate a particular layer [13]. • In 2002, S.F. Frisken et al. presented an efficient method for estimating a 3D Euclidean distance field from 2D range images, which can be used by many existing algorithms that reconstruct 3D models from range data [14]. • A few other 3D surface reconstruction algorithms exist which are based on given sample points [15, 16, 17] and labeled image regions [18].
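
To give a feel for the kind of algebraic surface fitting used in [12], the sketch below fits a general quadric surface to a set of range points by solving an eigenvector problem on the normal equations of a monomial design matrix. The unit-norm constraint on the coefficient vector, the synthetic sphere data, and all names are illustrative assumptions, not the exact generalized-eigenvector formulation applied to segmented range regions in [12].

```python
import numpy as np

def fit_quadric(points):
    """Fit a quadric surface a1*x^2 + ... + a10 = 0 to Nx3 range points.

    Least-squares fit under a unit-norm constraint on the coefficient
    vector; the solution is the eigenvector of M^T M with the smallest
    eigenvalue. This is only a generic sketch of algebraic surface
    fitting, not the formulation of [12].
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Design matrix: one row of quadric monomials per range point.
    M = np.column_stack([x*x, y*y, z*z, x*y, y*z, z*x, x, y, z, np.ones_like(x)])
    eigvals, eigvecs = np.linalg.eigh(M.T @ M)
    return eigvecs[:, 0]  # coefficients of the best-fitting quadric

# Example: noisy samples from the unit sphere x^2 + y^2 + z^2 - 1 = 0.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts += 0.01 * rng.normal(size=pts.shape)
coeffs = fit_quadric(pts)
print(coeffs / coeffs[0])  # roughly [1, 1, 1, 0, 0, 0, 0, 0, 0, -1]
```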

  9. 3D Face Surface Reconstruction • The goal of 3D face surface reconstruction is to reconstruct a 3D face given one or more 2D face images. • The key approach used to solve this problem is to deform a 3D template so that it represents the target face in the database. • For most of the existing algorithms, the template is built in such a way that it can be deformed to represent all the faces in a particular database. • A great challenge is to deform the reconstructed 3D face to represent the target face in the database when the input 2D images have variations in illumination, pose, and facial expression.

  10. Applications of 3D Face Surface Reconstruction • The reconstructed 3D face can be used to register a face in the database or for the purpose of face recognition. • 3D face surface reconstruction has some interesting applications in animation and face recognition, where a single view of a person can be used to generate new views at any pose [3]. • It also potentially has some biomedical applications; for example, it can be used to design custom masks for facial burn victims from pre-burn photos. These masks are mostly designed using laser scans of a person’s face, which are expensive, inconvenient, and not always feasible for burn victims [3].

  11. Statistical Approach to Shape from Shading (A1) • This technique can recover 3D face shape from a single 2D face image using only the shading information [3]. • Shading is the variation in brightness from one point to another in an image [3]. • Shading carries information about shape because the amount of light a surface patch reflects depends on its orientation (surface normal) relative to the incident light. So, in the absence of variability in surface reflectance properties (surface material), the variability in brightness can only be due to changes in local surface orientation and hence conveys strong information about shape [3]. • The statistical technique of principal component analysis (PCA) is used to derive a low-dimensional parametrization of the head shape space [3]. • The ideal diffuser model, or Lambertian model, for surface reflectance is used under this technique [3].
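
As a rough illustration of the PCA parametrization mentioned above, the following Python sketch derives a low-dimensional head-shape space from a training matrix of flattened surface representations and reconstructs a shape from a coefficient vector. The data layout (one flattened surface per row of `shapes`) and the number of retained components are assumptions for illustration, not the exact representation used in [3].

```python
import numpy as np

def build_shape_space(shapes, n_components=50):
    """Derive a low-dimensional parametrization of head shape with PCA.

    `shapes` is an (n_heads, n_points) matrix in which each row is a
    flattened head-surface representation (e.g. a depth map); the exact
    representation in [3] differs, this is only a generic sketch.
    """
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    # SVD of the centered data gives the principal components directly.
    _, singular_values, components = np.linalg.svd(centered, full_matrices=False)
    return mean_shape, components[:n_components]

def reconstruct(mean_shape, components, coeffs):
    """Any head shape in the model is the mean plus a weighted sum of components."""
    return mean_shape + coeffs @ components
```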

  12. Statistical Approach to Shape from Shading (A1) (cont.) • Lambertian surfaces have two basic properties: firstly, they reflect light diffusely, i.e., equally in all directions; secondly, their brightness at any point is proportional to the cosine of the angle between the surface normal at that point and the incident light ray [3]. • This model, although idealized, turns out to be a fairly realistic approximation for many surfaces, including human skin [3].
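
A minimal sketch of the Lambertian reflectance model described above, assuming unit-length surface normals and a unit light direction: brightness is the albedo times the cosine of the angle between the surface normal and the incident light, clamped at zero for points facing away from the light. The variable names and the clamping convention are illustrative assumptions.

```python
import numpy as np

def lambertian_brightness(normals, light_dir, albedo=1.0):
    """Lambertian shading: I = albedo * max(0, n . l).

    `normals` is an (N, 3) array of unit surface normals and `light_dir`
    a unit vector pointing towards the light source; brightness is
    proportional to the cosine of the angle between them, clamped at
    zero for surface points facing away from the light.
    """
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir /= np.linalg.norm(light_dir)
    cos_angle = normals @ light_dir
    return albedo * np.clip(cos_angle, 0.0, None)

# Example: a patch facing the light is brightest, an oblique one dimmer.
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.7071, 0.7071]])
print(lambertian_brightness(normals, light_dir=[0.0, 0.0, 1.0]))  # ~[1.0, 0.7071]
```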

  13. Area Matching Based on Belief Propagation (A2) • This method determines correspondences between surface patches on different views of the face through a modeling of disparity maps based on Markov Random Fields (MRFs) [8]. • Under this technique, images were acquired by trinocular calibrated cameras and correspondences between the three views were determined [8]. • To deal with the problems of occlusions and textureless regions, disparity maps were modeled with MRFs in order to propagate information from textured to textureless regions [8]. • The Belief Propagation algorithm is applied to obtain the maximum a posteriori estimation of the disparity maps [8]. • In order to reduce false matching due to occlusions, outliers were eliminated by an epipolar constraint check [8].

  14. Area Matching Based on Belief Propagation (A2) (cont.) • In the above mentioned trinocular calibrated camera system, one of the cameras, taken as the reference (master), has a reasonably frontal, occlusion-free view, while the others (slaves) show some occlusions [8]. • The proposed algorithm computes two dense disparity maps between the master and the two slave views, and each map is modeled by one pairwise MRF [8]. • The marginal probability is estimated by performing Belief Propagation (BP) iterations on the MRF [8]. • At the end of the process, the MRF results are coupled in order to satisfy the epipolar constraint on the triplet of images and hence to eliminate outliers [8].
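
The sketch below illustrates the general idea of maximum a posteriori disparity estimation with min-sum belief propagation on a pairwise MRF, reduced here to a single scanline with a truncated-linear smoothness term. The cost functions, truncation constant, and restriction to one scanline are illustrative assumptions, not the trinocular formulation of [8].

```python
import numpy as np

def bp_disparity_scanline(data_cost, smooth_weight=1.0, trunc=2.0, n_iters=5):
    """MAP disparity for one scanline via min-sum belief propagation.

    data_cost[p, d] is the matching cost of assigning disparity d to
    pixel p (e.g. an absolute intensity difference between views).
    Neighbouring pixels are linked by a truncated-linear smoothness
    term, and messages are passed left/right along the chain.
    """
    n_pixels, n_disp = data_cost.shape
    d = np.arange(n_disp)
    # Pairwise smoothness cost V(d1, d2) = w * min(|d1 - d2|, trunc).
    V = smooth_weight * np.minimum(np.abs(d[:, None] - d[None, :]), trunc)

    msg_right = np.zeros((n_pixels, n_disp))  # message from pixel p-1 to p
    msg_left = np.zeros((n_pixels, n_disp))   # message from pixel p+1 to p
    for _ in range(n_iters):
        for p in range(1, n_pixels):           # left-to-right sweep
            incoming = data_cost[p - 1] + msg_right[p - 1]
            msg_right[p] = (incoming[:, None] + V).min(axis=0)
        for p in range(n_pixels - 2, -1, -1):  # right-to-left sweep
            incoming = data_cost[p + 1] + msg_left[p + 1]
            msg_left[p] = (incoming[:, None] + V).min(axis=0)

    belief = data_cost + msg_right + msg_left
    return belief.argmin(axis=1)  # MAP disparity per pixel

# Example: random matching costs for a 40-pixel scanline with 16 disparities.
costs = np.random.default_rng(1).random((40, 16))
print(bp_disparity_scanline(costs))
```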

  15. Area Matching Based on Belief Propagation (A2) (cont.) Fig. 1. Face image triplet [8] Fig. 2. Reconstructed 3D face [8]

  16. Analysis-by-Synthesis Technique Based on 3D Morphable Model (A3) • In order to solve the ill-posed problem of reconstructing an unknown shape with unknown texture from a single image, the morphable model approach uses prior knowledge about the class of solutions [10]. • In the case of 3D face reconstruction, this prior knowledge is represented by a parametrized manifold of face-like shapes embedded in the high-dimensional space of general textured surfaces of a given topology [10]. • More specifically, the morphable model captures the variations observed within a dataset of 3D scans of examples by converting them to a vector space representation [10]. • For surface reconstruction, the search is restricted to the linear span of these examples [10]. • Under this technique, estimation of 3D shape, texture, pose, and lighting is done simultaneously in an analysis-by-synthesis loop [10]. • The main goal of the analysis is to find suitable parameters for the morphable model that make the synthetic image as similar as possible to the original image in terms of the pixel-wise image difference [10].
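
The following sketch shows the overall structure of such an analysis-by-synthesis loop: synthesize an image from the current model parameters, measure the pixel-wise difference to the input image, and update the parameters to reduce it. The `render` callback and the crude finite-difference gradient step are placeholders for illustration; the actual renderer, cost function, and optimizer in [10] are considerably more elaborate.

```python
import numpy as np

def analysis_by_synthesis(input_image, render, init_params,
                          step=1e-3, eps=1e-4, n_iters=200):
    """Fit model parameters so the synthetic image matches the input.

    `render(params)` is assumed to synthesize an image (same shape as
    `input_image`) from shape/texture/pose/lighting parameters; here it
    is a placeholder for a morphable-model renderer. The loop minimizes
    the pixel-wise squared difference with a crude numerical gradient.
    """
    params = np.asarray(init_params, dtype=float).copy()

    def cost(p):
        diff = render(p) - input_image
        return float((diff ** 2).sum())

    for _ in range(n_iters):
        grad = np.zeros_like(params)
        base = cost(params)
        for i in range(params.size):       # finite-difference gradient
            probe = params.copy()
            probe[i] += eps
            grad[i] = (cost(probe) - base) / eps
        params -= step * grad              # descend on the image difference
    return params
```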

  17. Analysis-by-Synthesis Technique Based on 3D Morphable Model (A3) (cont.) Fig. 3. The top row shows the reconstructions of 3D shape and texture. In the second row, results are rendered into the original images with pose and illumination recovered by the algorithm. The third row shows novel views [10].

  18. Shape Estimation Based on a Set of Feature Point Locations (A4) • From a small number of 2D positions of feature points, the algorithm can recover the 3D shape of human faces at high resolution, inferring both depth and the missing vertex coordinates [9]. • The system is based on a morphable model that has been built from laser scans of 200 faces, using a modified optical flow algorithm to compute dense point-to-point correspondence. Each face is represented by the coordinates of 75972 vertices at a spacing of less than 1 mm. The 140 most relevant principal components are used [9]. • For shape reconstruction, the user clicks on feature points in the image and the corresponding points on the 3D reference model. Good results are achieved with 15 to 20 points [9]. • Due to the automated 3D alignment, no estimate of pose, position, or size is required. The system successfully compensates for rotation, scaling, and translation [9]. • The color values of the image are mapped as a texture on the surface, and missing color values are reflected from visible parts or filled in with the average texture of the morphable model [9].
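
A rough sketch of how a few 2D feature positions can constrain the coefficients of a linear shape model: each clicked point yields a linear equation relating the projected model vertex to the observed image position, and the coefficients are obtained by regularized least squares. The fixed projection matrix, the ridge regularizer, and all variable names are assumptions for illustration; the full algorithm in [9] also estimates pose and supports directional constraints.

```python
import numpy as np

def fit_shape_from_features(mean_shape, components, vertex_ids,
                            observed_2d, P, reg=0.1):
    """Estimate shape coefficients from a few 2D feature point positions.

    mean_shape : (3*V,) mean shape, vertices stacked as (x, y, z, x, y, z, ...)
    components : (K, 3*V) principal components of the shape model
    vertex_ids : indices of the model vertices that were clicked
    observed_2d: (F, 2) image positions of those feature points
    P          : (2, 3) known projection matrix (e.g. scaled orthographic)

    Solves a ridge-regularized linear system; pose estimation and the
    statistical model of [9] are omitted in this sketch.
    """
    rows, targets = [], []
    for vid, uv in zip(vertex_ids, observed_2d):
        sl = slice(3 * vid, 3 * vid + 3)
        # Projection of the reconstructed vertex: P @ (mean + coeffs @ components).
        rows.append(P @ components[:, sl].T)       # (2, K) block per feature
        targets.append(uv - P @ mean_shape[sl])    # residual after the mean shape
    A = np.vstack(rows)                            # (2F, K)
    b = np.concatenate(targets)                    # (2F,)
    K = components.shape[0]
    coeffs = np.linalg.solve(A.T @ A + reg * np.eye(K), A.T @ b)
    return mean_shape + coeffs @ components        # full high-resolution shape
```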

  19. Shape Estimation Based on a Set of Feature Point Locations (A4) (cont.) Fig. 4. From an original image at unknown pose (top, left) and a frontal starting position (top, right), the algorithm estimates 3D shape and pose from 17 feature coordinates, including 7 directional constraints (second row). 140 principal components and 7 vectors for transformations were used. The third row shows the texture-mapped result. Computation time is 250ms [9,10].

  20. Minimum Variance Estimation of 3D Face Shape (A5) • This 3D face surface reconstruction method is based on a deformable mesh [11]. • The developed system uses six synchronized cameras to capture face images from six different views [11]. • Then, a minimum variance estimation framework for 3D face reconstruction is applied to reconstruct a personalized 3D model of the face [11]. • Next, a 3D surface reconstruction algorithm based on a deformable subdivision mesh is applied to the images captured from different views to get more observations of the 3D face, especially the depth information, which could not be obtained from a single image directly [11]. • This algorithm continuously deforms a triangular mesh to minimize an energy function that measures the matching cost over the input images [11]. • Finally, the minimum variance estimation is again used to refine the result of the 3D surface reconstruction algorithm [11].
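
As a sketch of the basic statistical idea behind a minimum variance estimation step, the snippet below fuses several noisy per-view observations of the same quantity by inverse-variance weighting. The assumption of independent observations with known variances and the example numbers are illustrative; the framework in [11] is applied to the full 3D face model rather than to a single scalar.

```python
import numpy as np

def minimum_variance_fusion(estimates, variances):
    """Fuse several noisy estimates of the same quantity.

    estimates : (n_views, ...) per-view estimates of, e.g., a vertex position
    variances : (n_views, ...) corresponding estimation variances

    The minimum variance unbiased combination of independent estimates
    weighs each one by its inverse variance; the variance of the fused
    estimate is the reciprocal of the summed weights.
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = (weights * estimates).sum(axis=0) / weights.sum(axis=0)
    fused_variance = 1.0 / weights.sum(axis=0)
    return fused, fused_variance

# Example: depth of one vertex seen from three views with different noise levels.
depths = np.array([10.2, 9.8, 10.5])
variances = np.array([0.04, 0.01, 0.09])
print(minimum_variance_fusion(depths, variances))
```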

  21. Minimum Variance Estimation of 3D Face Shape (A5) (cont.) Fig. 5. Synchronized images captured from six views [11] Fig. 6. (a) 2D face alignment; (b) Initial 3D shape estimated from the 2D facial feature points; (c) The deformable mesh; (d) Result of 3D surface reconstruction; (e) The 3D shape estimated from 3D points [11]

  22. Comparative Study

  23. Comparative Performance Analyses

  24. Comparative Performance Analyses (cont.)

  25. Comparative Performance Analyses (cont.)

  26. Findings & Conclusions • A brief survey of existing 3D face surface reconstruction techniques has been conducted under this study. • In addition, this study also focused on some general 3D surface reconstruction algorithms which can contribute to 3D face surface reconstruction. • Along with the description of the methodologies of five 3D face surface reconstruction algorithms, a detailed comparative analysis of their characteristics, advantages, limitations, and areas of application has been presented. • The comparative study found two broad categories of 3D face surface reconstruction techniques: one requires prior knowledge about the class of solutions, while the other works independently, based only on the input 2D images.

  27. Findings & Conclusions (cont.) • The focus of research in 3D face surface reconstruction is shifting more towards uncontrolled imaging conditions. • The techniques that employ 3D morphable models for faces seem to handle uncontrolled imaging conditions most promisingly. • 3D face surface reconstruction under varying pose, facial expression, and illumination is still a great challenge for researchers.

  28. Thank You. Questions or Comments?
