  1. SIGGRAPH 2011 PAPER READING EXAMPLE-BASED SIMULATION, GEOMETRY ACQUISITION, FACIAL ANIMATION Tong Jing 2011-7-6

  2. PAPERS:
  • Example-Based Simulation:
    • Data-Driven Elastic Models for Cloth: Modeling and Measurement
  • Geometry Acquisition:
    • Global Registration of Dynamic Range Scans for Articulated Model Reconstruction
  • Facial Animation:
    • Realtime Performance-Based Facial Animation
    • Leveraging Motion Capture and 3D Scanning for High-Fidelity Facial Performance Acquisition
    • Interactive Region-based Linear 3D Face Models
    • High-Quality Passive Facial Performance Capture Using Anchor Frames
  • Computer-Suggested Facial Makeup. EG 2011
  • Learning Skeletons for Shape and Pose. I3D 2010

  3. DATA-DRIVEN SIMULATION
  • Simulation → Data Capture: improve reconstruction data accuracy
  • Example Data → Simulation: improve simulation speed
  • Estimate parameters from Data → Simulation: improve simulation accuracy
  • Physically Guided Liquid Surface Modeling from Videos. Siggraph 2009
  • Example-Based Wrinkle Synthesis for Clothing Animation. Siggraph 2010
  • Data-Driven Elastic Models for Cloth: Modeling and Measurement. Siggraph 2011

  4. SIMPLE CLOTH SIMULATION
  • Hooke's Law:
    • Stress = k * Strain
  • Problems:
    • Most current cloth simulation techniques simply use linear and isotropic elastic models with manually selected stiffness parameters.

  5. REALISTIC CLOTH SIMULATION
  • Anisotropic:
    • Different angles, different stiffness
  • Nonlinear:
    • Different stretching degrees, different stiffness

  6. PIECEWISE ANISOTROPIC LINEAR ELASTIC MODEL Different stiffness for different angles and stretching degree

  7. PIECEWISE ANISOTROPIC LINEAR ELASTIC MODEL
  stress = stiffness tensor matrix × strain (σ = C ε)
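A minimal sketch of how such a piecewise model could be evaluated, assuming stiffness matrices are binned by stretching degree; the bin layout, Voigt-style strain vector, and stiffness values below are illustrative, not the paper's measured parameters:

```python
import numpy as np

# Hypothetical piecewise anisotropic linear model: the stiffness matrix
# depends on which stretch bin the strain magnitude falls in. All values
# are made up for illustration.
def stress(strain, stiffness_bins, stretch_thresholds):
    """strain: 3-vector (e_xx, e_yy, e_xy) in Voigt-style notation.
    stiffness_bins: one 3x3 stiffness matrix per stretch bin.
    stretch_thresholds: increasing magnitudes delimiting the bins."""
    magnitude = np.linalg.norm(strain)
    k = np.searchsorted(stretch_thresholds, magnitude)  # pick the bin
    k = min(k, len(stiffness_bins) - 1)
    C = stiffness_bins[k]        # anisotropic: a full matrix, not a scalar
    return C @ strain            # sigma = C * epsilon, linear within a bin

# two bins: soft at small stretch, stiffer at large stretch; the diagonal
# differs per direction, so warp and weft respond differently
C_soft  = np.diag([10.0, 5.0, 2.0])
C_stiff = np.diag([100.0, 40.0, 10.0])
sigma = stress(np.array([0.01, 0.0, 0.0]), [C_soft, C_stiff], [0.05])
```

Within each bin the model is ordinary linear elasticity; the piecewise lookup is what lets stiffness vary with both direction and stretching degree.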

  8. PIECEWISE ANISOTROPIC LINEAR ELASTIC MODEL Different stiffness for different angles and stretching degree

  9. MEASURE THE STRETCHING PARAMETERS

  10. MEASURE THE STRETCHING PARAMETERS

  11. MEASURE THE BENDING PARAMETERS

  12. DATABASE

  13. RESULTS

  14. LIMITATIONS
  • The piecewise linear elastic model is not good for simulating cloth with large deformations
  • Does not consider the cloth memory property
  • Only looked at measuring static parameters. Dynamic parameters, such as internal damping, should also be measured.

  15. CONCLUSION
  • Data-driven models are widely accepted in computer graphics, e.g., motion capture for animation and measured BRDFs for reflectance.
  • Explored the new domain of data-driven elastic models for cloth.

  16. Previous Work
  • Require small changes in pose / temporal coherence
  • ICP-based approach of (Pekelny & Gotsman, EG 2008)
    • User labels rigid parts in the first frame
    • Each rigid part is registered in subsequent frames
  (Figure: registered 3D scans; reconstructed surface and skeleton. Images from Pekelny and Gotsman 2008)

  17. Previous Work
  • Require small changes in pose / temporal coherence
  • Model a space-time surface (Mitra et al., SGP 2007)
    • Requires dense spatial and temporal sampling
  (Figure: example of a 2D time-varying surface. Image from Mitra et al. 2007)

  18. Previous Work
  • Require user-placed feature markers
  • Example-based 3D scan completion (Pauly et al., SGP 2005)
    • Fill holes by warping similar shapes in a database
  (Images from Pauly et al. 2005)

  19. Previous Work
  • Require knowledge of the entire shape (template)
  • Correlated Correspondence (Anguelov et al., NIPS 2004)
    • Goal is to match each point to the corresponding point in the template
    • Cost function: matches features and preserves geodesic distance
  (Figure: template model, partial example, registered result, ground truth. Images from Anguelov et al. 2004)

  20. Will Chang's Two Related Works
  • Automatic Registration for Articulated Shapes. SGP 2008
    • Pairwise
    • Assumes no knowledge of a template, user-placed markers, segmentation, or the skeletal structure of the shape
    • Motion sampling: idea from Partial Symmetry Detection (Mitra et al. '06)

  21. Motion Sampling Illustration
  Find transformations that move parts of the source to parts of the target
  (Figure: source shape and target shape)

  22. Motion Sampling Illustration
  Find transformations that move parts of the source to parts of the target
  (Figure: sampled points on the source and target shapes)

  23. Motion Sampling Illustration
  Find transformations that move parts of the source to parts of the target
  (Figure: source shape and target shape)

  24. Motion Sampling Illustration
  Find transformations that move parts of the source to parts of the target
  (Figure: sampled motions, pure translations and combined rotations and translations, plotted in a transformation space with rotation and translation axes)

  25. Motion Sampling Illustration
  Find transformations that move parts of the source to parts of the target
  (Figure: source samples s1, s2 and target samples t1, t2 yield candidate transformations s1t1, s1t2, s2t1, s2t2 in transformation space)
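The sampled transformations above can be sketched as follows: given a source patch and a candidate target patch, fit the rigid motion between them and treat the result as one point in transformation space, where samples from the same rigid part cluster together. The Kabsch-style least-squares fit below is a standard technique, not necessarily the paper's exact estimator:

```python
import numpy as np

# Estimate the rigid transform mapping a sampled source patch to a target
# patch; each estimate is one point in transformation space.
def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of the patches
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

# a patch translated by (1, 0, 0): the sampled motion is a pure translation,
# so it lands at (R = I, t = (1, 0, 0)) in transformation space
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R, t = rigid_transform(src, src + np.array([1., 0., 0.]))
```

Samples from the same rigid part produce (nearly) identical (R, t) points, which is how the motion samples reveal the articulated parts.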

  26. Will Chang's Two Related Works
  • Range Scan Registration Using Reduced Deformable Models. EG 2009
    • Pairwise
    • Does not require user-specified markers, a template, or manual segmentation of the surface geometry
    • Represents the linear skinning weight functions on a regular grid localized to the surface geometry

  27. GOAL
  • Automatically and globally reconstruct articulated models (mesh, joints, skin weights)
  • Simultaneously aligns partial surface data and recovers the motion model
  • Can deal with large motion and occlusion
  • Without the use of markers, user-placed correspondences, segmentation, or a template model

  28. All frames are aligned to the first frame
  • Maintain and update a global reduced graph structure: the dynamic sample graph (DSG)

  29. The skinning weights of neighboring points should be the same.
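One way to encode this constraint is a smoothness energy that penalizes skinning-weight differences across graph edges; the sketch below uses an illustrative edge list and weight matrix, not the paper's actual formulation:

```python
import numpy as np

# Smoothness term: penalize skinning-weight differences between
# neighboring sample points on the graph.
def smoothness_energy(weights, edges):
    """weights: (n_points, n_bones) skinning weights per point.
    edges: list of (i, j) neighbor pairs on the sample graph."""
    return sum(np.sum((weights[i] - weights[j]) ** 2) for i, j in edges)

# three points, two bones: points 0 and 1 have similar weights, point 2
# belongs to the other bone, so the (1, 2) edge dominates the energy
W = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0]])
E = [(0, 1), (1, 2)]
energy = smoothness_energy(W, E)
```

Minimizing such a term alongside the alignment error drives neighboring points toward identical weights except across joint boundaries.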

  30. RESULTS

  31. PROBLEMS
  • Too many sub-procedures, too many parameters (automatic?)
  • Too slow (100+ seconds per frame)

  32. METHODS FOR ACQUIRING 3D FACIAL PERFORMANCES
  (Chart: animation temporal resolution vs. geometry spatial resolution)
  • Marker-based motion capture systems: 2000 Hz, 200 markers (high temporal, low spatial resolution)
  • Image-based systems
  • High-speed structured light systems: 30 Hz, smooth mesh
  • 3D laser scanning: static, millions of vertices (high spatial resolution)
  • This work: 2000 Hz, millions of vertices(?)

  33. GOAL
  • Acquiring high-fidelity 3D facial performances with realistic dynamic wrinkles (temporal) and fine-scale facial details (spatial)
  IDEAS
  • Leverage motion capture and 3D scanning technology
  • Use a blend shape model

  34. OVERVIEW
  1. Motion capture data acquisition (T frames, 240 fps, 100 markers)
  2. Select a minimum set of key frames (K frames, K << T, 100 markers)
  3. Capture corresponding face scans (K frames, 80K mesh)
  4. Marker mesh registration (generating motion capture markers for every face scan)
  5. Face scans registration (generating dense, consistent surface correspondences across all the scans)
  6. Facial performance reconstruction by blend shapes (T frames, 240 fps, 80K mesh)

  35. (Pipeline stage: motion capture data acquisition → select a minimum set of key frames)
  • Select a minimum set of facial expressions by minimizing the blend shape reconstruction error: each motion capture frame t is approximated as a combination of the key frames weighted by its blend shape coefficients, over all the motion capture data.
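A greedy sketch of this selection, under the assumption that frames are added one at a time by picking whichever frame the current keys reconstruct worst; the paper's exact optimization may differ:

```python
import numpy as np

# Greedy key-frame selection: repeatedly add the motion-capture frame whose
# marker configuration is worst reconstructed by a least-squares blend of
# the keys chosen so far.
def select_keys(frames, k):
    """frames: (T, d) flattened marker positions per frame."""
    keys = [0]                                   # seed with the first frame
    for _ in range(k - 1):
        B = frames[keys].T                       # blend basis, shape (d, |keys|)
        coeffs, *_ = np.linalg.lstsq(B, frames.T, rcond=None)
        residual = np.linalg.norm(frames.T - B @ coeffs, axis=0)
        keys.append(int(np.argmax(residual)))    # worst-reconstructed frame
    return sorted(keys)

rng = np.random.default_rng(0)
frames = rng.normal(size=(50, 6))                # toy mocap data: 50 frames
keys = select_keys(frames, 4)                    # 4 key frames out of 50
```

Only the selected key expressions then need full 3D face scans, which is what makes the capture effort minimal.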

  36. (Pipeline stage: capture corresponding face scans → marker mesh registration)
  • Differences between the "reference" expressions and the "performed" expressions
  • Solved by extended ICP

  37. (Pipeline stage: marker mesh registration → face scans registration)

  38. (Pipeline stage: marker mesh registration → face scans registration)
  Large-scale mesh registration: combines marker distance constraints, preservation of Laplacian coordinates, and closest-point terms
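The three terms can be sketched as one combined energy; the weights and the toy three-vertex mesh below are illustrative assumptions, not the paper's values:

```python
import numpy as np

# Large-scale registration objective: deformed vertices X should match
# marker constraints, preserve the template's Laplacian coordinates
# (surface detail), and stay close to the scan (closest points).
def registration_energy(X, L, delta, markers, closest, wm=10.0, wl=1.0, wc=0.1):
    """X: (n, 3) deformed vertices; L: (n, n) mesh Laplacian;
    delta: (n, 3) Laplacian coordinates of the template, L @ X0;
    markers: dict vertex index -> target marker position;
    closest: (n, 3) closest points on the target scan."""
    e_marker = sum(np.sum((X[i] - p) ** 2) for i, p in markers.items())
    e_lap = np.sum((L @ X - delta) ** 2)     # detail preservation
    e_close = np.sum((X - closest) ** 2)     # data attraction
    return wm * e_marker + wl * e_lap + wc * e_close

# toy mesh: 3 collinear vertices; translate it and pin vertex 0 by a marker.
# The Laplacian term is translation-invariant, so only the marker term fires.
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
X0 = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])
X = X0 + np.array([0.5, 0., 0.])
e = registration_energy(X, L, L @ X0, {0: X0[0]}, X)
```

Minimizing this over X (e.g. with a sparse linear solve, since all terms are quadratic) yields the large-scale deformation before fine-scale refinement.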

  39. (Pipeline stage: marker mesh registration → face scans registration)
  Large-scale mesh registration, then fine-scale mesh registration

  40. (Pipeline stage: marker mesh registration → face scans registration)
  • Region-based mesh registration: register geometric details between one scan and its three closest scans
  • Minimize the difference of geometry features between source and target meshes

  41. (Pipeline stage: marker mesh registration → face scans registration)
  • Choose the mean curvature at each vertex as the geometry feature
  • Use optical flow after cylindrical image mapping
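The cylindrical mapping step can be sketched as projecting each vertex onto a cylinder around the head's vertical axis, producing 2D (angle, height) coordinates in which optical flow can be run; the axis placement here is an assumption:

```python
import numpy as np

# Cylindrical image mapping: unwrap 3D face vertices into 2D (angle, height)
# coordinates around the vertical (y) axis, so curvature features can be
# compared with ordinary 2D optical flow.
def cylindrical_map(verts):
    """verts: (n, 3) points roughly centered on the cylinder axis."""
    x, y, z = verts[:, 0], verts[:, 1], verts[:, 2]
    theta = np.arctan2(x, z)            # azimuth around the y axis
    return np.stack([theta, y], axis=1)

# a point straight ahead maps to angle 0; a point to the side maps to pi/2
uv = cylindrical_map(np.array([[0., 0., 1.], [1., 0.5, 0.]]))
```

Rasterizing the per-vertex mean curvature into this (theta, y) domain gives the 2D feature images between which flow is computed.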

  42. (Pipeline stage: face scans registration → facial performance reconstruction by blend shapes)
  • Use the blend shape model
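The final reconstruction can be sketched as a plain linear blend: each output frame is a weighted sum of the registered key-frame meshes, driven by the per-frame coefficients solved from the motion capture markers. The tiny one-vertex "meshes" below are illustrative:

```python
import numpy as np

# Blend shape reconstruction: frame = sum_k w_k * B_k, where B_k are the
# registered key-frame meshes and w_k the per-frame blend coefficients.
def reconstruct(key_meshes, coeffs):
    """key_meshes: (K, n, 3) registered key scans; coeffs: (K,) weights."""
    return np.tensordot(coeffs, key_meshes, axes=1)

keys = np.array([[[0., 0., 0.]],        # two single-vertex "meshes"
                 [[1., 0., 0.]]])
frame = reconstruct(keys, np.array([0.25, 0.75]))
```

Because the key meshes carry the fine scan detail and the coefficients carry the 240 fps mocap timing, the blend inherits both spatial and temporal resolution.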

  43. RESULTS

  44. CONTRIBUTIONS
  • Automatically determines a minimal set of face scans required for facial performance reconstruction
  • A two-step registration process that builds dense, consistent surface correspondences across all the face scans

  45. CONCLUSION AND FUTURE WORK
  • Combines the power of motion capture and 3D scanning technology
  • Matches both the spatial resolution of static face scans and the acquisition speed of motion capture systems
  Future work:
  • Modifying the data for new applications
  • Methods for dealing with the high-fidelity facial data
  • Eye and lip movements