
SIGGRAPH Course 30: Performance-Driven Facial Animation


Presentation Transcript


  1. SIGGRAPH Course 30: Performance-Driven Facial Animation Section: Marker-less Face Capture and Automatic Model Construction Part 1: Chris Bregler, NYU Part 2: Li Zhang, Columbia University

  2. Face Tracking Approaches • Marker-based hardware motion capture systems • Tom Tolles (House of Moves) presentation 9:00 (earlier) • Parag Havaldar (Sony Pictures Imageworks) presentation at 2:15 pm

  3. Marker-based Face Capture:

  4. Marker-less Face Capture:

  5. Early Computer Face Capture • Single Camera Input • 2D Output • Off-line • Interactive-Refinement • Make-up • Contour / Local Features • Hand Crafted • Linear Models / Tracking Kass, M., Witkin, A., & Terzopoulos, D. (1987) Snakes: Active contour models.

  6. Disney:

  7. Early “Markerless Face Capture” • Disney: Step-Mother ← Eleanor Audley

  8. Early Computer Face Capture • Single Camera Input • 2D Output • Off-line • Interactive-Refinement • Make-up • Contour / Local Features • Hand Crafted • Linear Models / Tracking Kass, M., Witkin, A., & Terzopoulos, D. (1987) Snakes: Active contour models.

  9. Markerless Face Capture - Overview - • Single / Multi Camera Input • 2D / 3D Output • Off-line / Real-time • Interactive-Refinement / Face Dependent / Independent • Make-up / Natural • Flow / Contour / Texture / Local / Global Features • Hand Crafted / Data Driven • Linear / Nonlinear Models / Tracking

  10. Common Framework Tracking = Error Minimization Error = Feature Error + Model Error

  11. Difference: Tracking = Error Minimization Error = Feature Error + Model Error

  12. Difference: Tracking = Error Minimization Error = Feature Error + Model Error

  13. Difference: Tracking = Error Minimization Error = Feature Error + Model Error

  14. Tracking = Error Minimization Kass, M., Witkin, A., & Terzopoulos, D. (1987) Snakes: Active contour models.

  15. Tracking = Error Minimization Error = Feature Error + Model Error

  16. Tracking = Error Minimization Most general feature: Error = Optical Flow + Model Error

  17. Tracking = Error Minimization Err(u,v) = Σ(x,y) || I(x,y) − J(x+u, y+v) ||²
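
As an aside (not part of the course slides), a minimal sketch of what minimizing this error means for one patch pair: a brute-force search over integer displacements (u, v). The image names and search range are illustrative assumptions.

    import numpy as np

    def best_displacement(I, J, search=5):
        # Brute-force minimization of Err(u, v) = sum_{x,y} || I(x,y) - J(x+u, y+v) ||^2
        # over integer displacements u, v in [-search, search].
        h, w = I.shape
        best_err, best_uv = np.inf, (0, 0)
        for v in range(-search, search + 1):
            for u in range(-search, search + 1):
                # overlapping region of I and of J shifted by (u, v)
                Ic = I[max(0, -v):h - max(0, v), max(0, -u):w - max(0, u)]
                Jc = J[max(0, v):h - max(0, -v), max(0, u):w - max(0, -u)]
                err = np.sum((Ic.astype(float) - Jc.astype(float)) ** 2)
                if err < best_err:
                    best_err, best_uv = err, (u, v)
        return best_uv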

  18. Basics in Optical Flow: Lucas-Kanade, 1D case. [Figure: two 1D intensity profiles F and G (intensity vs. x) offset by an unknown displacement u; the linearization relates u to the spatial gradient and the temporal gradient]
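
A minimal sketch (mine, not the course's) of the 1D linearization: G(x) ≈ F(x) + u · F′(x), so the least-squares displacement is the temporal difference over the spatial gradient, accumulated along the signal.

    import numpy as np

    def lucas_kanade_1d(F, G):
        # Linearization G(x) ~ F(x) + u * F'(x) gives the least-squares solution
        # u = sum(F' * (G - F)) / sum(F'^2): temporal over spatial gradient.
        Fx = np.gradient(F)   # spatial gradient
        Ft = G - F            # temporal gradient (frame difference)
        return np.sum(Fx * Ft) / np.sum(Fx * Fx)

    # Toy check: a sine shifted by 0.3 samples should give u close to 0.3.
    x = np.linspace(0, 2 * np.pi, 200)
    dx = x[1] - x[0]
    print(lucas_kanade_1d(np.sin(x), np.sin(x + 0.3 * dx)))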

  19. Lucas-Kanade: 2D Image. [Figure: frames F and G with a region of interest (ROI); the ROI displacement (u, v) is recovered from the spatial gradient and the temporal gradient]

  20. Lucas-Kanade: Error Minimization, 2D Image. Minimize E(u,v) over the ROI; the closed-form solution has the normal-equation form (u,v)ᵀ = (CᵀC)⁻¹ CᵀD, with C the stacked spatial gradients and D the stacked temporal differences.
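
A sketch of that closed-form solve over a region of interest, under one conventional sign choice and assuming grayscale float frames F, G of equal size (names and the ROI are illustrative):

    import numpy as np

    def lucas_kanade_roi(F, G, rows, cols):
        # One Lucas-Kanade step for the ROI F[rows, cols]: stack the per-pixel
        # constraints Ix*u + Iy*v + It ~ 0 into C (u, v)^T ~ -It and solve the
        # 2x2 normal equations in closed form.
        Iy, Ix = np.gradient(F.astype(float))    # spatial gradients
        It = G.astype(float) - F.astype(float)   # temporal gradient
        C = np.stack([Ix[rows, cols].ravel(), Iy[rows, cols].ravel()], axis=1)
        D = It[rows, cols].ravel()
        uv, *_ = np.linalg.lstsq(C, -D, rcond=None)
        return uv                                 # single (u, v) for the whole ROI

    # usage on a hypothetical pair of frames:
    # u, v = lucas_kanade_roi(frame0, frame1, slice(40, 80), slice(100, 140))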

  21. Marker-less Face Capture: In general: ambiguous using local features

  22. Optical Flow: E(V) = || [ ∇I(1)·v1 − It(1) ; ∇I(2)·v2 − It(2) ; … ; ∇I(n)·vn − It(n) ] ||²

  23. Optical Flow: E(V) = || [ ∇I(1)·v1 − It(1) ; … ; ∇I(n)·vn − It(n) ] ||², with V = [ v1 ; v2 ; … ; vn ] stacking all per-point flows

  24. Optical Flow + Model: E(V) = || [ ∇I(1)·v1 − It(1) ; … ; ∇I(n)·vn − It(n) ] ||² + || Model − V ||², a model error on the stacked flow V

  25. Optical Flow + Model: E(V) = || [ ∇I(1)·v1 − It(1) ; … ; ∇I(n)·vn − It(n) ] ||² + || M(q) − V ||², with the model V = M(q) parameterized by q

  26. Optical Flow + linearized Model: with the linearized model V = M q, the flow error || Z + H V ||² becomes || Z + H M q ||² = || Z + C q ||², a linear least-squares problem in q
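
A sketch of that algebra (the matrix names H, Z, M follow the slide, but their contents here are synthetic): once the model is linear, everything collapses to a small least-squares solve for the model parameters q.

    import numpy as np

    def solve_linearized_model(H, Z, M):
        # Flow error || Z + H V ||^2 with the linear model V = M q becomes
        # || Z + C q ||^2, C = H M: solve for the few parameters q, then
        # recover the dense flow V = M q.
        C = H @ M
        q, *_ = np.linalg.lstsq(C, -Z, rcond=None)
        return q, M @ q

    # synthetic check: n flow constraints, 2n flow unknowns, k model parameters
    rng = np.random.default_rng(0)
    n, k = 100, 6
    H = rng.standard_normal((n, 2 * n))
    M = rng.standard_normal((2 * n, k))
    q_true = rng.standard_normal(k)
    Z = -(H @ M) @ q_true                        # residuals consistent with q_true
    q_est, V = solve_linearized_model(H, Z, M)   # q_est recovers q_true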

  27. Optical Flow + 3D Model DeCarlo, Metaxas, 1999 Eisert et al 2003

  28. Optical Flow + MPEG4 Model --> MediaPlayer (Eisert et al)

  29. High-End Production: Optical Flow + 3D Model • Disney Gemeni-Project (Williams et al 2002) • EA Universal Capture (Borshukov et al 2002-2006)

  30. More “forgiving” Error Norm • Faces change appearance. [Figure: the L2 penalty grows quadratically with the residual D]

  31. More “forgiving” Error Norm • L2 Norm vs Robust Norm. [Figure: the L2 penalty keeps growing with the residual D, while the robust penalty flattens out for large D]

  32. Robust Error with EM layers: start from the same stacked flow error E(V) = || [ ∇I(1)·v1 − It(1) ; … ; ∇I(n)·vn − It(n) ] ||²

  33. Robust Error with EM layers: each constraint is softly weighted by its layer/inlier ownership, e.g. E(V) = 0.1 || ∇I(1)·v1 − It(1) ||² + 0.9 || ∇I(2)·v2 − It(2) ||² + … + 0.2 || ∇I(n)·vn − It(n) ||²
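
One way to realize such soft weights is iteratively reweighted least squares with a Gaussian-inlier / constant-outlier mixture, in the spirit of an EM outlier layer. This is a generic sketch, not the course's exact formulation; sigma and the outlier constant are illustrative.

    import numpy as np

    def reweighted_flow_solve(C, d, iters=10, sigma=1.0, outlier_c=0.01):
        # Solve || C x + d ||^2 robustly: each constraint i gets a soft inlier
        # weight w_i = p(inlier | residual_i), so badly violated constraints
        # (large residuals) end up with weights near 0 instead of dominating.
        w = np.ones(len(d))
        for _ in range(iters):
            sw = np.sqrt(w)
            x, *_ = np.linalg.lstsq(C * sw[:, None], -d * sw, rcond=None)  # M-step
            r = C @ x + d                                                  # residuals
            inlier = np.exp(-0.5 * (r / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
            w = inlier / (inlier + outlier_c)                              # E-step
        return x, w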

  34. Lucas-Kanade + changing Appearance. [Figure: instead of matching a fixed template F against the image G, the appearance is represented by a learned PCA basis]

  35. Optical Flow and PCA Eigen Tracking (Black and Jepson)
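
A small sketch of the appearance half of this idea (assuming the warp is already fixed): learn a PCA basis of face patches offline, then measure how well a new patch is explained by the subspace. Function and variable names here are made up for illustration.

    import numpy as np

    def learn_pca_basis(patches, k):
        # patches: (num_examples, d) matrix of vectorized training patches.
        mean = patches.mean(axis=0)
        U, S, Vt = np.linalg.svd(patches - mean, full_matrices=False)
        return mean, Vt[:k].T               # d-vector mean and d x k basis

    def appearance_error(patch, mean, basis):
        # Project the patch onto the learned subspace and return the squared
        # reconstruction residual: the appearance term of an eigen-tracking
        # style error, with the optimal coefficients in closed form.
        x = patch.ravel().astype(float) - mean
        coeffs = basis.T @ x
        residual = x - basis @ coeffs
        return np.sum(residual ** 2), coeffs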

  36. 2D texture and contours + PCA: Active Appearance Models (AAM), Cootes et al

  37. 2D texture and mesh + PCA

  38. Lucas-Kanade + Appearance Models: Lucas-Kanade AAMs (Baker & Matthews)

  39. Affine Flow + PCA + Robust Norm Disney: Gemeni-Project

  40. Solution based on Factorization • We want 3 things: • 3D non-rigid shape model • for each frame: • 3D Pose • non-rigid configuration (deformation) • → Tomasi-Kanade ’92: Rank 3, W = P S

  41. Solution based on Factorization • We want 3 things: • 3D non-rigid shape model • for each frame: • 3D Pose • non-rigid configuration (deformation) • → PCA-based representations: Rank K, W = P_non-rigid S

  42. Space-Time Factorization: the complete 2D track or flow matrix has rank ≤ 3K. Nonrigid flow or marker set → “Rigid Stabilization + Blendshapes”
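
A sketch of the factorization step itself; the corrective transform that upgrades the factors to actual poses and deformation modes, which the papers cited on the next slide derive, is omitted here.

    import numpy as np

    def truncated_factorization(W, rank):
        # W: 2F x N matrix of 2D tracks (F frames, N points, rows paired x/y).
        # Truncated SVD splits it into a motion factor P (2F x rank) and a
        # shape factor S (rank x N); rank = 3 for rigid Tomasi-Kanade, up to
        # 3K for K blendshape-like deformation modes.
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        root = np.sqrt(s[:rank])
        P = U[:, :rank] * root
        S = root[:, None] * Vt[:rank]
        return P, S                          # W is approximately P @ S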

  43. Space-Time Factorization: Irani 1999; Bregler, Hertzmann, Biermann 2000; Torresani, Yang, Alexander, Bregler 2001; Brand 2001; Xiao, Kanade 2004; Torresani, Hertzmann 2004

  44. From Pixels to 3D Blend Shapes (Torresani et al 01,02)

  45. Space-Time Tracking (Torresani, Bregler 2002): Trajectory Constraints. [Figure: the full 2D trajectory wi of point i over frames t = 1 … F is written as wi = Q′ mi, where mi holds the 3D positions of point i for the K modes of deformation]

  46. From Pixels to 3D Blend Shapes (Torresani et al 01,02) • Non-Rigid Models (Lorenzo Torresani, Aaron Hertzmann, et al) • Rank Based Tracking • 3D Basis Shapes • Probabilistic Tracking / Models • Occlusion • Dynamical Systems

  47. From Pixels to 3D Blend Shapes (Torresani et al 01,02) • Non-Rigid Models (Lorenzo Torresani, Aaron Hertzmann, et al) • Rank Based Tracking • 3D Basis Shapes • Probabilistic Tracking / Models • Occlusion • Dynamical Systems • p( I(pj,t) | “point pj,t is visible” ) = N( I(pj,t) ; μj, σ² ) • p( I(pj,t) | “pixel pj,t is an outlier” ) = c • zt = A zt−1 + nt
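
A sketch of how those two likelihoods could be combined into a per-point visibility weight, plus the mean prediction of the dynamical system; the prior, sigma, and outlier constant below are illustrative assumptions, not values from the papers.

    import numpy as np

    def visibility_posterior(intensity, mu_j, sigma=10.0, c=1e-3, p_vis=0.9):
        # p(visible | I): Gaussian appearance likelihood N(I; mu_j, sigma^2)
        # for visible points vs. a constant likelihood c for outliers/occlusions.
        gauss = np.exp(-0.5 * ((intensity - mu_j) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        return p_vis * gauss / (p_vis * gauss + (1 - p_vis) * c)

    def predict_state(A, z_prev):
        # Mean prediction of the linear dynamical system z_t = A z_{t-1} + n_t
        # (the process noise n_t enters only through its covariance in a filter).
        return A @ z_prev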

  48. From Pixels to 3D Blend Shapes (Torresani et al 01,02)

  49. From Pixels to 3D Blend Shapes (Torresani et al 01,02)

  50. Disney Gemeni Project
