
Towards More Photorealistic Faces


Presentation Transcript


  1. Towards More Photorealistic Faces Kenneth L. Hurley

  2. Agenda • Introduction to coding systems • Creating Content • Transforming a canonical face from photographs • Animation techniques • Achieving more realistic faces • Conclusion

  3. FACS • Facial Action Coding System • Paul Ekman and Wallace Friesen • The foundation for most facial animation • Classifies facial expressions by “Action Units” (AUs) • 6 universal categories: sadness, anger, joy, fear, disgust, and surprise

  4. FACS

  5. Creating Content • Traditional art techniques • 3D modeler package • Triangle-based • NURBS-based • FFD-based • Patch-based • Parameterized • Morph targets
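As a sketch of the morph-target approach listed above: the final mesh is the neutral (base) mesh plus a weighted sum of per-vertex deltas toward each target. The function name and data layout here are illustrative, not taken from any particular package.

```python
def blend_morph_targets(base, targets, weights):
    """Blend-shape sketch: base and each target are parallel lists of
    (x, y, z) vertices; weights holds one blend weight per target.
    result[i] = base[i] + sum_j weights[j] * (targets[j][i] - base[i])."""
    result = []
    for i, (bx, by, bz) in enumerate(base):
        x, y, z = bx, by, bz
        for mesh, w in zip(targets, weights):
            tx, ty, tz = mesh[i]
            x += w * (tx - bx)
            y += w * (ty - by)
            z += w * (tz - bz)
        result.append((x, y, z))
    return result
```

With weight 1.0 a vertex reaches the target exactly; fractional weights give in-between expressions, which is how action-unit-style poses are commonly mixed.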

  6. Examples: Triangles, Patches, Parameterized

  7. 3D Capture Devices • Various devices available on the market • Capture data with a variety of techniques • Laser range finders • Light projections • Shadow projections • Sonic range finders • Capture houses can do it for you • Cyberware

  8. 2D Capture Techniques • Photogrammetry • A front photo of the individual is all that’s required • Front and side photos give better 3D coordinates • Automated feature recognition • New techniques are being developed every day • Usually computationally expensive and still needs user intervention • See http://www.cs.rug.nl/~peterkr/FACE/face.html
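A minimal sketch of why front-plus-side photos pin down full 3D coordinates: the front view supplies (x, y) and the side view supplies (z, y) for the same feature. The simple orthographic assumption and function name are illustrative only.

```python
def merge_front_side(front_xy, side_zy):
    """Assume orthographic views: the front photo gives (x, y) and the
    side photo gives (z, y) for the same facial feature. The two y
    estimates rarely agree exactly in practice, so average them."""
    x, y_front = front_xy
    z, y_side = side_zy
    return (x, (y_front + y_side) / 2.0, z)
```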

  9. Direct Parameterized Models • Frederic I. Parke’s 1974 PhD thesis, “A Parametric Model for Human Faces” • Used to create parameterized heads using interpolation, scaling, and translation • Steve DiPaola extended this version; the program is available at http://www.stanford.edu/dept/art/SUDAC/facade/

  10. Animation of Face • Muscle simulation • The Parke and Waters system is the most widely used • Examples from the “Computer Facial Animation” book • Uses 18 muscles in the example • Open source, with OpenGL • http://www.crl.research.digital.com/publications/books/waters/Appendix1/opengl/OpenGLW95NT.html
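As a rough sketch of the muscle idea (heavily simplified from Waters' linear muscle model, which also weights by angular falloff around the muscle vector): skin vertices near a muscle's skin attachment are pulled toward its fixed bone end, scaled by a contraction value and a distance falloff. The 2D setup and all names are illustrative.

```python
import math

def linear_muscle_displace(vertex, head, tail, contraction, radius):
    """head = fixed (bone) end, tail = skin attachment, both 2D points.
    Vertices within `radius` of the tail move toward the head with a
    cosine falloff on distance; vertices outside are untouched."""
    dist = math.hypot(vertex[0] - tail[0], vertex[1] - tail[1])
    if dist > radius:
        return vertex
    falloff = math.cos(dist / radius * math.pi / 2.0)
    return (vertex[0] + contraction * falloff * (head[0] - vertex[0]),
            vertex[1] + contraction * falloff * (head[1] - vertex[1]))
```

Summing the displacements of several such muscles over the face mesh is the essence of the 18-muscle example above.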

  11. Animation of Face (Cont)

  12. Animation of Face • Open Source Expression Toolkit • Based on Parke’s model • Adds a nice user interface • Exports data from 3DS Max • Scripting language • http://expression.sourceforge.net/

  13. Facial Muscles

  14. Lip Syncing • Phonemes – what are they and why are they important? • Helpful in determining mouth posture • 45 English phonemes; more or fewer for other languages • Mouth postures • Nitchie identified 18 significant mouth postures to match, called visemes

  15. Lip Syncing (Cont) • Visemes can match mouth postures with speech phonemes • Tools • Microsoft Speech SDK • Can decipher recorded speech into phonemes • Code to give mouth positions (visemes) from these phonemes. • Free • http://www.microsoft.com/speech

  16. Lip Syncing (Cont) • University of Edinburgh • http://www.cstr.ed.ac.uk/projects/festival/ • Open source, free speech system
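A sketch of the phoneme-to-viseme step described above, using the MPEG-4 viseme groupings from slide 20. The dictionary keys follow that slide's phoneme notation; unknown phonemes fall back to the neutral viseme 0. The function name is illustrative, not any SDK's API.

```python
# MPEG-4 viseme groups (viseme 0 is the neutral "none" posture).
PHONEME_TO_VISEME = {
    "p": 1, "b": 1, "m": 1,
    "f": 2, "v": 2,
    "T": 3, "D": 3,
    "t": 4, "d": 4,
    "k": 5, "g": 5,
    "tS": 6, "dZ": 6, "S": 6,
    "s": 7, "z": 7,
    "n": 8, "l": 8,
    "r": 9,
    "A:": 10, "e": 11, "I": 12, "Q": 13, "U": 14,
}

def to_visemes(phonemes):
    """Map a phoneme sequence to the viseme sequence that drives the mouth."""
    return [PHONEME_TO_VISEME.get(p, 0) for p in phonemes]
```

A speech SDK supplies the timed phoneme stream; this lookup then yields which mouth posture to display on each frame.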

  17. Coarticulation • Blending of lip-sync postures • Pelachaud described an algorithm in 1991 • Cohen and Massaro developed a dominance model • http://mambo.ucsc.edu/psl/ca93.html
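A hedged sketch of the dominance idea in the Cohen and Massaro model: each speech segment exerts a dominance that decays exponentially with time distance from the segment's center, and the articulator value at any instant is the dominance-weighted average of all segment targets. The coefficients and names below are illustrative, not the published parameter values.

```python
import math

def dominance(t, center, alpha=1.0, theta=0.4):
    # Negative-exponential dominance of a segment centered at `center`.
    return alpha * math.exp(-theta * abs(t - center))

def blended_target(t, segments):
    """segments: list of (center_time, target_value). Returns the
    dominance-weighted average of the targets at time t."""
    num = sum(dominance(t, c) * target for c, target in segments)
    den = sum(dominance(t, c) for c, _ in segments)
    return num / den
```

Because neighboring segments keep nonzero dominance, each mouth posture is pulled toward its neighbors, which is exactly the coarticulation effect being modeled.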

  18. MPEG4 Facial Animation • Designed to encode data at low bandwidth • Parameters are sent over transmission lines and reconstructed at the client • Facial Animation Parameters (FAPs) • Facial Definition Parameters (FDPs) • Uses 68 FAPs broken up into 10 groups • Uses only 14 visemes • Very similar to the “Action Units” of FACS

  19. FAP Groups (number of FAPs in parentheses)
  1: visemes and expressions (2)
  2: jaw, chin, inner lower lip, corner lips, mid lip (16)
  3: eyeballs, pupils, eyelids (12)
  4: eyebrow (8)
  5: cheeks (4)
  6: tongue (5)
  7: head rotation (3)
  8: outer lip positions (10)
  9: nose (4)
  10: ears (4)
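A quick sanity check on the groups above: the ten group sizes do sum to the 68 FAPs mentioned on the previous slide (group names abbreviated here for readability).

```python
FAP_GROUP_SIZES = {
    "visemes and expressions": 2,
    "jaw, chin, lips (inner/corner/mid)": 16,
    "eyeballs, pupils, eyelids": 12,
    "eyebrow": 8,
    "cheeks": 4,
    "tongue": 5,
    "head rotation": 3,
    "outer lip positions": 10,
    "nose": 4,
    "ears": 4,
}

# Ten groups totaling 68 Facial Animation Parameters.
assert len(FAP_GROUP_SIZES) == 10
assert sum(FAP_GROUP_SIZES.values()) == 68
```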

  20. Visemes and Related Phonemes (viseme #: phonemes, example words)
  0: none (na)
  1: p, b, m (put, bed, mill)
  2: f, v (far, voice)
  3: T, D (think, that)
  4: t, d (tip, doll)
  5: k, g (call, gas)
  6: tS, dZ, S (chair, join, she)
  7: s, z (sir, zeal)
  8: n, l (lot, not)
  9: r (red)
  10: A: (car)
  11: e (bed)
  12: I (tip)
  13: Q (top)
  14: U (book)

  21. MPEG4 FDP Parameters

  22. Bump Mapping • Requires either CPU work or GPU setup • The CPU has to compute tangent-space basis vectors for the light • The GPU can do it on a vertex-by-vertex basis as the vertices come through the pipeline
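Whichever processor does the work, the core transform is just three dot products: project the world-space light vector onto the per-vertex basis S, T, and SxT. A plain-Python sketch (vectors as 3-tuples, names illustrative):

```python
def to_tangent_space(light, s, t, sxt):
    """Project a world-space light vector onto the tangent-space basis
    (S, T, SxT), the same three dp3 operations a vertex shader performs."""
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    return (dot(s, light), dot(t, light), dot(sxt, light))
```

With the light expressed in tangent space, the per-pixel dot with the bump-map normal becomes a simple texture-space operation.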

  23. Free Form Deformations (FFD) • Can be used to scale facial features • Problems with where to place the lattice • A good solution is DFFD • Dirichlet Free Form Deformations • The basic idea is that the control points lie on the surface • http://cui.unige.ch/~moccozet/PAPERS/CA97/ • Rational Free Form Deformations • Another idea is to use non-axis-aligned lattices • Still working on an implementation
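For reference, the core FFD evaluation (in the style of Sederberg and Parry) is a Bernstein-polynomial blend of lattice control points; the degree-1 case below reduces to a trilinear blend of the 8 corners of a lattice cell. A minimal illustrative sketch:

```python
def trilinear_ffd(p, lattice):
    """Degree-1 FFD: p = (u, v, w) local coordinates in [0,1]^3;
    lattice[i][j][k] is the (x, y, z) control point for i, j, k in {0, 1}.
    The deformed point is the trilinear blend of the 8 control points."""
    u, v, w = p
    out = [0.0, 0.0, 0.0]
    for i in (0, 1):
        bu = u if i else 1.0 - u
        for j in (0, 1):
            bv = v if j else 1.0 - v
            for k in (0, 1):
                bw = w if k else 1.0 - w
                cp = lattice[i][j][k]
                for axis in range(3):
                    out[axis] += bu * bv * bw * cp[axis]
    return tuple(out)
```

Moving a control point away from its rest position drags nearby mesh vertices with it, which is how the lattice scales or reshapes a facial feature; DFFD replaces the box lattice with control points on the surface itself.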

  24. Free Form Deformations (FFD)

  25. Bump Mapping (Cont). • Sample Vertex Shader
  ; This shader does the per-vertex dot3 work.
  ; It transforms the light vector by the basis vectors
  ; passed into the shader.
  ; The basis vectors only need to change if the model's
  ; shape changes.
  ; The output vector is stored in the diffuse channel
  ; (use the menu to look at the generated light vector)
  #include "dot3.h"
  #define V_POSITION v0
  #define V_NORMAL v1
  #define V_DIFFUSE v2
  #define V_TEXTURE v3

  26. Bump Mapping (Cont). • Sample Vertex Shader
  #define V_SxT v4
  #define V_S v5
  #define V_T v6
  #define S_WORLD r0
  #define T_WORLD r1
  #define SxT_WORLD r2
  #define LIGHT_LOCAL r3
  vs.1.0
  ; Transform position to clip space and output it
  dp4 oPos.x, V_POSITION, c[CV_WORLDVIEWPROJ_0]
  dp4 oPos.y, V_POSITION, c[CV_WORLDVIEWPROJ_1]

  27. Bump Mapping (Cont). • Sample Vertex Shader
  dp4 oPos.z, V_POSITION, c[CV_WORLDVIEWPROJ_2]
  dp4 oPos.w, V_POSITION, c[CV_WORLDVIEWPROJ_3]
  ; Transform basis vectors to world space
  dp3 S_WORLD.x, V_S, c[CV_WORLD_0]
  dp3 S_WORLD.y, V_S, c[CV_WORLD_1]
  dp3 S_WORLD.z, V_S, c[CV_WORLD_2]
  dp3 T_WORLD.x, V_T, c[CV_WORLD_0]
  dp3 T_WORLD.y, V_T, c[CV_WORLD_1]
  dp3 T_WORLD.z, V_T, c[CV_WORLD_2]
  dp3 SxT_WORLD.x, V_NORMAL, c[CV_WORLD_0]
  dp3 SxT_WORLD.y, V_NORMAL, c[CV_WORLD_1]
  dp3 SxT_WORLD.z, V_NORMAL, c[CV_WORLD_2]

  28. Bump Mapping (Cont). • Sample Vertex Shader
  mul S_WORLD.xyz, S_WORLD.xyz, c[CV_BUMP_SCALE].w
  mul T_WORLD.xyz, T_WORLD.xyz, c[CV_BUMP_SCALE].w
  ; transform light by basis vectors to put it
  ; into texture space
  dp3 LIGHT_LOCAL.x, S_WORLD.xyz, c[CV_LIGHT_DIRECTION]
  dp3 LIGHT_LOCAL.y, T_WORLD.xyz, c[CV_LIGHT_DIRECTION]
  dp3 LIGHT_LOCAL.z, SxT_WORLD.xyz, c[CV_LIGHT_DIRECTION]
  ; Normalize the light vector
  dp3 LIGHT_LOCAL.w, LIGHT_LOCAL, LIGHT_LOCAL

  29. Bump Mapping (Cont). • Sample Vertex Shader
  rsq LIGHT_LOCAL.w, LIGHT_LOCAL.w
  mul LIGHT_LOCAL, LIGHT_LOCAL, LIGHT_LOCAL.w
  ; Scale to 0-1
  add LIGHT_LOCAL, LIGHT_LOCAL, c[CV_ONE]
  mul oD0, LIGHT_LOCAL, c[CV_HALF]
  ; Set alpha to 1
  mov oD0.w, c[CV_ONE].w
  ; output tex coords
  mov oT0, V_TEXTURE
  mov oT1, V_TEXTURE

  30. Bump Mapping (Cont) • Sample Pixel Shader
  tex t0 ; grab the diffuse texture map
  tex t1 ; load bump map
  dp3 r0, v0, t1 ; dot normal with light vector
  mul r0, r0, t0 ; calculate diffuse value for bumped normal
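In plain Python, those two dp3/mul pixel-shader instructions amount to the following. The explicit [0,1] to [-1,1] unpacking is written out here; in the shader it is typically handled by a _bx2 argument modifier, which the listing above omits. Names are illustrative:

```python
def dot3_bump(diffuse, packed_normal, packed_light):
    """Dot3 bump mapping for one pixel: unpack the bump-map normal and the
    interpolated light vector from [0,1] color range back to [-1,1], take
    their dot product, and modulate the diffuse texel with it."""
    unpack = lambda c: tuple(2.0 * x - 1.0 for x in c)
    n, l = unpack(packed_normal), unpack(packed_light)
    ndotl = max(0.0, n[0] * l[0] + n[1] * l[1] + n[2] * l[2])
    return tuple(ndotl * c for c in diffuse)
```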

  31. Realistic looking skin • Use of high-detail bump maps dramatically adds realism • http://www.cc.gatech.edu/cpl/projects/skin/skin.pdf

  32. Realistic looking skin (Cont) • Even better: use diffuse, bump, specular, and environment maps together

  33. Realistic looking skin (Cont)

  34. Realistic looking skin (Cont)

  35. Realistic looking skin (Cont) • Sample Vertex Shader (modify bump map VS)
  mov oT1, V_TEXTURE ; specular map
  mov oT2, V_TEXTURE
  mov oT3, V_NORMAL ; pass normal for LDR

  36. Realistic looking skin (Cont) • Sample Pixel Shader
  tex t0 ; grab the diffuse texture map
  tex t1 ; load bump map
  tex t2 ; load specular map
  tex t3 ; load environment lighting LDR
  dp3 r0, v0, t1 ; dot normal with light vector
  mul r0, r0, t0 ; calculate diffuse value for bumped normal
  mul r1, t2, t3 ; mul specular * environment LDR
  add r0, r0, r1 ; add specular + diffuse
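The final combine on this slide, sketched per channel in Python (illustrative float triples in [0,1]; a clamp stands in for the pixel pipeline's output saturation):

```python
def skin_combine(bumped_diffuse, specular_map, env_ldr):
    """result = bumped diffuse + specular map * environment lighting term,
    per channel, clamped to 1.0 the way the hardware saturates output."""
    return tuple(min(1.0, d + s * e)
                 for d, s, e in zip(bumped_diffuse, specular_map, env_ldr))
```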

  37. It’s in the Eyes • Disney once said the audience watches the eyes • Orientation, with respect to the world as well as to each other, can give a sense of focus and direction of gaze • Also use an environment map to give a reflective, glossy look

  38. References • Parke, Frederic I. and Waters, Keith. “Computer Facial Animation”. A K Peters, Wellesley, Mass., 1996 • Pighin, Frederic, et al. “Synthesizing Realistic Facial Expressions from Photographs”. SIGGRAPH 1998 • Gray, Henry. “Gray’s Anatomy of the Human Body”. Online at http://www.bartleby.com/107/ • Cohen, M. M., and Massaro, D. W. “Modeling Coarticulation in Synthetic Visual Speech”. In N. M. Thalmann and D. Thalmann (Eds.), Models and Techniques in Computer Animation. Tokyo: Springer-Verlag, 1993 • Nitchie, E. B. “How to Read Lips for Fun and Profit”. Hawthorne Books, New York, 1979 • Haro, A., Guenter, B., and Essa, I. “Real-time, Photo-realistic, Physically Based Rendering of Fine Scale Human Skin Structure”. Proceedings of the 12th Eurographics Workshop on Rendering, London, England, June 2001 • Lundgren, Ulf. “Description of How the Model Was Created”. http://www.lost.com.tj/Ulf/artwork/3d/behind.html

  39. References • Ostermann, J. and Tekalp, A. M. “Face and 2-D Mesh Animation in MPEG-4”. Online at http://www.cselt.it/leonardo/icjfiles/mpeg-4_si/8-SNHC_visual_paper/8-SNHC_visual_paper.htm • http://www.nvidia.com/developer • Moccozet, L. and Magnenat-Thalmann, N. “Dirichlet Free-Form Deformations and Their Application to Hand Simulation”. Proceedings of Computer Animation ’97, pp. 93-102, 1997
