
Towards Accurately Extracting Facial Expression Parameters






Presentation Transcript


  1. Towards Accurately Extracting Facial Expression Parameters Xuecheng Liu, Dinghuang Ji, Zhaoqi Wang and Shihong Xia {liuxuecheng,jidinghuang,zqwang,xsh}@ict.ac.cn Institute of Computing Technology, Chinese Academy of Sciences, China

  2. Outline • Background • Problem • Our Method • Experiments • Summary

  3. Background • Applications of facial animation: movies, games, tele-immersion

  4. Background • Blendshapes

  5. Background

  6. Problem Usually, • The head's absolute orientation is first extracted, including the head rotation matrix R and translation vector t. • The captured facial expression data are then transformed into the local face coordinate system. • However, the assumption that the four markers are relatively fixed may not always hold. This leads to errors in the computed head orientation, and these errors are then amplified when computing the expression parameters w.
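
The head-orientation step described above can be sketched with the standard Kabsch/Procrustes rigid alignment, a common way to estimate (R, t) from a few reference markers and then remove head motion from the capture. This is a hedged illustration of the general technique, not the paper's exact procedure; all function names and marker data here are illustrative.

```python
# Sketch: estimate head pose (R, t) from "rigid" reference markers via the
# Kabsch algorithm, then express captured markers in the local face frame.
# This illustrates the standard technique, not the paper's exact method.
import numpy as np

def estimate_head_pose(ref_markers, cur_markers):
    """Least-squares rigid transform (R, t) mapping ref_markers to cur_markers.

    ref_markers, cur_markers: (N, 3) arrays of corresponding marker positions.
    """
    ref_c = ref_markers.mean(axis=0)
    cur_c = cur_markers.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (ref_markers - ref_c).T @ (cur_markers - cur_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cur_c - R @ ref_c
    return R, t

def to_local_frame(markers, R, t):
    """Remove head motion: per-row R.T @ (m - t), i.e. local face coordinates."""
    return (markers - t) @ R
```

If the reference markers are not truly rigid (the assumption questioned on this slide), the recovered (R, t) is biased, and that bias propagates into every marker transformed by `to_local_frame`.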

  7. Problem • [HAV 06] proposes a hierarchical approach: • First, a global (gross) stabilization using selected markers (head, ears and the nose bone) • Then, refining the output with a local (fine) stabilization by analyzing marker movements relative to a facial surface model

  8. Our Method • To suppress error accumulation, we propose a revision-based optimization method.

  9. Our Method • To suppress error accumulation, we propose a revision-based optimization method. • Simplify

  10. Our Method • To suppress error accumulation, we propose a revision-based optimization method. • We then use a traditional alternating method and non-negative least squares to solve the problem.
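
The non-negative least-squares step mentioned above can be sketched as follows: with the head pose held fixed, the expression weights w >= 0 that best explain the observed markers are found by NNLS. This is a hedged illustration of that one sub-step; the basis matrix, target data, and function name are toy assumptions, not taken from the paper.

```python
# Sketch of the weight-solving sub-step: with head pose fixed, solve
# min_w ||b0 + B w - target||^2 subject to w >= 0 via NNLS.
# Toy data; this illustrates the sub-step, not the paper's full pipeline.
import numpy as np
from scipy.optimize import nnls

def solve_weights(b0, B, target):
    """Non-negative least-squares fit of blendshape weights.

    b0:     (3M,) stacked neutral marker positions
    B:      (3M, K) stacked per-blendshape displacements (b_k - b0)
    target: (3M,) stacked observed markers in the local face frame
    """
    w, residual = nnls(B, target - b0)
    return w, residual
```

In an alternating scheme, this weight solve would be interleaved with re-estimating the rigid head pose (and, in the paper's formulation, its revision) until the residual stops decreasing.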

  11. Our Method • Linear blendshape • Nonlinear blendshape
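
The linear blendshape model named on this slide is the familiar delta form v(w) = b0 + sum_k w_k (b_k - b0): a neutral shape plus a weighted sum of per-expression displacements. Below is a minimal sketch of that synthesis, with toy (N, 3) vertex arrays; the nonlinear variant of [LXFW 11] adds further terms and is not reproduced here.

```python
# Sketch: linear blendshape synthesis in delta form.
# Toy vertex arrays; the nonlinear model of [LXFW 11] is not reproduced.
import numpy as np

def blend_linear(b0, shapes, w):
    """v(w) = b0 + sum_k w[k] * (shapes[k] - b0)."""
    b0 = np.asarray(b0, dtype=float)
    out = b0.copy()
    for wk, bk in zip(w, shapes):
        out += wk * (np.asarray(bk, dtype=float) - b0)
    return out
```

With w = 0 this returns the neutral face; with one weight at 1 it reproduces the corresponding basic blendshape exactly.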

  12. Experiments • Data and Environment • Training data: 500 frames • Testing data: 15 segments (#Frames > 36,000) • Ratio >= 80 • Basic blendshapes: 27 • Hardware: x86 2.67 GHz Core 2 CPU, 2 GB RAM • Software: Matlab

  13. Experiments • We compare with two published methods, both representative works in the area. • [HAV 06] is a typical linear blendshape-based method; it was presented as a SIGGRAPH 2006 course and has been used in many popular movies. • [LXFW 11] is a typical nonlinear blendshape-based method, published in CGF 2011.

  14. Experiments • Contrast experiment 1 (with [HAV 06]) • [Video: [HAV 06] vs. Our Method]

  15. Experiments • Contrast experiment 1 (with [HAV 06]) • [Figure: (a) absolute error, (b) relative error]

  16. Experiments • Contrast experiment 1 (with [HAV 06]) • [Figure: panels (a) and (b)]

  17. Experiments

  18. Experiments • Contrast experiment 2 (with [LXFW 11]) • [Video: [LXFW 11] vs. Our Method]

  19. Experiments • Contrast experiment 2 (with [LXFW 11]) • [Figure: (a) absolute error, (b) relative error]

  20. Experiments • Contrast experiment 2 (with [LXFW 11]) • [Figure: panels (a) and (b)]

  21. Experiments

  22. Conclusion

  23. Bibliography • [HAV 06] HAVALDAR P.: Sony Pictures Imageworks. In SIGGRAPH ’06: ACM SIGGRAPH 2006 Courses (New York, NY, USA, 2006), ACM, p. 5. • [LXFW 11] Xuecheng Liu, Shihong Xia, Yiwen Fan, Zhaoqi Wang: Exploring nonlinear relationship of blendshape facial animation. To appear in Computer Graphics Forum.
