
Presentation Transcript


  1. University of Paris 8
  • Animation improvements and face creation tool for ECAs
  • Nicolas Ech Chafai
  • Benjamin Dariouch
  • Maurizio Mancini
  • Catherine Pelachaud

  2. Overview
  • Aiming at improving the agent's facial animation quality:
    • we study motion-captured data
    • we apply the results to our ECA
  • To allow the creation of individualized ECAs:
    • we developed a tool for MPEG4 face creation
    • we propose some refinements to the MPEG4 specification

  3. MOCAP data analysis
  • Three main goals:
    • displacement of FAPs during emotion presentation
    • synchronization between different FAPs
    • FAP values during transition between consecutive emotions
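As one illustration (an assumption, not the authors' actual analysis code), the first two goals could be quantified on a cleaned FAP track as follows: peak displacement as the maximum deviation from the neutral value, and synchronization as the cross-correlation lag between two tracks. Function names and the frame rate are hypothetical.

```python
import numpy as np

def peak_displacement(track):
    """Maximum deviation of a FAP track from its neutral (first-frame) value."""
    return float(np.max(np.abs(track - track[0])))

def lag_between(track_a, track_b, fps):
    """Lag (in seconds) of track_b relative to track_a, estimated by
    cross-correlating the zero-mean signals; positive means track_b peaks later."""
    a = track_a - np.mean(track_a)
    b = track_b - np.mean(track_b)
    corr = np.correlate(b, a, mode="full")
    lag_frames = int(np.argmax(corr)) - (len(a) - 1)
    return lag_frames / fps
```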

  4. Collected MOCAP data
  • 2 actors
  • 33 markers, 21 of which correspond to the MPEG4 FAPs
  • 78 sequences:
    • basic movements
      • raising eyebrows
      • smiling
      • …
    • basic emotions
      • anger
      • happiness
      • surprise
      • …

  5. MOCAP problems
  • (we discovered that) obtaining usable data is not straightforward:
    • markers of the right size and shape have to be used
    • cameras have to be placed properly
    • data has to be translated into the needed reference system
    • data has to be filtered to remove noise
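A minimal sketch of the last two clean-up steps, assuming NumPy arrays of marker positions and hypothetical reference-marker names (forehead, chin, ears); the actual capture pipeline is not described in the slides.

```python
import numpy as np

def to_head_frame(markers, forehead, chin, left_ear, right_ear):
    """markers: (frames, n_markers, 3) array of raw positions.
    Re-express every frame in a head-local frame built from four reference
    markers, removing global head translation and rotation."""
    out = []
    for f in markers:
        origin = (f[left_ear] + f[right_ear]) / 2.0
        x_axis = f[right_ear] - f[left_ear]            # lateral axis
        x_axis /= np.linalg.norm(x_axis)
        up = f[forehead] - f[chin]                     # rough vertical direction
        z_axis = np.cross(x_axis, up)                  # forward-ish axis
        z_axis /= np.linalg.norm(z_axis)
        y_axis = np.cross(z_axis, x_axis)              # orthogonal vertical axis
        rot = np.stack([x_axis, y_axis, z_axis])       # rows = local axes
        out.append((f - origin) @ rot.T)               # translate, then rotate
    return np.array(out)

def smooth(track, window=5):
    """Moving-average filter over one FAP/marker track to reduce capture noise."""
    kernel = np.ones(window) / window
    return np.convolve(track, kernel, mode="same")
```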

  6. Data example • smile

  7. Video examples
  • frown clip
    • file: clips/coline 36 eyebrows
  • fear clip
    • file: clips/coline 56 fear

  8. Facial animation model
  • FAP displacement during basic emotions
  • our model was simply based on onset-apex-offset
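For later comparison with the ADSR model, a minimal sketch of such an onset-apex-offset envelope; the piecewise-linear form and the parameter names are assumptions, not taken from the slides.

```python
def onset_apex_offset(t, onset, apex, offset, target):
    """Piecewise-linear FAP envelope: rise to `target` during the onset,
    hold it during the apex, fall back to neutral during the offset."""
    if t < onset:
        return target * t / onset
    if t < onset + apex:
        return target
    if t < onset + apex + offset:
        return target * (1.0 - (t - onset - apex) / offset)
    return 0.0
```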

  9. Data observed model • on real data we observed other general behaviors

  10. Results
  • we started to introduce the ADSR model:
    • given a sequence of (phase, intensity, duration) triples, where phase is one of {Attack, Decay, Sustain, Release}, the FAP curve is built using keyframe Hermite interpolation (a minimal sketch follows below):
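In this sketch of that construction, each (phase, intensity, duration) triple contributes one keyframe, and the curve between keyframes is evaluated with cubic Hermite interpolation. The finite-difference tangents and the example phase values are assumptions, not figures from the captured data.

```python
def adsr_keyframes(phases, start_value=0.0):
    """phases: list of (phase_name, intensity, duration) tuples.
    Returns (time, value) keyframes, one per phase boundary."""
    keys = [(0.0, start_value)]
    t = 0.0
    for _name, intensity, duration in phases:
        t += duration
        keys.append((t, intensity))    # each phase ends at its target intensity
    return keys

def hermite(t, keys):
    """Evaluate the curve at time t with cubic Hermite interpolation,
    using finite-difference tangents at the inner keyframes."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for i in range(len(keys) - 1):     # find the segment containing t
        (t0, v0), (t1, v1) = keys[i], keys[i + 1]
        if t0 <= t <= t1:
            break
    m0 = (v1 - keys[i - 1][1]) / (t1 - keys[i - 1][0]) if i > 0 else 0.0
    m1 = (keys[i + 2][1] - v0) / (keys[i + 2][0] - t0) if i + 2 < len(keys) else 0.0
    h = t1 - t0
    s = (t - t0) / h
    return ((2*s**3 - 3*s**2 + 1) * v0 + (s**3 - 2*s**2 + s) * h * m0
            + (-2*s**3 + 3*s**2) * v1 + (s**3 - s**2) * h * m1)

# Example: attack to full intensity, decay, sustain, release back to neutral.
curve = adsr_keyframes([("Attack", 1.0, 0.2), ("Decay", 0.8, 0.1),
                        ("Sustain", 0.8, 0.5), ("Release", 0.0, 0.3)])
values = [hermite(frame / 100.0, curve) for frame in range(111)]
```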

  11. ADSR vs real data

  12. ADSR example • clip

  13. MPEG4 face tool

  14. MPEG4 face tool
  • imports models from Poser
  • allows the selection of the areas influenced by FDPs

  15. Tool's features
  • automatic selection and symmetrization
  • automatic association between region names and the available FDPs
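As an illustration of what "automatic symmetrization" could amount to (an assumption; the tool's actual algorithm is not described here), the vertices selected around a left-side FDP can be mirrored across the face's midsagittal plane, assumed to be x = 0, and matched to their closest counterparts.

```python
import numpy as np

def mirror_selection(vertices, selected_ids, tolerance=1e-3):
    """vertices: (n, 3) array; selected_ids: indices of the selected region.
    Returns the indices of the closest mirror-image counterparts."""
    mirrored = vertices[selected_ids] * np.array([-1.0, 1.0, 1.0])  # flip x
    counterparts = []
    for p in mirrored:
        dists = np.linalg.norm(vertices - p, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= tolerance:       # accept only near-exact mirror matches
            counterparts.append(j)
    return counterparts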

  16. Example • exports into a data file (containing geometry + regions) readable by the Greta player
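The exact file format read by the Greta player is not given in the slides; the sketch below only illustrates the kind of content such an export carries (vertex geometry, face indices, and per-FDP regions), with a made-up plain-text layout.

```python
def export_face(path, vertices, faces, regions):
    """vertices: list of (x, y, z); faces: list of vertex-index triples;
    regions: dict mapping an FDP name to the vertex indices it influences."""
    with open(path, "w") as f:
        f.write(f"vertices {len(vertices)}\n")
        for x, y, z in vertices:
            f.write(f"{x} {y} {z}\n")
        f.write(f"faces {len(faces)}\n")
        for a, b, c in faces:
            f.write(f"{a} {b} {c}\n")
        for name, ids in regions.items():
            f.write(f"region {name} " + " ".join(map(str, ids)) + "\n")
```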

  17. Example • flat.avi (note: female speech)

  18. Added new FAPUs • after adding new faces, some refinements to the MPEG4 player will be needed
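For context, MPEG4 FAPUs (Facial Animation Parameter Units) are distances measured on the neutral face and divided by 1024, so that FAP values stay comparable across faces with different proportions. The sketch below computes a few standard ones from hypothetical feature-point names; it does not show the new FAPUs the refinement would add.

```python
import math

def compute_fapus(fp):
    """fp: dict of feature-point name -> (x, y, z) on the neutral face."""
    return {
        "ES0":  math.dist(fp["left_pupil"], fp["right_pupil"]) / 1024,              # eye separation
        "MW0":  math.dist(fp["left_mouth_corner"], fp["right_mouth_corner"]) / 1024, # mouth width
        "MNS0": math.dist(fp["nose_bottom"], fp["upper_lip_mid"]) / 1024,            # mouth-nose separation
        "ENS0": math.dist(fp["eye_mid"], fp["nose_bottom"]) / 1024,                  # eye-nose separation
    }
```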

  19. Example • clip without new FAPUs • clip with new FAPUs

  20. Conclusions
  • more data has to be captured in a proper way
  • focus more on the interaction between different FAPs and the transitions between sequential expressions
  • ADSR has to be fully implemented
  • for documentation and papers: http://www.iut.univ-paris8.fr/greta
  • for Greta applications available to the other HUMAINE members, please contact us: m.mancini@iut.univ-paris8.fr
