
Collection and Analysis of Multimodal Interaction in Direction Giving Dialogues


Presentation Transcript


  1. Collection and Analysis of Multimodal Interaction in Direction Giving Dialogues: Towards an Automatic Gesture Selection Mechanism for Metaverse Avatars. Seikei University, Japan. Takeo Tsukamoto, Yumi Muroya, Yukiko Nakano, Masashi Okamoto

  2. Overview • Introduction • Research Goal and Questions • Approach • Data Collection Experiment • Analysis • Conclusion and Future Work

  3. Introduction • Online 3D virtual worlds based on Metaverse applications are growing steadily in popularity • e.g., Second Life (SL) ⇒ The communication methods are limited to: • Online chat with speech balloons • Manual gesture generation

  4. Introduction (Cont.) • Human face-to-face communication is largely dependent on non-verbal behaviors • e.g., direction giving dialogues • Many spatial gestures are used to illustrate directions and the physical relationships of buildings and landmarks • How can we implement natural non-verbal behaviors in Metaverse applications?

  5. Research Goal and Questions • Goal • Establish natural communication between avatars in the Metaverse based on human face-to-face communication • Research Questions • Automation: gesture selection • How to automatically generate proper gestures? • Comprehensibility: gesture display • How to display gestures intelligibly to the interlocutor?

  6. Previous work • An automatic gesture selection mechanism for Japanese chat texts in Second Life [Tsukamoto, 2010] • Example chat text: "You keep going straight on this road, then you will be able to find a house having a round window on your left."
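
As a rough illustration of what such a text-driven gesture selection step might look like, here is a minimal Python sketch; the keyword table and gesture names are hypothetical and are not taken from [Tsukamoto, 2010].

```python
# Hypothetical sketch of rule-based gesture selection from chat text.
# The keyword-to-gesture table and gesture names are illustrative only,
# not the mechanism described in [Tsukamoto, 2010].

KEYWORD_GESTURES = {
    "straight": "point_forward",     # pointing along the path
    "left": "point_left",
    "right": "point_right",
    "round window": "trace_circle",  # iconic gesture for a round shape
}

def select_gestures(utterance: str) -> list[str]:
    """Return gesture labels whose trigger phrase appears in the utterance."""
    text = utterance.lower()
    return [g for phrase, g in KEYWORD_GESTURES.items() if phrase in text]

if __name__ == "__main__":
    chat = ("you keep going straight on this road, then you will be able to "
            "find a house having a round window on your left.")
    print(select_gestures(chat))  # ['point_forward', 'point_left', 'trace_circle']
```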

  7. Proxemics • Previous work does not consider proxemics ⇒ There are cases in which an avatar's gestures become unintelligible to others • Proxemics is important for implementing comprehensible gestures in the Metaverse

  8. Approach • Conduct an experiment to collect human gestures in direction giving dialogues • Collect participants' verbal and non-verbal data • Analyze the relationship between gestures and proxemics

  9. Data Collection Experiment: Experimental Procedure • Direction Giver (DG) • Knows the way to any place on the campus of Seikei Univ. • Direction Receiver (DR) • Knows nothing about the campus of Seikei Univ. • The DR asks the way to a specific building, and the DG explains how to get to that building

  10. Experimental Instruction • Direction Receiver • Instructed to completely understand the way to the goal through a conversation with the DG • Direction Giver • Instructed to confirm that the DR understood the directions correctly after the explanation was finished

  11. Experimental Materials • Each pair recorded a conversation for each goal place

  12. Experimental Equipment • Headset microphone • Motion sensors attached to the head, shoulder, right arm, and abdomen • Video recording equipment

  13. Collected Data • Motion capture data • Video data • Transcriptions of utterances
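
For concreteness, a minimal sketch of how one synchronized record of this kind of corpus could be represented; the field names, units, and sensor labels are assumptions rather than the actual corpus format.

```python
# Hypothetical containers for the collected data; the field names and units
# are assumptions, not the format used in the study.
from dataclasses import dataclass

@dataclass
class MocapFrame:
    timestamp: float                      # seconds from the start of the recording
    sensor: str                           # "head", "shoulder", "right_arm", or "abdomen"
    position: tuple[float, float, float]  # x, y, z in millimetres
    rotation: tuple[float, float, float]  # Euler angles in degrees

@dataclass
class UtteranceSegment:
    start: float    # start time in seconds
    end: float      # end time in seconds
    speaker: str    # "DG" or "DR"
    text: str       # transcribed utterance
```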

  14. Analysis • Investigated the DG's gesture distribution with respect to proxemics • Analyzed 30 dialogues collected from 10 pairs • The analysis focused on the movements of the DG's right arm during gesturing

  15. Automatic Gesture Annotation • It is very time consuming to manually annotate non-verbal behaviors, so gesture occurrences were annotated automatically • More than 77% of the gestures are right-arm gestures • Built a decision tree that identifies right-arm gestures • Weka J48 was used for the decision tree learning • Extracted features: movement of position (x, y, z), rotation (x, y, z), relative position of the right arm to the shoulder (x, y, z), and distance between the right arm and the shoulder • Binary judgment: gesturing / not gesturing
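
A minimal sketch of the feature vector and classifier described above, assuming per-frame motion capture input. The original study used Weka's J48 (a C4.5 implementation); scikit-learn's CART-based DecisionTreeClassifier is used here only as a stand-in, and the training data shown are placeholders.

```python
# Sketch of the per-frame feature vector and a decision-tree classifier.
# J48 (C4.5) was used in the original work; the CART tree below is only an
# approximation, and the data loading is hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def frame_features(arm_pos, arm_rot, shoulder_pos, prev_arm_pos):
    """Build the 10-dimensional feature vector described on the slide."""
    movement = np.asarray(arm_pos) - np.asarray(prev_arm_pos)   # position delta (x, y, z)
    relative = np.asarray(arm_pos) - np.asarray(shoulder_pos)   # arm relative to shoulder (x, y, z)
    distance = np.linalg.norm(relative)                         # arm-shoulder distance
    return np.concatenate([movement, np.asarray(arm_rot), relative, [distance]])

# X: one feature vector per motion-capture frame; y: 1 = gesturing, 0 = not
# gesturing (both would come from the manually annotated part of the corpus).
X = np.random.rand(1000, 10)          # placeholder data
y = np.random.randint(0, 2, 1000)     # placeholder labels
clf = DecisionTreeClassifier().fit(X, y)
```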

  16. Automatic Gesture Annotation (Cont.) • As a result of 10-fold cross-validation, the accuracy is 97.5% • Accurate enough for automatic annotation • Example of automatic annotation (figure)
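
A corresponding sketch of the 10-fold cross-validation step; the placeholder data here would not reproduce the 97.5% figure, which is the authors' reported result.

```python
# 10-fold cross-validation of the gesture/no-gesture classifier.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(1000, 10)       # placeholder feature vectors (see previous sketch)
y = np.random.randint(0, 2, 1000)  # placeholder gesturing / not-gesturing labels
scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=10)
print(f"mean accuracy over 10 folds: {scores.mean():.3f}")
```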

  17. Gesture Display Space • Defined as the overlap among the DG's front area, the DR's front area, and the DR's front field of vision • The DG's and DR's distances are measured from the center of the gesture display space (figure labels: DG's and DR's body direction vectors, DR's front field of vision, gesture display space center, Direction Giver, Direction Receiver)
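
A simplified 2D sketch of the distance measurements around the gesture display space. Because the slide defines the space as an overlap of areas, taking the midpoint between the two participants as the center is only a rough stand-in for the actual construction.

```python
# Simplified 2D sketch of the gesture display space measurements; using the
# midpoint between DG and DR as the centre is an approximation, not the
# overlap-based definition given on the slide.
import math

def distances_from_center(dg_xy, dr_xy):
    """Return (DG distance, DR distance) from an approximate centre, in mm."""
    cx = (dg_xy[0] + dr_xy[0]) / 2.0
    cy = (dg_xy[1] + dr_xy[1]) / 2.0
    d_dg = math.dist(dg_xy, (cx, cy))
    d_dr = math.dist(dr_xy, (cx, cy))
    return d_dg, d_dr
```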

  18. Categories of Proxemics • Define 450 mm to 950 mm as the standard distance from the center of the gesture display space • Human arm length is 60 cm to 80 cm; a 15 cm margin is added to each end
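
A hypothetical way to turn these distances into the proxemics categories named on the next slide; treating "close" as a distance below the 450 mm lower bound is an assumption, and the fifth category mentioned in the conclusion is not covered here.

```python
# Hypothetical categorisation of a dialogue state into proxemics types.
# The interpretation that "close" means a distance below the 450 mm lower
# bound of the standard range is an assumption.
STANDARD_MIN, STANDARD_MAX = 450.0, 950.0  # mm, from the slide

def proxemics_category(d_dg: float, d_dr: float) -> str:
    dg_close = d_dg < STANDARD_MIN
    dr_close = d_dr < STANDARD_MIN
    if dg_close and dr_close:
        return "Close_to_Both"
    if dg_close:
        return "Close_to_DG"
    if dr_close:
        return "Close_to_DR"
    return "Normal"
```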

  19. Analysis: Relationship between Proxemics and Gesture Distribution • Analyze the distribution of gestures by plotting the DG's right-arm position for each proxemics category: Close_to_DG, Close_to_DR, Close_to_Both, and Normal (figure: distributions labeled similar, smaller, and wider)
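
A sketch of how such a distribution plot could be produced with matplotlib; the data loading is hypothetical and only the plotting idea follows the slide.

```python
# Sketch of the distribution plot: the DG's right-arm positions relative to
# the centre of the gesture display space, one colour per proxemics category.
import matplotlib.pyplot as plt
import numpy as np

positions_by_category = {  # category -> (N, 2) array of arm x/y positions in mm (placeholder)
    "Normal": np.random.randn(50, 2) * 200,
    "Close_to_DG": np.random.randn(50, 2) * 120,
}

for category, xy in positions_by_category.items():
    plt.scatter(xy[:, 0], xy[:, 1], label=category, s=10)
plt.legend()
plt.xlabel("x (mm)")
plt.ylabel("y (mm)")
plt.title("DG right-arm positions by proxemics category")
plt.show()
```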

  20. Analysis: Relationship between Proxemics and Gesture Distribution (Cont.) • (Figure comparing gesture distribution ranges across the proxemics categories: Close_to_Both < Normal = Close_to_DG < Close_to_Both)

  21. Applying the Proxemics Model • Create avatar gestures based on our proxemics model • To test whether the findings are applicable (figure: generated gestures in the Close_to_DR and Close_to_DG conditions)
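
Purely as an illustration of how the proxemics category could drive gesture placement for an avatar, a hypothetical sketch follows; the anchor-point computation and scaling values are invented for this example and are not the authors' gesture generation rules.

```python
# Illustrative only: choose where to display an avatar's gesture stroke based
# on the proxemics category. The scaling values are hypothetical.
def gesture_anchor(category: str, center_xy, dg_xy):
    """Return an x/y point (mm) between the DG and the display-space centre."""
    scale = {
        "Close_to_Both": 0.4,   # keep the gesture compact near the DG
        "Close_to_DG": 0.6,
        "Close_to_DR": 0.8,
        "Normal": 1.0,          # reach all the way to the centre
    }.get(category, 1.0)
    return (dg_xy[0] + (center_xy[0] - dg_xy[0]) * scale,
            dg_xy[1] + (center_xy[1] - dg_xy[1]) * scale)
```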

  22. Conclusion • Conducted an experiment to collect human gestures in direction giving dialogues • Investigated the relationship between the proxemics and the gesture distribution • Proposed five types of proxemics characterized by the distance from the gesture display space • Found that the gesture distribution range was different depending on the proxemics of the participants

  23. Future Work • Establish a computational model for determining gesture direction • Examine the effectiveness of the model • Whether users perceive the avatar's gestures as appropriate and informative

  24. Thank you for your attention

  25. Related work • [Breitfuss, 2008] Built a system that automatically adds gestural behavior and eye gaze • Based on linguistic and contextual information of the input text • [Tepper, 2004] Proposed a method for generating novel iconic gestures • Used spatial information about the locations and shapes of landmarks to represent the concepts of words • From a set of parameters, iconic gestures are generated without relying on a lexicon of gesture shapes • [Bergmann, 2009] Represented individual variation of gesture shapes using a Bayesian network • Built an extensive corpus of multimodal behaviors in direction-giving and landmark description tasks
