
  1. Imitation and Social Intelligence for Synthetic Characters • Daphna Buchsbaum, MIT Media Lab and Icosystem Corporation • Bruce Blumberg, MIT Media Lab

  2. Socially Intelligent Characters and Robots • Able to learn by observing and interacting with humans and each other • Able to interpret others' actions, intentions and motivations - characters with Theory of Mind • A prerequisite for cooperative behavior

  3. Max and Morris

  4. Max and Morris • Max watches Morris using synthetic vision • Can recognize and imitate Morris's movements by comparing them to his own (his own movement repertoire serves as the model/example set) • Uses movement recognition to bootstrap identifying simple motivations and goals, and learning about new objects in the environment

  5. Infant Imitation • Imitative interactions may help infants learn relationships between self and other • 'Like me' experiences • Simulation Theory

  6. Simulation Theory • "To know a man is to walk a mile in his shoes" • Understanding others using our own perceptual, behavioral and motor mechanisms • We want to create a Simulation Theory-based social learning system for synthetic characters

  7. Motor Representation: The Posegraph • Nodes are poses • Edges are allowable transitions • A motor program generates a path through a graph of annotated poses • Paths can be compared and classified • Related Work: Downie, Master's Thesis 2001; Arikan and Forsyth, SIGGRAPH 2002; Lee et al., SIGGRAPH 2002
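
A rough, illustrative Python sketch of this representation (not the group's actual implementation; all class and function names here are invented): nodes are poses, edges are allowable transitions, a motor program is simply a path through the graph, and two paths can be compared for classification.

    # Minimal posegraph sketch: nodes are pose ids, edges are allowable transitions.
    # A motor program is a path (sequence of pose ids) through the graph.
    class PoseGraph:
        def __init__(self):
            self.edges = {}  # pose_id -> set of pose_ids reachable in one step

        def add_transition(self, from_pose, to_pose):
            self.edges.setdefault(from_pose, set()).add(to_pose)
            self.edges.setdefault(to_pose, set())

        def is_valid_path(self, path):
            # A motor program is valid if every consecutive pair is an allowed transition.
            return all(b in self.edges.get(a, ()) for a, b in zip(path, path[1:]))

    def path_similarity(path_a, path_b):
        # Toy comparison: fraction of aligned positions where the two pose sequences agree.
        if not path_a or not path_b:
            return 0.0
        matches = sum(a == b for a, b in zip(path_a, path_b))
        return matches / max(len(path_a), len(path_b))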

  8. Motor Representation: The Posegraph • Multi-resolution graphs • Nodes are movements • Blending variants of the 'same' motion

  9. Synthetic Vision • Graphical camera captures Max’s viewpoint • Enforces sensory honesty (occlusion)

  10. Synthetic Vision • Key body parts are color-coded • Max locates them and remembers their position relative to Morris's root node • People watching a movement attend to end-effector locations
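
As a hypothetical illustration of the bookkeeping this slide describes (the helper name and coordinate layout are assumptions, not from the talk), each frame's color-coded body-part positions can be stored relative to the observed character's root node:

    def relative_body_pose(part_positions, root_position):
        # part_positions: {part_name: (x, y, z)} in world coordinates.
        # Returns each part's offset from the observed character's root node.
        rx, ry, rz = root_position
        return {name: (x - rx, y - ry, z - rz)
                for name, (x, y, z) in part_positions.items()}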

  11. Parsing Motion • Many different movements start and end in the same transitionary poses (Gleicher et al., 2003) • These poses can be used as segment markers • Related Work: Bindiganavale and Badler, CAPTECH 1998; Fod, Mataric and Jenkins, Autonomous Robots 2002; Lieberman, Master's Thesis 2004
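
A minimal sketch of this segmentation idea, assuming incoming poses have already been classified against the known transitionary poses (illustrative Python; names are invented):

    def segment_motion(pose_stream, transition_poses):
        # Split a continuous sequence of pose ids into movement segments,
        # cutting whenever a transitionary pose is reached.
        segments, current = [], []
        for pose in pose_stream:
            current.append(pose)
            if pose in transition_poses and len(current) > 1:
                segments.append(current)
                current = [pose]  # the boundary pose also starts the next segment
        if len(current) > 1:
            segments.append(current)
        return segments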

  12. Movement Recognition

  13. Movement Recognition

  14. Movement Recognition • Identify the best matching path through the posegraph • Check if this path closely matches an already existing movement
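
One way to picture this step (a toy Python sketch under the same assumptions as above, not the thesis's actual matching algorithm): score the observed pose path against each of the character's stored movement paths and accept the best match only if it is close enough.

    def similarity(path_a, path_b):
        # Fraction of aligned positions where the two pose sequences agree.
        matches = sum(a == b for a, b in zip(path_a, path_b))
        return matches / max(len(path_a), len(path_b), 1)

    def recognize_movement(observed_path, known_movements, threshold=0.8):
        # known_movements: {movement_name: pose path}. Returns a name or None.
        best_name, best_score = None, 0.0
        for name, path in known_movements.items():
            score = similarity(observed_path, path)
            if score > best_score:
                best_name, best_score = name, score
        return best_name if best_score >= threshold else None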

  15. Differing Movement Graphs

  16. Identifying Actions, Motivations and Goals

  17. Action Identification

  18. Action Identification (diagram labels: top-level motivation systems; Do-until, Trigger, Action, Object)

  19. Representation of Action: The Action Tuple • Trigger: context in which the action can be performed • Object: optional object to perform the action on • Action: anything from setting an internal variable to making a motor request • Do-until: context in which the action is completed
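
A minimal data-structure sketch following the four slots named on this slide (illustrative Python; the real system's representation is not shown in the deck):

    from dataclasses import dataclass
    from typing import Any, Callable, Optional

    @dataclass
    class ActionTuple:
        trigger: Callable[[], bool]   # context in which the action can be performed
        action: Callable[[], None]    # from setting an internal variable to a motor request
        do_until: Callable[[], bool]  # context in which the action is completed
        obj: Optional[Any] = None     # optional object to perform the action on

        def step(self):
            # Fire the action while triggered; report whether it has completed.
            if self.trigger():
                self.action()
            return self.do_until()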

  20. Action Identification (diagram labels: "should-I" trigger, "can-I" trigger)

  21. Action Identification • Find bottom-level actions that use matched movements

  22. Action Identification • Find bottom-level actions that use matched movements

  23. Action Identification • Find all paths through the action hierarchy to the matching action

  24. Action Identification • Check "can-I" triggers to see which actions are possible

  25. Action Identification • Check "can-I" triggers to see which actions are possible

  26. Action Identification • Check "can-I" triggers to see which actions are possible
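
Slides 21 through 26 together describe a traversal that a rough Python sketch can summarize (the container names are assumptions): start from the bottom-level actions that use the matched movement, collect the paths up through the action hierarchy, and keep only the candidates whose "can-I" triggers hold in the current context.

    def identify_actions(movement, actions_by_movement, parent_of, can_i):
        # actions_by_movement: {movement_name: [bottom-level action names]}
        # parent_of:           {action name: parent action name or None}
        # can_i:               {action name: True/False for its "can-I" trigger}
        # Returns candidate action paths (leaf up to root) that remain possible.
        candidates = []
        for leaf in actions_by_movement.get(movement, []):
            path, node = [], leaf
            while node is not None:
                path.append(node)
                node = parent_of.get(node)
            if all(can_i.get(a, False) for a in path):
                candidates.append(path)
        return candidates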

  27. Learning About Objects

  28. Learning About Objects

  29. Learning About Objects

  30. Contributions: What Max Can Do • Parse a continuous stream of motion into individual movement units • Classify observed movements as one of his own • Identify observed actions using his own action system • Identify simple motivations and goals for an action • Learn uses of objects through observation

  31. Future Work: What Max Can't Currently Do • Solve the correspondence problem • Imitate characters with non-identical morphology • Act on knowledge of his partner's goals (cooperative activity) • Learn from novel movements (currently ignored)

  32. Harder Problems • How do you use your knowledge? • Limits of Simulation Theory • Intentions vs. consequences: the problem of the robot that eats for you • What level of granularity do you attend to: wanting the object vs. wanting to eat?

  33. Acknowledgements • Members of the Synthetic Characters and Robotic Life Groups at the MIT Media Lab • Advisor: Bruce Blumberg, MIT Media Lab • Thesis Readers: Cynthia Breazeal, MIT Media Lab; Andrew Meltzoff, University of Washington • Special Thanks To: Jesse Gray and Marc Downie
