


Presentation Transcript


  1. Function Approximation for Imitation Learning in Humanoid Robots. Rajesh P. N. Rao, Dept of Computer Science and Engineering, University of Washington, Seattle. neural.cs.washington.edu. Students: Rawichote Chalodhorn, David Grimes. Funding: ONR, NSF, Packard Foundation.

  2. The Problem: Robotic Imitation of Human Actions. Teacher (David Grimes); HOAP-2 humanoid robot ("Morpheus", or Mo).

  3. Example of Motion Capture Data. [Videos: motion capture sequence; attempted imitation]

  4. Goals
  • Learn from observations of the teacher's states only; the expert does not control the robot
  • Also called "implicit imitation" (Price & Boutilier, 1999); similar to how humans learn from imitation
  • Avoid hand-coded physics-based models; learn the dynamics in terms of the sensory consequences of executed actions
  • Use the teacher's demonstration to restrict the search space of feasible actions

  5. Step 1: Kinematic Mapping
  • Need to solve the "correspondence problem"
  • Solved by assuming the markers lie on a scaled version of the robot's body
  • Standard inverse kinematics then recovers the joint angles for the motion
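
As a rough illustration of this step (not the authors' implementation), the sketch below recovers joint angles by numerical inverse kinematics on a hypothetical two-link planar arm: the scaled mocap markers are treated as targets and SciPy's least-squares solver fits the joint angles. The link lengths and the forward_kinematics function are invented for the example.

```python
# Minimal numerical inverse-kinematics sketch on a toy 2-link planar arm.
# The HOAP-2 kinematic model is not reproduced here; this only illustrates
# recovering joint angles from (scaled) observed marker positions.
import numpy as np
from scipy.optimize import least_squares

LINK1, LINK2 = 0.3, 0.25  # hypothetical link lengths (metres)

def forward_kinematics(q):
    """Return elbow and wrist marker positions for joint angles q = [q1, q2]."""
    elbow = np.array([LINK1 * np.cos(q[0]), LINK1 * np.sin(q[0])])
    wrist = elbow + np.array([LINK2 * np.cos(q[0] + q[1]),
                              LINK2 * np.sin(q[0] + q[1])])
    return np.vstack([elbow, wrist])

def recover_joint_angles(markers, scale=1.0, q0=np.zeros(2)):
    """Least-squares fit of joint angles to the scaled mocap markers."""
    target = markers * scale  # markers assumed to lie on a scaled robot body

    def residual(q):
        return (forward_kinematics(q) - target).ravel()

    return least_squares(residual, q0).x

# Example: recover the joint angles that reproduce a captured posture.
true_q = np.array([0.4, 0.7])
print(recover_joint_angles(forward_kinematics(true_q)))
```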

  6. Step 2: Dimensionality Reduction
  • Humanoid robots have many degrees of freedom, which makes action optimization intractable; the HOAP-2 has 25 DOF
  • Fortunately, most actions are highly redundant
  • Dimensionality reduction techniques (e.g., PCA) can be used to represent states and actions compactly
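
A small PCA sketch illustrates the idea: project 25-dimensional joint-angle postures onto a low-dimensional latent space whose principal components act as "eigenposes". The data below are random stand-ins for recorded postures, and the choice of three components is arbitrary.

```python
# Sketch: project 25-D joint-angle postures onto a low-D "eigenpose" subspace
# with PCA; latent-space actions map back to full postures.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
postures = rng.normal(size=(500, 25))          # stand-in for recorded 25-DOF postures

pca = PCA(n_components=3)                      # 3-D latent ("eigenpose") space
latent = pca.fit_transform(postures)           # low-D representation of each posture
reconstructed = pca.inverse_transform(latent)  # latent point -> full joint posture

print(latent.shape, pca.explained_variance_ratio_.sum())
```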

  7. Posture Representation using Eigenposes

  8. Eigenposes for Walking

  9. Step 3: Learning Forward Models using Function Approximation
  • Basic idea (a high-level sketch follows below):
  1. Learn a forward model in the neighborhood of the teacher's demonstration, using function approximation to map actions to their observed sensory consequences
  2. Use the learned model to infer stable actions for imitation
  3. Iterate between 1 and 2 for higher accuracy
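
The iteration can be summarized in a short loop. Here execute, fit_model, and optimize_actions are placeholder callables for the robot interface and the function-approximation machinery described on the following slides; this is only a sketch of the control flow, not the authors' code.

```python
# High-level sketch of the iteration described above (placeholder callables;
# the RBF and Gaussian-process models on the next slides fill in the details).
def imitation_learning_loop(teacher_traj, execute, fit_model, optimize_actions,
                            n_iterations=5):
    actions = teacher_traj                  # start from the kinematic imitation
    model = None
    for _ in range(n_iterations):
        sensors = execute(actions)          # run the actions, record sensory outcomes
        model = fit_model(actions, sensors)               # step 1: local forward model
        actions = optimize_actions(model, teacher_traj)   # step 2: infer stable actions
    return actions, model
```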

  10. Approach 1: RBF Networks for Deterministic Action Selection
  • A radial basis function (RBF) network is used to learn the n-th order Markov function that maps the most recent states and actions to the next sensory state
  • st is the sensory state vector; e.g., st = ωt, the 3-D gyroscope signal
  • at is the action vector in the latent space; e.g., servo joint-angle commands projected into the latent space
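
A minimal RBF-network regressor is sketched below for the first-order case (the n-th order model would simply stack more past states and actions into the input). The synthetic data, number of centers, and kernel width are illustrative assumptions, not values from the paper.

```python
# Minimal RBF-network sketch: predict the next gyroscope reading from the
# current reading and the latent-space action (first-order case for brevity).
import numpy as np
from sklearn.cluster import KMeans

class RBFNetwork:
    def __init__(self, n_centers=50, width=1.0):
        self.n_centers, self.width = n_centers, width

    def _phi(self, X):
        # Gaussian activations of each input w.r.t. each center: shape (N, K)
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.width ** 2))

    def fit(self, X, Y):
        # Place centers with k-means, then solve the linear readout weights.
        self.centers = KMeans(self.n_centers, n_init=10).fit(X).cluster_centers_
        self.W, *_ = np.linalg.lstsq(self._phi(X), Y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.W

# X stacks [st, at]; Y holds the observed next sensory state st+1.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))           # e.g. 3-D gyro + 3-D latent action
Y = np.tanh(X[:, :3] + 0.5 * X[:, 3:])   # stand-in dynamics, for illustration only
model = RBFNetwork().fit(X, Y)
print(model.predict(X[:5]))
```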

  11. Action Selection using the Learned Function
  • Select the optimal action for the next time step t
  • A cost function, computed from the predicted gyroscope signals, measures torso stability
  • The search for the optimal action at* is limited to a local region around the teacher's trajectory in the latent subspace (Chalodhorn et al., Humanoids 2005; IJCAI 2007; IROS 2009)
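
A sketch of this selection step, under the assumption that the stability cost simply penalizes large predicted gyroscope readings: candidate actions are sampled in a small neighborhood of the teacher's latent-space action and scored with the learned forward model. The sampling radius, candidate count, and the exact cost are assumptions; forward_model is any learned predictor (e.g., a wrapper around the RBF model above) that takes a stacked state-action vector.

```python
# Sketch of local action selection around the teacher's latent-space trajectory.
import numpy as np

def select_action(forward_model, s_t, teacher_action, radius=0.1, n_samples=200,
                  seed=0):
    """Pick the candidate action near the teacher's action whose predicted
    gyroscope response is most stable (closest to zero)."""
    rng = np.random.default_rng(seed)
    candidates = teacher_action + radius * rng.uniform(
        -1.0, 1.0, size=(n_samples, teacher_action.size))
    best_action, best_cost = teacher_action, np.inf
    for a in candidates:
        gyro_pred = forward_model(np.concatenate([s_t, a]))  # predicted gyro signal
        cost = float(np.sum(np.square(gyro_pred)))           # assumed stability cost
        if cost < best_cost:
            best_action, best_cost = a, cost
    return best_action
```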

  12. Example: Learning to Walk. [Videos: human motion capture data; unoptimized (kinematic) imitation]

  13. Example: Learning to Walk. Motion scaling: take baby steps first (literally!). Final result (Chalodhorn et al., IJCAI 2007).

  14. Result: Learning to Walk. [Videos: human motion capture; optimized stable walk]

  15. Approach 2: Gaussian Processes for Probabilistic Action Selection
  • A dynamic Bayesian network (DBN) for imitation [slice at time t]
  • Ot are observations of the states St
  • St = low-dimensional joint space, gyroscope, and foot-pressure readings
  • Ct are constraints on the states (e.g., gyroscope values near zero)
  (Grimes et al., RSS 2006; NIPS 2007; IROS 2007; IROS 2008)

  16. DBN for Imitative Learning. Gaussian process-based forward model, with input [st-1, at] (Grimes, Chalodhorn, & Rao, RSS 2006).
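
A sketch of such a Gaussian-process forward model using scikit-learn: inputs stack the previous state and the current action, and the GP returns a predictive mean and uncertainty that a probabilistic planner can exploit. The kernel choice and the synthetic data are assumptions made for the example.

```python
# Sketch: Gaussian-process forward model mapping [st-1, at] -> st with
# predictive uncertainty (scikit-learn).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                 # stacked [st-1, at] pairs (synthetic)
Y = np.tanh(X[:, :3] + 0.5 * X[:, 3:])        # stand-in observed next states

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y)

mean, std = gp.predict(X[:5], return_std=True)  # predictive mean and uncertainty
print(mean.shape, std.shape)
```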

  17. Action Inference using Nonparametric Belief Propagation. Actions are chosen as the maximum marginal posterior actions given the evidence (blue nodes).

  18. Summary of Approach. Learning and action inference are interleaved to yield progressively more accurate forward models and actions.

  19. Example of Learning

  20. Progression of Imitative Learning

  21. Result after Learning. [Videos: human action; imitation] (Grimes, Rashid, & Rao, NIPS 2007)

  22. Other Examples

  23. From Planning to Policy Learning
  • The behaviors shown on the previous slides were open-loop, based on planning by inference
  • Can we learn closed-loop "reactive" behaviors?
  • Idea: learn state-to-action mappings ("policies") from the final optimized output of the planner and the resulting sensory measurements

  24. Policy Learning using Gaussian Processes
  • For a parameterized task T(θ), watch demonstrations for particular values of θ; e.g., the teacher lifting objects of different weights
  • The parameter θ is not given, but is intrinsically encoded in the sensory measurements
  • Use inference-based planning to infer stable actions at and states st for the demonstrated values of θ
  • Learn a Gaussian process policy from the pairs {st, at} (Grimes & Rao, IROS 2008)
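
A sketch of the policy-fitting step with scikit-learn Gaussian-process regression: the inputs are sensed states (which implicitly carry the task parameter θ) and the targets are the planner's optimized actions. All data and dimensions below are synthetic placeholders, not values from the work.

```python
# Sketch: fit a Gaussian-process policy mapping sensed states to the planner's
# optimized actions, then query it reactively at run time.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
states = rng.normal(size=(400, 8))      # e.g. latent posture + gyro + foot pressure
actions = np.tanh(states[:, :3])        # stand-in for planner-optimized latent actions

policy = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
policy.fit(states, actions)

a_t = policy.predict(states[:1])        # reactive action for the current sensed state
print(a_t)
```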

  25. Example: Learning to Lift Objects of Different Weights

  26. Generalization by Gaussian Process Policy

  27. Generalizing to a Novel Object

  28. Summary and Conclusions
  • Stable full-body human imitation in a humanoid robot may be achievable without a physics-based model
  • Function approximation techniques (RBF networks, Gaussian processes) play a crucial role in learning the forward model and in action inference
  • Function approximation is also used to learn policies for reactive behavior
  • Dimensionality reduction using PCA (via "eigenposes") helps keep learning and inference tractable
  • Challenges: scaling up to a large number of actions, smooth transitions between actions, and hierarchical control
