
A Relational Representation for Procedural Task Knowledge

  1. A Relational Representation for Procedural Task Knowledge. Stephen Hart, Roderic Grupen, David Jensen. Laboratory for Perceptual Robotics, University of Massachusetts Amherst. New England Manipulation Symposium, May 25, 2005

  2. Introduction and Motivation • Robots performing tasks in real-world environments require methods to: • Produce fault-tolerant behavior • Focus on the most salient and relevant information • Handle multi-modal, continuous data • Leverage past experience (i.e., adapt and reuse) • Can we learn probability estimates regarding the effects of sensorimotor variables on task success? • e.g., If I take these actions, how likely am I to succeed at my task?

  3. Generalized Task Expertise • Declarative knowledge • Captures abstract knowledge about the task • e.g. find an object, reach to it, pick it up... • Procedural knowledge • Captures knowledge about how to instantiate the abstract policy in a particular environmental context • e.g. turn my head to the left, use my left hand to reach, use an enveloping grasp...

  4. Schema Theory • Arbib (1995) describes control programs composed of: • Perceptual schema - a Ball might be characterized by “size,” “color,” “velocity,” etc. • Motor schema - actions characterized by a “degree of readiness” and “activity level.” • Are such distinctions misleading? • Gibsonian Affordances: a perceptual feature is only meaningful if it facilitates action • Mirror Neurons: the same neurons will activate when performing an action or when observing someone else perform that action • Claim: All perceptual information can come from appropriately designed controllers

  5. How do we learn procedural structure? • We would like the robot to differentiate its actions based on environmental context • e.g., Pick and Place • Which available sensorimotor features are correlated (structure learning) • How these features relate, probabilistically, to each other (parameter learning)

  6. Relational Data • Data with complex dependencies between instances or varying structure (not i.i.d.) • Applicable to robotics domain because: • Different training episodes may exhibit varying structure • Data designated as Objects and Attributes • Objects are related through the structure of the data • Attributes are related through learned statistical dependencies • Relational Dependency Networks • approximate the full joint distribution of a set of variables with a set of conditional probability distributions • Perform Gibbs sampling to do joint inference
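
To make the RDN idea concrete, here is a minimal Python sketch (not the Proximity implementation): each attribute keeps a conditional probability distribution given the other attributes, and Gibbs sampling over those conditionals approximates the joint. The attribute names and probability values are invented for illustration.

```python
# Toy sketch of the RDN idea: one conditional distribution per attribute,
# with Gibbs sampling used to approximate the joint distribution.
# Attributes and probabilities are invented for illustration.
import random

cpds = {
    # P(attribute = 1 | current values of the other attributes)
    "reach_converged": lambda s: 0.9 if s["grasp_converged"] else 0.6,
    "grasp_converged": lambda s: 0.8 if s["reach_converged"] else 0.2,
    "lift_able":       lambda s: 0.7 if s["grasp_converged"] else 0.05,
}

def gibbs(n_samples=5000, burn_in=500):
    state = {name: 0 for name in cpds}            # arbitrary starting assignment
    counts = {name: 0 for name in cpds}
    for t in range(burn_in + n_samples):
        for name, cpd in cpds.items():            # resample each attribute in turn
            state[name] = 1 if random.random() < cpd(state) else 0
        if t >= burn_in:
            for name in cpds:
                counts[name] += state[name]
    return {name: counts[name] / n_samples for name in cpds}

print(gibbs())   # approximate marginal probabilities under the toy model
```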

  7. Some Controller Objects [Figure: controller objects Localize, Reach, and Grasp, each carrying sensorimotor attributes such as fingers, orientation, locale, convergence state, bounding box dimensions, and lift-able]
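
As a rough illustration of how such controller objects and their attributes could be represented as data records (the field names below are assumptions, not the authors' schema):

```python
# Sketch: controller instances as relational "objects" carrying sensorimotor
# attributes. Field names and values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ControllerObject:
    kind: str                                      # e.g. "Localize", "Reach", "Grasp"
    convergence_state: bool = False                # did the controller converge?
    attributes: dict = field(default_factory=dict) # e.g. fingers, locale, lift_able

localize = ControllerObject("Localize", True,
                            {"locale": 3, "orientation": "upright",
                             "bounding_box": (0.10, 0.10, 0.25)})
grasp = ControllerObject("Grasp", True, {"fingers": 2, "lift_able": True})
```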

  8. What is Relational About this Data? Simple Assembly 1: [Figure: a controller structure built from two Reach and two Grasp controller instances and an Assemble controller]

  9. What is Relational About this Data? Simple Assembly 2: [Figure: a controller structure built from two Reach and two Grasp controller instances, a Remanipulate controller, and an Assemble controller]

  10. Gathering the Dataset • Observe an autonomous program or a teleoperator performing a task in a variety of ways • Each trial may follow a different trajectory • Data is collected after each trial • The model is learned with Proximity http://kdl.cs.umass.edu/proximity/
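
A hedged sketch of what one logged trial might look like as relational data, i.e. objects plus links that record the episode's structure; Proximity's actual input format differs, and the attribute names are assumptions.

```python
# Sketch: one trial logged as relational data -- controller activations as
# objects, plus links recording how the episode was structured.
# Illustrative only; this is not Proximity's real input format.
trial = {
    "objects": [
        {"id": 0, "type": "Localize", "locale": 3, "converged": True},
        {"id": 1, "type": "Reach", "approach": "top", "converged": True},
        {"id": 2, "type": "Grasp", "fingers": 2, "lift_able": True},
    ],
    "links": [
        {"from": 0, "to": 1, "relation": "sequential"},
        {"from": 1, "to": 2, "relation": "sequential"},
    ],
}
```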

  11. Experiments • PickUp with Dexter • 2 objects (3 orientations): • tall box, coffee can • 2 grasps: • 2 VF, 3 VF • 2 reaches: • top approach • side approach • 8 locales • uniformly distributed
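
For concreteness, the experimental conditions can be enumerated as a cross product; the orientation labels below are assumed, not taken from the slides.

```python
# Sketch: enumerating the experimental conditions (2 objects, up to 3
# orientations, 2 grasps, 2 reaches, 8 locales). Labels are assumed.
from itertools import product

objects      = ["tall_box", "coffee_can"]
orientations = ["upright", "on_side", "upside_down"]   # assumed labels
grasps       = ["2VF", "3VF"]                          # the two grasp types
reaches      = ["top", "side"]
locales      = range(8)                                # uniformly distributed

conditions = list(product(objects, orientations, grasps, reaches, locales))
print(len(conditions))                                 # number of distinct conditions
```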

  12. The Learned Model Graph [Figure: the learned dependency graph over the Localize, Reach, and Grasp controller attributes (fingers, locale, orientation, convergence states, bounding box dimensions, lift-able)]

  13. Attribute Trees • The RDN algorithm estimates a CPD for each attribute • It learns a locally consistent Relational Probability Tree (RPT) for that attribute • Each tree focuses attention on the most salient predictors of the corresponding attribute • Manages complexity • Allows easy and intuitive interpretation • Each attribute (sensorimotor feature) has an affordance in terms of the current task
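
Relational Probability Trees are not available off the shelf; as a rough stand-in under that caveat, the sketch below fits an ordinary decision tree for a single attribute over flattened (propositionalized) features, using invented toy data.

```python
# Stand-in sketch: an ordinary decision tree per attribute over flattened
# features, in place of a true Relational Probability Tree. Toy data only.
from sklearn.tree import DecisionTreeClassifier

# Columns: fingers, reach_converged, grasp_converged (invented features)
X = [[2, 1, 1], [3, 1, 1], [2, 0, 0], [3, 1, 0], [2, 1, 1]]
y = [1, 1, 0, 0, 1]                                # target attribute: lift_able

lift_able_tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(lift_able_tree.predict_proba([[2, 1, 1]]))   # estimated P(lift_able | features)
```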

  14. RPT for “Lift-able”

  15. Using the RDN to construct a policy • How do we use the learned schema to perform the task again? • At each action point: • perform joint inference on the task success variables and find the most likely resource assignment • Use this assignment and estimate how likely success is • Perform the next action with this resource binding, possibly uncovering new information through interaction
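
A minimal sketch of that action-selection step: enumerate candidate resource bindings, score each by its inferred probability of task success, and commit to the best one. predict_success below is a placeholder for joint inference over the learned RDN, not the authors' code.

```python
# Sketch: choose a resource binding by its predicted probability of success.
# `predict_success` is a placeholder for Gibbs-sampled inference over the RDN,
# conditioned on the current sensor context and the candidate binding.
from itertools import product

def predict_success(binding):
    hand, grasp, approach = binding
    return 0.9 if (grasp, approach) == ("3VF", "top") else 0.4   # made-up scores

candidates = product(["left_hand", "right_hand"], ["2VF", "3VF"], ["top", "side"])
best = max(candidates, key=predict_success)
print(best, predict_success(best))       # execute the next action with this binding
```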

  16. Yeah, but... how does it perform? • Pick up the can with 2 or 3 fingers from the top • Pick up the box with 2 fingers • From the side or the top when standing up • From the top when lying down • Predicts a low probability of success if the object is outside the reachable workspace

  17. Where to Next? • How do we learn the declarative structure? • Previous work by Huber, Platt, etc. • Capture the dynamic response of controllers during execution • Learn dependencies through direct interaction with the environment • Can we sample a set of attributes from the uncountably large set of possible attributes? • Resample if poor policies are learned

  18. The End

  19. RDNs in Robotics • What do we know? • a collection of controllers is necessary for a task, usually organized as a sequence of sub-goals • controllers have state, attached resources, and can reveal perceptual information through execution • controllers can execute sequentially or in conjunction • What don't we know? • which sensorimotor features of each controller are important and how they correlate

  20. Four Training Structures [Figure: four training structures composed of Localize, Reach, and Grasp controller instances]

  21. What is Relational About this Data? Pick and Transport: not independently distributed! [Figure: sequential relations among Localize, Reach, Grasp, and a second Reach controller; conjunctive relations with Obstacle Avoidance and Kinematic Conditioning controllers]
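
To underline why these episodes are not i.i.d., here is a sketch of how both kinds of relation could be encoded; the controller names and the particular pairings are illustrative assumptions.

```python
# Sketch: an episode containing both sequential and conjunctive relations
# between controller activations. Structure and pairings are illustrative only.
activations = ["Localize", "Reach", "Grasp", "Reach",
               "ObstacleAvoidance", "KinematicConditioning"]

links = [
    (0, 1, "sequential"),     # Localize precedes the first Reach
    (1, 2, "sequential"),     # Reach precedes Grasp
    (2, 3, "sequential"),     # Grasp precedes the transport Reach
    (4, 1, "conjunctive"),    # ObstacleAvoidance runs alongside the first Reach
    (5, 3, "conjunctive"),    # KinematicConditioning runs alongside the transport Reach
]
```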
