
Back to the BlocksWorld: Learning New Actions through Situated Human-Robot Dialogue

This presentation explores how a robot can learn new actions through situated dialogue with a human in a simplified blocks world, using a layered planning/execution system integrated with language and perception modules. The experiments evaluate teaching completion, teaching duration, and execution of the learned actions.




Presentation Transcript


  1. Back to the BlocksWorld: Learning New Actions through Situated Human-Robot Dialogue Presented by Yuqian Jiang 2/27/2019

  2. PROBLEM • Learn new actions through situated human-robot dialogue • ...in a simplified blocks world Source: https://goo.gl/images/nS1JgX

  3. PROBLEM • How does a robot learn the action stack from dialogue when it knows only three primitive actions: open gripper, close gripper, and move?
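As a point of reference for the rest of the deck, here is a minimal sketch (not taken from the paper) of how these three primitives could be written down as parameterized operators with symbolic effects; the predicate and field names are assumptions made for illustration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Primitive:
    """One of the robot's built-in actions (names are illustrative)."""
    name: str
    params: tuple       # symbolic arguments, e.g. ("?loc",)
    effects: frozenset  # predicates made true by executing the action


OPEN_GRIPPER = Primitive("open_gripper", (), frozenset({"(gripper_open)"}))
CLOSE_GRIPPER = Primitive("close_gripper", (), frozenset({"(gripper_closed)"}))
MOVE = Primitive("move", ("?loc",), frozenset({"(arm_at ?loc)"}))

# A taught action such as "stack" must bottom out in sequences of these three.
```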

  4. MOTIVATION • When robots work side-by-side with humans, they can learn new tasks from their human partners through dialogue • Challenges: • Human language is discrete and symbolic, while the robot's representation is continuous • How to represent new knowledge so it can generalize? • How should the human teach new actions?

  5. RELATED WORK • Following natural language instructions • Kollar et al., 2010; Tellex et al., 2011; Chen et al., 2010 • Learning by demonstration • Cakmak et al., 2010 • Connecting language with lower level control systems • Kress-Gazit et al., 2008; Siskind, 1999; Matuszek et al., 2012 • Using dialogue for action learning • Cantrell et al., 2012; Mohan et al., 2013

  6. METHOD • A dialogue system for action learning

  7. Intent Recognizer: • Classifies each utterance as a command or a confirmation • Semantic Processor: • Implemented using Combinatory Categorial Grammar (CCG) • Extracts the action and object properties

  8. “stack the blue block on the red block on your right.”
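For concreteness, here is one plausible semantic frame the CCG-based processor might extract from this utterance. The paper's actual output format is not shown in the slides, so the structure below is purely an assumption.

```python
# Hypothetical semantic frame (structure assumed, not the paper's CCG output).
frame = {
    "action": "stack",
    "theme": {"type": "block", "color": "blue"},
    "destination": {"type": "block", "color": "red", "spatial": "right"},
}
```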

  9. Perception Modules: • Build a conjunction of predicates representing the environment from the camera image and the robot's internal status • Reference Solver: • Grounds the objects in the semantic representation to objects in the robot's perception

  10. “stack the blue block on the red block on your right.”
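A minimal sketch of what the reference solver has to do, reusing the hypothetical frame format from above: match each object description against the perceived objects and return a unique grounding, or signal that clarification is needed. All identifiers here are illustrative.

```python
def ground(description, perceived):
    """Return the id of the unique perceived object matching every
    attribute in the description, or None (i.e., ask for clarification)."""
    matches = [obj["id"] for obj in perceived
               if all(obj.get(k) == v for k, v in description.items())]
    return matches[0] if len(matches) == 1 else None


perceived = [
    {"id": "b1", "type": "block", "color": "blue"},
    {"id": "b2", "type": "block", "color": "red", "spatial": "right"},
    {"id": "b3", "type": "block", "color": "red"},
]
print(ground({"type": "block", "color": "blue"}, perceived))                     # b1
print(ground({"type": "block", "color": "red", "spatial": "right"}, perceived))  # b2
print(ground({"type": "block", "color": "red"}, perceived))                      # None (ambiguous)
```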

  11. Dialogue Manager: • A dialogue policy selects dialogue acts based on the current state • Language Generator: • Pre-defined templates
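The slides say the templates are pre-defined but do not list them, so the strings below are invented; the sketch only illustrates the slot-filling mechanism.

```python
# Invented templates; only the fill-in-the-slots mechanism is the point.
TEMPLATES = {
    "ask_instruction": "I don't know how to {action}. Could you teach me?",
    "confirm_step": "OK, I have {step}.",
    "report_learned": "I have learned how to {action}.",
}


def generate(dialogue_act, **slots):
    return TEMPLATES[dialogue_act].format(**slots)


print(generate("ask_instruction", action="stack the blue block on the red block"))
```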

  12. ACTION MODULES • Action knowledge • Action execution • Action learning

  13. ACTION LEARNING • If a commanded action is not in the knowledge base, ask the human for instructions • Follow the instructions • Extract a goal state describing the action's effects
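A hedged sketch of the extraction step: after the robot has followed the teacher's instructions, the predicates that became true are hypothesized as the new action's effects. The plain set difference below is a simplification of the paper's procedure, and the predicate names are invented.

```python
def extract_goal(state_before, state_after):
    """Hypothesize the taught action's effects as the predicates that hold
    after following the instructions but did not hold before (simplified)."""
    return state_after - state_before


# Teaching "stack b1 on b2":
before = {"(on b1 table)", "(on b2 table)", "(clear b1)", "(clear b2)"}
after = {"(on b1 b2)", "(on b2 table)", "(clear b1)"}
print(extract_goal(before, after))  # {'(on b1 b2)'}
```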

  14. ACTION LEARNING

  15. EXPERIMENTS • Teach five new actions under two strategies • Pickup, Grab, Drop, ClearTop, Stack • Step-by-step instructions vs. one-shot instructions (“pick up the blue block and put it on top of the red block”) • Five participants (more will be recruited)

  16. EXPERIMENTS

  17. RESULTS: Teaching Completion • All failed teaching dialogues used one-shot instructions.

  18. RESULTS: Teaching Duration • Step-by-step teaching dialogues take longer to complete.

  19. RESULTS: Execution • Actions taught with step-by-step instructions generalize better.

  20. CONCLUSION • An approach to learning new actions from human-robot dialogue • Built on top of a layered planning/execution system • Integrated with language and perception modules • Successfully generalizes to new situations in the blocks world

  21. CRITIQUE • Simplified domain with only three low-level actions • Cannot learn high-level actions that cannot be sequenced from these low-level actions • Cannot learn actions that involve objects that cannot be grounded • Is it really learning a new action, or just a new word that describes a goal achievable with existing actions?

  22. CRITIQUE • Only learns action effects, not preconditions • The experiments do test situations that violate preconditions, such as picking up a block that has another block on top of it • These cases succeed only because the preconditions of the underlying primitive actions are already modeled

  23. CRITIQUE • Evaluation • Nothing surprising about the collaborative/non-collaborative results • Would prefer more detail on the other modules of the system and an evaluation of their robustness

  24. CRITIQUE • Challenges revisited: • ✔ Human language is discrete and symbolic, robot representation is continuous • ? How to represent new knowledge so it can generalize? • ? How should the human teach new actions?
