
Human-Robot “Pickup” Teams with Language-Based Interaction



  1. Human-Robot “Pickup” Teams with Language-Based Interaction
  Faculty: Manuela Veloso, Anthony Stentz, Alexander Rudnicky, Brett Browning, M. Bernardine Dias
  Students: Thomas Harris, Brenna Argall, Gil Jones, Satanjeev Banerjee
  Sponsored by The Boeing Company

  2. Project Goals
  • Robots that discover and understand each other’s capabilities (sketched below)
  • Robots that can team together and coordinate their activities
  • Human-robot teams that collaborate to accomplish tasks
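
The slides do not specify a discovery protocol, so the following is only a minimal sketch of what capability discovery could look like: each robot advertises a record of what it can do, and a teammate filters for robots whose skills cover a task. The CapabilityRecord class and robots_for_task helper are hypothetical names invented for this example.

```python
# Hypothetical capability-advertisement sketch; not the project's code.
from dataclasses import dataclass, field

@dataclass
class CapabilityRecord:
    robot_id: str
    sensors: set = field(default_factory=set)  # e.g. {"camera", "lidar"}
    skills: set = field(default_factory=set)   # e.g. {"navigate", "search"}

def robots_for_task(required_skills, records):
    """Return the robots whose advertised skills cover the task."""
    return [r.robot_id for r in records if required_skills <= r.skills]

records = [
    CapabilityRecord("pioneer1", {"camera"}, {"navigate", "search"}),
    CapabilityRecord("segway1", {"camera", "lidar"}, {"navigate"}),
]
print(robots_for_task({"navigate", "search"}, records))  # ['pioneer1']
```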

  3. Domain: Treasure Hunt
  Human-robot teams competing to locate “treasure” in an unknown environment
  [Slide illustration; caption: “Team 3, report your location”]

  4. Treasure Hunt scenarios
  • One human and two robots search for a treasure and return it to base: stationary human with a Pioneer/Segway close-coupled team [Y1]
  • Semi-mobile human (partially accessible zones) [Y2]
  • Human and two teams search for treasure(s) [Y2-3]
  • Two teams of humans and robots compete to locate and retrieve treasure(s) [Y4]

  5. Language Interface Integration
  • MAP GUI integrated
  • Graphical display of information from robots
  • Mixed speech / gesture inputs
  • Flexible architecture
  • Dynamic incorporation of additional robots
  • Improved communications protocols
  • Language and dialog
  • Navigation and search language
  • Clarification and confirmation keyed to robot capability (sketched below)
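
As a rough illustration of clarification keyed to robot capability (not the actual TeamTalk/RavenClaw implementation), the sketch below checks a parsed navigation command against the addressed robot's capabilities and falls back to a clarification prompt when the robot cannot comply. The CAPS table and handle() function are invented for this example.

```python
# Invented capability table: which intents each robot can execute.
CAPS = {"robot1": {"move_to", "search_area"},
        "robot2": {"move_to"}}

def handle(robot, intent, args):
    if intent not in CAPS.get(robot, set()):
        # Clarification sub-dialog: the request exceeds this robot's skills.
        return f"{robot}: I can't {intent.replace('_', ' ')}. Did you mean another robot?"
    return f"{robot}: executing {intent} with {args}"

print(handle("robot2", "search_area", {"zone": "B"}))  # clarification
print(handle("robot1", "search_area", {"zone": "B"}))  # execution
```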

  6. Architecture
  [Architecture diagram spanning user interface, dialog control, and robot system layers: Sphinx, Phoenix, RavenClaw 1 and 2, Rosetta, Kalliope, and Helios (the Olympus speech/GUI stack); the tablet backend client and map server; OpTrader and BoingLib connecting to Robot 1 and Robot 2]
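
Below is a simplified sketch of the information flow the diagram implies, with stand-in classes for the real components (Sphinx speech recognition, Phoenix parsing, per-robot RavenClaw dialog managers, Rosetta generation, Kalliope synthesis, and the OpTrader). Everything here is an illustrative toy, not the project's code.

```python
class Parser:                       # stands in for Phoenix
    def parse(self, utterance):
        verb, _, rest = utterance.partition(" ")
        return {"intent": verb, "args": rest}

class OpTrader:                     # stands in for the task trader
    def dispatch(self, robot_id, frame):
        print(f"task for {robot_id}: {frame}")

class DialogManager:                # stands in for a per-robot RavenClaw
    def __init__(self, robot_id, trader):
        self.robot_id, self.trader = robot_id, trader
    def on_input(self, frame):
        # A real dialog manager would ground and confirm before acting.
        self.trader.dispatch(self.robot_id, frame)

dm = DialogManager("robot1", OpTrader())
dm.on_input(Parser().parse("search zone B"))
# task for robot1: {'intent': 'search', 'args': 'zone B'}
```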

  7. Multi-modal interface
  • Some classes of information are communicated more effectively by gesture than by language
  • The user can specify a search path with the stylus or ask robots to identify themselves (fusion sketched below)
  • Components: Java GUI, TeamTalk dialog system, Fujitsu Stylistic 5500 tablet PC
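
One common way to fuse such inputs is to pair a spoken command with a stylus stroke that arrives within a short time window. The slides do not state the system's actual fusion rule, so the window constant and event formats below are assumptions.

```python
FUSION_WINDOW_S = 2.0  # assumed pairing window, in seconds

def fuse(speech_event, gesture_event):
    """Attach a stylus-drawn path to a temporally nearby spoken command."""
    if abs(speech_event["t"] - gesture_event["t"]) > FUSION_WINDOW_S:
        return None  # too far apart in time to be one multi-modal command
    return {**speech_event["frame"], "path": gesture_event["waypoints"]}

speech = {"t": 10.2, "frame": {"intent": "search", "robot": "robot1"}}
gesture = {"t": 10.9, "waypoints": [(3, 4), (5, 4), (5, 7)]}
print(fuse(speech, gesture))
# {'intent': 'search', 'robot': 'robot1', 'path': [(3, 4), (5, 4), (5, 7)]}
```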

  8. Video of Human-Robot interaction

  9. Near-term goals
  • Dynamic updating of map information
  • Access to robot capability data
  • Goal conflict resolution (play vs. direct command; sketched below)
  • Better OpTrader state transparency
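
As a sketch of the play-versus-direct-command conflict named above, one plausible rule is to give direct human commands priority over goals assigned by a coordination play. The priority table and GoalArbiter class below are assumptions, not the project's policy.

```python
PRIORITY = {"direct_command": 2, "play": 1, "idle": 0}  # assumed ranking

class GoalArbiter:
    def __init__(self):
        self.active = ("idle", None)  # (source, goal)

    def propose(self, source, goal):
        # Equal-or-higher priority wins, so a fresh direct command
        # always preempts a running play's goal.
        if PRIORITY[source] >= PRIORITY[self.active[0]]:
            self.active = (source, goal)
        return self.active

arb = GoalArbiter()
arb.propose("play", "search zone A")
print(arb.propose("direct_command", "return to base"))
# ('direct_command', 'return to base') -- the play's goal was preempted
```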

  10. Publications
  • T. K. Harris, S. Banerjee, and A. I. Rudnicky. Heterogeneous Multi-Robot Dialogues for Search Tasks. AAAI Spring Symposium: Dialogical Robots, Palo Alto, California, 2005.
  • T. K. Harris, S. Banerjee, A. Rudnicky, J. Sison, Kerry B., and A. Black. A Research Platform for Multi-Agent Dialogue Dynamics. Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication, Kurashiki, Japan, 2004.

  11. Demonstration Videos
  • Robot coordination [video]
  • Human-robot multi-modal interaction [video]

  12. Ongoing challenges
  • Search in cluttered environments
  • Learn to dynamically select tactics, integrating information from team members (human, robot) and knowledge of the environment
  • Extend play coordination to include complete state machines with additional synchronization primitives and human input (one primitive is sketched below)
  • Develop multiple model-based object and teammate tracking
  • Extend grounding interactions between human and robot
  • Improve robot infrastructure so that system operations become fully routine
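
The play-coordination bullet above calls for state machines with synchronization primitives. The sketch below shows one such primitive, a barrier that lets a play step advance only when every teammate has reported in; the play representation is invented for illustration, since the project's play language is not shown in these slides.

```python
class SyncPoint:
    """Barrier: the play advances only when all teammates have arrived."""
    def __init__(self, teammates):
        self.waiting = set(teammates)

    def arrive(self, robot_id):
        self.waiting.discard(robot_id)
        return not self.waiting  # True once everyone has arrived

# A play as a list of (step, synchronization) pairs.
play = [("goto_treasure", None),
        ("regroup", SyncPoint({"pioneer1", "segway1"})),
        ("carry_to_base", None)]

_, sync = play[1]
print(sync.arrive("pioneer1"))  # False: still waiting on segway1
print(sync.arrive("segway1"))   # True: barrier released, play advances
```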

  13. Y2 Plan
  • Develop techniques to enable robots to form pickup teams with dynamic sub-team formation and execute coordinated actions (see the allocation sketch below)
  • Formalize requirements for robot team participation
  • Incorporate a new (Boeing) robot into the team
  • Explore different human-robot team compositions (team size, robot types)
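
Sub-team formation in this line of work is often handled with market-based task allocation, and the trader-style OpTrader in the architecture slide suggests an approach of that kind. The single-round, lowest-cost auction below is a simplifying sketch; the cost model, robot names, and bid values are all made up for illustration.

```python
def auction(task, bids):
    """Award the task to the lowest-cost bidder. bids: robot -> cost."""
    winner = min(bids, key=bids.get)
    return task, winner, bids[winner]

bids = {"pioneer1": 12.5,    # e.g. estimated travel cost to the zone
        "segway1": 7.0,
        "boeing_bot": 9.3}   # the new (Boeing) robot joining the team
print(auction("search zone C", bids))  # ('search zone C', 'segway1', 7.0)
```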

  14. Y2 Plan (continued)
  • Extend language and agent interfaces to allow humans to interact efficiently with pickup robot teams
  • Incorporate visual feedback from robots
  • Access richer robot state information
  • Extend the domain ontology
  • Extend the capability for clarification sub-dialogs
  • Introduce a simple landmark-grounding capability

  15. Y2 Plan (continued)
  • Investigate extensions to the Y1 scenario: size and complexity of the environment, dynamic environments
  • Identify a final Y2 demo scenario

  16. Technology Transfer to Boeing
  • TeamTalk spoken language interface, including the Sphinx recognition system, Phoenix semantic parser, RavenClaw dialog manager, and Rosetta language generator; Boeing has its own synthesizer (Theta)
  • Updates provided over time
  • Working with Boeing to adapt the system

  17. Questions?
