
Synchronous vs. Asynchronous Video in Multi-Robot Search


Presentation Transcript


  1. Synchronous vs. Asynchronous Video in Multi-Robot Search Prasanna Velagapudi, Jijun Wang, Huadong Wang, Paul Scerri, Michael Lewis, Katia Sycara (University of Pittsburgh / Carnegie Mellon University)

  2. Urban Search and Rescue (USAR) • Location and rescue of people in a structural collapse • Urban disasters • Landslides • Earthquakes • Terrorism Credit: NIST

  3. USAR Robots • Robots can help • Unstable voids • Mapping/clearing • Want them to be: • Small • Cheap • Plentiful Credit: NIST

  4. Urban Search and Rescue (USAR) • Now: One operator → one robot • Directly teleoperated • Victim detection through synchronous video • Future: One operator → many robots • Manufacturing robots is easy • Training operators is hard • Need to scale navigation and search

  5. Synchronous Video • Most common form of camera teleoperation • High bandwidth • Low latency • Applications • Surveillance • Bomb disposal • Inspection iRobot PackBot

  6. Synchronous Video • Does not scale with team size

  7. Synchronous Video • Does not scale with team size

  8. Synchronous Video • Does not scale with team size

  9. Asynchronous Imagery • Inspired by planetary robotic solutions • Limited bandwidth • High latency • Multiple photographs from single location • Maximizes coverage • Can be mapped to virtual pan-tilt-zoom camera
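
The slide notes that stored panoramas can be mapped to a virtual pan-tilt-zoom camera. The sketch below is a generic crop-based approximation of that idea, assuming a cylindrical panorama indexed linearly in angle; it is not the viewer used in MrCS, and the vertical coverage and field-of-view values are assumptions.

```python
import numpy as np

def virtual_ptz_view(panorama, pan_deg, tilt_deg, h_fov_deg=60.0, v_fov_deg=45.0):
    """Crop a virtual pan-tilt camera view out of a stored cylindrical panorama.

    panorama: H x W x 3 image covering 360 deg horizontally and an assumed
              90 deg vertically, indexed linearly in angle.
    pan_deg:  view center azimuth in degrees.
    tilt_deg: view center elevation in degrees (0 = panorama mid-row).
    """
    v_span_deg = 90.0                        # assumed vertical coverage
    h, w, _ = panorama.shape
    px_per_deg_x = w / 360.0
    px_per_deg_y = h / v_span_deg

    # Horizontal window, wrapping around the 360-degree seam.
    x0 = int((pan_deg - h_fov_deg / 2) * px_per_deg_x)
    x1 = int((pan_deg + h_fov_deg / 2) * px_per_deg_x)
    cols = np.arange(x0, x1) % w

    # Vertical window, clamped to the panorama's vertical span.
    yc = h / 2 - tilt_deg * px_per_deg_y
    y0 = int(max(0, yc - (v_fov_deg / 2) * px_per_deg_y))
    y1 = int(min(h, yc + (v_fov_deg / 2) * px_per_deg_y))

    return panorama[y0:y1, :, :][:, cols, :]

# Example: pano = np.zeros((480, 1920, 3)); view = virtual_ptz_view(pano, 120, -10)
```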

  10. Hypothesis • Asynchronicity may improve performance • Helps guarantee coverage • Can review images multiple times • Asynchronicity may reduce mental workload • Only navigation must be done in real-time • Search becomes self-paced

  11. USARSim • Based on UnrealEngine2 • High-fidelity physics • “Realistic” rendering • Camera • Laser scanner (LIDAR) [http://www.sourceforge.net/projects/usarsim]
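
USARSim is driven by an external controller over a GameBots-style text protocol on a TCP socket. The fragment below is only a rough sketch of that interaction; the port number, robot class name, spawn location, and exact message fields are assumptions that vary between USARSim versions, so consult the manual for the release you use.

```python
import socket

# Rough sketch of driving a robot in USARSim over its text protocol.
# Port, robot class, pose, and message fields below are assumptions.
def main(host="127.0.0.1", port=3000):
    sock = socket.create_connection((host, port))

    # Spawn a skid-steered robot at an assumed start pose.
    sock.sendall(b"INIT {ClassName USARBot.P2AT} {Name Robot1} {Location 4.5,1.9,1.8}\r\n")

    # Drive both wheel sides forward.
    sock.sendall(b"DRIVE {Left 0.5} {Right 0.5}\r\n")

    # Sensor data (e.g. the simulated laser scanner) streams back as text messages.
    data = sock.recv(4096).decode(errors="replace")
    print(data)

    sock.close()

if __name__ == "__main__":
    main()
```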

  12. MrCS: Multi-robot Control System

  13. MrCS: Multi-robot Control System [interface screenshot: Status Window, Map Overview, Video/Image Viewer, Waypoint Navigation, Teleoperation]

  14. Experimental Conditions • Objective: • Find victims → Mark victims on map • Control 4 robots • Waypoint control (primary) • Direct teleoperation • Explore the map • Map generated online w/ Occupancy Grid SLAM • Simulated laser scanners
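
The maps were built online with occupancy grid SLAM from the simulated laser scanners. The fragment below is a textbook log-odds occupancy grid update for a single scan taken from a known pose, not the SLAM pipeline used in MrCS (which also has to estimate the pose online); the resolution and log-odds increments are assumed values.

```python
import numpy as np

# Generic log-odds occupancy grid update for one laser scan from a known pose,
# in a map-aligned frame with non-negative coordinates.
L_OCC, L_FREE = 0.85, -0.4      # log-odds increments (assumed tuning)
RES = 0.05                      # meters per grid cell (assumed)

def update_grid(grid, pose, ranges, angles, max_range=20.0):
    x, y, theta = pose
    for r, a in zip(ranges, angles):
        if r >= max_range:                  # no return: skip this beam
            continue
        ex = x + r * np.cos(theta + a)      # beam endpoint in world coordinates
        ey = y + r * np.sin(theta + a)
        steps = max(1, int(r / RES))
        for i in range(steps):              # cells along the beam are free
            cx = int((x + (ex - x) * i / steps) / RES)
            cy = int((y + (ey - y) * i / steps) / RES)
            grid[cy, cx] += L_FREE
        grid[int(ey / RES), int(ex / RES)] += L_OCC   # endpoint cell is occupied
    return grid
```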

  15. Experimental Conditions [map of the search environment containing 10 victims]

  16. Experimental Conditions • Streaming Mode: streaming live video • Panorama Mode: panoramas stored for later viewing

  17. Experimental Conditions (Streaming Mode)

  18. Experimental Conditions (Panorama Mode)

  19. Subjects • 21 paid participants • 9 male, 12 female • No prior experience with robot control • Frequent computer users: 71% • Played computer games > 1 hr/week: 28%

  20. Method • Written instructions • 15-20 min. training session • Both streaming and panoramas enabled • Encouraged to find and mark a victim • Two 20 min. testing sessions (one per viewing mode)

  21. Metrics • Switching times • Number of victims • Thresholded accuracy
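
Thresholded accuracy counts a marked victim as found only if the operator's mark lies within a given radius of a true victim position. A possible scoring sketch follows; the greedy one-to-one matching and the function name are assumptions, since the exact matching rule is not spelled out on the slides.

```python
import math

def victims_found(marks, victims, threshold_m=1.5):
    """Count victims with at least one operator mark within threshold_m meters.

    marks, victims: lists of (x, y) positions in meters. Each victim and each
    mark is used at most once (greedy matching; an assumption).
    """
    found = 0
    unused = list(marks)
    for vx, vy in victims:
        for i, (mx, my) in enumerate(unused):
            if math.hypot(mx - vx, my - vy) <= threshold_m:
                found += 1
                del unused[i]
                break
    return found

# Example: report counts at the thresholds used on the results slides.
# for t in (0.75, 1.0, 1.5, 2.0):
#     print(t, victims_found(marks, victims, t))
```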

  22. Victims Found [bar chart: average # of victims found vs. accuracy threshold (within 0.75 m, 1 m, 1.5 m, 2 m), Panorama vs. Streaming]

  23. Trial Order Interaction [chart: average # of victims found in the first vs. second session for Panorama-first and Streaming-first groups, at the < 1.5 m and < 2 m thresholds]

  24. Switching Time (Streaming Mode) [scatter plot: average # of reported victims vs. number of switches]

  25. Switching Time (Panorama Mode) [scatter plot: average # of reported victims vs. number of switches]

  26. Conclusions • Streaming is better than panoramic • Perhaps not by as much as expected • Conditions favorable to streaming video • Similar asynchronous performance is good • May avoid forced pace switching • May scale with team size

  27. Switch Time >> Comm. Latency [diagram relating operator-induced latency, operator switch time, and # of robots]
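
The point of this slide is that with many robots the operator, not the network, becomes the bottleneck: a robot that needs attention can wait for the operator to cycle through the rest of the team before being serviced. The back-of-the-envelope sketch below makes that comparison under an assumed round-robin attention model and assumed timing values.

```python
# Operator-induced latency vs. network latency under round-robin attention.
# The round-robin model and all numbers here are illustrative assumptions.
def operator_induced_latency(n_robots, mean_switch_time_s):
    # Worst case: a robot needs attention just after the operator moves on,
    # so it waits for the other n-1 robots to be serviced first.
    return (n_robots - 1) * mean_switch_time_s

comm_latency_s = 0.2                      # assumed streaming video latency
for n in (2, 4, 8, 16):
    wait = operator_induced_latency(n, mean_switch_time_s=30.0)
    print(f"{n:2d} robots: operator-induced latency ~{wait:5.0f} s "
          f"vs. comm latency {comm_latency_s} s")
```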

  28. Victims Found • Repeated Measures ANOVA • 1.5m radius • F(1,19) = 8.038 • p = 0.01 • 2.0m radius • F(1,19) = 9.54 • p = 0.006

  29. Trial Order Interaction • Repeated Measures ANOVA • 1.5m radius • F(1,19) = 7.34 • p = 0.014 • 2.0m radius • F(1,19) = 8.77 • p = 0.008

  30. Switching Time • Streaming mode • Repeated Measures ANOVA • F(1,19) = 3.86 • p = 0.064 • Panorama mode • No relation found
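
The F(1, 19) and p values on the preceding slides come from repeated-measures ANOVAs with viewing condition as the within-subject factor. The sketch below shows how such a test could be run in Python; the statsmodels dependency, column names, and synthetic placeholder data are assumptions, not the authors' analysis code or data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic placeholder data (NOT the study's data): one row per participant
# per viewing condition, with victims found within a 1.5 m threshold.
rng = np.random.default_rng(0)
n = 20                                     # placeholder subject count
data = pd.DataFrame({
    "subject":   np.tile(np.arange(n), 2),
    "condition": ["streaming"] * n + ["panorama"] * n,
    "victims":   rng.integers(0, 8, size=2 * n),
})

# Repeated-measures ANOVA with condition as the within-subject factor,
# analogous to the F(1, 19)-style tests reported on the slides.
res = AnovaRM(data, depvar="victims", subject="subject", within=["condition"]).fit()
print(res.anova_table)
```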
