
Adaptive Automation for Human Performance in Large-Scale Networked Systems


Presentation Transcript


  1. Adaptive Automation for Human Performance in Large-Scale Networked Systems
  Raja Parasuraman and Ewart de Visser, George Mason University
  Kickoff Meeting, Carnegie Mellon University, August 26, 2008
  AFOSR MURI: Modeling Synergies in Large-Scale Human-Machine Networked Systems

  2. Research Goals
  • Develop validated theories and techniques to predict the behavior of large-scale, networked human-machine systems involving unmanned vehicles
  • Model human decision-making efficiency in such networked systems
  • Investigate the efficacy of adaptive automation for enhancing human-system performance

  3. Collaborations with MURI Team Members
  • Human-Robot Team Performance and Modeling: Cornell/MIT/Pitt
  • Human-Agent Collaboration: GMU, CMU
  • Scaling up to Large Networks: all partners

  4. George Mason University Approach
  • Conduct empirical and modeling studies of human decision-making performance with multiple robotic assets
  • Examine human-system performance using the Distributed Decision Making simulation (DDD Version 4) (with Mark Campbell of Cornell)
  • Examine the efficacy of the Adaptive Delegation Interface (ADI) with Machinetta for human-agent collaboration (with Paul Scerri of CMU)
  • Develop human-robot performance metrics for use in large networks

  5. Joint GMU-Cornell Approach
  • Examine human-system performance in simulated reconnaissance missions (1-4 person teams, multiple unmanned vehicles, using DDD) (GMU)
  • Model human decision-making performance (Cornell)
  • Identify and quantify human "cognitive bottlenecks" (GMU and Cornell)
  • Identify points for "adaptive tasking" or adaptive automation (GMU and Cornell)
  • Scale up to larger networks (more UVs and agents)

  6. Adaptive Automation
  • Invocation methods for adaptive automation: event-based, performance-based, and model-based
  • Adaptable automation: invocation by the user through a playbook (delegation)
  • Plays are carried out by teamwork proxies
  • References: Parasuraman (2000); Kaber & Endsley (2004); Scerri et al. (2006); Miller & Parasuraman (2007)
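
To make the three invocation methods concrete, here is a minimal sketch of how event-based, performance-based, and model-based triggers could be combined to decide when to invoke automation support. The thresholds, field names, and trigger logic are illustrative assumptions, not taken from the slides or the cited papers.

from dataclasses import dataclass

@dataclass
class OperatorState:
    critical_event: bool       # event-based trigger (e.g., a pop-up threat)
    detection_rate: float      # performance-based measure, 0.0-1.0
    predicted_workload: float  # model-based workload estimate, 0.0-1.0

def should_invoke_automation(state: OperatorState,
                             perf_floor: float = 0.7,
                             workload_ceiling: float = 0.8) -> bool:
    # Invoke automation if any of the three invocation methods calls for it.
    event_based = state.critical_event
    performance_based = state.detection_rate < perf_floor
    model_based = state.predicted_workload > workload_ceiling
    return event_based or performance_based or model_based

# Example: no event, good detection, but the workload model predicts overload.
print(should_invoke_automation(OperatorState(False, 0.9, 0.85)))  # True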

  7. Playbook Interface for RoboFlag
  • Playbook: enables human-automation communication about plans, goals, and methods, akin to calling "plays" from a sports team's playbook (Miller & Parasuraman, 2007)
  • Validation experiments with RoboFlag (Parasuraman et al., IEEE SMC-Part A, 2005)
  • A human operator supervises multiple Blue Team robots using a delegation interface (Playbook)
  • Simulation adapted from Cornell University; work done under the DARPA MICA Program

  8. Methods
  • A single operator sends a team of 4-8 robots (blue team) into opponent territory (populated by red team robots) to locate a specified target and return home as quickly as possible
  • The user has a Playbook of automated tools to direct the robots:
    - Waypoint (point-and-click) control ("Manual")
    - Automated plays (Circle offense; Circle defense; Patrol border)
  • The user selects the number of robots to which plays are assigned
  • The user can intervene in a robot's execution of a play and apply corrective measures if necessary
  • Red team robot tactics are either predictable (always offensive or defensive) or unpredictable (either offensive or defensive)
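
A rough sketch of the delegation pattern these methods describe, assuming hypothetical Robot and play names rather than the actual RoboFlag/Playbook implementation: the operator assigns a chosen number of robots to an automated play and can pull any robot back to manual waypoint control.

from dataclasses import dataclass, field
from typing import Optional

PLAYS = ("circle_offense", "circle_defense", "patrol_border")

@dataclass
class Robot:
    robot_id: int
    play: Optional[str] = None                     # current automated play, if any
    waypoints: list = field(default_factory=list)  # manual waypoint queue

def assign_play(robots, play, count):
    # Assign the first `count` unassigned robots to an automated play.
    assert play in PLAYS
    for robot in [r for r in robots if r.play is None][:count]:
        robot.play = play

def manual_override(robot, waypoint):
    # Operator intervenes: drop the play and steer by point-and-click waypoints.
    robot.play = None
    robot.waypoints.append(waypoint)

team = [Robot(i) for i in range(4)]
assign_play(team, "circle_offense", 3)
manual_override(team[0], (12.5, 40.0))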

  9. Hypotheses for Efficacy of the Playbook Interface
  • Use of automated plays at times of the user's choosing enhances mission success rate and reduces mission completion time
  • Flexible use of either automated plays or manual control allows the user to compensate for the "brittleness" of automation, particularly when opponent tactics are unpredictable
  • The management workload associated with delegation is only low to moderate

  10. Flexible Delegation Enhances System Performance without Increasing User Workload (Parasuraman et al., IEEE SMC-Part A, 2005)

  11. Playbook for Pre-Mission UCAV Planning (Miller & Parasuraman, Human Factors, 2007)
  • The user can call a high-end play, e.g., Airfield Denial, or
  • Stipulate the method and procedure for Airfield Denial by filling in specific variable values:
    - which airfield is to be attacked
    - which UAVs are to be used
    - where they should rendezvous
    - which sub-methods and optional task-path branches are to be used
    - etc.
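
The sketch below illustrates the idea of calling a high-end play while optionally stipulating variable values; the play name, field names, and defaults are assumptions made for illustration, not the actual Playbook schema from Miller & Parasuraman (2007).

def call_play(play_name, **stipulations):
    # Operator stipulations override automation-chosen defaults; anything
    # left unspecified is delegated to the automated planner.
    defaults = {
        "airfield": "auto-select",
        "uavs": "auto-assign",
        "rendezvous": "auto-plan",
        "sub_methods": "default task path",
    }
    return {"play": play_name, **defaults, **stipulations}

# Fully delegated call vs. a partially stipulated one.
print(call_play("airfield_denial"))
print(call_play("airfield_denial", airfield="Airfield B", uavs=["UAV-1", "UAV-3"]))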

  12. Simulation Platforms at GMU
  • DDD 4.0: 1-4 person teams; large numbers of UVs/agents
  • Adaptive Delegation Interface (ADI): designed for planning, executing, and monitoring UV movements
    - Adaptable: high-level plans can be proposed by the user and modified by the automation
    - Adaptive: UVs can autonomously adjust to certain events in the scenario

  13. Adaptive Delegation for Planning
  • Delegation interfaces for execution: many human-robot interfaces are primarily execution-based; RoboFlag is an example of an execution-based delegation interface
  • Delegation interfaces for planning: there is little prior work on real-time planning with robotic vehicles; related work on route planning for pilots includes Layton et al. (1994)
  • Preliminary research under DARPA's Multiagent Adjustable Autonomy Framework (MAAF) for Multi-Robot, Multi-Human Teams (with Amos Freedy)

  14. Adaptive Delegation Concept
  [Concept diagram: the robotic operator sends plan instructions through an adaptive interface to an automated planning assistant (Machinetta), which returns planning feedback; a doctrine checker verifies plans against doctrine using a shared task model; instructions are then sent to the vehicles in the battle space, with automated plan generation and plan execution monitoring.]
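
One way to read this flow is as a plan-submission loop in which the doctrine checker either returns planning feedback to the operator or lets the instructions go out to the vehicles. The sketch below assumes two simple illustrative doctrine rules; the actual doctrine checker and Machinetta interfaces are not specified on the slide.

def doctrine_check(plan, doctrine):
    # Return a list of doctrine violations; an empty list means the plan passes.
    issues = []
    if plan["num_uavs"] < doctrine["min_uavs"]:
        issues.append("plan must include at least one UAV")
    if plan["duration_min"] > doctrine["max_duration_min"]:
        issues.append("plan exceeds the maximum mission duration")
    return issues

def submit_plan(plan, doctrine):
    issues = doctrine_check(plan, doctrine)
    if issues:
        return "planning feedback: " + "; ".join(issues)  # back to the operator
    return "instructions sent to vehicles"                # on to execution monitoring

doctrine = {"min_uavs": 1, "max_duration_min": 60}
print(submit_plan({"num_uavs": 0, "duration_min": 45}, doctrine))
print(submit_plan({"num_uavs": 1, "duration_min": 45}, doctrine))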

  15. Automated Route Planning
  [Diagram: task ordering feeds Machinetta, followed by post-processing; example region-traversal trajectories over time steps are shown for UV1 and UV2.]
  • Task ordering goes through all possible permutations of the given tasks (if requested) and submits to Machinetta a specific task order to be followed.
  • Machinetta generates the optimized path plan to reach the target locations.
  • Post-processing makes use of the Machinetta-generated paths (for the SEARCH task type) and introduces loading/unloading time (for the EXTRACT task type) into the plans.
  • Given the target location, the vehicle's current location, and the importance of time, fuel, the task, and risk avoidance, Machinetta iterates through all possible region-traversal options and converges on the best trajectory in terms of time, fuel, and risk. (One such trajectory for a single vehicle, 4 time steps, and 9 regions is shown in the diagram.)
  • Machinetta takes into account both user-specified parameters (such as task and risk importance) and vehicle capabilities (such as speed and fuel), and generates plans that can implement complex behaviors such as delayed action and risk avoidance.
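
The task-ordering step described above (enumerate permutations, score each ordering against user-weighted time and risk, keep the best) can be sketched as follows. The cost model is a stand-in: the slides attribute the actual region-by-region path optimization to Machinetta.

from itertools import permutations
import math

def leg_cost(a, b, risk_map, w_time=1.0, w_risk=1.0):
    # Cost of travelling from waypoint a to b: distance plus weighted risk.
    return w_time * math.dist(a, b) + w_risk * risk_map.get((a, b), 0.0)

def best_task_order(start, tasks, risk_map, w_time=1.0, w_risk=1.0):
    # Brute force over task permutations; returns (best order, its cost).
    best, best_cost = None, float("inf")
    for order in permutations(tasks):
        cost, pos = 0.0, start
        for task in order:
            cost += leg_cost(pos, task, risk_map, w_time, w_risk)
            pos = task
        if cost < best_cost:
            best, best_cost = order, cost
    return best, best_cost

tasks = [(3, 4), (6, 1), (1, 7)]
risk_map = {((0, 0), (6, 1)): 5.0}  # a risky leg between two waypoints
print(best_task_order((0, 0), tasks, risk_map, w_risk=2.0))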

  16. Multiagent Adjustable Autonomy Framework
  1. Plan instantiation: a plan is given to the UVs, and the UVs carry out the plan
  2. Autonomous behavior: an obstacle appears on the path; UGV 1 avoids the obstacle
  3. Dynamic reallocation: UGV 2 suffers a camera failure; UGV 1 then provides the view
  4. Dynamic reallocation: UGV 1 loses comms; a UAV assists and functions as a relay station
  5. Adjustable autonomy: UGV 2 asks the human to confirm; the human responds by confirming the IED presence
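
A minimal sketch of the event handling implied by steps 2-5, using the vehicles from the slide's example scenario; the dispatch logic and function names are assumptions, not the MAAF implementation.

def handle_event(event, team):
    kind = event["type"]
    if kind == "obstacle":            # step 2: autonomous behavior
        return f"{event['vehicle']} replans around the obstacle autonomously"
    if kind == "camera_failure":      # step 3: dynamic reallocation
        backup = next(v for v in team if v != event["vehicle"])
        return f"{backup} reassigned to provide a camera view for {event['vehicle']}"
    if kind == "comms_loss":          # step 4: dynamic reallocation
        return f"UAV repositioned to act as a comms relay for {event['vehicle']}"
    if kind == "confirm_ied":         # step 5: adjustable autonomy
        return f"request to human operator: confirm the IED seen by {event['vehicle']}"
    return "no action"

team = ["UGV 1", "UGV 2", "UAV 1"]
print(handle_event({"type": "camera_failure", "vehicle": "UGV 2"}, team))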

  17. The Adaptive Delegation Interface
  [Interface screenshot: task library, mission wizard and compose view, mission map, and automated planning assistant]

  18. Mission Planning & Execution
  [Interface walkthrough: the Mission Map, Task Library, and Compose View are used to build plans from super plays (e.g., Reconnaissance, Rescue & Extract), plays (e.g., UAV Recon clockwise or counter-clockwise, UGV Recon & Extract), and tasks (e.g., move, search, extract, go home), together with mission parameters, vehicle parameters, and reactions (avoid, wait, stop). The Mission Wizard steps through Mission, Compose, and Review to Mission Execution. The Automated Planning Assistant compares generated plans (e.g., Plan A vs. Plan B by type, assets, time, damage, victims, and overall score), posts messages such as "You should include a UAV in the plan before submission" and "New plans have been generated", and provides controls to check, finish, submit, review, modify, and execute plans.]
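
The plan controls named on this slide (check, submit, review, modify, execute, finish) suggest a simple plan lifecycle. The sketch below assumes one plausible set of states and transitions; the slide itself does not spell these out.

ALLOWED = {
    "composing": {"check plan": "checked"},
    "checked":   {"submit plan": "submitted", "modify plan": "composing"},
    "submitted": {"review plan": "reviewing"},
    "reviewing": {"modify plan": "composing", "execute plan": "executing"},
    "executing": {"finish plan": "finished"},
}

def step(state, action):
    # Advance the mission plan through the wizard; reject invalid actions.
    if action not in ALLOWED.get(state, {}):
        raise ValueError(f"'{action}' is not allowed while {state}")
    return ALLOWED[state][action]

state = "composing"
for action in ("check plan", "submit plan", "review plan", "execute plan"):
    state = step(state, action)
print(state)  # executing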

  19. [Mission execution monitoring screenshot: an Agent Control Panel and an Agent Status Panel listing units (e.g., Talon Unit Alpha), assets, task status, issues (e.g., cannot see IED, camera failure), and role reallocations (e.g., Talon 1 re-defining its role to provide camera support; Talon 2 moving to the IED location to disarm the IED), alongside vehicle and sensor controls, a message center, and a timeline.]

  20. Advantages of Using the Adaptive Delegation Interface
  • Users can give high-level commands to a set of vehicles; there is no need to input each task individually
  • Automation can generate and finish plans
  • Humans can adjust plans as needed
  • Users can monitor executed plans and intervene if necessary
  • Minimal training is needed (20-30 min.)
