
Mental Models for Human-Robot Interaction




Presentation Transcript


  1. Mental Models for Human-Robot Interaction
  Christian Lebiere (cl@cmu.edu)¹, Florian Jentsch² and Scott Ososky²
  ¹Psychology Department, Carnegie Mellon University
  ²Institute for Simulation and Training, University of Central Florida

  2. Cognitive Models of Mental Models
  • Mental models provide a representation of situation, various entities, capabilities, and past decisions/actions
  • Current models are non-computational descriptions
  • Cognitive models can provide a computational link to the overall robotic intelligence architecture for dual uses:
    • Provide a quantitative, predictive understanding of human team shared mental models
    • Support improved design of human-robot interaction tools and protocols
    • Provide a cognitively-based computational basis for implementation of mental models in robots

  3. Representation Components
  • Mental model representation
    • Ontology of concepts and decisions
      • Lexical (WordNet), Structural (FrameNet), Statistical (LSA)
    • Symbolic frameworks
      • Decision trees, semantic networks
    • Statistical frameworks
      • Bayesian networks, semantic similarities
  • Knowledge of task situation
    • Situation awareness – mapping to levels of SA
    • Environment limitations – who sees/knows what (perspective)
    • Architectural limitations – who remembers what (WM, decay)

  4. Reasoning and inference
  • Inferring mental models
    • Instance-based learning (Gonzalez & Lebiere)
    • E.g., learning to control systems by observation or imitation
  • Inferring current knowledge
    • Perspective-taking in spatial domain (Trafton)
    • E.g., hide and seek, collaborative work
  • Predicting decisions
    • Theory-of-mind recursion (Trafton, Bringsjord)
    • Imagery-based simulation (Wintermute)
    • Shared plan execution in MOUT (Best & Lebiere)
    • Sequence learning in game environments (West & Lebiere)
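The instance-based learning approach named above can be illustrated with a minimal sketch: past decisions are stored as instances, and a new situation is resolved by retrieving the decision of the most similar stored instance. All names and attributes here are hypothetical, not the authors' implementation.

```python
# Minimal sketch of instance-based decision making (hypothetical names,
# not the authors' code): store past situation/decision instances and
# resolve a new situation by retrieving the most similar one.

def similarity(a, b):
    """Fraction of attributes on which two situations agree."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def decide(situation, instances):
    """Pick the decision of the most similar stored instance."""
    best = max(instances, key=lambda inst: similarity(situation, inst["situation"]))
    return best["decision"]

instances = [
    {"situation": {"armed": True,  "terrain_ok": False}, "decision": "soldier_only"},
    {"situation": {"armed": False, "terrain_ok": True},  "decision": "robot_only"},
    {"situation": {"armed": True,  "terrain_ok": True},  "decision": "team"},
]

print(decide({"armed": True, "terrain_ok": True}, instances))  # -> team
```

In a full ACT-R treatment, retrieval would also weigh recency and frequency of the stored instances rather than similarity alone.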

  5. Cognitive Architectures
  • Computational representation of invariant cognitive mechanisms
  • Behavior selection
    • Production systems
    • Utility – rewards and costs
  • Memories
    • Working memory: buffers
    • Long-term: semantic/episodic
    • Activation mechanisms
  • Learning
    • Symbolic and statistical
  • Human factor limitations
    • Perceptual-motor parameters
  • Individual differences
    • Strategies and knowledge
    • Capacity parameters
  [Architecture diagram (ACT-R modules mapped to brain regions): Intentional Module (aPFC), Declarative Module (Temporal/Hippocampus), Goal Buffer (DLPFC), Retrieval Buffer (VLPFC), Matching (Striatum), Productions (Basal Ganglia), Selection (Pallidum), Execution (Thalamus), Visual Buffer (Parietal), Manual Buffer (Motor), Visual Module (Occipital/etc.), Manual Module (Motor/Cerebellum), Environment]
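The "activation mechanisms" bullet refers to ACT-R's base-level learning equation, B_i = ln(Σ_j t_j^(-d)), where the t_j are the times since each past use of a chunk and d is the decay rate (conventionally 0.5). A short sketch of how recency and frequency shape activation:

```python
import math

# ACT-R base-level activation: B_i = ln( sum_j t_j^(-d) ),
# t_j = time since each past use of chunk i, d = decay rate.

def base_level_activation(uses, now, d=0.5):
    return math.log(sum((now - t) ** -d for t in uses))

# A chunk used recently and often is more active than a stale one.
recent = base_level_activation(uses=[9.0, 9.5, 9.9], now=10.0)
stale  = base_level_activation(uses=[1.0], now=10.0)
print(recent > stale)  # -> True
```

In the architecture, this activation (plus noise and partial-matching penalties) determines which chunk the declarative module retrieves and how long retrieval takes.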

  6. Pursuit Task
  • Follow that Guy: human soldier and robot teammate
  • Shared mental model of pursuit situation scenario
  • Set of data encoding various scenarios
    • Items organized according to SMMs held by expert teams (Equipment, Task, Team Interaction, Team)
  • Decision tree built using information from police “foot pursuit” procedures
    • For each decision, the most critical item is listed
    • However, other factors may be considered in weighing decision
  • Loop to end or continue the pursuit given fluid situation
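A decision tree of this kind can be sketched as nested conditionals. The question wording below follows slide 9, but the branch structure and outcomes here are illustrative assumptions, not the authors' actual tree.

```python
# Hypothetical sketch of the "who should pursue?" decision tree.
# Question wording follows the slides; the branching order shown
# here is an illustrative assumption, not the published tree.

def who_pursues(s):
    if not s["immediate_threat"]:
        return "hold position, report incident"
    if not s["backup_available"]:
        return "hold position, report incident"
    if not s["comms_reliable"]:
        return "soldier only pursuit"   # robot cannot be supervised remotely
    if not s["terrain_negotiable"]:
        return "soldier only pursuit"   # terrain rules the robot out
    if s["suspects_armed"]:
        return "robot only pursuit"     # keep the soldier out of danger
    return "team pursuit"

print(who_pursues({
    "immediate_threat": True, "backup_available": True,
    "comms_reliable": True, "terrain_negotiable": True,
    "suspects_armed": False,
}))  # -> team pursuit
```

The cognitive model described later does not hardcode logic like this; it recovers equivalent behavior by retrieving past decision instances from memory.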

  7. Data

  8. Scenario Data and Decision Tree

  9. Part 1: Who should pursue?
  [Decision-tree diagram. Entry: immediate threat / critical situation? If not, hold position and report the incident. Questions along the branches: Is the threat immediate (civilians, etc.)? Current last known location? Is backup support available? Is H-R communication reliable (5x5)? Is the terrain negotiable for the robot? Are suspects armed? Are sensors reliable in the search area? Outcomes: team pursuit, soldier-only pursuit, or robot-only pursuit, each continuing to Part 2 (pursuit loop). Data items referenced: SK-E3, EQ-C3, EQ-S3, SK-S2, SK-S7, SK-S8, IA-A1.]

  10. Deciding whether to pursue
  [Decision-tree diagram. Questions: Is the suspect armed? Was this, or is there potential for, a violent crime? Are communications functioning properly? What are the weather conditions (good/fair vs. poor)? What are the traveling surface conditions? What is the pedestrian traffic like (light/moderate vs. heavy)? Can a perimeter be set up to contain the suspect? Do you have line of sight with the suspect? Do you have supervisor clearance? Are backup units available to assist you? Can you apprehend them at a later time? Do you know the identity of the suspect? Outcomes: begin or continue pursuit, or discontinue and report. Data items referenced: SK-S1, SK-S2, SK-S3, SK-A1, SK-A2, SK-E1, SK-E2, SK-E3, IA-A1, IA-R1, EQ-C3, TM-W1.]

  11. General Cognitive Model
  • Develop general model that takes mental models in the form of decision trees and learns to retrieve and execute them
    • Each decision is represented as a sequence of chained steps
    • Each piece of data is represented as a separate chunk
  • Model (7 production rules) depends on declarative memory to retrieve rule steps, data items and decision instances
    • No hardcoded decision logic
  • Each decision depends on matching against past instances, combining activation recency, frequency and partial matching
    • Stochasticity of activation results in probabilistic decisions
    • Run model in Monte Carlo mode for decision distribution
  • Cross-validation: train on some scenarios, test on others
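The last two bullets can be sketched concretely: partial matching penalizes each candidate decision by how poorly its best past instance fits the current situation, activation noise makes individual runs stochastic, and repeated runs yield a Monte Carlo estimate of the decision distribution. Parameter values and function names below are illustrative assumptions, not the model's.

```python
import random

# Sketch of stochastic, partial-matching retrieval (hypothetical
# parameters, not the authors' model): each decision is scored by
# similarity to its best past instance, penalized for mismatch,
# plus Gaussian activation noise; Monte Carlo runs give probabilities.

def noisy_score(sim, mismatch_penalty=1.0, noise_sd=0.5):
    return -mismatch_penalty * (1.0 - sim) + random.gauss(0.0, noise_sd)

def monte_carlo(decision_sims, runs=10_000):
    """decision_sims: {decision: similarity of its best past instance}."""
    counts = {d: 0 for d in decision_sims}
    for _ in range(runs):
        winner = max(decision_sims, key=lambda d: noisy_score(decision_sims[d]))
        counts[winner] += 1
    return {d: c / runs for d, c in counts.items()}

dist = monte_carlo({"continue pursuit": 0.9, "discontinue": 0.4})
print(dist["continue pursuit"] > dist["discontinue"])  # -> True
```

Because the noise never fully swamps the similarity difference, the better-matching decision wins most runs but not all, which is exactly the probabilistic behavior the slide describes.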

  12. Individual Decision Inference

  13. Overall Decision Agreement

  14. Generalized Condition
  • 35 scenarios
  • 3 experts
  • Intermediate decisions
  • Relative rankings
  • Desirability ratings
  • Comments

  15. Results
  • Match to first and last ranks, poor match in the middle
  • Slightly different ratings pattern
  • Comparable cross-expert correlations

  16. Learning
  • Proceduralize individual steps from declarative instructions to production rules to replicate learning curve from novice to proficiency and expertise
  • Apply feature selection using utility learning to encode and use only a subset of data items for each decision
  • Learn shortcuts that combine multiple individual binary decisions into a single, multi-outcome decision
  • Generate rankings/ratings from probability judgments generated from activation of memory retrievals
  • Abstract decision instances into discrete types
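The utility learning mentioned above is, in ACT-R, a difference-learning rule: U ← U + α(R − U), so a production's utility drifts toward the average reward it earns. A minimal sketch (the feature-selection use here is an assumption about how the rule would be applied):

```python
# ACT-R utility learning rule: U <- U + alpha * (reward - U).
# Sketch of how repeated reward drives a production's utility
# toward the reward value (alpha value is illustrative).

def update_utility(u, reward, alpha=0.2):
    return u + alpha * (reward - u)

u = 0.0
for _ in range(50):
    u = update_utility(u, reward=1.0)
print(round(u, 3))  # -> 1.0
```

For feature selection, productions that attend to uninformative data items would earn less reward, lose utility, and stop being selected, leaving only the useful subset in play.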

  17. Future Work
  • Validate model against human participant data along entire learning curve and broad range of situations
  • Explore Bayesian network formalism as alternative to enhance generalization in multi-step decisions
  • Integrate cognitive model in multi-agent simulations to validate computational mental model in dynamic decision-making setting
  • Integrate computational cognitive model on robotic platform to assess ability to improve human-robot interaction through shared models
