
Intelligent Behaviors for Simulated Entities





Presentation Transcript


  1. Intelligent Behaviors for Simulated Entities I/ITSEC 2006 Tutorial Presented by: Ryan Houlette Stottler Henke Associates, Inc. houlette@stottlerhenke.com 617-616-1293 Jeremy Ludwig Stottler Henke Associates, Inc. ludwig@stottlerhenke.com 541-302-0929

  2. Outline • Defining “intelligent behavior” • Authoring methodology • Technologies: • Cognitive architectures • Behavioral approaches • Hybrid approaches • Conclusion • Questions

  3. The Goal • Intelligent behavior • a.k.a. entities acting autonomously • generally replacements for humans • when humans are not available • scheduling issues • location • shortage of necessary expertise • simply not enough people • when humans are too costly

  4. “Intelligent Behavior” • Pretty vague! • General human-level AI not yet possible • computationally expensive • knowledge authoring bottleneck • Must pick your battles • what is most important for your application • what resources are available

  5. Decision Factors • Entity “skill set” • Fidelity • Autonomy • Scalability • Authoring

  6. Factor: Entity Skill Set • What does the entity need to be able to do? • follow a path • work with a team • perceive its environment • communicate with humans • exhibit emotion/social skills • etc. • Depends on purpose of simulation, type of scenario, echelon of entity

  7. Factor: Fidelity • How accurate does the entity’s behavior need to be? • correct execution of a task • correct selection of tasks • correct timing • variability/predictability • Again, depends on purpose of simulation and echelon • training => believability • analysis => correctness

  8. Factor: Autonomy • How much direction does the entity need? • explicitly scripted • tactical objectives • strategic objectives • Behavior reusable across scenarios • Dynamic behavior => less brittle

  9. Factor: Scalability • How many entities are needed? • computational overhead • knowledge/behavior authoring costs • Can be mitigated • aggregating entities • distributing entities

  10. Factor: Authoring • Who is authoring the behaviors? • programmers • knowledge engineers • subject matter experts • end users / soldiers • Training/skills required for authoring • Quality of authoring tools • Ease of modifying/extending behaviors

  11. Choosing an Approach • Weigh the five factors against each other: skill set, fidelity, autonomy, scalability, ease of authoring • Also consider ease of integration with the simulation

  12. Agent Technologies • Wide range of possible approaches • Will discuss the two extremes of the spectrum: deliberative cognitive architectures (EPIC, ACT-R, Soar) and reactive behavioral approaches (scripting, FSMs)

  13. Authoring Methodologies • Components: Behavior Model, Agent Architecture, Simulation

  14. Basic Authoring Procedure • Determine desired behavior • Build behavior model • Run simulation • Evaluate entity behavior • If correct: DONE! Otherwise refine behavior model and repeat
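The build-run-evaluate-refine loop on this slide can be sketched in code. This is a minimal illustration; the functions passed in are hypothetical placeholders, not a real API.

```python
def author_behavior(build, run, acceptable, refine, max_iters=50):
    """Determine desired behavior -> build -> run -> evaluate -> refine."""
    model = build()                       # build behavior model
    for _ in range(max_iters):
        trace = run(model)                # run simulation
        if acceptable(trace):             # evaluate entity behavior
            return model                  # DONE!
        model = refine(model, trace)      # refine behavior model
    return model

# Toy usage: the "model" is a single parameter tuned toward a target of 10.
result = author_behavior(
    build=lambda: 0,
    run=lambda m: m,                      # trace is just the model's output
    acceptable=lambda t: abs(t - 10) < 1,
    refine=lambda m, t: m + 2,            # crude refinement step
)
```

The point of the sketch is that evaluation and refinement sit inside the loop: the model is revised against observed simulation behavior, not written once.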

  15. Iterative Authoring • Often useful to start with limited set of behaviors • particularly when learning new architecture • depth-first vs. breadth-first • Test early and often • Build initial model with revision in mind • good software design principles apply: modularity, encapsulation, loose coupling • Determining why model behaved incorrectly can be difficult • some tools can help provide insight

  16. The Knowledge Bottleneck • Model builder is not subject matter expert • Transferring knowledge is labor-intensive • For TacAir-Soar, 70-90% of model dev. time • To reduce the bottleneck: • Repurpose existing models • Use SME-friendly modeling tools • Train SMEs in modeling skills • => Still an unsolved problem

  17. The Simulation Interface • Simulation sets bounds of behavior • the primitive actions entities can perform • the information about the world that is available to entities • Can be useful to “move interface up” • if simulation interface is too low-level • abstract away simulation details • in wrapper around agent architecture • in “library” within the behavior model itself • enables behavior model to be in terms of meaningful units of behavior
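As a rough illustration of “moving the interface up,” the sketch below wraps a hypothetical low-level simulation API in behavior-level operations. All class and method names here are invented for the example, not any real simulation interface.

```python
# Hypothetical low-level simulation API (stand-in for a real interface).
class LowLevelSim:
    def __init__(self):
        self.log = []
    def set_waypoint(self, x, y):
        self.log.append(("waypoint", x, y))
    def set_speed(self, s):
        self.log.append(("speed", s))
    def set_sensor_mode(self, mode):
        self.log.append(("sensor", mode))

# Wrapper that "moves the interface up": the behavior model talks in
# meaningful units of behavior rather than raw simulation primitives.
class EntityInterface:
    def __init__(self, sim):
        self.sim = sim
    def advance_to(self, x, y, speed=5):
        self.sim.set_speed(speed)
        self.sim.set_waypoint(x, y)
    def overwatch(self):
        self.sim.set_speed(0)
        self.sim.set_sensor_mode("scan")

sim = LowLevelSim()
entity = EntityInterface(sim)
entity.advance_to(100, 200)
entity.overwatch()
```

The behavior model now only sees advance_to and overwatch; if the simulation's primitive actions change, only the wrapper needs updating.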

  18. Cognitive Architectures • Overview • EPIC, ACT-R, & Soar • Examples of Cognitive Models • Strengths / Weaknesses of Cognitive Architectures

  19. Introduction • What is a cognitive architecture? • “a broad theory of human cognition based on a wide selection of human experimental data and implemented as a running computer simulation” (Byrne, 2003) • Why cognitive architectures? • Advance psychological theories of cognition • Create accurate simulations of human behavior

  20. Introduction • What is cognition? • Where does psychology fit in?

  21. Cognitive Architecture Components

  22. A Theory – The Model Human Processor • Some principles of operation • Recognize-act cycle • Fitts’ law • Power law of practice • Rationality principle • Problem space principle (from Card, Moran, & Newell, 1983)
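Two of these principles have simple quantitative forms that are easy to demonstrate: Fitts’ law, MT = a + b log2(2D/W), and the power law of practice, T = T1 · N^(−alpha). The coefficient values in the sketch below are illustrative, not empirical.

```python
import math

# Fitts' law: movement time MT = a + b * log2(2D / W).
# Coefficients a and b are illustrative, not fitted values.
def fitts_mt(distance, width, a=0.1, b=0.15):
    return a + b * math.log2(2 * distance / width)

# Power law of practice: time on trial N is T1 * N ** -alpha.
def practice_time(trial, t1=2.0, alpha=0.4):
    return t1 * trial ** (-alpha)

# Doubling the distance to a same-sized target adds a constant
# increment (b * log2(2) = b) to the predicted movement time:
mt_near = fitts_mt(distance=100, width=20)
mt_far = fitts_mt(distance=200, width=20)
```

Laws like these are what let cognitive architectures predict human timing at millisecond grain sizes.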

  23. Architecture • Definition • “a broad theory of human cognition based on a wide selection of human experimental data and implemented as a running computer simulation” (Byrne, 2003) • Two main components in modeling • Cognitive model programming language • Runtime Interpreter

  24. EPIC Architecture • Processors • Cognitive • Perceptual • Motor • Operators • Cognitive • Perceptual • Motor • Knowledge Representation (from Kieras, http://www.eecs.umich.edu/~kieras/epic.html)

  25. Model • Task Description • Task Environment • Task Strategy • Architecture Runtime • Architecture Language

  26. Task Description • There are two points on the screen: A and B. • The task is to point to A with the right hand, and press the “Z” key with the left hand when it is reached. • Then point from A to B with the right hand and press the “Z” key with the left hand. • Finally point back to A again, and press the “Z” key again.

  27. Task Environment • (figure: points A and B on the screen)

  28. Task Strategy – EPIC Production Rules

  29. EPIC Production Rule

  (Top_point_A
   IF
   (
    (Step Point AtA)
    (Motor Manual Modality Free)
    (Motor Ocular Modality Free)
    (Visual ?object Text My_Point_A)
   )
   THEN
   (
    (Send_to_motor Manual Perform Ply Cursor ?object Right)
    (Delete (Step Point AtA))
    (Add (Step Click AtA))
   ))
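The recognize-act cycle behind rules like this can be illustrated with a toy production interpreter: each cycle, a rule whose conditions are all present in working memory fires, deleting and adding items. This is a simplified sketch, not the EPIC architecture itself (real architectures differ in conflict resolution and parallelism), and the working-memory items are invented for the example.

```python
# Toy recognize-act cycle: rules are (conditions, additions, deletions),
# each a set of working-memory items.
def recognize_act(working_memory, rules, max_cycles=10):
    wm = set(working_memory)
    for _ in range(max_cycles):
        fired = False
        for condition, additions, deletions in rules:
            if condition <= wm:          # all conditions present in WM
                wm -= set(deletions)
                wm |= set(additions)
                fired = True
                break                    # simplification: one rule per cycle
        if not fired:
            break                        # quiescence: no rule matches
    return wm

rules = [
    ({"step-point-A", "manual-free"},            # IF
     {"step-click-A"}, {"step-point-A"}),        # THEN add / delete
    ({"step-click-A", "manual-free"},
     {"done"}, {"step-click-A"}),
]
wm = recognize_act({"step-point-A", "manual-free"}, rules)
```

As in the EPIC rule above, deleting and adding "step" items is what sequences the task: each firing enables the next rule.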

  30. ACT-R and Soar • Motivations • Features • Models

  31. Initial Motivations • ACT-R • Memory • Problem solving • Soar • Learning • Problem solving

  32. ACT-R Architecture (from Budiu, R., http://actr.psy.cmu.edu/about/)

  33. Some ACT-R Features • Declarative memory stored in chunks • Memory activation • Buffer size between modules is one chunk • One rule per cycle • Learning • Memory retrieval, production utilities • New productions, new chunks
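The memory-activation feature follows ACT-R’s base-level learning equation, B = ln(Σ_j t_j^(−d)), where t_j is the time since the j-th presentation of a chunk and d is the decay rate (0.5 by default in ACT-R). A small illustrative sketch of the equation, not of the ACT-R software itself:

```python
import math

# ACT-R base-level activation: B = ln(sum_j (now - t_j) ** -d),
# with decay rate d = 0.5 by default.
def base_level_activation(presentation_times, now, d=0.5):
    return math.log(sum((now - t) ** (-d) for t in presentation_times))

# A chunk presented recently and often is more active (and so more
# likely to be retrieved, and faster) than a stale one:
recent = base_level_activation([90, 95, 99], now=100)
stale = base_level_activation([10], now=100)
```

Activation like this is what drives the retrieval errors modeled in the addition task on slide 35: weakly activated facts lose out to similar, partially matching ones.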

  34. ACT-R 6.0 IDE

  35. Task Description • Simple Addition • 1 + 3 = 4 • 2 + 2 = 4 • Goal: mimic the performance of four-year-olds on simple addition tasks • This is a memory retrieval task, where each number is retrieved (e.g. 1 and 3) and then an addition fact is retrieved (1 + 3 = 4) • The task demonstrates partial matching of declarative memory items, and requires tweaking a number of parameters. • From the ACT-R tutorial, Unit 6

  36. ACT-R 6.0 Production Rules

  (p retrieve-first-number
     =goal>
        isa problem
        arg1 =one
        state nil
  ==>
     =goal>
        state encoding-one
     +retrieval>
        isa number
        name =one
  )

  (p encode-first-number
     =goal>
        isa problem
        state encoding-one
     =retrieval>
        isa number
  ==>
     =goal>
        state retrieve-two
        arg1 =retrieval
  )

  37. Some Relevant ACT-R Models • Best, B., Lebiere, C., & Scarpinatto, C. (2002). A model of synthetic opponents in MOUT training simulations using the ACT-R cognitive architecture. In Proceedings of the Eleventh Conference on Computer Generated Forces and Behavior Representation. Orlando, FL. • Craig, K., Doyal, J., Brett, B., Lebiere, C., Biefeld, E., & Martin, E. (2002). Development of a hybrid model of tactical fighter pilot behavior using IMPRINT task network model and ACT-R. In Proceedings of the Eleventh Conference on Computer Generated Forces and Behavior Representation. Orlando, FL.

  38. Soar Architecture • Problem Space Based

  39. Some Soar Features • Problem space based • Attribute/value hierarchy (WM) forms the current state • Productions (LTM) transform the current state to achieve goals by applying operators • Cycle • Input • Elaborations fired • All possible operators proposed • One selected • Operator applied • Output • Impasses & Learning
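The propose/select/apply portion of this cycle can be sketched as follows. The operator names loosely echo the TankSoar examples on the next slides, but the state representation and preference scheme here are illustrative only, not Soar itself.

```python
# One simplified Soar-style decision cycle: propose all applicable
# operators, select one by preference, then apply it to the state.
def decision_cycle(state, proposers, prefer, appliers):
    candidates = [op for p in proposers for op in p(state)]   # propose
    if not candidates:
        return state, None                                    # impasse
    op = prefer(candidates)                                   # select one
    return appliers[op["name"]](state, op), op                # apply

state = {"blocked-forward": True, "direction": None}

def propose_move(s):
    return [] if s["blocked-forward"] else [{"name": "move"}]

def propose_turn(s):
    return [{"name": "turn", "dir": "left"}] if s["blocked-forward"] else []

appliers = {
    "move": lambda s, o: {**s, "direction": "forward"},
    "turn": lambda s, o: {**s, "direction": o["dir"],
                          "blocked-forward": False},
}
state, op = decision_cycle(state, [propose_move, propose_turn],
                           prefer=lambda ops: ops[0], appliers=appliers)
```

In real Soar, proposal, preference, and application are each expressed as productions (as on slides 42-44), and an empty or tied candidate set raises an impasse that can trigger subgoaling and learning.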

  40. Soar 8.6.2 IDE

  41. Task Description • Control the behavior of a Tank on the game board. • Each tank has a number of sensors (e.g. radar) to find enemies, missiles to launch at enemies, and limited resources • From the Soar Tutorial

  42. Propose Moves

  sp {propose*move
     (state <s> ^name wander
                ^io.input-link.blocked.forward no)
  -->
     (<s> ^operator <o> +)
     (<o> ^name move
          ^actions.move.direction forward)}

  sp {propose*turn
     (state <s> ^name wander
                ^io.input-link.blocked <b>)
     (<b> ^forward yes
          ^ { << left right >> <direction> } no)
  -->
     (<s> ^operator <o> + =)
     (<o> ^name turn
          ^actions <a>)
     (<a> ^rotate.direction <direction>
          ^radar.switch on
          ^radar-power.setting 13)}

  sp {propose*turn*backward
     (state <s> ^name wander
                ^io.input-link.blocked <b>)
     (<b> ^forward yes ^left yes ^right yes)
  -->
     (<s> ^operator <o> +)
     (<o> ^name turn
          ^actions.rotate.direction left)}

  43. Prefer Moves

  sp {select*radar-off*move
     (state <s> ^name wander
                ^operator <o1> +
                ^operator <o2> +)
     (<o1> ^name radar-off)
     (<o2> ^name << turn move >>)
  -->
     (<s> ^operator <o1> > <o2>)}

  44. Apply Move

  sp {apply*move
     (state <s> ^operator <o>
                ^io.output-link <out>)
     (<o> ^direction <direction>
          ^name move)
  -->
     (<out> ^move.direction <direction>)}

  45. Elaborations

  sp {elaborate*state*missiles*low
     (state <s> ^name tanksoar
                ^io.input-link.missiles 0)
  -->
     (<s> ^missiles-energy low)}

  sp {elaborate*state*energy*low
     (state <s> ^name tanksoar
                ^io.input-link.energy <= 200)
  -->
     (<s> ^missiles-energy low)}

  46. Some Relevant Soar Models • Wray, R.E., Laird, J.E., Nuxoll, A., Stokes, D., Kerfoot, A. (2005). Synthetic adversaries for urban combat training. AI Magazine, 26(3):82-92. • Jones, R. M., Laird, J. E., Nielsen, P. E., Coulter, K. J., Kenny, P., & Koss, F. V. (1999). Automated intelligent pilots for combat flight simulation. AI Magazine, 20(1), 27-41.

  47. Strengths / Weaknesses of Cognitive Architectures • Strengths • Supports aspects of intelligent behavior, such as learning, memory, and problem solving, not supported by other types of architectures • Can be used to accurately model human behavior, especially human-computer interaction, at small grain sizes (measured in ms) • Weaknesses • Can be difficult to author, modify, and debug complicated sets of production rules • Possible mitigations: high-level modeling languages (e.g. CogTool, Herbal, High Level Symbolic Representation language) and automated model generation (e.g. Konik & Laird, 2006) • Computational issues when scaling to large numbers of entities

  48. Behavioral Approaches • Focus is on externally-observable behavior • no explicit modeling of knowledge/cognition • instead, behavior is explicitly specified: “Go to destination X, then attack enemy.” • Often a natural mapping from doctrine to behavior specifications

  49. Hard-coding Behaviors • Simplest approach is to write behaviors directly in C++/Java:

  MoveTo(location_X);
  AcquireTarget(target);
  FireAt(target);

  • Don’t do this! • Can only be modified by programmers • Hard to update and extend • Behavior models not easily portable

  50. Scripting Behaviors • Write behaviors in scripting language • UnrealScript • Avoids many problems of hard-coding • not tightly coupled to simulation code • more portable • often simplified to be easier to learn & use • Fine for linear sequences of actions, but does not scale well to complex behavior
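A minimal illustration of the scripting idea: the behavior is plain data interpreted at runtime, so it can be edited without recompiling simulation code. The action names echo the hard-coding example on the previous slide, but everything here is a hypothetical sketch, not a real scripting engine.

```python
# A behavior "script" is just an ordered list of (action, args) steps,
# interpreted against a table of primitive actions.
def run_script(script, actions, entity):
    for step, args in script:
        actions[step](entity, *args)

entity = {"position": (0, 0), "target": None, "shots": 0}

# Primitive actions supplied by the simulation side of the interface.
actions = {
    "move_to":        lambda e, pos: e.update(position=pos),
    "acquire_target": lambda e, tgt: e.update(target=tgt),
    "fire_at":        lambda e, tgt: e.update(shots=e["shots"] + 1),
}

# The linear sequence from the previous slide, now as editable data:
patrol_and_engage = [
    ("move_to", [(10, 20)]),
    ("acquire_target", ["enemy-1"]),
    ("fire_at", ["enemy-1"]),
]
run_script(patrol_and_engage, actions, entity)
```

This works well for linear sequences like the one shown; the scaling problem arises when scripts need branching, interruption, and reaction to a changing world, which motivates the FSM and hybrid approaches discussed next.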
