
Learning Agents Center George Mason University






Presentation Transcript


  1. Disciple Reasoning and Learning Agents Gheorghe Tecuci with Mihai Boicu, Dorin Marcu, Bogdan Stanescu, Cristina Boicu, Marcel Barbulescu Learning Agents Center George Mason University Symposium on Reasoning and Learning in Cognitive Systems Stanford, CA, 20-21 May 2004

  2. Overview
     • Research Problem, Approach, and Application
     • Problem Solving Method: Task Reduction
     • Learnable Knowledge Representation: Plausible Version Spaces
     • Multistrategy Learning during Problem Solving
     • Agent Development Experiments
     • Teaching and Learning Demo
     • Acknowledgements

  3. Research Problem and Approach
     Research Problem: Elaborate a theory, methodology, and family of systems for the development of knowledge-based agents by subject matter experts, with limited assistance from knowledge engineers.
     Approach: Develop a learning agent that can be taught directly by a subject matter expert while they solve problems in cooperation.
     • The agent learns from the expert, building, verifying, and improving its knowledge base.
     • The expert teaches the agent to perform various tasks in a way that resembles how the expert would teach a person.
     [Diagram: agent architecture combining 1. Mixed-initiative problem solving, 2. Teaching and learning, 3. Multistrategy learning, over an Interface, a Problem Solving module, a Learning module, and a knowledge base of Ontology + Rules]

  4. Sample Domain: Center of Gravity Analysis
     Centers of Gravity: Primary sources of moral or physical strength, power, or resistance of the opposing forces in a conflict.
     Application to current war scenarios (e.g. War on terror, Iraq) with state and non-state actors (e.g. Al Qaeda).
     Identify COG candidates: identify potential primary sources of moral or physical strength, power, and resistance from: Government, Military, People, Economy, Alliances, Etc.
     Test COG candidates: test each identified COG candidate to determine whether it has all the necessary critical capabilities:
     • Which are the critical capabilities?
     • Are the critical requirements of these capabilities satisfied? If not, eliminate the candidate.
     • If yes, do these capabilities have any vulnerability?

  5. Overview
     • Research Problem, Approach, and Application
     • Problem Solving Method: Task Reduction
     • Learnable Knowledge Representation: Plausible Version Spaces
     • Multistrategy Learning during Problem Solving
     • Agent Development Experiments
     • Teaching and Learning Demo
     • Acknowledgements

  6. Problem Solving: Task Reduction
     A complex problem solving task is performed by:
     • successively reducing it to simpler tasks;
     • finding the solutions of the simplest tasks;
     • successively composing these solutions until the solution to the initial task is obtained.
     [Diagram: a reduction tree rooted at task T1, with questions Q, answers A, subtasks T, and solutions S at each level]
     Let T1 be the problem solving task to be performed. Finding a solution is an iterative process where, at each step, we consider some relevant information that leads us to reduce the current task to a simpler task or to several simpler tasks. The question Q associated with the current task identifies the type of information to be considered. The answer A identifies that piece of information and leads us to the reduction of the current task.
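The reduce-solve-compose loop described on this slide can be sketched in a few lines of Python. This is a toy illustration, not the Disciple implementation: the function names, the `REDUCTIONS` table, and the member `Britain_1943` are all hypothetical.

```python
# Minimal sketch of task-reduction problem solving (hypothetical names,
# toy domain; not the actual Disciple implementation).

def solve(task, reduce_task, solve_leaf, compose):
    """Successively reduce a task to simpler tasks, solve the simplest
    ones directly, and compose their solutions into a solution for `task`."""
    subtasks = reduce_task(task)      # a question/answer step selects a reduction
    if not subtasks:                  # simplest task: solve it directly
        return solve_leaf(task)
    solutions = [solve(t, reduce_task, solve_leaf, compose) for t in subtasks]
    return compose(task, solutions)

# Toy domain: a scenario reduces to its opposing forces; a multi-member
# force reduces to its members (Britain_1943 added for illustration).
REDUCTIONS = {
    "Sicily_1943": ["Allied_Forces_1943"],
    "Allied_Forces_1943": ["US_1943", "Britain_1943"],
}

result = solve("Sicily_1943",
               lambda t: REDUCTIONS.get(t, []),
               lambda t: [f"COG candidate for {t}"],
               lambda t, sols: [s for sol in sols for s in sol])  # flatten

print(result)
# ['COG candidate for US_1943', 'COG candidate for Britain_1943']
```

The composition step here simply collects the candidates found for the subtasks; in the deck's COG domain, each member force contributes its own candidates to the parent task's solution.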

  7. COG Analysis: World War II at the time of Sicily 1943
     We need to Identify and test a strategic COG candidate for Sicily_1943.
     Question: Which is an opposing_force in the Sicily_1943 scenario? Answer: Allied_Forces_1943.
     Therefore we need to Identify and test a strategic COG candidate for Allied_Forces_1943.
     Question: Is Allied_Forces_1943 a single_member_force or a multi_member_force? Answer: Allied_Forces_1943 is a multi_member_force.
     Therefore we need to Identify and test a strategic COG candidate for Allied_Forces_1943 which is a multi_member_force.
     Question: What type of strategic COG candidate should I consider for this multi_member_force? Answer: I consider a candidate corresponding to a member of the multi_member_force.
     Therefore we need to Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943.
     Question: Which is a member of Allied_Forces_1943? Answer: US_1943.
     Therefore we need to Identify and test a strategic COG candidate for US_1943.

  8. Overview
     • Research Problem, Approach, and Application
     • Problem Solving Method: Task Reduction
     • Learnable Knowledge Representation: Plausible Version Spaces
     • Multistrategy Learning during Problem Solving
     • Agent Development Experiments
     • Teaching and Learning Demo
     • Acknowledgements

  9. Knowledge Base: Object Ontology + Rules
     Object Ontology:
     • A hierarchical representation of the objects and types of objects.
     • A hierarchical representation of the types of features.

  10. Knowledge Base: Object Ontology + Rules
      EXAMPLE OF REASONING STEP (informal structure):
      We need to Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943.
      Question: Which is a member of Allied_Forces_1943? Answer: US_1943.
      Therefore we need to Identify and test a strategic COG candidate for US_1943.
      LEARNED RULE (formal structure):
      IF: Identify and test a strategic COG candidate corresponding to a member of a force. The force is ?O1.
      Question: Which is a member of ?O1? Answer: ?O2.
      Plausible Upper Bound Condition: ?O1 is multi_member_force, has_as_member ?O2; ?O2 is force.
      Plausible Lower Bound Condition: ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force.
      THEN: Identify and test a strategic COG candidate for a force. The force is ?O2.
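A sketch of how such a learned rule might be applied: bind ?O1 to a given force, then match the plausible upper bound condition against known facts to propose one reduced task per binding of ?O2. The fact list, type table, and function names are hypothetical simplifications; `Italy_1943` is added for illustration.

```python
# Hypothetical sketch of rule application: match the rule's plausible
# upper bound condition against facts in a toy knowledge base.

FACTS = [
    ("European_Axis_1943", "has_as_member", "Germany_1943"),
    ("European_Axis_1943", "has_as_member", "Italy_1943"),
]
# most specific type of each object, plus child -> parent ontology links
TYPE_OF = {"European_Axis_1943": "multi_member_force",
           "Germany_1943": "single_state_force",
           "Italy_1943": "single_state_force"}
PARENT = {"single_state_force": "force", "multi_member_force": "force"}

def is_a(obj, concept):
    """True if obj's type equals or specializes `concept` in the ontology."""
    t = TYPE_OF[obj]
    while t is not None:
        if t == concept:
            return True
        t = PARENT.get(t)
    return False

def apply_rule(o1):
    """Each has_as_member fact of ?O1 whose object satisfies the upper
    bound (?O2 is force) yields one reduced task."""
    if not is_a(o1, "multi_member_force"):  # upper-bound test on ?O1
        return []
    return [f"Identify and test a strategic COG candidate for {o2}"
            for s, f, o2 in FACTS
            if s == o1 and f == "has_as_member" and is_a(o2, "force")]
```

Calling `apply_rule("European_Axis_1943")` proposes one task for Germany_1943 and one for Italy_1943, mirroring the reasoning step shown on the slide.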

  11. Learnable Knowledge Representation
      • Use of the object ontology as an incomplete and evolving generalization language.
      • Use of plausible version spaces (PVS) to represent and use partially learned knowledge: within the universe of instances, a partially learned concept lies between a plausible lower bound and a plausible upper bound.
      Partially learned knowledge elements:
      • Rules with PVS conditions
      • Tasks with PVS conditions
      • Object features with PVS concepts (feature domain: PVS concept; range: PVS concept)
      • Task features with PVS concepts
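The bracketing of a partially learned concept between two bounds can be sketched as follows. This is a hypothetical single-variable simplification: an instance's type is positive if the lower bound covers it, merely plausible if only the upper bound does, and negative otherwise.

```python
# Sketch of classifying against a plausible version space condition
# (hypothetical simplification; the real bounds are full conditions).

PARENT = {  # child -> parent links in a toy object ontology
    "equal_partners_multi_state_alliance": "multi_member_force",
    "multi_member_force": "force",
    "single_state_force": "force",
}

def is_a(concept, ancestor):
    """True if `concept` equals or specializes `ancestor`."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = PARENT.get(concept)
    return False

def classify(concept, lower, upper):
    """Position an instance's type relative to the two bounds."""
    if is_a(concept, lower):
        return "positive"    # covered by the plausible lower bound
    if is_a(concept, upper):
        return "plausible"   # between the bounds: membership undecided
    return "negative"        # outside the plausible upper bound
```

With the bounds from the learned rule on slide 10, an equal-partners alliance is classified positive, a generic multi-member force only plausible, and a single-state force negative.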

  12. Overview
      • Research Problem, Approach, and Application
      • Problem Solving Method: Task Reduction
      • Learnable Knowledge Representation: Plausible Version Spaces
      • Multistrategy Learning during Problem Solving
      • Agent Development Experiments
      • Teaching and Learning Demo
      • Acknowledgements

  13. Control of Modeling, Learning and Problem Solving
      [Diagram: an Input Task enters Mixed-Initiative Problem Solving over the Ontology + Rules knowledge base and yields Generated Reductions. The expert may accept a reduction, reject it, or propose a New Reduction; these outcomes drive Modeling, Task Refinement, and Rule Refinement (Learning), until a Solution is obtained.]

  14. Disciple uses the learned rules in problem solving, and refines them based on the expert's feedback.
      [Diagram, phases Modeling, Learning, Problem Solving, Refining:]
      1. The expert provides an example: "We need to Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943. Which is a member of Allied_Forces_1943? US_1943. Therefore we need to Identify and test a strategic COG candidate for US_1943."
      2. Disciple learns Rule_15 from this example.
      3. Disciple applies Rule_15 to a new case: "We need to Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943. Which is a member of European_Axis_1943? Germany_1943. Therefore we need to Identify and test a strategic COG candidate for Germany_1943."
      4. The expert accepts the example.
      5. Disciple refines Rule_15.

  15. Rule Learning Method
      [Diagram: starting from an example of a task reduction step, hint-guided explanation finds plausible explanations in the knowledge base (an incomplete justification); analogy-based generalization then produces a plausible version space rule with a plausible upper bound (PUB) and a plausible lower bound (PLB).]

  16. Find an explanation of why the example is correct
      I need to Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943.
      Question: Which is a member of Allied_Forces_1943? Answer: US_1943.
      Therefore I need to Identify and test a strategic COG candidate for US_1943.
      The explanation is the best possible approximation of the question and the answer in the object ontology:
      Allied_Forces_1943 has_as_member US_1943

  17. Generate the PVS rule
      The explanation Allied_Forces_1943 has_as_member US_1943 is rewritten as a condition: ?O1 is Allied_Forces_1943, has_as_member ?O2; ?O2 is US_1943.
      IF: Identify and test a strategic COG candidate corresponding to a member of a force. The force is ?O1.
      Plausible Upper Bound Condition (most general generalization of the condition): ?O1 is multi_member_force, has_as_member ?O2; ?O2 is force.
      Plausible Lower Bound Condition (most specific generalization of the condition): ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2; ?O2 is single_state_force.
      THEN: Identify and test a strategic COG candidate for a force. The force is ?O2.
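The derivation of the two bounds from the single explained example can be sketched directly: the most specific generalization replaces each instance by its own class, while the most general one replaces it by the declared domain and range of the feature in the explanation. The tables and function name below are hypothetical simplifications.

```python
# Sketch of generalizing one explained example into the two bounds of a
# PVS rule condition (hypothetical simplification).

CLASS_OF = {"Allied_Forces_1943": "equal_partners_multi_state_alliance",
            "US_1943": "single_state_force"}
# declared domain and range of the feature used in the explanation
FEATURES = {"has_as_member": {"domain": "multi_member_force",
                              "range": "force"}}

def generalize(subject, feature, obj):
    """Most specific generalization: each instance becomes its own class.
    Most general generalization: the feature's domain/range bound ?O1/?O2."""
    lower = {"?O1": CLASS_OF[subject], "?O2": CLASS_OF[obj]}
    upper = {"?O1": FEATURES[feature]["domain"],
             "?O2": FEATURES[feature]["range"]}
    return lower, upper

lower, upper = generalize("Allied_Forces_1943", "has_as_member", "US_1943")
```

The resulting `lower` and `upper` variable bindings correspond to the plausible lower and upper bound conditions shown on the slide.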

  18. Rule Refinement Method
      The agent learns by analogy and experimentation: it generates examples of task reductions from the PVS rule and presents them to the expert.
      • A correct example triggers learning from examples, which refines the rule's conditions.
      • An incorrect example, with its failure explanation, triggers learning from explanations, which adds an except-when condition to the rule.
      PVS Rule: IF <task>, Condition <condition 1>, Except-When Condition <condition 2>, ..., Except-When Condition <condition n>, THEN <subtask 1> ... <subtask m>.
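One refinement operation, generalizing the plausible lower bound to cover a newly accepted positive example, can be sketched as a climb to the closest common ancestor in the ontology. This is a hypothetical single-variable view; the ontology links and function names are illustrative.

```python
# Sketch of lower-bound refinement from a new positive example
# (hypothetical, single-variable simplification).

PARENT = {"equal_partners_multi_state_alliance": "multi_member_force",
          "dominant_partner_multi_state_alliance": "multi_member_force",
          "multi_member_force": "force"}

def ancestors(concept):
    """The concept itself followed by its chain of parents."""
    chain = [concept]
    while concept in PARENT:
        concept = PARENT[concept]
        chain.append(concept)
    return chain

def generalize_lower(lower, example_class):
    """Minimally generalize the lower bound to cover the new positive
    example: the closest ancestor of `lower` shared with the example."""
    for a in ancestors(lower):
        if a in ancestors(example_class):
            return a
    return lower  # no common ancestor in this toy ontology
```

For instance, if the bound is `equal_partners_multi_state_alliance` and an accepted example involves a `dominant_partner_multi_state_alliance`, the bound climbs to `multi_member_force`; an example already covered leaves the bound unchanged.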

  19. Overview
      • Research Problem, Approach, and Application
      • Problem Solving Method: Task Reduction
      • Learnable Knowledge Representation: Plausible Version Spaces
      • Multistrategy Learning during Problem Solving
      • Agent Development Experiments
      • Teaching and Learning Demo
      • Acknowledgements

  20. Agent Development Methodology
      • Modeling the problem solving process of the subject matter expert and development of the object ontology of the agent.
      • Teaching of the agent by the subject matter expert.

  21. Use of Disciple at the US Army War College
      Case Studies in Center of Gravity Analysis course:
      • Disciple helps the students to perform a center of gravity analysis of an assigned war scenario.
      • Disciple was taught based on the expertise of Prof. Comello in center of gravity analysis.
      [Diagram: teaching, learning, and problem solving between the expert, the Disciple agent, and its KB]
      Global evaluations of Disciple by officers from the Spring 03 course:
      • Disciple helped me to learn to perform a strategic COG analysis of a scenario.
      • The use of Disciple is an assignment that is well suited to the course's learning objectives.
      • Disciple should be used in future versions of this course.

  22. Use of Disciple at the US Army War College
      Military Applications of Artificial Intelligence course:
      • Students teach Disciple their COG analysis expertise, using sample scenarios (Iraq 2003, War on terror 2003, Arab-Israeli 1973).
      • Students test the trained Disciple agent on a new scenario (North Korea 2003).
      Three experiments: Spring 2001 (COG identification), Spring 2002 (COG identification and testing), Spring 2003 (COG testing based on critical capabilities).
      Global evaluation of Disciple by officers during the three experiments: "I think that a subject matter expert can use Disciple to build an agent, with limited assistance from a knowledge engineer."

  23. Parallel Development and Merging of KBs
      Initial KB (432 concepts and features, 29 tasks, 18 rules, for COG identification for leaders): built through domain analysis and ontology development (knowledge engineer + subject matter experts), then extended with 37 acquired concepts and features for COG testing.
      Parallel KB development (subject matter experts assisted by a knowledge engineer), training scenarios: Iraq 2003, Arab-Israeli 1973, War on Terror 2003. Five teams each trained a DISCIPLE-COG agent on COG testing criteria (stay informed, be irreplaceable, communicate, be influential, have support, be protected, be driving force), learning:
      • Team 1: 5 features, 10 tasks, 10 rules
      • Team 2: 14 tasks, 14 rules
      • Team 3: 2 features, 19 tasks, 19 rules
      • Team 4: 35 tasks, 33 rules
      • Team 5: 3 features, 24 tasks, 23 rules
      Average training time: 5h 28min per team; average rule learning rate: 3.53 per team.
      KB merging (knowledge engineer): unified 2 features, deleted 4 rules, refined 12 rules.
      Final KB, for COG identification and testing (leaders): +9 features (478 concepts and features), +105 tasks (134 tasks), +95 rules (113 rules).
      Testing scenario: North Korea 2003. Correctness = 98.15%.

  24. Other Disciple Agents
      Disciple-WA (1997-1998): Estimates the best plan of working around damage to a transportation infrastructure, such as a damaged bridge or road. Demonstrated that a knowledge engineer can use Disciple to rapidly build and update a knowledge base capturing knowledge from military engineering manuals and a set of sample solutions provided by a subject matter expert.
      Disciple-COA (1998-1999): Identifies strengths and weaknesses in a Course of Action, based on the principles of war and the tenets of army operations. Demonstrated the generality of its learning methods, which used an object ontology created by another group (TFS/Cycorp). Demonstrated that a knowledge engineer and a subject matter expert can jointly teach Disciple.

  25. Overview
      • Research Problem, Approach, and Application
      • Problem Solving Method: Task Reduction
      • Learnable Knowledge Representation: Plausible Version Spaces
      • Multistrategy Learning during Problem Solving
      • Agent Development Experiments
      • Teaching and Learning Demo
      • Acknowledgements

  26. Acknowledgements This research was sponsored by the Defense Advanced Research Projects Agency, Air Force Research Laboratory, Air Force Materiel Command, USAF, under agreement number F30602-00-2-0546; by the Air Force Office of Scientific Research under grant number F49620-00-1-0072; and by the US Army War College.
