
Knowledge Acquisition and Problem Solving




Presentation Transcript


  1. CS 785 Fall 2004. Knowledge Acquisition and Problem Solving. Agent Teaching and Multistrategy Learning. Gheorghe Tecuci, tecuci@gmu.edu, http://lac.gmu.edu/. Learning Agents Center and Computer Science Department, George Mason University.

  2. Overview What is Machine Learning Generalization and Specialization Operations Basic Learning Strategies The Rule Learning Problem in Disciple The Multistrategy Rule Learning Method Strategies for Explanation Generation Demo: Agent Teaching and Rule Learning Recommended Reading

  3. What is Learning? Learning is a very general term denoting the way in which people and computers: • acquire and organize knowledge (by building, modifying and organizing internal representations of some external reality), • discover new knowledge and theories (by creating hypotheses that explain some data or phenomena), or • acquire skills (by gradually improving their motor or cognitive skills through repeated practice, sometimes involving little or no conscious thought). Learning results in changes in the agent (or mind) that improve its competence and/or efficiency.

  4. The Disciple agent is concerned with the first type of learning: acquiring and organizing knowledge from a subject matter expert (by building, modifying and organizing internal representations of some external reality). The external reality is a strategic scenario and how a subject matter expert reasons to identify and test strategic COG candidates for that scenario.

  5. The architecture of an intelligent agent. [Diagram: the user/environment provides input to the agent through its sensors and receives output through its effectors. Inside the agent, the Problem Solving Engine implements a general problem solving method that uses the knowledge from the knowledge base to interpret the input and provide an appropriate output. The Learning Engine implements learning methods for extending and refining the knowledge in the knowledge base. The Knowledge Base contains an ontology and rules/cases/…: data structures that represent the objects from the application domain, general laws governing them, actions that can be performed with them, etc.]

  6. Overview What is Machine Learning Generalization and Specialization Operations Basic Learning Strategies The Rule Learning Problem in Disciple The Multistrategy Rule Learning Method Strategies for Explanation Generation Demo: Agent Teaching and Rule Learning Recommended Reading

  7. Generalization and specialization rules. Fundamental to learning are the processes of generalization and specialization. We will present several basic rules for generalizing or specializing the expressions that represent concepts. A generalization rule is a rule that transforms an expression into a more general expression. A specialization rule is a rule that transforms an expression into a less general expression. The reverse of any generalization rule is a specialization rule.

  8. Turning constants into variables. Generalizes an expression by replacing a constant with a variable.

    The set of multi_group_forces with 5 subgroups:
        ?O1 is multi_group_force
            number_of_subgroups 5

        generalization: 5 -> ?N1    specialization: ?N1 -> 5

    The set of multi_group_forces with any number of subgroups:
        ?O1 is multi_group_force
            number_of_subgroups ?N1

    (Example instances from the diagram: Japan_1944_Armed_Forces, Axis_forces_Sicily, Allied_forces_operation_Husky.)

  9. The top expression represents the following concept: the set of multi group forces with 5 subgroups. This set contains, for instance, Axis_forces_Sicily from the Sicily_1943 scenario. By replacing 5 with a variable ?N1 that can take any value, we generalize this concept to the following one: the set of multi group forces with any number of subgroups. In particular ?N1 could be 5. Therefore the second concept includes the first one. Conversely, by replacing ?N1 with 5, we specialize the bottom concept to the top one. The important thing to notice here is that by a simple syntactic operation (transforming a number into a variable) we can generalize a concept. This is how an agent generalizes concepts.
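To make this operation concrete, here is a minimal Python sketch, assuming a toy representation in which a concept is a dict of feature-value pairs and a variable is a string such as "?N1"; the representation and function name are invented for illustration, not Disciple's internals.

    # A concept is represented here as {feature: value}; values may be
    # constants (5, "Japan_1944_Armed_Forces") or variables ("?N1").
    def turn_constant_into_variable(expr, constant, var_name):
        """Generalize expr by replacing every occurrence of a constant
        with a variable (the reverse substitution specializes)."""
        return {feature: (var_name if value == constant else value)
                for feature, value in expr.items()}

    # The set of multi_group_forces with exactly 5 subgroups.
    specific = {"is": "multi_group_force", "number_of_subgroups": 5}

    # Generalization: the set with any number of subgroups.
    general = turn_constant_into_variable(specific, 5, "?N1")
    print(general)  # {'is': 'multi_group_force', 'number_of_subgroups': '?N1'}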

  10. Climbing/descending the generalization hierarchy. Generalizes an expression by replacing a concept with a more general one, according to a generalization hierarchy (here, representative_democracy and parliamentary_democracy are subconcepts of democratic_government).

    The set of single state forces governed by representative democracies:
        ?O1 is single_state_force
            has_as_governing_body ?O2
        ?O2 is representative_democracy

        generalization: representative_democracy -> democratic_government
        specialization: democratic_government -> representative_democracy

    The set of single state forces governed by democracies:
        ?O1 is single_state_force
            has_as_governing_body ?O2
        ?O2 is democratic_government

  11. One can also generalize an expression by replacing a concept from its description with a more general concept, according to some generalization hierarchy. The reverse operation, of replacing a concept with a less general one, leads to the specialization of an expression. The agent can also generalize a concept by dropping a condition. That is, by dropping a constraint that its instances must satisfy. This rule is illustrated in the next slide.

  12. Dropping/adding condition. Generalizes an expression by removing a constraint from its description.

    The set of multi-member forces that have international legitimacy:
        ?O1 is multi_member_force
            has_international_legitimacy "yes"

        generalization: drop the has_international_legitimacy constraint
        specialization: add the constraint back

    The set of multi-member forces (that may or may not have international legitimacy):
        ?O1 is multi_member_force
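A minimal Python sketch of these two rules, under the same toy dict representation as above; the parent map standing in for the generalization hierarchy is invented for illustration.

    # Toy generalization hierarchy: child concept -> parent concept.
    PARENT = {
        "representative_democracy": "democratic_government",
        "parliamentary_democracy": "democratic_government",
    }

    def climb_hierarchy(expr, concept):
        """Generalize by replacing a concept with its parent concept;
        descending the hierarchy is the reverse specialization."""
        parent = PARENT[concept]
        return {f: (parent if v == concept else v) for f, v in expr.items()}

    def drop_condition(expr, feature):
        """Generalize by removing one constraint from the description."""
        return {f: v for f, v in expr.items() if f != feature}

    concept = {"is": "single_state_force",
               "has_as_governing_body": "representative_democracy",
               "has_international_legitimacy": "yes"}

    print(climb_hierarchy(concept, "representative_democracy"))
    print(drop_condition(concept, "has_international_legitimacy"))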

  13. Generalizing/specializing numbers. Generalizes an expression by replacing a number with an interval containing it, or by replacing an interval with a larger interval.

    The set of multi_group_forces with exactly 5 subgroups:
        ?O1 is multi_group_force
            number_of_subgroups 5

        generalization: 5 -> [3 .. 7]    specialization: [3 .. 7] -> 5

    The set of multi_group_forces with at least 3 and at most 7 subgroups:
        ?O1 is multi_group_force
            number_of_subgroups ?N1
        ?N1 is-in [3 .. 7]

        generalization: [3 .. 7] -> [2 .. 10]    specialization: [2 .. 10] -> [3 .. 7]

    The set of multi_group_forces with at least 2 and at most 10 subgroups:
        ?O1 is multi_group_force
            number_of_subgroups ?N1
        ?N1 is-in [2 .. 10]

  14. A concept may also be generalized by replacing a number with an interval containing it, or by replacing an interval with a larger interval. The reverse operations specialize the concept. Yet another generalization rule, which is illustrated in the next slide, is to add alternatives. According to the expression from the top of this slide, ?O1 is any alliance. Therefore this expression represents the following concept: the set of all alliances. This concept can be generalized by adding another alternative for ?O1, namely the alternative of being a coalition. Now ?O1 could be either an alliance or coalition. Consequently, the expression from the bottom of this slide represents the following more general concept: the set of all alliances and coalitions.

  15. Adding/removing alternatives. Generalizes an expression by replacing a concept C1 with the union (C1 ∪ C2), which is a more general concept.

    The set of alliances:
        ?O1 is alliance
            has_as_member ?O2

        generalization: alliance -> {alliance coalition}
        specialization: {alliance coalition} -> alliance

    The set including both the alliances and the coalitions:
        ?O1 is {alliance coalition}
            has_as_member ?O2
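A minimal Python sketch of the numeric and alternative-based rules; intervals are modeled as (low, high) tuples and a union of alternatives as a set, both invented encodings.

    def generalize_number_to_interval(value, low, high):
        """Generalize by replacing a number with an interval containing it."""
        assert low <= value <= high
        return (low, high)

    def widen_interval(interval, low, high):
        """Generalize by replacing an interval with a larger enclosing one."""
        assert low <= interval[0] and interval[1] <= high
        return (low, high)

    def add_alternative(concepts, new_concept):
        """Generalize C1 to the union C1 U C2 by adding an alternative."""
        return set(concepts) | {new_concept}

    print(generalize_number_to_interval(5, 3, 7))      # (3, 7)
    print(widen_interval((3, 7), 2, 10))               # (2, 10)
    print(add_alternative({"alliance"}, "coalition"))  # {'alliance', 'coalition'}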

  16. Overview What is Machine Learning Generalization and Specialization Operations Basic Learning Strategies The Rule Learning Problem in Disciple The Multistrategy Rule Learning Method Strategies for Explanation Generation Demo: Agent Teaching and Rule Learning Recommended Reading

  17. Representative learning strategies:
    • Rote learning
    • Learning from instruction
    • Learning from examples
    • Explanation-based learning
    • Conceptual clustering
    • Quantitative discovery
    • Abductive learning
    • Learning by analogy
    • Instance-based learning
    • Reinforcement learning
    • Neural networks
    • Genetic algorithms and evolutionary computation
    • Bayesian learning
    • Multistrategy learning

  18. Empirical inductive learning from examples. The learning problem:
    Given
        • a set of positive examples (E1, ..., En) of a concept
        • a set of negative examples (C1, ..., Cm) of the same concept
        • a learning bias
        • other background knowledge
    Determine
        • a concept description that is a generalization of the positive examples and does not cover any of the negative examples
    Purpose of concept learning: predict whether an instance is an example of the learned concept.

  19. Learning from examples. Compares the positive and the negative examples of a concept, in terms of their similarities and differences, and learns the concept as a generalized description of the similarities of the positive examples. This allows the agent to recognize other entities as being instances of the learned concept. [Illustration: positive examples of cups (P1, P2, ...), negative examples of cups (N1, ...), and the resulting description of the cup concept: has-handle(x), ...]
    • Requires many examples
    • Does not need much domain knowledge
    • Improves the competence of the agent

  20. The goal of this learning strategy is to learn a general description of a concept (for instance, the concept of "cup") by analyzing positive examples of cups (i.e. objects that are cups) and negative examples of cups (i.e. objects that are not cups). The learning agent will attempt to find out what is common to the cups and what distinguishes them from non-cups. For instance, in this illustration, the agent may learn that a cup should have a handle, because all the positive examples of cups have handles and the negative examples do not. However, the color does not seem to be important for a cup, because the same color is encountered among both cups and non-cups.

Learning a good concept description through this strategy requires a very large set of positive and negative examples. On the other hand, this is the only information that the agent needs: the agent does not require prior knowledge to perform this type of learning.

The result of this learning strategy is an increase in the problem solving competence of the agent: the agent learns to do things it was not able to do before. In this illustration, it learns to recognize cups.
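A minimal Python sketch of this similarity-based idea: the learned description keeps the feature-value pairs shared by all positive examples and is checked against the negatives. The cup features are invented for illustration, and this is only one naive instantiation of learning from examples.

    def learn_from_examples(positives, negatives):
        # Similarities: feature-value pairs common to every positive example.
        concept = set(positives[0].items())
        for example in positives[1:]:
            concept &= set(example.items())
        # The generalization must not cover any negative example.
        for example in negatives:
            assert not concept <= set(example.items()), "covers a negative"
        return dict(concept)

    positives = [
        {"has_handle": True, "up_concave": True, "color": "white"},
        {"has_handle": True, "up_concave": True, "color": "red"},
    ]
    negatives = [{"has_handle": False, "up_concave": True, "color": "white"}]

    print(learn_from_examples(positives, negatives))
    # keeps has_handle=True and up_concave=True; color is not retained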

  21. Explanation-based learning (EBL). The EBL problem:
    Given
        • A concept example:
            cup(o1) ⇐ color(o1, white), made-of(o1, plastic), light-mat(plastic), has-handle(o1), has-flat-bottom(o1), up-concave(o1), ...
        • Goal: the learned concept should have only operational features (e.g. features present in the examples)
        • Background knowledge:
            cup(x) ⇐ liftable(x), stable(x), open-vessel(x).
            liftable(x) ⇐ light(x), graspable(x).
            stable(x) ⇐ has-flat-bottom(x).
            ...
    Determine
        • An operational concept definition:
            cup(x) ⇐ made-of(x, y), light-mat(y), has-handle(x), has-flat-bottom(x), up-concave(x).

  22. Explanation-based learning. Learns to recognize more efficiently the examples of a concept by proving that a specific instance is an example of it, and thus identifying the characteristic features of the concept.

    An example of a cup, cup(o1): color(o1, white), made-of(o1, plastic), light-mat(plastic), has-handle(o1), has-flat-bottom(o1), up-concave(o1), ...

    [Diagram: the proof tree of cup(o1), descending through liftable(o1), stable(o1), light(o1) and graspable(o1) to leaves such as made-of(o1, plastic), light-mat(plastic) and has-handle(o1), shown next to its generalization, the proof tree of cup(x) with leaves made-of(x, y), light-mat(y) and has-handle(x).]

    The proof identifies the characteristic features, and proof generalization generalizes them:
        • made-of(o1, plastic) is needed to prove cup(o1); it is generalized to made-of(x, y), since the material need not be plastic
        • has-handle(o1) is needed to prove cup(o1)
        • color(o1, white) is not needed to prove cup(o1)

  23. The goal of this learning strategy is to improve the efficiency in problem solving. The agent is able to perform some task, but in an inefficient way. We would like to teach the agent to perform the task faster.

Consider, for instance, an agent that is able to recognize cups. The agent receives a description of a cup that includes many features. The agent will recognize that this object is a cup by performing a complex reasoning process, based on its prior knowledge. This process is illustrated by the proof tree from the left hand side of this slide. The object o1 is made of plastic, which is a light material. Therefore o1 is light. o1 has a handle and therefore it is graspable. Being light and graspable, it is liftable. And so on: being liftable, stable and an open vessel, it is a cup.

However, the agent can learn from this process to recognize a cup faster. Notice that the agent used the fact that o1 has a handle in order to prove that o1 is a cup. This means that having a handle is an important feature. On the other hand, the agent did not use the color of o1 to prove that o1 is a cup. This means that color is not important. Notice how the agent reaches the same conclusions as in learning from examples, but through a different line of reasoning, and based on a different type of information.

The next step in the learning process is to generalize the tree from the left hand side into the tree from the right hand side. While the tree from the left hand side proves that the specific object o1 is a cup, the tree from the right hand side shows that any object x that is made of some light material y, has a handle, and has some other features is a cup. Therefore, to recognize that an object o2 is a cup, the agent only needs to look for the presence of these features discovered as important. It no longer needs to build a complex proof tree, so cup recognition is done much faster.

Finally, notice that the agent needs only one example to learn from. However, it needs a lot of prior knowledge to prove that this example is a cup. Providing such prior knowledge to the agent is a very complex task.
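A minimal Python sketch of this process, assuming a toy Horn-clause encoding of the background knowledge over a single object: backward chaining proves cup(o1), and the operational facts actually used in the proof become the characteristic features (color is never collected).

    # Background knowledge: head -> body, all predicates about one object.
    RULES = {
        "cup": ["liftable", "stable", "open_vessel"],
        "liftable": ["light", "graspable"],
        "light": ["made_of_light_material"],
        "graspable": ["has_handle"],
        "stable": ["has_flat_bottom"],
    }

    def prove(goal, facts, used):
        """Backward-chain; record the operational facts the proof relies on."""
        if goal in RULES:
            return all(prove(subgoal, facts, used) for subgoal in RULES[goal])
        if goal in facts:   # operational: directly observable in the example
            used.append(goal)
            return True
        return False

    o1_facts = {"made_of_light_material", "has_handle", "has_flat_bottom",
                "open_vessel", "color_white"}
    used = []
    assert prove("cup", o1_facts, used)
    print(used)  # the characteristic features; note that color_white is absent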

  24. General features of Explanation-based learning:
    • Needs only one example
    • Requires complete knowledge about the concept (which makes this learning strategy impractical)
    • Improves the agent's efficiency in problem solving

  25. Learning by analogy. The learning problem: learn new knowledge about an input entity by transferring it from a known similar entity. The learning method:
    • ACCESS: find a known entity that is analogous with the input entity.
    • MATCHING: match the two entities and hypothesize knowledge.
    • EVALUATION: test the hypotheses.
    • LEARNING: store or generalize the new knowledge.

  26. Learning by analogy: illustration. The hydrogen atom is like our solar system.

  27. Learning by analogy is the process of learning new knowledge about some entity by transferring it from a known entity. For instance, I can teach students about the structure of the hydrogen atom by using the analogy with the solar system. I tell the students that the hydrogen atom has a structure similar to that of the solar system: the electrons revolve around the nucleus as the planets revolve around the sun. The students may then infer that other features of the solar system are also features of the hydrogen atom. For instance, in the solar system, the greater mass of the sun and its attraction of the planets cause the planets to revolve around it. Therefore, we may conclude that this is also true in the case of the hydrogen atom: the greater mass of the nucleus and its attraction of the electrons cause the electrons to revolve around the nucleus. This is indeed true and represents a very interesting discovery. The main problem with analogical reasoning is that not all the facts related to the solar system are true for the hydrogen atom. For instance, the sun is yellow, but the nucleus is not. Therefore, facts derived by analogy have to be verified. A general heuristic is that similar causes have similar effects: if A' is similar to A, and A causes B, then we would expect A' to cause B', which should be similar to B.

  28. Learning by analogy: illustration. The hydrogen atom is like our solar system. The Sun has a greater mass than the Earth and attracts it, causing the Earth to revolve around the Sun. The nucleus also has a greater mass than the electron and attracts it. Therefore it is plausible that the electron also revolves around the nucleus. General idea of analogical transfer: similar causes have similar effects.
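A minimal Python sketch of this transfer heuristic, assuming a toy encoding of facts as (relation, subject, object) triples; the MATCHING correspondence is given by hand, and EVALUATION is left as a comment because hypothesized facts must be verified against the target domain.

    # Source domain facts about the solar system (invented toy encoding).
    solar_system = {
        ("greater_mass", "sun", "planet"),
        ("attracts", "sun", "planet"),
        ("revolves_around", "planet", "sun"),  # the effect worth transferring
        ("color", "sun", "yellow"),            # incidental; should not transfer
    }

    # MATCHING: hypothesized correspondence between the two entities.
    mapping = {"sun": "nucleus", "planet": "electron"}

    def transfer(source_facts, mapping):
        """Hypothesize target-domain facts by mapping the source facts over."""
        return {(rel, mapping.get(a, a), mapping.get(b, b))
                for rel, a, b in source_facts}

    # EVALUATION: each hypothesis must be tested; ("color", "nucleus", "yellow")
    # would be rejected, while ("revolves_around", "electron", "nucleus") holds.
    for hypothesis in sorted(transfer(solar_system, mapping)):
        print(hypothesis)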

  29. Multistrategy learning Multistrategy learning is concerned with developing learning agents that synergistically integrate two or more learning strategies in order to solve learning tasks that are beyond the capabilities of the individual learning strategies that are integrated.

  30. Complementariness of learning strategies

                            Learning           Explanation-based   Multistrategy
                            from examples      learning            learning
    Examples needed         many               one                 several
    Knowledge needed        very little        complete knowledge  incomplete knowledge
    Type of inference       induction          deduction           induction and/or deduction
    Effect on agent's       improves           improves            improves competence
    behavior                competence         efficiency          and/or efficiency

  31. The individual learning strategies have complementary strengths and weaknesses. For instance, learning from examples requires many examples, while explanation-based learning requires only one example. On the other hand, learning from examples does not require any prior knowledge, while explanation-based learning requires a lot of prior knowledge. Multistrategy learning attempts to synergistically integrate such complementary learning strategies, in order to take advantage of their relative strengths and to compensate for their relative weaknesses. The Disciple agent uses a multistrategy learning approach, as presented in the following.

  32. Overview What is Machine Learning Generalization and Specialization Operations Basic Learning Strategies The Rule Learning Problem in Disciple The Multistrategy Rule Learning Method Strategies for Explanation Generation Demo: Agent Teaching and Rule Learning Recommended Reading

  33. The rule learning problem: definition
    GIVEN:
        • an example of a problem solving episode;
        • a knowledge base that includes an object ontology and a set of problem solving rules;
        • an expert who understands why the given example is correct and may answer the agent's questions.
    DETERMINE:
        • a plausible version space rule that is an analogy-based generalization of the specific problem solving episode.

  34. Input example. This is an example of a problem solving step from which the agent will learn a general problem solving rule.
    I need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943
    Question: Which is a member of Allied_Forces_1943?
    Answer: US_1943
    Therefore I need to: Identify and test a strategic COG candidate for US_1943

  35. Learned PVS rule

    INFORMAL STRUCTURE OF THE RULE
        IF: Identify and test a strategic COG candidate corresponding to a member of the ?O1
        Question: Which is a member of ?O1?
        Answer: ?O2
        THEN: Identify and test a strategic COG candidate for ?O2

    FORMAL STRUCTURE OF THE RULE
        IF: Identify and test a strategic COG candidate corresponding to a member of a force
            The force is ?O1
        explanation: ?O1 has_as_member ?O2
        Plausible Upper Bound Condition:
            ?O1 is multi_member_force
                has_as_member ?O2
            ?O2 is force
        Plausible Lower Bound Condition:
            ?O1 is equal_partners_multi_state_alliance
                has_as_member ?O2
            ?O2 is single_state_force
        THEN: Identify and test a strategic COG candidate for a force
            The force is ?O2

  36. This is the rule that is learned from the input example. It has both a formal structure (used for formal reasoning) and an informal structure (used to communicate more naturally with the user). Let us consider the formal structure of the rule. This is an IF-THEN structure that specifies the condition under which the task from the IF part can be reduced to the task from the THEN part. This rule, however, is only partially learned. Indeed, instead of a single applicability condition, it has two conditions: 1) a plausible upper bound condition, which is more general than the exact (but not yet known) condition, and 2) a plausible lower bound condition, which is less general than the exact condition. Completely learning the rule means learning this exact condition. For now, we will show how the agent learns this rule from the input example shown on a previous slide. The basic steps of the learning method are those from the next slide.
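To make the rule's structure concrete, here is a sketch of a plausible version space rule as a plain Python data structure, populated with the rule from the slide; the field layout and the condition encoding are assumptions for illustration, not Disciple's internal format.

    from dataclasses import dataclass

    @dataclass
    class PVSRule:
        if_task: str
        then_task: str
        explanation: list   # e.g. [("?O1", "has_as_member", "?O2")]
        upper_bound: dict   # plausibly more general than the exact condition
        lower_bound: dict   # plausibly less general than the exact condition

    rule = PVSRule(
        if_task="Identify and test a strategic COG candidate corresponding "
                "to a member of a force; The force is ?O1",
        then_task="Identify and test a strategic COG candidate for a force; "
                  "The force is ?O2",
        explanation=[("?O1", "has_as_member", "?O2")],
        upper_bound={"?O1": "multi_member_force", "?O2": "force"},
        lower_bound={"?O1": "equal_partners_multi_state_alliance",
                     "?O2": "single_state_force"},
    )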

  37. Overview What is Machine Learning Generalization and Specialization Operations Basic Learning Strategies The Rule Learning Problem in Disciple The Multistrategy Rule Learning Method Strategies for Explanation Generation Demo: Agent Teaching and Rule Learning Recommended Reading

  38. Rule learning method. [Diagram: starting from an example of a task reduction step, analogy- and hint-guided explanation generation (driven by the expert's guidance and hints, and by analogy with the knowledge base) produces plausible explanations that form an incomplete justification of the example; analogy-based generalization then turns the example and its explanations into a plausible version space rule, whose plausible upper bound (PUB) and plausible lower bound (PLB) conditions delimit the exact condition to be learned.]

  39. Basic steps of the rule learning method:
    1. Formalize and learn the tasks.
    2. Find a formal explanation of why the example is correct. This explanation is an approximation of the question and the answer, in the object ontology.
    3. Generalize the example and the explanation into a plausible version space rule.

  40. 1. Formalize the tasks
    Sample formalization rule:
        • obtain the task name by replacing each specific instance with a more general concept
        • for each replaced instance define a task feature of the form "The concept is instance"

    I need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943
        Task name: Identify and test a strategic COG candidate corresponding to a member of a force
        Task feature: The force is Allied_Forces_1943
    Therefore I need to: Identify and test a strategic COG candidate for US_1943
        Task name: Identify and test a strategic COG candidate for a force
        Task feature: The force is US_1943

  41. Because the tasks from the modeling are in unrestricted English, Disciple cannot reason with them. We need to formalize these tasks. For each task we need to define an abstract phrase that indicates what the task is about (the task name), and a list of specific phrases that give all the details about the task (the task features). The task name should not contain any instance (such as Allied_Forces_1943); all these instances should appear in the task features. In general, the task name may be obtained from the English expression by simply replacing each specific object with a more abstract concept. Then we add a corresponding task feature that specifies the value of this abstract concept.
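A minimal Python sketch of this formalization step, assuming a toy instance-to-concept map in place of the ontology lookup; the string-rewriting approach and function name are purely illustrative.

    def formalize_task(task_text, instance_concepts):
        """Replace each specific instance with an abstract concept and
        record a task feature of the form 'The concept is instance'."""
        name, features = task_text, []
        for instance, concept in instance_concepts.items():
            if instance in name:
                name = (name.replace(f"the {instance}", f"a {concept}")
                            .replace(instance, f"a {concept}"))
                features.append(f"The {concept} is {instance}")
        return name, features

    name, features = formalize_task(
        "Identify and test a strategic COG candidate corresponding to a "
        "member of the Allied_Forces_1943",
        {"Allied_Forces_1943": "force"})
    print(name)      # ...corresponding to a member of a force
    print(features)  # ['The force is Allied_Forces_1943']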

  42. Sample task formalizations. Any other formalization is acceptable if:
    • the task name does not contain any instance or constant;
    • each instance from the informal task appears in a feature of the formalized task.

    I need to: Identify the strategic COG candidates for the Sicily_1943 scenario
        Formalized: Identify the strategic COG candidates for a scenario
                    The scenario is Sicily_1943
    Question: Which is an opposing force in the Sicily_1943 scenario? Answer: Anglo_allies_1943
    Therefore I need to: Identify the strategic COG candidates for Anglo_allies_1943
        Formalized: Identify the strategic COG candidates for an opposing force
                    The opposing force is Anglo_allies_1943
    Question: Is Anglo_allies_1943 a single-member force or a multi-member force? Answer: Anglo_allies_1943 is a multi-member force
    Therefore I need to: Identify the strategic COG candidates for the Anglo_allies_1943 which is a multi-member force
        Formalized: Identify the strategic COG candidates for an opposing force which is a multi-member force
                    The opposing force is Anglo_allies_1943

  43. Task learning

    INFORMAL STRUCTURE OF THE TASK
        Identify and test a strategic COG candidate for ?O1
        (learned from: Identify and test a strategic COG candidate for US_1943)

    FORMAL STRUCTURE OF THE TASK
        Identify and test a strategic COG candidate for a force
            The force is ?O1
        Plausible upper bound condition: ?O1 is force
        Plausible lower bound condition: ?O1 is single_state_force

    [Ontology fragment: force is a subconcept of object; multi_member_force, single_member_force and opposing_force are subconcepts of force; multi_state_force is a subconcept of multi_member_force, multi_state_alliance a subconcept of multi_state_force, and equal_partners_multi_state_alliance a subconcept of multi_state_alliance; single_state_force is a subconcept of single_member_force; Allied_Forces_1943 is an instance of equal_partners_multi_state_alliance and has_as_member US_1943, an instance of single_state_force.]

  44. The top part of this slide shows the English expression and the formalized expression of a specific task. From the English expression of the specific task, the agent learns the informal structure of the general task by replacing the specific instance US_1943 with the variable ?O1. From the formalized expression of the specific task, the agent learns the formal structure of the general task.

The formal structure also specifies the conditions that ?O1 should satisfy. However, the agent cannot formulate the exact condition, only two bounds for the exact condition that will have to be learned. The plausible lower bound condition is more restrictive, allowing ?O1 to be only a single-state force. This condition is obtained by replacing US_1943 with its most specific generalization in the object ontology. The plausible upper bound condition is less restrictive: ?O1 could be any force. This condition is obtained by replacing US_1943 with the most general subconcept of object that is more general than US_1943.

The plausible upper bound condition allows the agent to generate more tasks, because now ?O1 can be replaced with any instance of force. However, there is no guarantee that a generated task is a correct expression. The agent will continue to improve the learned task, generalizing the plausible lower bound condition and specializing the plausible upper bound condition until they become identical and each object that satisfies the obtained condition leads to a correct task expression.
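A minimal Python sketch of computing the two bounds, assuming a toy parent map for the ontology fragment above: the lower bound is the instance's most specific concept, and the upper bound is found by climbing to the most general subconcept below object.

    # Toy ontology fragment: concept -> parent concept.
    PARENT = {
        "single_state_force": "single_member_force",
        "single_member_force": "force",
        "force": "object",
    }
    INSTANCE_OF = {"US_1943": "single_state_force"}

    def plausible_bounds(instance):
        lower = INSTANCE_OF[instance]  # most specific generalization
        upper = lower
        while PARENT.get(upper, "object") != "object":
            upper = PARENT[upper]      # climb to just below <object>
        return lower, upper

    print(plausible_bounds("US_1943"))  # ('single_state_force', 'force')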

  45. 2. Find an explanation of why the example is correct
    I need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943
    Question: Which is a member of Allied_Forces_1943?
    Answer: US_1943
    Therefore I need to: Identify and test a strategic COG candidate for US_1943
The explanation is an approximation of the question and the answer, in the object ontology:
    Allied_Forces_1943 has_as_member US_1943

  46. The expert has defined the example during the modeling process. During the task formalization process, the expert and the agent have collaborated to formalize the tasks. Now the expert and the agent have to collaborate to also formalize the question and the answer. This formalization is the explanation from the bottom of this slide. It consists of a relation between two elements from the agent's ontology: "Allied_Forces_1943 has_as_member US_1943". It states, in Disciple's language, that US_1943 is a member of Allied_Forces_1943.

An expert can understand such formal expressions because they actually correspond to his own explanations. However, he cannot be expected to be able to define them, because he is not a knowledge engineer. For one thing, he would need to use the formal language of the agent. But this would not be enough. He would also need to know the names of the potentially many thousands of concepts and features from the agent's ontology (such as "has_as_member").

While defining the formal explanation of this task reduction step is beyond the individual capabilities of the expert and the agent, it is not beyond their joint capabilities. Finding such explanation pieces is a mixed-initiative process involving the expert and the agent. In essence, the agent will use analogical reasoning and help from the expert to identify and propose a set of plausible explanation pieces from which the expert will have to select the correct ones. Once the expert is satisfied with the identified explanation pieces, the agent will generate a general rule.

  47. 3. Generate the PVS rule

    Example with its explanation:
        We need to: Identify and test a strategic COG candidate corresponding to a member of a force
            The force is Allied_Forces_1943
        Explanation: Allied_Forces_1943 has_as_member US_1943
        Therefore we need to: Identify and test a strategic COG candidate for a force
            The force is US_1943

    The explanation is rewritten as a condition:
        ?O1 is Allied_Forces_1943
            has_as_member ?O2
        ?O2 is US_1943

    Its most specific generalization is the Plausible Lower Bound Condition:
        ?O1 is equal_partners_multi_state_alliance
            has_as_member ?O2
        ?O2 is single_state_force

    Its most general generalization (from the declaration of has_as_member, domain: multi_member_force, range: force) is the Plausible Upper Bound Condition:
        ?O1 is multi_member_force
            has_as_member ?O2
        ?O2 is force

    The resulting rule:
        IF: Identify and test a strategic COG candidate corresponding to a member of a force
            The force is ?O1
        explanation: ?O1 has_as_member ?O2
        with the two plausible bound conditions above
        THEN: Identify and test a strategic COG candidate for a force
            The force is ?O2

  48. Notice that the explanation is first re-written as a task condition, and then two generalizations of this condition are created: a most conservative one (the plausible lower bound condition) and a most aggressive one (the plausible upper bound condition). The plausible lower bound is the minimal generalization of the condition from the left hand side of the slide. Similarly, the most general generalization is the plausible upper bound.
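A minimal Python sketch of this step, assuming toy tables for the feature's domain/range declarations and for each instance's most specific concept; condition_bounds is a hypothetical helper, not Disciple's actual procedure.

    # The ontology declares, for each feature, its domain and range.
    SIGNATURE = {"has_as_member": {"domain": "multi_member_force",
                                   "range": "force"}}
    MOST_SPECIFIC = {"Allied_Forces_1943": "equal_partners_multi_state_alliance",
                     "US_1943": "single_state_force"}

    def condition_bounds(explanation):
        """explanation: (subject, feature, object) over specific instances."""
        subject, feature, obj = explanation
        lower = {"?O1": MOST_SPECIFIC[subject],  # most specific generalization
                 "?O2": MOST_SPECIFIC[obj]}
        upper = {"?O1": SIGNATURE[feature]["domain"],  # most general one
                 "?O2": SIGNATURE[feature]["range"]}
        return lower, upper

    lower, upper = condition_bounds(
        ("Allied_Forces_1943", "has_as_member", "US_1943"))
    print(lower)  # plausible lower bound condition
    print(upper)  # plausible upper bound condition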

  49. Analogical reasoning. [Diagram: the analogy criterion at the top states that ?O1, an instance of multi_member_force, has_as_member ?O2, an instance of force. The explanation "Allied_Forces_1943 has_as_member US_1943" and the similar explanation "European_Axis_1943 has_as_member Germany_1943" are both less general than this criterion. The first explains the initial example, which reduces the task "Identify and test a strategic COG candidate corresponding to a member of a force; The force is Allied_Forces_1943" to "Identify and test a strategic COG candidate for a force; The force is US_1943". By analogy, the second is expected to explain the similar example, which reduces the same kind of task for European_Axis_1943 to the task for Germany_1943.]

  50. The agent uses analogical reasoning to generalize the example and its explanation into a plausible version space rule. This slide provides a justification for the generalization procedure used by the agent.

Let us consider that the expert has provided to the agent the task reduction example from the bottom left of the slide. This reduction is correct because "Allied_Forces_1943 has_as_member US_1943". Now let us consider European_Axis_1943, which has as member Germany_1943. Using the same logic as above, one can create the task reduction example from the bottom right of the slide. This is the type of analogical reasoning that the agent performs.

The explanation from the left hand side of the slide explains the task reduction from the left hand side. This explanation is similar to the explanation from the right hand side (they have the same structure, both being less general than the analogy criterion at the top of the slide). Therefore one could expect the explanation from the right hand side to explain an example similar to the initial example, namely the example from the right hand side of the slide.

To summarize: the expert provided the example from the left hand side of the slide and helped the agent to find its explanation. Using analogical reasoning, the agent can perform by itself the reasoning from the bottom right hand side of the slide.
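A minimal Python sketch of this analogical generation of similar examples, assuming facts stored as triples: any fact matching the structure of the learned explanation yields, by analogy, a candidate task reduction that the expert can confirm or reject.

    FACTS = [("Allied_Forces_1943", "has_as_member", "US_1943"),
             ("European_Axis_1943", "has_as_member", "Germany_1943")]

    def propose_reductions(facts):
        """Instantiate the learned reduction for every analogous fact."""
        for whole, relation, member in facts:
            if relation == "has_as_member":
                yield (f"Identify and test a strategic COG candidate "
                       f"corresponding to a member of {whole}",
                       f"Identify and test a strategic COG candidate "
                       f"for {member}")

    for if_task, then_task in propose_reductions(FACTS):
        print(if_task, "->", then_task)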
