
Knowledge Acquisition and Problem Solving

CS 7850 Fall 2004. Knowledge Acquisition and Problem Solving. Introduction. Gheorghe Tecuci tecuci@gmu.edu http://lac.gmu.edu/. Learning Agents Center and Computer Science Department George Mason University. Overview. Class introduction and course’s objectives.



Presentation Transcript


1. CS 7850 Fall 2004 Knowledge Acquisition and Problem Solving Introduction Gheorghe Tecuci tecuci@gmu.edu http://lac.gmu.edu/ Learning Agents Center and Computer Science Department George Mason University

  2. Overview Class introduction and course’s objectives Artificial Intelligence and intelligent agents Domain for hands-on experience Knowledge acquisition for agents development Overview of the course

  3. Cartoon

  4. Course Objectives Provide an overview of Knowledge Acquisition and Problem Solving. Present principles and major methods of knowledge acquisition for the development of knowledge-based agents that incorporate the problem solving knowledge of a subject matter expert. Major topics include: overview of knowledge engineering; analysis and modeling of the reasoning process of a subject matter expert; ontology design and development; rule learning; problem solving and knowledge-base refinement. The course will emphasize the most recent advances in this area, such as: agent teaching and learning; mixed-initiative knowledge base refinement; knowledge reuse; frontier research problems.

5. Course Objectives (cont) Link Knowledge Acquisition and Problem Solving concepts to hands-on applications by building a knowledge-based agent. Learn about all the phases of building a knowledge-based agent and experience them first-hand by using the Disciple agent development environment to build an intelligent assistant that helps students choose a Ph.D. Dissertation Advisor. Disciple has been developed in the Learning Agents Center of George Mason University and has been successfully used to build knowledge-based agents for a variety of problem areas, including: planning the repair of damaged bridges and roads; critiquing military courses of action; determining strategic centers of gravity in military conflicts; generating test questions for higher-order thinking skills in history and statistics.

6. Course organization and grading policy
Course organization:
• The classes will consist of a theoretical recitation part, where the instructor will present and discuss the various methods and phases of building a knowledge-based agent, and a practical laboratory part, where the students will apply this knowledge to specify, design, and develop the Ph.D. selection advisor.
• Regular assignments will consist of incremental developments of the Ph.D. selection advisor, which will be presented to the class.
Grading policy:
• Exam, covering the theoretical aspects presented – 50%
• Assignments, consisting of lab participation and the contribution to the development of the Ph.D. selection advisor – 50%

  7. Readings Lecture notes provided by the instructor (required). Tecuci G., Building Intelligent Agents: An Apprenticeship Multistrategy Learning Theory, Methodology, Tool and Case Studies, Academic Press, 1998 (recommended). Additional papers recommended by the instructor.

  8. Overview Class introduction and course’s objectives Artificial Intelligence and intelligent agents Domain for hands-on experience Knowledge acquisition for agents development Overview of the course

  9. Artificial Intelligence and intelligent agents What is Artificial Intelligence What is an intelligent agent Characteristic features of intelligent agents Sample tasks for intelligent agents Why are intelligent agents important

10. What is Artificial Intelligence Artificial Intelligence is the science and engineering concerned with the theory and practice of developing systems that exhibit the characteristics we associate with intelligence in human behavior: perception, natural language processing, reasoning, planning and problem solving, learning and adaptation, etc.

11. Central goals of Artificial Intelligence
• Understand the principles that make intelligence possible (in humans, animals, and artificial agents).
• Develop intelligent machines or agents (no matter whether they operate as humans do or not).
• Formalize knowledge and mechanize reasoning in all areas of human endeavor.
• Make working with computers as easy as working with people.
• Develop human-machine systems that exploit the complementarity of human and automated reasoning.

  12. Artificial Intelligence and intelligent agents What is Artificial Intelligence What is an intelligent agent Characteristic features of intelligent agents Sample tasks for intelligent agents Why are intelligent agents important

13. What is an intelligent agent
An intelligent agent is a system that:
• perceives its environment (which may be the physical world, a user via a graphical user interface, a collection of other agents, the Internet, or other complex environment);
• reasons to interpret perceptions, draw inferences, solve problems, and determine actions; and
• acts upon that environment to realize a set of goals or tasks for which it was designed.
[Diagram: the user/environment provides input to the intelligent agent through sensors; the agent acts back on the user/environment through effectors.]

14. What is an intelligent agent (cont.)
Humans, with multiple conflicting drives, multiple senses, multiple possible actions, and complex, sophisticated control structures, are at the highest end of being an agent.
At the low end of being an agent is a thermostat. It continuously senses the room temperature, starting or stopping the heating system each time the current temperature goes out of a pre-defined range.
The intelligent agents we are concerned with are in between. They are clearly not as capable as humans, but they are significantly more capable than a thermostat.
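To make the low end concrete, here is a minimal sketch (not from the slides) of the thermostat as a trivial agent: one sensed value, two possible actions, and no reasoning beyond a fixed range. The read_temperature and set_heating functions are hypothetical placeholders for the real sensor and effector.

```python
# Minimal illustrative sketch: a thermostat as a trivial agent.
# read_temperature() and set_heating() are hypothetical placeholders
# for the real sensor and effector.

TARGET_LOW, TARGET_HIGH = 19.0, 22.0   # pre-defined temperature range (deg C)

def thermostat_step(read_temperature, set_heating):
    """One sense-act cycle: no domain model, no learning, no goals beyond the range."""
    temperature = read_temperature()   # perceive the environment
    if temperature < TARGET_LOW:
        set_heating(True)              # act: start the heating system
    elif temperature > TARGET_HIGH:
        set_heating(False)             # act: stop the heating system
    # otherwise leave the heating system unchanged
```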

15. What is an intelligent agent (cont.)
An intelligent agent interacts with a human or with other agents via some kind of agent-communication language. It may not blindly obey commands: it may have the ability to modify requests, ask clarification questions, or even refuse to satisfy certain requests.
It can accept high-level requests indicating what the user wants, and can decide how to satisfy each request with some degree of independence or autonomy, exhibiting goal-directed behavior and dynamically choosing which actions to take and in what sequence.

16. What an intelligent agent can do
An intelligent agent can:
• collaborate with its user to improve the accomplishment of his or her tasks;
• carry out tasks on the user's behalf, employing some knowledge of the user's goals or desires;
• monitor events or procedures for the user;
• advise the user on how to perform a task;
• train or teach the user;
• help different users collaborate.

  17. Artificial Intelligence and intelligent agents What is Artificial Intelligence What is an intelligent agent Characteristic features of intelligent agents Sample tasks for intelligent agents Why are intelligent agents important

18. Knowledge representation and reasoning
An intelligent agent contains an internal representation of its external application domain, where relevant elements of the application domain (objects, relations, classes, laws, actions) are represented as symbolic expressions. This mapping allows the agent to reason about the application domain by performing reasoning processes in the domain model and transferring the conclusions back into the application domain.
[Diagram: the ontology (model of the domain) represents the application domain. Example fragment: OBJECT has subclasses BOOK, CUP, and TABLE; BOOK1, CUP1, and TABLE1 are instances, with CUP1 ON BOOK1 and BOOK1 ON TABLE1.]
RULE: If an object is on top of another object that is itself on top of a third object, then the first object is on top of the third object.
∀x,y,z ∈ OBJECT, (ON x y) & (ON y z) → (ON x z)
Example: (cup1 on book1) & (book1 on table1) → (cup1 on table1)
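The rule above can be mechanized directly. The sketch below (an illustration of the idea, not Disciple's actual representation) stores the ON facts as symbolic expressions and applies the transitivity rule until no new conclusions appear.

```python
# Illustrative forward chaining of the rule:
# for all x, y, z in OBJECT: (ON x y) & (ON y z) -> (ON x z)

facts = {("ON", "cup1", "book1"), ("ON", "book1", "table1")}

def apply_on_transitivity(facts):
    """Keep adding (ON x z) whenever (ON x y) and (ON y z) are already known."""
    facts = set(facts)
    while True:
        derived = {("ON", x, z)
                   for (_, x, y1) in facts
                   for (_, y2, z) in facts
                   if y1 == y2}
        if derived <= facts:           # no new conclusions: stop
            return facts
        facts |= derived

print(apply_on_transitivity(facts))
# The result includes ("ON", "cup1", "table1"), the conclusion that is
# transferred back into the application domain.
```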

19. Basic agent architecture
The intelligent agent receives input from the user or environment through its sensors and returns output through its effectors.
The problem solving engine implements a general method of interpreting the input problem based on the knowledge from the knowledge base.
The knowledge base (ontology, rules/cases/…) contains data structures that represent the objects from the application domain, the general laws governing them, the actions that can be performed with them, etc.
[Diagram: User/Environment ↔ Input/Sensors and Output/Effectors ↔ Intelligent Agent, which contains the Problem Solving Engine and the Knowledge Base.]
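A minimal code sketch of this separation, under the assumption that the knowledge base is a set of rules over an ontology and the engine simply tries each rule against the input problem. All class and method names (including Rule.matches and Rule.apply) are invented for the illustration and are not part of any Disciple API.

```python
# Illustrative sketch of the basic agent architecture: a knowledge base
# (ontology + rules) kept separate from a general problem solving engine.
# All names are invented for the example.

class Rule:
    def __init__(self, condition, action):
        self.condition = condition     # predicate over (problem, ontology)
        self.action = action           # function producing a solution

    def matches(self, problem, ontology):
        return self.condition(problem, ontology)

    def apply(self, problem, ontology):
        return self.action(problem, ontology)

class KnowledgeBase:
    def __init__(self, ontology, rules):
        self.ontology = ontology       # objects, classes, relations
        self.rules = rules             # general laws / problem solving rules

class ProblemSolvingEngine:
    """A general method of interpreting the input problem using the KB."""
    def solve(self, problem, kb):
        for rule in kb.rules:
            if rule.matches(problem, kb.ontology):
                return rule.apply(problem, kb.ontology)
        return None                    # no applicable knowledge

class IntelligentAgent:
    def __init__(self, kb):
        self.kb = kb
        self.engine = ProblemSolvingEngine()

    def step(self, perceive, act):
        problem = perceive()                          # input / sensors
        act(self.engine.solve(problem, self.kb))      # output / effectors
```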

  20. Transparency and explanations The knowledge possessed by the agent and its reasoning processes should be understandable to humans. The agent should have the ability to give explanations of its behavior, what decisions it is making and why. Without transparency it would be very difficult to accept, for instance, a medical diagnosis performed by an intelligent agent. The need for transparency shows that the main goal of artificial intelligence is to enhance human capabilities and not to replace human activity.

21. Ability to communicate An agent should be able to communicate with its users or with other agents. The communication language should be as natural to the human users as possible; ideally, it should be free natural language. The problem of natural language understanding and generation is very difficult because of the ambiguity of words and sentences, and because of the paraphrases, ellipses, and references used in human communication.

22. Use of huge amounts of knowledge
In order to solve "real-world" problems, an intelligent agent needs a huge amount of domain knowledge in its memory (knowledge base).
Example of human-agent dialog:
User: The toolbox is locked.
Agent: The key is in the drawer.
In order to understand such sentences and to respond adequately, the agent needs to have a lot of knowledge about the user, including the goals the user might want to achieve.

23. Use of huge amounts of knowledge (example)
User: The toolbox is locked.
Agent (internal reasoning): Why is he telling me this? I already know that the box is locked. I know he needs to get in. Perhaps he is telling me because he believes I can help. To get in requires a key. He knows it and he knows I know it. The key is in the drawer. If he knew this, he would not tell me that the toolbox is locked. So he must not realize it. To make him know it, I can tell him. I am supposed to help him.
Agent: The key is in the drawer.

  24. Exploration of huge search spaces An intelligent agent usually needs to search huge spaces in order to find solutions to problems. Example: A search agent on the Internet.

  25. Use of heuristics Intelligent agents generally attack problems for which no algorithm is known or feasible, problems that require heuristic methods. A heuristic is a rule of thumb, strategy, trick, simplification, or any other kind of device which drastically limits the search for solutions in large problem spaces. Heuristics do not guarantee optimal solutions. In fact they do not guarantee any solution at all. A useful heuristic is one that offers solutions which are good enough most of the time.
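As a concrete (and deliberately simple) illustration of a heuristic that drastically limits search without guaranteeing optimality, here is a greedy best-first search sketch; the graph representation and heuristic function are assumptions made for the example, not something taken from the slides.

```python
import heapq
import itertools

# Greedy best-first search: always expand the node the heuristic rates best.
# It typically examines far fewer nodes than exhaustive search, but a
# misleading heuristic means the solution found (if any) need not be optimal.

def greedy_best_first(start, goal, neighbors, heuristic):
    counter = itertools.count()        # tie-breaker so the heap never compares nodes
    frontier = [(heuristic(start), next(counter), start, [start])]
    visited = set()
    while frontier:
        _, _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in neighbors(node):
            if nxt not in visited:
                heapq.heappush(frontier, (heuristic(nxt), next(counter), nxt, path + [nxt]))
    return None   # like any heuristic method, it may find no solution at all
```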

26. Reasoning with incomplete or conflicting data
The ability to take into account data items that are more or less in contradiction with one another (conflicting data, or data corrupted by errors). Example: the reasoning of a military intelligence analyst who has to cope with the deception actions of the enemy.
The ability to provide some solution even if not all the data relevant to the problem is available at the time a solution is required. Examples: the reasoning of a physician in an intensive care unit; planning a military course of action.

27. Ability to learn The ability to improve its competence and performance. An agent improves its competence if it learns to solve a broader class of problems and to make fewer mistakes in problem solving. An agent improves its performance if it learns to solve the problems from its area of competence more efficiently (for instance, using less time or space).

28. Extended agent architecture
The learning engine implements methods for extending and refining the knowledge in the knowledge base.
[Diagram: the intelligent agent now contains a Problem Solving Engine, a Learning Engine, and the Knowledge Base (ontology, rules/cases/methods), connected to the user/environment through input/sensors and output/effectors.]
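A minimal sketch of the extension, assuming the same kind of knowledge base as in the earlier architecture sketch: the learning engine is simply a second module with write access to the shared knowledge base. The names and the form of the feedback are invented for the illustration.

```python
# Illustrative sketch: the extended agent adds a learning engine that
# extends and refines the same knowledge base the problem solver reads.
# All names are invented for the example.

class KnowledgeBase:
    def __init__(self):
        self.ontology = {}     # objects, classes, relations
        self.rules = []        # rules / cases / methods

class LearningEngine:
    def learn(self, kb, new_rules, new_objects):
        """Extend and refine the knowledge base from examples or expert feedback."""
        kb.rules.extend(new_rules)
        kb.ontology.update(new_objects)

# Usage: after a problem solving episode, feedback from the expert (or from
# the agent's own experience) is passed to the learning engine, which updates
# the shared knowledge base in place for the problem solving engine to use.
kb = KnowledgeBase()
LearningEngine().learn(kb,
                       new_rules=["IF an object is on y AND y is on z THEN it is on z"],
                       new_objects={"book1": "BOOK"})
```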

  29. Artificial Intelligence and intelligent agents What is Artificial Intelligence What is an intelligent agent Characteristic features of intelligent agents Sample tasks for intelligent agents Why are intelligent agents important

30. Sample tasks for intelligent agents
Planning: Finding a set of actions that achieve a certain goal. Example: Determine the actions that need to be performed in order to repair a bridge.
Critiquing: Expressing judgments about something according to certain standards. Example: Critiquing a military course of action (or plan) based on the principles of war and the tenets of army operations.
Interpretation: Inferring situation descriptions from sensory data. Example: Interpreting gauge readings in a chemical process plant to infer the status of the process.

31. Sample tasks for intelligent agents (cont.)
Prediction: Inferring likely consequences of given situations. Examples: Predicting the damage to crops from some type of insect. Estimating global oil demand from the current geopolitical world situation.
Diagnosis: Inferring system malfunctions from observables. Examples: Determining the disease of a patient from the observed symptoms. Locating faults in electrical circuits. Finding defective components in the cooling system of nuclear reactors.
Design: Configuring objects under constraints. Example: Designing integrated circuit layouts.

32. Sample tasks for intelligent agents (cont.)
Monitoring: Comparing observations to expected outcomes. Examples: Monitoring instrument readings in a nuclear reactor to detect accident conditions. Assisting patients in an intensive care unit by analyzing data from the monitoring equipment.
Debugging: Prescribing remedies for malfunctions. Examples: Suggesting how to tune a computer system to reduce a particular type of performance problem. Choosing a repair procedure to fix a known malfunction in a locomotive.
Repair: Executing plans to administer prescribed remedies. Example: Tuning a mass spectrometer, i.e., setting the instrument's operating controls to achieve optimum sensitivity consistent with correct peak ratios and shapes.

33. Sample tasks for intelligent agents (cont.)
Instruction: Diagnosing, debugging, and repairing student behavior. Examples: Teaching students a foreign language. Teaching students to troubleshoot electrical circuits. Teaching medical students in the area of antimicrobial therapy selection.
Control: Governing overall system behavior. Example: Managing the manufacturing and distribution of computer systems.
Any useful task: Information fusion. Information assurance. Travel planning. Email management. Help in choosing a Ph.D. Dissertation Advisor.

  34. Artificial Intelligence and intelligent agents What is Artificial Intelligence What is an intelligent agent Characteristic features of intelligent agents Sample tasks for intelligent agents Why are intelligent agents important

35. Why are intelligent agents important Humans have limitations that agents may alleviate (e.g., memory for details that is not affected by stress, fatigue, or time constraints). Humans and agents could engage in mixed-initiative problem solving that takes advantage of their complementary strengths and reasoning styles.

  36. Why are intelligent agents important (cont) The evolution of information technology makes intelligent agents essential components of our future systems and organizations. Our future computers and most of the other systems and tools will gradually become intelligent agents. We have to be able to deal with intelligent agents either as users, or as developers, or as both.

  37. Intelligent agents: Conclusion Intelligent agents are systems which can perform tasks requiring knowledge and heuristic methods. Intelligent agents are helpful, enabling us to do our tasks better. Intelligent agents are necessary to cope with the increasing complexity of the information society.

  38. Overview Class introduction and course’s objectives Artificial Intelligence and intelligent agents Domain for hands-on experience Knowledge acquisition for agents development Overview of the course

39. Problem: Choosing a Ph.D. Dissertation Advisor Choosing a Ph.D. Dissertation Advisor is a crucial decision for a successful dissertation and for one’s future career. An informed decision requires a lot of knowledge about the potential advisors. In this course we will develop an agent that interacts with a student to help select the best Ph.D. advisor for that student. See the project notes: “1. Problem”

  40. Overview Class introduction and course’s objectives Artificial Intelligence and intelligent agents Domain for hands-on experience Knowledge acquisition for agents development Overview of the course

  41. Knowledge Acquisition for agent development Approaches to knowledge acquisition Disciple approach to agent development Demo: Agent teaching and learning Research vision on agents development

42. How are agents built: Manual knowledge acquisition
[Diagram: the knowledge engineer dialogs with the subject matter expert and programs the intelligent agent's knowledge base; the agent's problem solving engine produces results that the expert analyzes.]
A knowledge engineer attempts to understand how a subject matter expert reasons and solves problems and then encodes the acquired expertise into the agent's knowledge base. The expert analyzes the solutions generated by the agent (and often the knowledge base itself) to identify errors, and the knowledge engineer corrects the knowledge base.

43. Why it is hard
The knowledge engineer has to become a kind of subject matter expert in order to properly understand the expert's problem solving knowledge. This takes time and effort.
Experts express their knowledge informally, using natural language, visual representations, and common sense, often omitting essential details that are considered obvious. This form of knowledge is very different from the one in which knowledge has to be represented in the knowledge base (which is formal, precise, and complete).
This transfer and transformation of knowledge, from the domain expert through the knowledge engineer to the agent, is long, painful, and inefficient, and is known as the "knowledge acquisition bottleneck" of the AI systems development process.

44. Mixed-initiative knowledge acquisition
[Diagram: the subject matter expert dialogs directly with the intelligent learning agent; the learning engine builds the knowledge base, and the problem solving engine produces results for the expert to review.]
The expert teaches the agent how to perform various tasks, in a way that resembles how an expert would teach a human apprentice when solving problems in cooperation. This process is based on mixed-initiative reasoning that integrates the complementary knowledge and reasoning styles of the subject matter expert and the agent, and on a division of responsibility for those elements of knowledge engineering for which they have the most aptitude, such that together they form a complete team for knowledge base development.

45. Mixed-initiative knowledge acquisition (cont.)
This is the most promising approach to overcoming the knowledge acquisition bottleneck.
DARPA's Rapid Knowledge Formation Program (2000-2004): Emphasized the development of knowledge bases directly by the subject matter experts.
• Central objective: Enable distributed teams of experts to enter and modify knowledge directly and easily, without the need for prior knowledge engineering experience. The emphasis was on content and the means of rapidly acquiring this content from individuals who possess it, with the goal of gaining a scientific understanding of how ordinary people can work with formal representations of knowledge.
• Program's primary requirement: Development of functionality enabling experts to understand the contents of a knowledge base, enter new theories, augment and edit existing knowledge, test the adequacy of the knowledge base under development, receive explanations of theories contained in the knowledge base, and detect and repair errors in content.

46. Autonomous knowledge acquisition
[Diagram: the autonomous learning agent's learning engine builds the knowledge base from a data base; the problem solving engine produces results.]
The learning engine builds the knowledge base from a data base of facts or examples. In general, the learned knowledge consists of concepts, classification rules, or decision trees. The problem solving engine is a simple one-step inference engine that classifies a new instance as being, or not, an example of a learned concept.
Defining the data base of examples is a significant challenge. Current practical applications are limited to classification tasks.
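To make "building the knowledge base from a data base of examples" concrete, here is a minimal sketch using scikit-learn (one common off-the-shelf learner, chosen here purely for illustration, not prescribed by the slides); the toy medical data set is invented.

```python
# Minimal sketch of autonomous knowledge acquisition as supervised learning:
# a decision tree is induced from a data base of labeled examples, then used
# as a one-step classifier for new instances. The toy data is invented.

from sklearn.tree import DecisionTreeClassifier, export_text

# Data base of examples: features [has_fever, has_cough]; label 1 = flu, 0 = not flu
examples = [[1, 1], [1, 0], [0, 1], [0, 0]]
labels = [1, 0, 0, 0]

tree = DecisionTreeClassifier().fit(examples, labels)        # the "learning engine"
print(export_text(tree, feature_names=["has_fever", "has_cough"]))

# The "problem solving engine" is a one-step classification of a new instance.
print(tree.predict([[1, 1]]))   # -> [1]
```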

47. Autonomous knowledge acquisition (cont.)
[Diagram: a text understanding engine reads text and produces data; the learning engine builds the knowledge base from that data; the problem solving engine produces results.]
The knowledge base is built by the learning engine from data provided by a text understanding system able to understand textbooks. In general, the data consists of facts acquired from the books. This is not yet a practical approach, even for simpler agents.

  48. Knowledge Acquisition for agent development Approaches to knowledge acquisition Disciple approach to agent development Demo: Agent teaching and learning Research vision on agents development

49. Disciple approach to agent development
Research problem: Elaborate a theory, methodology, and family of systems for the development of knowledge-based agents by subject matter experts, with limited assistance from knowledge engineers.
Approach: Develop a learning agent that can be taught directly by a subject matter expert while solving problems in cooperation. The expert teaches the agent how to perform various tasks in a way that resembles how the expert would teach a person. The agent learns from the expert, building, verifying, and improving its knowledge base.
Key elements:
1. Mixed-initiative problem solving
2. Teaching and learning
3. Multistrategy learning
[Diagram: the Disciple agent architecture, with an Interface, a Problem Solving module, a Learning module, and a knowledge base of Ontology + Rules.]

50. Sample Disciple agents
Disciple-WA (1997-1998): Estimates the best plan for working around damage to a transportation infrastructure, such as a damaged bridge or road. Demonstrated that a knowledge engineer can use Disciple to rapidly build and update a knowledge base, capturing knowledge from military engineering manuals and a set of sample solutions provided by a subject matter expert.
Disciple-COA (1998-1999): Identifies strengths and weaknesses in a Course of Action, based on the principles of war and the tenets of Army operations. Demonstrated the generality of its learning methods, which used an object ontology created by another group (TFS/Cycorp). Demonstrated that a knowledge engineer and a subject matter expert can jointly teach Disciple.
