
Principles of Case-Based Reasoning


Presentation Transcript


  1. Principles of Case-Based Reasoning Gilles Fouque Postdoctoral Fellow UCLA Computer Science Department CoBase Research Group

  2. Plan • Case-Based Reasoning - an overview • An ideal CBR architecture • CBR important steps

  3. CBR: An Overview

  4. Learning Process

  5. Definitions Riesbeck, C.K. and Schank, R.C.: A CBR system solves new problems by adapting solutions that were used to solve old problems. Example: X describes how his wife would never cook his steak as rare as he liked it. When X told this to Y, Y was reminded of when he tried to get his hair cut and the barber would not cut it as short as he wanted. CBR motto (Hammond, K.): If it worked, use it again; if it works, don’t worry about it; if it did not work, remember not to do it again; if it does not work, fix it.

  6. A Short History • R. Schank – 1982: Yale • Dynamic Memory, Memory Organization Packets (scripts) • J. Kolodner – 1983: Georgia Tech • Memory Organization and Retrieval

  7. Analogy and CBR

  8. CBR Classification

  9. An Ideal CBR Architecture

  10. CBR Important Steps • Retrieving: case representation, indexing, similarity metrics • Adaptation: case transformation • Testing: evaluation • Learning: utility problem, forgetting

  11. Case Retrieving • Case Representation: • Information to represent in a case • Memory organization to use • Indexing: • Selection of indexing features • Search problem • Similarity Metrics: • Retrieval of relevant cases • Partial matching based on similarities/dissimilarities (classical distance, knowledge-based, or inductive hierarchies)
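The retrieval step above can be sketched in a few lines. This is a minimal illustration, not the metric used in any particular CBR system: cases are attribute/value dictionaries, similarity is the fraction of shared attributes whose values agree, and retrieval returns the stored cases above a threshold. All names (`similarity`, `retrieve`, the threshold value) are illustrative assumptions.

```python
def similarity(case_a: dict, case_b: dict) -> float:
    """Fraction of attributes present in both cases whose values match."""
    shared = set(case_a) & set(case_b)
    if not shared:
        return 0.0
    matches = sum(1 for k in shared if case_a[k] == case_b[k])
    return matches / len(shared)

def retrieve(query: dict, memory: list, threshold: float = 0.5) -> list:
    """Return stored cases similar to the query, best match first."""
    scored = [(similarity(query, case), case) for case in memory]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [case for score, case in scored if score >= threshold]
```

A knowledge-based metric would replace the equality test with domain-specific comparisons (e.g. treating nearby airports as partially similar), which is where the "knowledge-based / inductive hierarchies" variants come in.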

  12. Adaptation When the retrieved case is not a perfect match • Case Transformation: • Domain-specific strategies/CBR

  13. Testing • Evaluation: • Appropriateness of the solution (simulation, user feedback) • Repair: • Domain-specific repair strategies • Identification of the failure cause • Prevent the problem from arising again (learning from failure)

  14. Learning A CBR system becomes more efficient and competent by increasing its memory of old solutions and adapting them. • Case memory modification: • Generalization/Accumulation/Indexing • New failure strategies • New repair strategies • The utility problem: • Anticipating the usefulness of a newly stored case • What to forget?
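One common answer to the utility problem is to cap the case memory and forget the least useful cases. The sketch below assumes a hypothetical `usefulness` scoring function (e.g. how often a case contributed to an accepted answer); the slides do not prescribe any particular policy.

```python
def prune(memory: list, usefulness, capacity: int) -> list:
    """Keep only the `capacity` most useful cases, forgetting the rest."""
    return sorted(memory, key=usefulness, reverse=True)[:capacity]
```

Usage: `prune(cases, usefulness=lambda c: c["hits"], capacity=100)` keeps the 100 most-used cases and drops everything else.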

  15. Some CBR Applications • Recipes: CHEF • Patent law: HYPO • Medical diagnosis: PROTOS • Catering: JULIA • Software reuse: CAESAR

  16. Conclusion • Rely on experience rather than theory, • Learning from success and failure, • Doing the best it can.

  17. Bibliography • Riesbeck, C.K. and Schank, R.C. 1989. Inside Case-Based Reasoning. Lawrence Erlbaum Associates, Hillsdale, NJ. • Kolodner, J.L. 1980. Retrieval and Organization Strategies in Conceptual Memory: A Computer Model. Ph.D. dissertation, Yale University. • Hammond, K.J. 1989. Viewing Planning as a Memory Task. Academic Press, Perspectives in AI. • Bareiss, E.R. 1989. Exemplar-Based Knowledge Acquisition: A Unified Approach to Concept Representation, Classification, and Learning. Academic Press, Perspectives in AI. • Machine Learning, Vol. 10, No. 3.

  18. A Case-Based Reasoning Approach to Associative Query Answering Gilles Fouque Wesley W. Chu Henrick Yau

  19. Associative Query Answering • Provides information useful to but not explicitly asked for by the user Example: Query: “What is the flight schedule of UA1002?” Answer: Departure 9:00am LAX Arrival 7:00pm Detroit Metro Possible Associations: 1. Stopover at Chicago 2. Stopover time is 1 hr 3. Dinner is provided on the flight 4. Price of ticket is $300 5. No in-flight movies 6. Flight will probably be delayed because of snowy conditions in Chicago. Association depends on User Type and Query Context User Type: 1. A passenger [all] 2. A person whose friend is a passenger [6] Query Context: 1. A person who has bought the ticket [not 3] 2. A person who is going to buy the ticket [3]

  20. A Case-Based Reasoning Approach CBR: Case-Based Reasoning systems store information about situations in their memory. As new situations arise, similar situations are searched out to help solve these problems. CBR systems evolve over time: • learn from their own mistakes and failures • learn from their own successes • acquire more knowledge in the process Goodness of a CBR system depends on: 1. How much experience/knowledge it has 2. Its ability to understand new cases in terms of old 3. Its ability to respond quickly 4. Its ability to incorporate user feedback into the system 5. Its ability to acquire new experience

  21. Associative Query Answering in CoBase Idea: Store past user queries in Case Memory as knowledge. When a new query comes in, it is compared to the past cases. Inferences are made from queries similar to the new query. Example: Past Query: Select departure_time, arrival_time, fare From Flights Where origin = “LA” and destination = “Detroit” and date = 12/03/93 New Query: Select departure_time, arrival_time From Flights Where origin = “Chicago” and destination = “San Francisco” and date = 06/06/94 -> A possible association is airfare.
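The inference in the example above can be sketched very simply: if a similar past query selected attributes the new query did not ask for, those extra attributes become association candidates. Treating each query's select list as a set of attribute names (an illustrative simplification):

```python
def associated_attributes(new_select: set, past_select: set) -> set:
    """Attributes a similar past query asked for that the new query did not."""
    return past_select - new_select
```

In the flight example, the past query's `{departure_time, arrival_time, fare}` minus the new query's `{departure_time, arrival_time}` leaves `fare` as the suggested association.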

  22. Associative Query Answering Inclusion of additional information not explicitly asked for by the user but relevant to his/her query.

  23. Associative Query Answering

  24. Which Associated Information to be Used?

  25. Association Control • Search for Associative Links • Termination

  26. Methodology for Association Control • Case-Based Reasoning (CBR) is used • Cases (past queries) are stored in the Case Memory and similar cases are linked together • Weights are assigned to the links to represent the usefulness of association • Usefulness of association depends on query context and user type • CBR uses user’s feedback (success or failure) to update the link weight between a case and its associations
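The slides state that link weights are updated from user feedback but do not give the formula. One plausible rule (an assumption, not the CoBase implementation) moves the weight toward 1 on acceptance and toward 0 on rejection, with a rate parameter controlling how fast weights adapt:

```python
def update_weight(weight: float, accepted: bool, rate: float = 0.1) -> float:
    """Nudge an association-link weight toward 1 (success) or 0 (failure)."""
    target = 1.0 if accepted else 0.0
    return weight + rate * (target - weight)
```

This keeps weights in [0, 1] and lets frequently confirmed associations dominate the ranking over time.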

  27. Case Memory

  28. Selection and Ranking of Attributes To find similar cases to the user query Q. • CBR searches the case memory for all association links containing the set of attributes in Q • CBR evaluates the user features of the cases based on the similarity of the cases with the user query and their weights in the association link • Weight of association link depends on query context and user profile The selected attributes are appended to the user query to derive the associated information

  29. Schema for Reducing Search Complexity in Case Memory • Based on the attributes in the input query, a list of the association links that are useful for association is generated from the Case Memory • Since only cases similar to the user query are to be extracted, if the cases are indexed on the attributes that they contain, then the search time complexity: • Depends only on the number of attributes in the user query • Is independent of the number of stored cases
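The indexing schema described above amounts to an inverted index from attribute to case ids: finding candidate cases touches one index bucket per query attribute and never scans the whole case memory. A minimal sketch (class and method names are illustrative):

```python
from collections import defaultdict

class CaseMemory:
    def __init__(self):
        self.by_attribute = defaultdict(set)  # attribute -> ids of cases using it
        self.cases = {}                       # case id -> attribute set

    def add(self, case_id, attributes):
        self.cases[case_id] = set(attributes)
        for attr in attributes:
            self.by_attribute[attr].add(case_id)

    def candidates(self, query_attributes):
        """Ids of cases sharing at least one attribute with the query;
        cost scales with the number of query attributes, not stored cases."""
        ids = set()
        for attr in query_attributes:
            ids |= self.by_attribute.get(attr, set())
        return ids
```

With hashing on the attribute names (as the implementation slides mention), each bucket lookup is constant time on average.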

  30. Algorithm for Associative Query Answering

  31. Updating Association Link Weights (Learning) • Users select relevant attributes for association. • Based on the attribute selected, CBR updates the weights of the association links.

  32. Learning Three forms of learning: • Update weights on association links based on user feedback • Addition of a new case into the case memory • Modification of an existing case to reflect the newly learned experience Idea: When a new query comes in, the CBR either adds it as a new case or it modifies existing cases to incorporate the new experience. Criterion: The similarity of the new case with the old cases. Compare the new case with the case most similar to it in the case memory. If they are: • Exactly the same – Not much to be done • Very similar to each other – Modify the existing case to incorporate new features • Not very similar to each other – Add the new case into case memory
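The three-way criterion above can be sketched by treating cases as attribute sets and using Jaccard overlap as the similarity measure; both the metric and the 0.5 threshold are assumptions for illustration, since the slides fix neither.

```python
def overlap(a: set, b: set) -> float:
    """Jaccard similarity of two attribute sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def incorporate(new_case: set, memory: list) -> str:
    """Add, merge, or ignore a new case, per the three-way criterion."""
    if not memory:
        memory.append(set(new_case))
        return "added"
    best = max(memory, key=lambda case: overlap(new_case, case))
    score = overlap(new_case, best)
    if score == 1.0:
        return "unchanged"            # exactly the same case
    if score >= 0.5:
        best |= new_case              # very similar: merge in the new features
        return "modified"
    memory.append(set(new_case))      # not very similar: store as a new case
    return "added"
```

This mirrors the two examples on the next slides: a query adding `fare` to a near-identical past query modifies the existing case, while a query over different attributes becomes a new case.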

  33. Acquiring Experience From New Queries Example: Past Query: Select departure_time, arrival_time From Flights Where origin = “Chicago” and destination = “San Francisco” and date = 12/03/93 New Query: Select departure_time, arrival_time, fare From Flights Where origin = “LA” and destination = “Detroit” and date = 06/06/94 => The attribute Flights.fare should be added into the existing case.

  34. Acquiring Experience From New Queries (cont’d) Example: New Query: Select destination From Flights Where airline = “United” and origin = “LA” => A new case should be added.

  35. Incremental Query Answering/Reusable Queries Idea: The methodology of incremental query answering or reusable queries helps the CBR identify query dependencies Example: First Query: Select flight_number (marked FN) From Flights Where origin = “LA” and destination = “Detroit” and date = 06/06/94 Second Query: Select fare From Flights Where flight_number = $(FN) => It is apparent that the two queries are closely related to each other and future associations can be made.
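The dependency in the example can be spotted syntactically: a later query that substitutes `$(FN)` reuses the result a previous query marked as `FN`. The marker syntax comes from the slide; the detection functions below are an illustrative assumption about how such dependencies could be extracted from query text.

```python
import re

def marked_results(query: str) -> set:
    """Names this query marks for reuse, e.g. 'Select flight_number marked FN'."""
    return set(re.findall(r"marked\s+(\w+)", query))

def reused_results(query: str) -> set:
    """Names of earlier marked results this query substitutes, e.g. '$(FN)'."""
    return set(re.findall(r"\$\((\w+)\)", query))

def depends_on(second: str, first: str) -> bool:
    """True if `second` reuses a result that `first` marked."""
    return bool(reused_results(second) & marked_results(first))
```

Detected dependencies of this kind are exactly the raw material for new association links between the two queries' attributes.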

  36. Learning

  37. Case-Based Approach to Associative Query Answering

  38. Implementation • Step 1 – Selection of similar cases • Indexing schema is used to select association links included in the user query. • The set of cases similar to the user query is selected from the association links. • Step 2 – Selection and ranking of associated attributes • Selected cases for association are ranked by their usefulness. • Step 3 – Updating of association weights • The association weights are updated based on user feedback.

  39. User Interface • A set of relevant associative attributes are presented to the user. • The attributes are ranked according to the association usefulness of the attribute based on the user profile and query context.

  40. Candidate Attributes for Association

  41. Revised (After User Feedback) Candidate Attributes for Association

  42. Characteristics of the CoBase System 1. Case Memory • A history of past user queries is stored 2. Similarity Computation • It is both attribute-based and value-based • Value-based method makes use of TAHs 3. Inter-relation of Cases • Similar cases within the case memory are interconnected by association links. • Weights are assigned to association links to indicate their usefulness for association. 4. User feedback • The system ranks all associations it has computed and returns the top candidates to the user. 5. Adaptation • Based on user feedback, association weights are increased or decreased.

  43. Characteristics of the CoBase System (cont’d) Goodness of a CBR system depends on: • How much experience/knowledge it has: - Initial case history of 100+ queries • Its ability to understand new cases in terms of old - Attribute-based and value-based similarity ranking - Currently, joint attributes not considered • Its ability to respond quickly - Hashing is used to store features of cases - Quick retrieval: 300+ cases => around 2-3s 1500+ cases => around 6-7s - Scalability • Its ability to incorporate user feedback into the system - Updating formulae available to incorporate user feedback into the system • Its ability to acquire new experience - An important feature which is still lacking

  44. Conclusions • Association control is based on experience acquisition, query context, and user profile. • Evolution of association is managed by CBR, which adapts its knowledge from user feedback. • A prototype has been constructed on top of CoBase, showing the approach is feasible and scalable. • Further investigation areas: • Behavior of the learning algorithm that uses user feedback to modify association weights.
