
Template knowledge models




  1. Template knowledge models: reusing knowledge model elements

  2. Lessons • Knowledge models partially reused in new applications • Type of task = main guide for reuse • Catalog of task templates • small set in this book • see also other repositories

  3. The need for reuse • prevent "re-inventing the wheel" • cost/time efficient • decreases complexity • quality assurance

  4. Task template • reusable combination of model elements • (provisional) inference structure • typical control structure • typical domain schema from task point-of-view • specific for a task type • supports top-down knowledge modeling

  5. A typology of tasks • range of task types is limited • advantage of KE compared to general SE • background: cognitive science/psychology • several task typologies have been proposed in the literature • typology is based on the notion of “system”

  6. The term “system” • abstract term for object to which a task is applied. • in technical diagnosis: artifact or device being diagnosed • in elevator configuration: elevator to be designed • does not need to exist (yet)

  7. Analytic versus synthetic tasks • analytic tasks • system pre-exists • it is typically not completely "known" • input: some data about the system • output: some characterization of the system • synthetic tasks • system does not yet exist • input: requirements about system to be constructed • output: constructed system description

  8. Task hierarchy

  9. Structure of template description in catalog • General characterization • typical features of a task • Default method • roles, sub-functions, control structure, inference structure • Typical variations • frequently occurring refinements/changes • Typical domain-knowledge schema • assumptions about underlying domain-knowledge structure

  10. Classification • establish correct class for an object • object should be available for inspection • "natural" objects • examples: rock classification, apple classification • terminology: object, class, attribute, feature • one of the simplest analytic tasks; many methods • other analytic tasks: sometimes reduced to a classification problem, especially diagnosis

  11. Classification: pruning method • generate all classes to which the object may belong • specify an object attribute • obtain the value of the attribute • remove all classes that are inconsistent with this value

  12. Classification: inference structure

  13. Classification: method control

    while new-solution generate(object -> candidate) do
        candidate-classes := candidate union candidate-classes;
    while new-solution specify(candidate-classes -> attribute) and
          length candidate-classes > 1 do
        obtain(attribute -> new-feature);
        current-feature-set := new-feature union current-feature-set;
        for-each candidate in candidate-classes do
            match(candidate + current-feature-set -> truth-value);
            if truth-value = false
            then candidate-classes := candidate-classes subtract candidate;

  14. Example • Generate Object • Student’s Grade • Specify Class • A, B+, B, C+, C, D+, D, F • Obtain Attribute • {Mid-Term Exam}, {Final Exam}, {Attendance} • Match Feature and Class • {Attendance} < 5 = [F] • {Mid-Term} + {Final Exam} > 80 = [A] • Truth Value • Student{Mid-Term Exam, Final Exam, Attendance} = [Class]
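
To make the control structure concrete, here is a minimal Python sketch of the pruning method, instantiated with the student-grade example from slide 14. The attribute order, the default values, and the two match rules are assumptions for illustration; the template itself prescribes only the generate/specify/obtain/match loop.

```python
# Sketch of the classification pruning method (slides 11-14).
# The grade domain, attribute order, and match rules are illustrative
# assumptions; the template prescribes only the control structure.

CLASSES = ["A", "B+", "B", "C+", "C", "D+", "D", "F"]

def generate(case):
    """generate: all classes the object may belong to."""
    return set(CLASSES)

def specify(candidates, features):
    """specify: pick the next attribute to ask about (fixed order here)."""
    for attribute in ("attendance", "midterm", "final"):
        if attribute not in features:
            return attribute
    return None                                    # no observables left

def obtain(attribute, case):
    """obtain: look up the attribute value (here from a stored case)."""
    return case[attribute]

def match(candidate, features):
    """match: is the candidate consistent with the features so far?
    Rules follow slide 14: attendance < 5 forces F; midterm+final > 80
    forces A. The rules are partial, as slide 14's are."""
    if features.get("attendance", 5) < 5:
        return candidate == "F"
    if "midterm" in features and "final" in features:
        if features["midterm"] + features["final"] > 80:
            return candidate == "A"
    return True                    # nothing rules this candidate out yet

def classify(case):
    candidates, features = generate(case), {}
    while len(candidates) > 1:
        attribute = specify(candidates, features)
        if attribute is None:
            break
        features[attribute] = obtain(attribute, case)
        # prune every candidate inconsistent with the new feature set
        candidates = {c for c in candidates if match(c, features)}
    return candidates

print(classify({"attendance": 8, "midterm": 45, "final": 40}))  # {'A'}
```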

  15. Classification: inference structure

  16. Classification: method variations • Limited candidate generation • Different forms of attribute selection • decision tree • information theory • user control • Hierarchical search through class structure
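
Slide 16 lists information theory as one basis for attribute selection. One common reading, sketched below under assumed data, is to ask next for the attribute whose predicted values split the remaining candidate classes most evenly; the toy rock-classification values are invented for illustration.

```python
# Illustrative attribute selection via information theory (slide 16):
# prefer the attribute whose induced partition of the candidate set
# has the highest entropy, i.e. discriminates best. The per-class
# predictions below are made up for the rock example of slide 18.
import math
from collections import Counter

def partition_entropy(candidates, predicts, attribute):
    """Entropy of the partition of candidates by predicted value."""
    counts = Counter(predicts[c][attribute] for c in candidates)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total)
                for n in counts.values())

def select_attribute(candidates, predicts, asked):
    unasked = [a for a in next(iter(predicts.values())) if a not in asked]
    return max(unasked,
               key=lambda a: partition_entropy(candidates, predicts, a),
               default=None)

predicts = {                      # assumed class -> attribute predictions
    "granite":  {"grain": "coarse", "colour": "light"},
    "basalt":   {"grain": "fine",   "colour": "dark"},
    "obsidian": {"grain": "glassy", "colour": "dark"},
}
print(select_attribute(set(predicts), predicts, asked=set()))  # 'grain'
```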

  17. Classification: domain schema

  18. Rock classification

  19. Nested classification

  20. Rock classification prototype

  21. Assessment • find decision category for a case based on domain-specific norms. • typical domains: financial applications (loan application), community service • terminology: case, decision, norms • some similarities with monitoring • differences: • timing: assessment is more static • different output: decision versus discrepancy

  22. Assessment: abstract & match method • Abstract the case data • Specify the norms applicable to the case • e.g. “rent-fits-income”, “correct-household-size” • Select a single norm • Compute a truth value for the norm with respect to the case • See whether this leads to a decision • Repeat norm selection and evaluation until a decision is reached

  23. Assessment: inference structure (diagram: the abstract, specify, select, evaluate, and match inferences connect the case, abstracted case, norms, norm value, and decision roles)

  24. Assessment: method control

    while new-solution abstract(case-description -> abstracted-case) do
        case-description := abstracted-case;
    end while
    specify(abstracted-case -> norms);
    repeat
        select(norms -> norm);
        evaluate(abstracted-case + norm -> norm-value);
        evaluation-results := norm-value union evaluation-results;
    until has-solution match(evaluation-results -> decision);
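
As a minimal Python sketch of abstract & match: the norm names come from slide 22, but the case attributes, the rent-ratio abstraction, the thresholds, and the decision rule are illustrative assumptions.

```python
# Sketch of the abstract & match assessment method (slides 22-24).
# Norm names are from slide 22; all rules and thresholds are assumed.

def abstract(case):
    """abstract: derive a new case datum from raw data, if possible."""
    if "rent-ratio" not in case:
        return {**case, "rent-ratio": case["rent"] / case["income"]}
    return None                              # no further abstractions

def specify(case):
    """specify: the norms applicable to this case."""
    return ["rent-fits-income", "correct-household-size"]

def evaluate(case, norm):
    """evaluate: truth value of one norm for the case (assumed rules)."""
    if norm == "rent-fits-income":
        return case["rent-ratio"] <= 0.4               # assumed threshold
    return case["household"] <= case["rooms"] + 1      # household size

def match(results, norms):
    """match: do the results so far already determine a decision?"""
    if False in results.values():
        return "reject"                  # any violated norm is decisive
    if len(results) == len(norms):
        return "accept"                  # all norms hold
    return None                          # no decision yet

def assess(case):
    while (abstracted := abstract(case)) is not None:
        case = abstracted
    norms, results = specify(case), {}
    for norm in norms:                   # select + evaluate loop
        results[norm] = evaluate(case, norm)
        if (decision := match(results, norms)) is not None:
            return decision

print(assess({"rent": 600, "income": 2000, "household": 3, "rooms": 2}))
# -> 'accept'
```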

  25. Assessment control: UML notation (activity diagram: abstract loops while [more abstractions]; on [no more abstractions], specify norms; then select norm, evaluate norm, and match decision, looping on [match fails: no decision] and ending on [match succeeds: decision found])

  26. Assessment: method variations • norms might be case-specific • cf. housing application • case abstraction may not be needed • knowledge-intensive norm selection • random, heuristic, statistical • can be key to efficiency • sometimes dictated by human expertise • only acceptable if done in a way understandable to experts

  27. Assessment: domain schema

  28. Claim handling for unemployment benefits

  29. Decision rules for claim handling

  30. Diagnosis • find fault that causes system to malfunction • example: diagnosis of a copier • terminology: • complaint/symptom, hypothesis, differential, finding(s)/evidence, fault • nature of fault varies • state, chain, component • should have some model of system behavior • default method: simple causal model • sometimes reduced to classification task • direct associations between symptoms and faults • automation feasible in technical domains

  31. Diagnosis: causal covering method • Find candidate causes (hypotheses) for the complaint using a causal network • Select a hypothesis • Specify an observable for this hypothesis and obtain its value • Verify each hypothesis to see whether it is consistent with the new finding • Continue this process until a single hypothesis is left or no more observables are available

  32. Diagnosis: inference structure

  33. Diagnosis: method control

    while new-solution cover(complaint -> hypothesis) do
        differential := hypothesis add differential;
    end while
    repeat
        select(differential -> hypothesis);
        specify(hypothesis -> observable);
        obtain(observable -> finding);
        evidence := finding add evidence;
        for-each hypothesis in differential do
            verify(hypothesis + evidence -> result);
            if result = false
            then differential := differential subtract hypothesis;
    until length differential =< 1 or "no observables left";
    faults := hypothesis;
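
A minimal Python sketch of this loop, assuming a tiny invented causal model of a copier (slide 30's example domain); the network, the predicted findings, and the stored observations are not from the book.

```python
# Sketch of the causal covering method (slides 31-33) over an assumed
# two-level causal network for a copier. All domain data is invented.

CAUSES = {                       # cause -> complaints it can produce
    "empty-toner": {"pale-copy"},
    "dirty-drum":  {"pale-copy", "streaks"},
    "worn-blade":  {"streaks"},
}
PREDICTS = {                     # assumed hypothesis -> predicted findings
    "empty-toner": {"toner-level": "low"},
    "dirty-drum":  {"toner-level": "ok", "drum-surface": "dirty"},
    "worn-blade":  {"toner-level": "ok", "drum-surface": "clean"},
}
OBSERVATIONS = {"toner-level": "ok", "drum-surface": "dirty"}  # the world

def cover(complaint):
    """cover: all causes that can produce the complaint."""
    return {c for c, effects in CAUSES.items() if complaint in effects}

def specify(hypothesis, evidence):
    """specify: a relevant observable whose value is not yet known."""
    return next((o for o in PREDICTS[hypothesis] if o not in evidence),
                None)

def verify(hypothesis, evidence):
    """verify: hypothesis is consistent with every finding so far."""
    return all(PREDICTS[hypothesis].get(o, v) == v
               for o, v in evidence.items())

def diagnose(complaint):
    differential, evidence = cover(complaint), {}
    while len(differential) > 1:
        hypothesis = next(iter(differential))            # select
        observable = specify(hypothesis, evidence)
        if observable is None:                           # none left
            break
        evidence[observable] = OBSERVATIONS[observable]  # obtain
        differential = {h for h in differential if verify(h, evidence)}
    return differential

print(diagnose("pale-copy"))  # -> {'dirty-drum'}
```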

  34. Diagnosis: method variations • inclusion of abstractions • simulation methods • see literature on model-based diagnosis • library of Benjamins

  35. Diagnosis: domain schema

  36. Monitoring • analyze ongoing process to find out whether it behaves according to expectations • terminology: • parameter, norm, discrepancy, historical data • main features: • dynamic nature of the system • cyclic task execution • output "just" discrepancy => no explanation • often: coupling monitoring and diagnosis • output monitoring is input diagnosis

  37. Monitoring: data-driven method • Starts when new findings are received • For a finding, a parameter and a norm value are specified • Comparing the finding with the norm generates a difference description • This difference is classified as a discrepancy using data from previous monitoring cycles

  38. Monitoring: inference structure

  39. Monitoring: method control

    receive(new-finding);
    select(new-finding -> parameter);
    specify(parameter -> norm);
    compare(norm + finding -> difference);
    classify(difference + historical-data -> discrepancy);
    historical-data := finding add historical-data;
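
A minimal Python sketch of one monitoring cycle; the parameter norms and the "three consecutive above-norm readings" classification rule are assumptions standing in for real domain knowledge.

```python
# Sketch of the data-driven monitoring cycle (slides 37-39), run once
# per incoming finding. Norm values and the 3-readings rule are assumed.

NORMS = {"temperature": 80.0, "pressure": 2.5}    # assumed norm values

historical_data = []                               # past findings

def monitor(new_finding):
    parameter, value = new_finding                 # select: the parameter
    norm = NORMS[parameter]                        # specify: its norm
    difference = value - norm                      # compare
    # classify: only a sustained deviation counts as a discrepancy;
    # here (an assumed rule) three consecutive above-norm readings.
    recent = [f for f in historical_data[-2:] if f[0] == parameter]
    sustained = (difference > 0 and len(recent) == 2
                 and all(v > NORMS[p] for p, v in recent))
    historical_data.append(new_finding)            # update history
    if sustained:
        return f"{parameter} discrepancy (+{difference:.1f})"
    return None

for finding in [("temperature", 82.0), ("temperature", 85.0),
                ("temperature", 87.5)]:
    print(monitor(finding))
# None, None, 'temperature discrepancy (+7.5)'
```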

  40. Monitoring: method variations • model-driven monitoring • system has the initiative • typically executed at regular points in time • example: software project management • classification function treated as a task in its own right • apply classification method • add data abstraction inference

  41. Prediction • analytic task with some synthetic features • analyzes current system behavior to construct a description of the system state at a future point in time • example: weather forecasting • often a sub-task in diagnosis • also found in knowledge-intensive modules of teaching systems, e.g. for physics • inverse: retrodiction (e.g. the big-bang theory)

  42. Synthesis • Given a set of requirements, construct a system description that fulfills these requirements

  43. “Ideal” synthesis method • Operationalize requirements • preferences and constraints • Generate all possible system structures • Select sub-set of valid system structures • obey constraints • Order valid system structures • based on preferences
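
The method is "ideal" precisely because step 2 enumerates every possible structure, which explodes combinatorially; on a toy space it is nevertheless easy to write down. The PC components, budget constraint, and price preference below are invented for illustration.

```python
# Sketch of the "ideal" synthesis method (slide 43) on a toy
# PC-configuration space. Components, constraint, and preference
# are illustrative assumptions.
from itertools import product

CPUS = [("cpu-basic", 100), ("cpu-fast", 300)]
RAMS = [("ram-8GB", 40), ("ram-32GB", 120)]

def constraint(system):           # operationalized hard requirement
    return sum(price for _, price in system) <= 350

def preference(system):           # operationalized soft requirement
    return sum(price for _, price in system)

designs = list(product(CPUS, RAMS))            # generate all structures
valid = [d for d in designs if constraint(d)]  # select: obey constraints
ranked = sorted(valid, key=preference)         # order: by preference
for d in ranked:
    print([name for name, _ in d])
```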

  44. Synthesis: inference structure

  45. Design • synthetic task • system to be constructed is physical artifact • example: design of a car • can include creative design of components • creative design is too hard a nut to crack for current knowledge technology • sub-type of design which excludes creative design => configuration design

  46. Configuration design • given predefined components, find assembly that satisfies requirements + obeys constraints • example: configuration of an elevator; or PC • terminology: component, parameter, constraint, preference, requirement (hard & soft) • form of design that is well suited for automation • computationally demanding

  47. Elevator configuration: knowledge base reuse

  48. Configuration: propose & revise method • Simple basic loop: • Propose a design extension • Verify the new design • If verification fails, revise the design • Specific domain-knowledge requirements • revise strategies • Method can also be used for other synthetic tasks • assignment with backtracking • skeletal planning

  49. Configuration: method decomposition

  50. Configuration: method control

    operationalize(requirements -> hard-reqs + soft-reqs);
    specify(requirements -> skeletal-design);
    while new-solution propose(skeletal-design + design + soft-reqs -> extension) do
        design := extension union design;
        verify(design + hard-reqs -> truth-value + violation);
        if truth-value = false
        then critique(violation + design -> action-list);
             repeat
                 select(action-list -> action);
                 modify(design + action -> design);
                 verify(design + hard-reqs -> truth-value + violation);
             until truth-value = true;
    end while
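
A minimal Python sketch of the propose/verify/revise loop, cast as parameter assignment; the parameters, propose rules, constraint, and fix actions are invented stand-ins, not the elevator knowledge base from slide 47.

```python
# Sketch of propose & revise (slides 48-50) as parameter assignment.
# Parameters, propose rules, the constraint, and the fixes are assumed.

PARAMETERS = ["cable-size", "motor-power"]

def propose(design):
    """propose: extend the design with one more parameter value."""
    for p in PARAMETERS:
        if p not in design:
            # crude defaults standing in for propose knowledge
            value = 10 if p == "cable-size" else design["cable-size"] * 2
            return p, value
    return None                              # design is complete

def verify(design):
    """verify: return a violated constraint, or None."""
    if design.get("motor-power", 25) < 25:   # assumed hard requirement
        return "motor-too-weak"
    return None

FIXES = {"motor-too-weak": [("motor-power", 25)]}   # revise strategies

def configure():
    design = {}
    while (extension := propose(design)) is not None:
        design[extension[0]] = extension[1]
        while (violation := verify(design)) is not None:
            # critique names the violation; select + modify apply a fix
            # (a real revise strategy would search among alternatives)
            parameter, value = FIXES[violation].pop(0)
            design[parameter] = value
    return design

print(configure())  # -> {'cable-size': 10, 'motor-power': 25}
```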
