
Artificial Intelligence and Lisp #4

Explore the uses of decision trees in artificial intelligence and Lisp, including making action choices, classifying situations, and identifying effects and causes. Detailed examples and evaluations are provided.



  1. Artificial Intelligence and Lisp #4: Decision Trees; Causal Nets (beginning); Lab Assignment 2b

  2. Uses of Decision Trees
  • Making a choice of action (final or tentative)
  • Classifying a given situation
  • Identifying the likely effects of a given situation or action
  • (Using the inverse operation) Identifying possible causes of a given situation

  3. A simple example
  [Figure: a decision tree. The root node tests a, its branches lead to nodes testing b, and their branches lead to four nodes testing c; the eight leaves carry the outcomes red, green, white, white, red, green, blue, blue.]

  4. A simple example
  [Figure: the same tree, annotated: the node labels a, b, c are terms (features); the leaf labels red, green, white, etc. are outcomes.]

  5. Evaluation of the decision tree
  [Figure: the same tree evaluated for a = true, b = false, c = true; the evaluation follows one path from the root down to a single outcome (here blue).]
  There will be five variations on this simple theme.

  6. Notation for the decision tree
  {[: a true][: b false][: c true]}
  [a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? green blue]]]
  [Figure: the same tree drawn as before, alongside this bracket notation.]

  7. Notation for the decision tree
  {[: a true][: b false][: c true]}
  [a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? green blue]]]
  The range of a is <true false>, and similarly for b and c. The range ordering may be specified explicitly, as here, or implicitly, e.g. if the range is a finite set of integers, but it must be specified somehow. Different terms may have different ranges. It is not necessary that decision elements on the same level use the same term. Continuous ranges are also possible (but little covered here).
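  To make this concrete, a minimal Common Lisp sketch of plain evaluation, assuming the bracket notation is encoded as nested lists, e.g. [a? X Y] as (a X Y), with outcome symbols as leaves; the names eval-tree and *ranges* are illustrative, not part of the slides.

    (defparameter *ranges* '((a true false) (b true false) (c true false)))

    (defun eval-tree (tree assignment)
      "At each node, follow the branch whose position in the term's
    range matches the term's assigned value; a leaf is the outcome."
      (if (atom tree)
          tree
          (let* ((term (first tree))
                 (value (cdr (assoc term assignment)))
                 (pos (position value (cdr (assoc term *ranges*)))))
            (eval-tree (nth pos (rest tree)) assignment))))

    (eval-tree '(a (b (c red green) (c blue white))
                   (b (c white red) (c green blue)))
               '((a . true) (b . false) (c . true)))
    ;; => BLUE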

  8. Probabilities in term assignments
  {[: a <0.65 0.35>] [: b <0.90 0.10>] [: c true]} -- the assignment for c is the same as <1.00 0.00>
  The range of a is <true false>, and similarly for b and c.
  [a? [b? [c? red green] [c? green white]] [b? [c? white red] [c? green blue]]]
  Evaluation: 0.65 * 0.90 red, 0.65 * 0.10 green, 0.35 * 0.90 white, 0.35 * 0.10 green
  Result: 0.585 red, 0.315 white, 0.100 green

  9. Expected outcome
  {[: a <0.65 0.35>] [: b <0.90 0.10>] [: c true]} -- the assignment for c is the same as <1.00 0.00>
  The range of a is <true false>, and similarly for b and c.
  [a? [b? [c? red green] [c? green white]] [b? [c? white red] [c? green blue]]]
  Evaluation: 0.65 * 0.90 red, 0.65 * 0.10 green, 0.35 * 0.90 white, 0.35 * 0.10 green
  Result: 0.585 red, 0.315 white, 0.100 green
  Assign values to outcomes: red 10.000, white 4.000, green 25.000 (or put these values directly into the tree instead of the colors)
  Expected outcome: 0.585 * 10.000 + 0.315 * 4.000 + 0.100 * 25.000 = 5.850 + 1.260 + 2.500 = 9.610
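  Under the same list encoding, a sketch of probabilistic evaluation and the expected-outcome computation (eval-tree-prob and *dist* are illustrative names): each term now maps to a probability vector over its range, every root-to-leaf path is weighted by the product of the branch probabilities along it, and the expected outcome is the value-weighted sum of the resulting distribution.

    (defun eval-tree-prob (tree assignment &optional (p 1.0))
      "Return an alist (outcome . probability): each root-to-leaf path
    contributes the product of the branch probabilities along it."
      (if (atom tree)
          (list (cons tree p))
          (let ((result '()))
            (loop for branch in (rest tree)
                  for q in (cdr (assoc (first tree) assignment))
                  unless (zerop q)
                    do (loop for (outcome . w)
                               in (eval-tree-prob branch assignment (* p q))
                             do (let ((entry (assoc outcome result)))
                                  (if entry
                                      (incf (cdr entry) w)
                                      (push (cons outcome w) result)))))
            result)))

    (defparameter *dist*
      (eval-tree-prob '(a (b (c red green) (c green white))
                          (b (c white red) (c green blue)))
                      '((a 0.65 0.35) (b 0.90 0.10) (c 1.00 0.00))))
    ;; *dist* now holds red 0.585, white 0.315, green 0.100

    (loop for (outcome . p) in *dist*
          sum (* p (cdr (assoc outcome
                               '((red . 10.0) (white . 4.0) (green . 25.0))))))
    ;; => 9.61, the expected outcome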

  10. Probabilities in terminal elements
  {[: a <0.65 0.35>] [: b <0.90 0.10>] [: c true]} -- the assignment for c is the same as <1.00 0.00>
  The range of a is <true false>, and similarly for b and c. The ordering of the value domain is <red white green blue>.
  [a? [b? [c? red green] [c? green white]] [b? [c? <0.12 0.02 0.09 0.77> red] [c? green blue]]]
  Evaluation: 0.65 * 0.90 red, 0.65 * 0.10 green, 0.35 * 0.90 * <0.12 0.02 0.09 0.77>, 0.35 * 0.10 green
  Result: 0.62280 red, 0.00630 white, 0.12835 green, 0.24255 blue
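  Only the leaf case changes when terminal elements may carry probability vectors over the outcome domain. A sketch under the same assumptions (leaf->dist and *outcome-order* are illustrative names); added to the symbol-leaf contributions (0.585 red, 0.065 + 0.035 green) it reproduces the slide's totals.

    (defparameter *outcome-order* '(red white green blue))

    (defun leaf->dist (leaf p)
      "A symbol leaf gets all of the path weight P; a probability list
    such as (0.12 0.02 0.09 0.77) spreads P over *OUTCOME-ORDER*."
      (if (symbolp leaf)
          (list (cons leaf p))
          (loop for outcome in *outcome-order*
                for q in leaf
                unless (zerop q) collect (cons outcome (* p q)))))

    ;; Replacing the leaf case of eval-tree-prob by (leaf->dist tree p):
    (leaf->dist '(0.12 0.02 0.09 0.77) (* 0.35 0.90))
    ;; => ((RED . 0.0378) (WHITE . 0.0063) (GREEN . 0.02835) (BLUE . 0.24255))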

  11. Hierarchical decision trees
  [Figure: the earlier tree over the terms a, b, c with outcomes red, white, white, red, green, blue, green, blue, where one outcome position holds a grey subtree: a further decision tree over the terms b, d, e with true/false branches. An outcome position in a decision tree may thus itself contain a decision tree.]

  12. Notation for hierarchical dec. trees
  [?[a? [b? [c? red blue] [c? blue white]]
       [b? [c? white red] [c? red blue] ]]
    [d? red-rose poppy pelargonia]
    [d? bluebell forget-me-not violet]
    [d? waterlily lily-of-the-valley white-rose]
    :range <red blue white> ]
  The :range declaration specifies the range order for the subtrees: the first [d? ...] subtree stands for the outcome red, the second for blue, and the third for white.

  13. Expansion of hierarchical dec. trees
  [?[a? [b? [c? red blue] [c? blue white]]
       [b? [c? white red] [c? red blue] ]]
    [d? red-rose poppy pelargonia]
    [d? bluebell forget-me-not violet]
    [d? waterlily lily-of-the-valley white-rose]
    :range <red blue white> ]
  expands, substituting the first subtree for the outcome red, to
  [?[a? [b? [c? [d? red-rose poppy pelargonia] blue] [c? blue white]]
       [b? [c? white [d? red-rose poppy pelargonia]]
           [c? [d? red-rose poppy pelargonia] blue] ]]
    [d? red-rose poppy pelargonia]
    [d? bluebell forget-me-not violet]
    [d? waterlily lily-of-the-valley white-rose]
    :range <red blue white> ]
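  The expansion itself is a plain substitution over the list encoding: replace every occurrence of the subtree-valued outcome by the subtree. A sketch (expand is an illustrative name):

    (defun expand (tree outcome subtree)
      "Replace every occurrence of OUTCOME in TREE by SUBTREE."
      (cond ((eq tree outcome) subtree)
            ((atom tree) tree)
            (t (cons (first tree)
                     (mapcar (lambda (branch) (expand branch outcome subtree))
                             (rest tree))))))

    (expand '(a (b (c red blue) (c blue white))
                (b (c white red) (c red blue)))
            'red
            '(d red-rose poppy pelargonia))
    ;; => (A (B (C (D RED-ROSE POPPY PELARGONIA) BLUE) (C BLUE WHITE))
    ;;       (B (C WHITE (D RED-ROSE POPPY PELARGONIA))
    ;;          (C (D RED-ROSE POPPY PELARGONIA) BLUE)))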

  14. Partial evaluation of decision tree
  {[: a true][: c true]}
  [a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? green blue]]]
  The range of a is <true false>, and similarly for b and c.
  The value of b is not available -- partial evaluation is a way out, yielding the residual tree
  [b? red blue]

  15. Partial evaluation of decision tree with term probabilities and expected outcome
  {[: a <0.65 0.35>][: c true]}
  [a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? green blue]]]
  The range of a is <true false>, and similarly for b and c.
  The value of b is not available -- partial evaluation is a way out. With c known, the tree reduces to [a? [b? red blue] [b? white green]], and a's probabilities weight the two cases.
  Assign values to outcomes: red 10.000, white 4.000, green 25.000, blue 14.000, obtaining the residual tree [b? 7.900 17.850], since 0.65 * 10.000 + 0.35 * 4.000 = 7.900 and 0.65 * 14.000 + 0.35 * 25.000 = 17.850.
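  A sketch of partial evaluation over the same encoding, for the definite-assignment case of the previous slide (partial-eval is an illustrative name; *ranges* as in the first sketch). The probabilistic variant would additionally weight the residual branches by a's probabilities, as computed above.

    (defun partial-eval (tree assignment)
      "Resolve nodes whose term is assigned; keep unassigned nodes,
    partially evaluating each of their branches."
      (if (atom tree)
          tree
          (let ((entry (assoc (first tree) assignment)))
            (if entry
                (let ((pos (position (cdr entry)
                                     (cdr (assoc (first tree) *ranges*)))))
                  (partial-eval (nth pos (rest tree)) assignment))
                (cons (first tree)
                      (mapcar (lambda (b) (partial-eval b assignment))
                              (rest tree)))))))

    (partial-eval '(a (b (c red green) (c blue white))
                      (b (c white red) (c green blue)))
                  '((a . true) (c . true)))
    ;; => (B RED BLUE), the slide's [b? red blue]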

  16. Summary of variations
  • Basic decision tree with definite (no probabilities) assignments of values
  • Probabilistic assignments to terms (features)
  • Continuous-valued outcome, expected outcome
  • Probabilistic assignments to terminal nodes
  • Hierarchical decision trees
  • Incomplete assignments to terms, suggesting partial evaluation
  • Combinations of these are also possible!

  17. Operations on decision trees
  • Plain evaluation
  • Partial evaluation
  • Inverse evaluation
  • Reorganization (for more efficient interpretation)
  • Acquisition: obtaining the discrete structure from reliable sources
  • Learning: using a training set of expected outcomes to adjust probabilities in the tree
  • Combining with other techniques, e.g. logic-based ones

  18. Decision trees in real life
  • In user manuals: error identification in cars, household machines, etc.
  • 'User help' in software systems
  • Telephone exchanges
  • Botanic schemata
  • Commercial decision making: insurance, finance

  19. Decision trees in A.I. and robotics
  • Current situation described as features/values
  • From current situation to suggested action(s) (for immediate execution, or to be checked out)
  • From current situation to an extension of it (i.e., additional features/values)
  • From current situation to predicted future situation (causal reasoning)
  • From current situation to inferred earlier situation (reverse causal reasoning; direct or inverse evaluation)
  • From inferred future or past situation, to action(s)
  • Learning is important for artificial intelligence

  20. Causal Nets
  A causal net consists of:
  • A set of independent terms
  • A partially ordered set of dependent terms
  • An assignment of a dependency expression to each dependent term (these expressions may be decision trees)
  The dependency expression for a term may use independent terms, and also dependent terms that are lower than the term at hand. This means the dependency graph is acyclic.

  21. An example (due to Eugene Charniak)
  • When my wife leaves home, she often (not always) turns on the outside light
  • She may also turn it on when she expects a guest
  • When nobody is home, the dog is often outside
  • If the dog has stomach troubles, it is also often left outside
  • If the dog is outside, I will probably hear it barking when I approach home
  • However, possibly it does not bark, and possibly I hear another dog and think it's mine
  • Problem: given the information I obtain when I approach the house, what is the likelihood of my wife being at home?

  22. Decision trees for dependent terms
  lights-are-on [noone-home? <70 30> <20 80>]
  dog-outside [noone-home? [dog-sick? <80 20> <70 30>] [dog-sick? <70 30> <30 70>] ]
  I-hear-dog [dog-outside? <80 20> <10 90>]
  Independent terms: noone-home, dog-sick
  Dependent terms: lights-are-on, dog-outside < I-hear-dog
  Notation: integers represent percentages, 70 ~ 0.70
  Interpretation: if no one is home, there is a 70% chance that the outside lights are on and a 30% chance that they are not. If someone is home, the chances are 20% and 80%, respectively.

  23. Decision trees, concise notation
  lights-are-on [noone-home? <70 30> <20 80>]
  dog-outside [noone-home? [dog-sick? <80 20> <70 30>] [dog-sick? <70 30> <30 70>] ]
  I-hear-dog [dog-outside? <80 20> <10 90>]
  can be written more concisely as
  lights-are-on [noone-home? 70% 20%]
  dog-outside [noone-home? [dog-sick? 80% 70%] [dog-sick? 70% 30%] ]
  I-hear-dog [dog-outside? 80% 10%]

  24. Causal net using decision trees
  lights-are-on [noone-home? 70% 20%]
  dog-outside [noone-home? [dog-sick? 80% 70%] [dog-sick? 70% 30%] ]
  I-hear-dog [dog-outside? 80% 10%]
  This is simply a hierarchical causal net with probabilities in the terminal nodes! If the value assignments for noone-home and dog-sick are given, we can calculate the probabilities for the dependent variables. However, it is the inverse operation that we want.
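  For example, a forward pass through this net, assuming the inputs noone-home = true and dog-sick = false (the numbers are read off the trees above; the computation itself is a sketch):

    (let* ((p-dog-outside 0.70)  ; [noone-home? [dog-sick? 80% 70%] ...]:
                                 ; no one home, dog not sick => 70%
           (p-i-hear-dog
             (+ (* p-dog-outside 0.80)           ; dog outside, and I hear it
                (* (- 1 p-dog-outside) 0.10))))  ; dog not outside, false alarm
      (list :dog-outside p-dog-outside :i-hear-dog p-i-hear-dog))
    ;; => (:DOG-OUTSIDE 0.7 :I-HEAR-DOG 0.59)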

  25. Inverse operation
  Consider this simple case first:
  lights-are-on [noone-home? <70 30> <20 80>]
  If it is known that lights-are-on is true, what is the probability of noone-home?
  Possible combinations (conditional probabilities):
                        lights-are-on   not lights-are-on
    noone-home true          0.70             0.30
    noone-home false         0.20             0.80
  Suppose noone-home is true in 20% of overall cases; multiplying each row by the prior gives the joint probabilities:
                        lights-are-on   not lights-are-on
    noone-home true          0.14             0.06
    noone-home false         0.16             0.64
  Given lights-are-on, noone-home has 0.14 / (0.14 + 0.16) = 14/30 = 46.7% probability.
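  The same computation as a Common Lisp sketch (invert is an illustrative name): multiply each conditional probability by the corresponding prior to get the joint probabilities for the observed evidence, then normalize.

    (defun invert (p-obs-given-h p-obs-given-not-h prior)
      "Posterior probability of hypothesis H given that the
    evidence was observed (Bayes' rule)."
      (let ((joint-h     (* prior p-obs-given-h))             ; 0.20 * 0.70 = 0.14
            (joint-not-h (* (- 1 prior) p-obs-given-not-h)))  ; 0.80 * 0.20 = 0.16
        (/ joint-h (+ joint-h joint-not-h))))

    (invert 0.70 0.20 0.20)
    ;; => 0.46666667, i.e. 14/30 = 46.7%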

  26. Inverse operation
  • This will be continued at the next lecture
  • Read these slides (from the course webpage) and the associated lecture note before that lecture (especially if you are not so familiar with probability theory)

  27. Lab 2: Defining a Zoo Miniworld
  • Milestone 1: static structure in the zoo world
  • Milestone 2: defining actions in the zoo world, defining rules for precondition resolution, and seeing these rules in operation
  What to do for the lab:
  • Download and install the lab
  • Import a few files from milestone 1 of lab 2
  • Complete the file zoo-actions with action defs
  • Complete the file zoosim-advise with precond rules
  • Test-run these and return logs of successful runs

  28. What to do - 1
  • Download and install the lab - like for milestone 1
  • Import a few files from milestone 1 of lab 2 - see lab instructions (course website + files included in the download)

  29. What to do - 2
  • Complete the file zoo-actions with action definitions, e.g.
  -------------------------------------------
  -- birthday
  [: type verb]
  [: latest-rearchived nil]
  @Verbdef
  [Do .t [birthday .a]] =
    [if (not [H- .t (the: .a has-age) nil])
        [soact [H! .t (the: .a has-age) (+ 1 (cv (the: .a has-age))) ]]
        [coact]]

  30. What to do - 3
  • Complete the file zoosim-advise with precond rules, e.g.
  • (but this is a 'cheating' rule, not to be used in your final solution to the assignment)
  ------------------------------------------------
  -- precond-rules
  [: type entity]
  [: latest-rearchived nil]
  @Rules
  [to-achieve [H- now (the: .a .p) .v]
    [fput :a .a :f .p :v .v]
    :when [equal (get .a type) animal] ]

  31. What to do - 4
  • Test-run these and return logs of successful runs
  • Use the 'episode' concept - each episode is a log of successive interactions, command executions, etc.
  • Work with your own episodes, then isolate the successful steps and do them again in fresh episodes containing as few extra steps as possible
  • Transfer successful episodes to the ef zoo2report, and upload.

  32. This was all for today! Don't forget to read the slides and the lecture note, besides doing the lab!
