
Review


Presentation Transcript


  1. Review ECE457 Applied Artificial Intelligence Spring 2008 Lecture #14

  2. What Is On The Final
  • Everything that has "important!" written next to it on the slides
  • Everything that I said was important
  ECE457 Applied Artificial Intelligence R. Khoury (2008) Page 2

  3. What Might Be On The Final
  • Anything in the slides
    • Except "What Is Not On The Final"
  • Anything in the required readings in the textbook

  4. What Is Not On The Final
  • Examples of real applications
    • Dune II
    • Traveling-wave tube
    • IBM Deep Blue
    • Pathfinder network
    • Weighted Naïve Bayes Classifier
    • Helicopter flight control
    • Fuzzy robot navigation
    • Neural network pixel classifier
    • WordNet
  • Additional material on the website
  • Writing/debugging code

  5. Practice Material
  • Examples and exercises in the slides
  • Problems at the end of each chapter in the textbook

  6. Material Allowed at the Exam
  • Pen or pencil, eraser, calculator
  • Not allowed:
    • Books, notes
    • Phones, blackberries, laptops, PDAs, iPods, iPhones, iAnything, computers built into glasses like in Mission Impossible, or anything else electronic
    • Talking to other students, writing notes, sign language, smoke signals, semaphores
    • Cheating in general

  7. Summary of Course
  • Lecture 1: Introduction to AI
    • Types of agents
    • Properties of the environment
      • Fully observable vs. partially observable
      • Deterministic vs. stochastic vs. strategic
      • Episodic vs. sequential
      • Static vs. dynamic vs. semi-dynamic
      • Discrete vs. continuous
      • Single agent vs. cooperative vs. competitive

  8. Lecture 1: Introduction to AI
  • Define the properties of the environment for these problems:
  • Robot soccer
    • Partially observable, stochastic, sequential, dynamic, continuous, multi-agent
  • Internet shopping (without eBay-style bidding)
    • Partially observable, deterministic, episodic, static, discrete, single-agent
  • Autonomous Mars rover
    • Partially observable, stochastic, sequential, dynamic, continuous, single-agent
  • Theorem-proving assistant to a mathematician
    • Fully observable, deterministic, sequential, semi-dynamic, discrete, cooperative multi-agent

  9. Summary of Course
  • Lecture 2: Uninformed Search
    • Well-defined problems
    • Properties of search algorithms
    • Uninformed search
      • Breadth-first search
      • Uniform-cost search
      • Depth-first search
      • Depth-limited search
      • Iterative deepening search
    • Repeated states

  10. Lecture 2: Uninformed Search
  • You have a search tree with a branching factor of b and a maximum depth of m. The depth of the shallowest goal node is d. You are considering searching the tree using either a depth-first search agent or a breadth-first search agent.
  • Which one will have the best space complexity? Explain.
    • Depth-first search has the best space complexity.
    • BFS = O(b^(d+1)), DFS = O(bm)
    • Depth-first search explores an entire branch and then deletes it from memory, while breadth-first search has to keep all expanded nodes in memory.

  11. Lecture 2: Uninformed Search
  • You have a search tree with a branching factor of b and a maximum depth of m. The depth of the shallowest goal node is d. You are considering searching the tree using either a depth-first search agent or a breadth-first search agent.
  • Which one will have the best time complexity? Explain.
    • Breadth-first search has the best time complexity.
    • BFS = O(b^(d+1)), DFS = O(b^m)
    • Breadth-first search checks level by level and is guaranteed not to search beyond depth d. Depth-first search explores branch by branch down to the maximum depth m.
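An illustrative way to see these time bounds is to count node expansions on a synthetic tree. The sketch below (a hypothetical setup, not part of the exercise: a complete binary tree with b = 2, m = 10, and the shallow goal at depth d = 2 on the rightmost branch) runs both searches; the helper names are mine.

```python
from collections import deque

def bfs_expansions(b, m, goal):
    # Breadth-first: expand level by level; never goes past the goal's depth.
    # Nodes are tuples of child indices; the root is the empty tuple.
    expanded, frontier = 0, deque([()])
    while frontier:
        node = frontier.popleft()
        expanded += 1
        if node == goal:
            return expanded
        if len(node) < m:
            frontier.extend(node + (i,) for i in range(b))

def dfs_expansions(b, m, goal):
    # Depth-first (leftmost branch first): dives to depth m in every
    # branch that precedes the goal.
    expanded, frontier = 0, [()]
    while frontier:
        node = frontier.pop()
        expanded += 1
        if node == goal:
            return expanded
        if len(node) < m:
            frontier.extend(node + (i,) for i in reversed(range(b)))

goal = (1, 1)   # rightmost node at depth d = 2
print(bfs_expansions(2, 10, goal), dfs_expansions(2, 10, goal))  # → 7 1537
```

BFS stops after finishing the shallow levels (7 expansions here), while DFS first exhausts the entire deep left subtree (1537 expansions), matching the O(b^(d+1)) vs. O(b^m) behaviour above.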

  12. Lecture 2: Uninformed Search
  • A 3-foot-tall monkey is in a room where some bananas are suspended from the 8-foot-high ceiling. He would like to get the bananas as quickly as possible. The room contains two stackable, movable, climbable 3-foot-high crates.
  • Write this situation as a well-defined problem.
    • Initial state
    • Actions
    • Goal test
    • Cost

  13. Lecture 2: Uninformed Search
  • Initial state
    • Monkey has no bananas, monkey is on the floor, crates are on the floor, bananas are hanging from the ceiling
  • Actions
    • Stack crates, un-stack crates, move crate, climb on crate, climb off crate, walk, grab bananas (if high enough)
  • Goal test
    • Does the monkey have the bananas?
  • Cost
    • Each action costs 1
    • Path cost is the sum of the actions
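The formulation above can be sketched as an executable search problem. This is a minimal, hypothetical encoding (it abstracts position away, so "move crate" and "walk" are omitted, and all names are mine), not the full state space from the slides:

```python
from collections import deque

# State: (crates_stacked, monkey_height_in_crates, has_bananas)
START = (False, 0, False)

def actions(state):
    stacked, height, has = state
    moves = []
    if has:
        return moves
    if not stacked and height == 0:
        moves.append(("stack crates", (True, 0, False)))
    if height == 0:
        # Climb onto a single crate, or onto the top of the two-crate stack.
        moves.append(("climb on crate", (stacked, 2 if stacked else 1, False)))
    else:
        moves.append(("climb off crate", (stacked, 0, False)))
    if height == 2:   # 3 ft monkey on 6 ft of crates: high enough to reach
        moves.append(("grab bananas", (stacked, height, True)))
    return moves

def goal_test(state):
    return state[2]   # does the monkey have the bananas?

def solve():
    # Every action costs 1, so breadth-first search finds a cheapest plan.
    frontier = deque([(START, [])])
    seen = {START}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for name, nxt in actions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))

print(solve())  # → ['stack crates', 'climb on crate', 'grab bananas']
```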

  14. Summary of Course
  • Lecture 3: Informed Search
    • Informed search
      • Greedy best-first search
      • A* search
      • Heuristic functions
    • Iterative improvement
      • Hill Climbing
      • Simulated Annealing

  15. Lecture 3: Informed Search
  • Given the following tree, find the optimal path to the goal G using A* search. The value of the heuristic h is specified for each node. The costs of the edges are specified on the tree. Assume that children of a node are placed into the list in left-to-right order, and that nodes of equal priority are extracted (for expansion) from the list in FIFO order.
    • Write a number inside each node indicating the order in which the nodes are expanded from the start node S, i.e. 1, 2, ….
    • For each node generated, write the total cost f in the appropriate location on the graph.

  16. Lecture 3: Informed Search
  • Find the optimal path to the goal G using A* search, specifying the order in which nodes are expanded and the f-value of all generated nodes.
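Since the exercise's tree is not reproduced in this transcript, here is a generic A* sketch following the same conventions (expand the node with the lowest f = g + h, FIFO tie-breaking), run on a small made-up graph; the graph, heuristic values, and function names are assumptions for illustration:

```python
import heapq

def a_star(graph, h, start, goal):
    # Priority queue of (f, tie-breaker, node, path so far, g).
    counter = 0                          # FIFO tie-breaker for equal f
    frontier = [(h[start], counter, start, [start], 0)]
    closed = set()
    while frontier:
        f, _, node, path, g = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for nxt, cost in graph.get(node, []):
            counter += 1
            heapq.heappush(frontier,
                           (g + cost + h[nxt], counter, nxt,
                            path + [nxt], g + cost))

# Hypothetical example: edge costs and an admissible heuristic.
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 5, "B": 1, "G": 0}
print(a_star(graph, h, "S", "G"))  # → (['S', 'B', 'G'], 5)
```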

  17. Summary of Course
  • Lecture 4: Constraint Satisfaction Problems
    • Constraints
    • Defining a CSP
    • CSP search
      • Backtracking search
      • Conflict-directed backjumping
      • Heuristics
      • Forward checking
      • AC-3 algorithm

  18. Lecture 4: CSP
  [Map figure with regions A, B, C, D, E, F]
  • Using the most-constrained-variable CSP heuristic, colour the map shown using the colours Blue, Red and Green. Show your reasoning at each step of the algorithm. Proceed in the following manner:
    • After assigning a colour to a region, and before choosing the next region to colour, apply the forward-checking algorithm and show its results. Then choose the next region to colour using the most-constrained-variable heuristic, and so on.
    • At each step, show the domains of each region and justify the choice of the next region to colour.

  19. Lecture 4: CSP
  [Map figure with regions A, B, C, D, E, F]
  • Variables marked * have been assigned
  • A* = {Green}, B* = {Red}, C = {Red, Blue}, D = {Red, Blue, Green}, E = {Blue}, F = {Blue, Green}
  • A* = {Green}, B* = {Red}, C = {Red}, D = {Red, Blue, Green}, E* = {Blue}, F = {Green}
  • A* = {Green}, B* = {Red}, C* = {Red}, D = {Red, Blue}, E* = {Blue}, F* = {Green}
  • A* = {Green}, B* = {Red}, C* = {Red}, D* = {Red} or D* = {Blue}, E* = {Blue}, F* = {Green}
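The same procedure, most-constrained-variable selection plus forward checking, can be sketched in code. The exam map is not reproduced in this transcript, so the adjacency below is a hypothetical stand-in, and the names (ADJ, backtrack) are mine:

```python
COLOURS = {"Red", "Green", "Blue"}
ADJ = {"A": {"B", "C", "E"}, "B": {"A", "E", "F"}, "C": {"A", "E"},
       "D": {"F"}, "E": {"A", "B", "C", "F"}, "F": {"B", "D", "E"}}

def backtrack(assignment, domains):
    if len(assignment) == len(ADJ):
        return assignment
    # Most-constrained variable: pick the unassigned region with the
    # smallest remaining domain.
    var = min((v for v in ADJ if v not in assignment),
              key=lambda v: len(domains[v]))
    for colour in sorted(domains[var]):
        # Forward checking: remove this colour from every unassigned
        # neighbour's domain; give up on it if any domain is wiped out.
        pruned = {n: domains[n] - {colour}
                  for n in ADJ[var] if n not in assignment}
        if all(pruned.values()):
            result = backtrack({**assignment, var: colour},
                               {**domains, **pruned, var: {colour}})
            if result:
                return result
    return None

solution = backtrack({}, {v: set(COLOURS) for v in ADJ})
print(solution)
```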

  20. Summary of Course
  • Lecture 5: Game Playing
    • Payoff functions
    • Minimax algorithm
    • Alpha-beta pruning
    • Non-quiescent positions & horizon effect
    • Expectiminimax

  21. Lecture 5: Game Playing
  • Consider the following game tree. The payoff value of each leaf is written under that node.
    • Apply the Minimax algorithm to obtain the value of each non-leaf node.
    • Apply alpha-beta pruning to the game tree. Find which nodes will be pruned. For each one, identify and explain the values of alpha and beta to show why it is pruned.

  22. Lecture 5: Game Playing
  [Game-tree figure: MAX root A with MIN children B and I; B has MAX children C and F, I has MAX children J and M; the leaf payoffs are -8, 5, 4, 8, 9, -2, 2, -1, and the figure annotates the [alpha, beta] window at each of the 14 visit steps.]
  • H is pruned. At node F, beta is 8 (from B), and after checking G, alpha is 9, so alpha ≥ beta.
  • M is pruned. At node I, alpha is 8 (from A), and after checking J, beta is 2, so alpha ≥ beta.
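A compact alpha-beta sketch that also records which subtrees the alpha ≥ beta cutoff discards. The tree in the example is a hypothetical one, not the slide's, and the encoding (nested lists with payoff leaves) is mine:

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf, pruned=None):
    # Returns the minimax value of the tree and the list of subtrees
    # that were never examined because of a cutoff.
    if pruned is None:
        pruned = []
    if not isinstance(node, list):      # leaf: payoff value
        return node, pruned
    value = -math.inf if maximizing else math.inf
    for i, child in enumerate(node):
        v, _ = alphabeta(child, not maximizing, alpha, beta, pruned)
        if maximizing:
            value = max(value, v)
            alpha = max(alpha, value)
        else:
            value = min(value, v)
            beta = min(beta, value)
        if alpha >= beta:               # cutoff: skip remaining siblings
            pruned.extend(node[i + 1:])
            break
    return value, pruned

# MAX root: in the second MIN node, after seeing the leaf 2, beta (2)
# drops below alpha (3, from the first child), so the leaf 9 is pruned.
print(alphabeta([[3, 5], [2, 9]], True))  # → (3, [9])
```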

  23. Summary of Course
  • Lecture 6: Logical Agents
    • Language, syntax, semantics
    • Propositional logic
      • Propositional symbols and logical connectives
    • Inference with truth tables
    • Inference with Resolution
      • Conversion to CNF
    • Inference with Modus Ponens
      • Horn clauses
      • Forward chaining
      • Backward chaining

  24. Summary of Course
  • Lecture 7: First-Order Logic
    • Constants, predicates, functions
    • Universal and existential quantifiers
    • Converting English sentences
    • Inference with propositionalization
    • Inference with Generalized Modus Ponens
      • Unification algorithm
    • Inference with Resolution
      • Conversion to CNF

  25. Lecture 7: First-Order Logic
  • Represent the following sentences in FOL using: Take(s,c,t), Pass(s,c,t), Score(s,c,t), Student(s), French, Greek, Spring2001
  • Some students took French in spring 2001
    • ∃x Student(x) ∧ Take(x,French,Spring2001)
  • Every student who takes French passes it
    • ∀x,y Student(x) ∧ Take(x,French,y) → Pass(x,French,y)
  • Only one student took Greek in Spring 2001
    • ∃x Student(x) ∧ Take(x,Greek,Spring2001) ∧ ∀y [Student(y) ∧ y≠x → ¬Take(y,Greek,Spring2001)]
  • The best score in Greek is always higher than the best score in French
    • ∀t ∃x ∀y Score(x,Greek,t) > Score(y,French,t)

  26. Lecture 7: First-Order Logic
  • Convert this FOL sentence to Conjunctive Normal Form. Show all steps of the conversion.
  • ∀x [∀y F(y) ∨ G(x,y)] → [∃y G(y,x)]
  • Eliminate the implication: ∀x ¬[∀y F(y) ∨ G(x,y)] ∨ [∃y G(y,x)]
  • Move ¬ inwards: ∀x [∃y ¬F(y) ∧ ¬G(x,y)] ∨ [∃y G(y,x)]
  • Standardize variables apart: ∀x [∃y ¬F(y) ∧ ¬G(x,y)] ∨ [∃z G(z,x)]
  • Skolemize (both ∃ lie inside the scope of ∀x, so their variables become Skolem functions of x, not constants): ∀x [¬F(S1(x)) ∧ ¬G(x,S1(x))] ∨ G(S2(x),x)
  • Drop the universal quantifier: [¬F(S1(x)) ∧ ¬G(x,S1(x))] ∨ G(S2(x),x)
  • Distribute ∨ over ∧: [¬F(S1(x)) ∨ G(S2(x),x)] ∧ [¬G(x,S1(x)) ∨ G(S2(x),x)]

  27. Lecture 7: First-Order Logic
  • Find the most general unifier, if it exists.
  • p = F(A,B,B), q = F(x,y,z)
    • θ = {x/A, y/B, z/B}
  • p = F(y,G(A,B)), q = F(G(x,x),y)
    • Fail: x cannot be both A and B
  • p = F(G(y),y), q = F(G(x),A)
    • θ = {x/A, y/A}
  • p = F(G(y),y), q = F(x,x)
    • Fail: occurs-check violation (y would have to unify with G(y))
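The unification algorithm behind these answers can be sketched as follows. Terms are encoded as tuples ('F', arg1, …), with lowercase strings as variables and everything else as constants; this encoding and the helper names are illustrative, not the course's notation:

```python
def is_var(t):
    return isinstance(t, str) and t[0].islower()

def walk(t, theta):
    # Follow variable bindings until a non-variable or an unbound variable.
    while is_var(t) and t in theta:
        t = theta[t]
    return t

def occurs(v, t, theta):
    # Occurs check: does variable v appear inside term t?
    t = walk(t, theta)
    if v == t:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, theta) for a in t)

def unify(p, q, theta=None):
    # Returns the most general unifier as a dict, or None on failure.
    theta = {} if theta is None else theta
    p, q = walk(p, theta), walk(q, theta)
    if p == q:
        return theta
    if is_var(p):
        return None if occurs(p, q, theta) else {**theta, p: q}
    if is_var(q):
        return None if occurs(q, p, theta) else {**theta, q: p}
    if isinstance(p, tuple) and isinstance(q, tuple) and len(p) == len(q):
        for a, b in zip(p, q):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None

# First pair from the exercise: F(A,B,B) and F(x,y,z)
print(unify(("F", "A", "B", "B"), ("F", "x", "y", "z")))
# → {'x': 'A', 'y': 'B', 'z': 'B'}
# Last pair: F(G(y),y) and F(x,x) fails the occurs check
print(unify(("F", ("G", "y"), "y"), ("F", "x", "x")))  # → None
```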

  28. Lecture 7: First-Order Logic
  • Given the following KB:
    • Faster(x,y) ∧ Faster(y,z) → Faster(x,z)
    • Pig(x) ∧ Slug(y) → Faster(x,y)
    • Buffalo(x) ∧ Pig(y) → Faster(x,y)
    • Slug(Slimm)
    • Pig(Pat)
    • Buffalo(Bill)
  • Is Bill faster than Slimm? Use forward chaining.
    • Slug(Slimm), Pig(Pat), Pig(x) ∧ Slug(y) → Faster(x,y), θ = {x/Pat, y/Slimm}
      • Faster(Pat,Slimm)
    • Pig(Pat), Buffalo(Bill), Buffalo(x) ∧ Pig(y) → Faster(x,y), θ = {x/Bill, y/Pat}
      • Faster(Bill,Pat)
    • Faster(Pat,Slimm), Faster(Bill,Pat), Faster(x,y) ∧ Faster(y,z) → Faster(x,z), θ = {x/Bill, y/Pat, z/Slimm}
      • Faster(Bill,Slimm)

  29. Lecture 7: First-Order Logic
  • Given the following KB:
    • Slimy(x) ∧ Creepy(x) → Slug(x)
    • Pig(x) ∧ Slug(y) → Faster(x,y)
    • Slimy(Slimm)
    • Creepy(Slimm)
    • Pig(Pat)
  • Is Pat faster than Slimm? Use backward chaining.
    • Faster(Pat,Slimm)
    • Pig(x) ∧ Slug(y) → Faster(x,y), θ = {x/Pat, y/Slimm}
    • Pig(Pat) in KB; Slug(Slimm)?
    • Slimy(x) ∧ Creepy(x) → Slug(x), θ = {x/Slimm}
    • Slimy(Slimm) in KB, Creepy(Slimm) in KB

  30. Lecture 7: First-Order Logic
  • Given the following KB:
    • Person(Marcus)
    • Pompeian(Marcus)
    • Ruler(Caesar)
    • Assassinate(Marcus, Caesar)
    • ¬Pompeian(x1) ∨ Roman(x1)
    • ¬Roman(x2) ∨ Loyal(x2,Caesar) ∨ Hate(x2,Caesar)
    • ¬Person(x3) ∨ ¬Ruler(x4) ∨ ¬Assassinate(x3,x4) ∨ ¬Loyal(x3,x4)
  • Does Marcus hate Caesar? Use resolution, starting from the negated goal.
    • ¬Hate(Marcus,Caesar), ¬Roman(x2) ∨ Loyal(x2,Caesar) ∨ Hate(x2,Caesar), θ = {x2/Marcus}
      • ¬Roman(Marcus) ∨ Loyal(Marcus,Caesar)
    • ¬Roman(Marcus) ∨ Loyal(Marcus,Caesar), ¬Pompeian(x1) ∨ Roman(x1), θ = {x1/Marcus}
      • Loyal(Marcus,Caesar) ∨ ¬Pompeian(Marcus)
    • Loyal(Marcus,Caesar) ∨ ¬Pompeian(Marcus), Pompeian(Marcus), no substitution
      • Loyal(Marcus,Caesar)
    • Loyal(Marcus,Caesar), ¬Person(x3) ∨ ¬Ruler(x4) ∨ ¬Assassinate(x3,x4) ∨ ¬Loyal(x3,x4), θ = {x3/Marcus, x4/Caesar}
      • ¬Person(Marcus) ∨ ¬Ruler(Caesar) ∨ ¬Assassinate(Marcus,Caesar)
    • ¬Person(Marcus) ∨ ¬Ruler(Caesar) ∨ ¬Assassinate(Marcus,Caesar), Person(Marcus), no substitution
      • ¬Ruler(Caesar) ∨ ¬Assassinate(Marcus,Caesar)
    • ¬Ruler(Caesar) ∨ ¬Assassinate(Marcus,Caesar), Assassinate(Marcus,Caesar), no substitution
      • ¬Ruler(Caesar)
    • ¬Ruler(Caesar), Ruler(Caesar), no substitution
      • Contradiction!

  31. Summary of Course
  • Lecture 8: Uncertainty
    • Marginalization
    • Bayes' Theorem
    • Chain rule
    • Independence and conditional independence
    • Naïve Bayes Classifier

  32. Lecture 8: Uncertainty
  • You tested positive for a disease. The test's results are accurate 99% of the time. However, the disease only strikes 1 out of 10,000 people. What's the probability that you have the disease?
    • P(+|disease) = 0.99, P(−|¬disease) = 0.99, P(disease) = 0.0001
    • P(disease|+) = P(+|disease)P(disease)/P(+)
    • P(+) = P(+|disease)P(disease) + P(+|¬disease)P(¬disease) = 0.99 × 0.0001 + 0.01 × 0.9999 = 0.010098
    • P(disease|+) = 0.99 × 0.0001 / 0.010098 = 0.009804
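The same numbers, worked through Bayes' theorem directly (the variable names are mine; the probabilities are the ones given above):

```python
p_pos_given_disease = 0.99      # sensitivity, P(+|disease)
p_neg_given_healthy = 0.99      # specificity, P(-|¬disease)
p_disease = 0.0001              # prior: 1 in 10,000

# Total probability of a positive test:
p_pos = (p_pos_given_disease * p_disease
         + (1 - p_neg_given_healthy) * (1 - p_disease))
# Bayes' theorem:
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_pos, 6), round(p_disease_given_pos, 6))  # → 0.010098 0.009804
```

Despite the 99% accurate test, the posterior is under 1%, because the prior is so small.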

  33. Lecture 8: Uncertainty
  • P(S|C,T,O) ∝ P(S)·P(C|S)·P(T|S)·P(O|S)
  • P(S) = 0.5, P(C=red|S) = 0.6, P(C=yellow|S) = 0.4, P(T=sport|S) = 0.8, P(T=SUV|S) = 0.2, P(O=domestic|S) = 0.4, P(O=imported|S) = 0.6
  • P(S|red,SUV,domestic) ∝ P(S)·P(red|S)·P(SUV|S)·P(domestic|S) = 0.5 × 0.6 × 0.2 × 0.4 = 0.024
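The Naïve Bayes score above, computed as the prior times the product of per-feature likelihoods. The table layout and the nb_score helper are illustrative names; note the result is an unnormalized posterior, so classifying would mean comparing it against the corresponding score for ¬S:

```python
prior_s = 0.5
likelihood = {("C", "red"): 0.6, ("C", "yellow"): 0.4,
              ("T", "sport"): 0.8, ("T", "SUV"): 0.2,
              ("O", "domestic"): 0.4, ("O", "imported"): 0.6}

def nb_score(features):
    # Naïve Bayes: prior times the product of per-feature likelihoods,
    # assuming the features are conditionally independent given S.
    score = prior_s
    for feature_value in features.items():
        score *= likelihood[feature_value]
    return score

print(round(nb_score({"C": "red", "T": "SUV", "O": "domestic"}), 3))  # → 0.024
```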

  34. Summary of Course
  • Lecture 9: Probabilistic Reasoning
    • Bayesian Networks
    • Connections and D-Separation
    • Inference

  35. Lecture 9: Probabilistic Reasoning
  • Consider this Bayesian network.
  • Write the factored expression for the joint probability distribution P(A, B, C, D, E, F) which is represented by this network.
    • P(A, B, C, D, E, F) = P(E|C)P(F|D)P(D|B)P(C|A,B)P(A)P(B)
  • Which variables are d-separated from C if:
    • B is known? D and F
    • A is known? None
    • D and E are both known? F

  36. Lecture 9: Probabilistic Reasoning
  • Given the following values, what is the posterior probability of F given that B is true?
    • P(D|B) = 0.8, P(D|¬B) = 0.4, P(F|D) = 0.75, P(F|¬D) = 0.6
    • P(F|B) = Σd P(F,d|B)
    • P(F|B) = Σd P(F|d)P(d|B)
    • P(F|B) = P(F|D)P(D|B) + P(F|¬D)P(¬D|B)
    • P(F|B) = 0.75 × 0.8 + 0.6 × 0.2 = 0.72
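The marginalization above, verified numerically (variable names are mine):

```python
p_d_given_b = 0.8        # P(D|B); P(¬D|B) = 1 - 0.8 = 0.2
p_f_given_d = 0.75       # P(F|D)
p_f_given_not_d = 0.6    # P(F|¬D)

# Sum out D: P(F|B) = P(F|D)P(D|B) + P(F|¬D)P(¬D|B)
p_f_given_b = (p_f_given_d * p_d_given_b
               + p_f_given_not_d * (1 - p_d_given_b))
print(round(p_f_given_b, 2))  # → 0.72
```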

  37. Summary of Course
  • Lecture 10: Decision Making
    • Maximum Expected Utility
      • Utility
      • Expected utility
    • Decision networks
    • Optimal policy
      • Computing the optimal policy
    • Value of information

  38. Lecture 10: Decision Making
  [Decision-network figure: decision nodes A and C, chance nodes B and D, utility node U]

  39. Lecture 10: Decision Making
  • First decision: C
    • Decision A has already been made, and random event B is already known
    • Random event D is not known
  • Case #1: random event B happened (i.e. B is true)
    • EU(c|B) = Σd P(d|c)U(B,c,d)
    • EU(C|B) = P(D|C)U(B,C,D) + P(¬D|C)U(B,C,¬D) = 0.8 × 0.1 + 0.2 × 0.7 = 0.22
    • EU(¬C|B) = P(D|¬C)U(B,¬C,D) + P(¬D|¬C)U(B,¬C,¬D) = 0.5 × 0.2 + 0.5 × 0.8 = 0.5
    • Rational decision, if B is true, is to not do action C

  40. Lecture 10: Decision Making
  • Case #2: random event B is false
    • EU(c|¬B) = Σd P(d|c)U(¬B,c,d)
    • EU(C|¬B) = P(D|C)U(¬B,C,D) + P(¬D|C)U(¬B,C,¬D) = 0.8 × 0.4 + 0.2 × 1 = 0.52
    • EU(¬C|¬B) = P(D|¬C)U(¬B,¬C,D) + P(¬D|¬C)U(¬B,¬C,¬D) = 0.5 × 0.2 + 0.5 × 0.6 = 0.4
    • Rational decision, if B is false, is to do action C

  41. Lecture 10: Decision Making
  • Second decision: A
    • The decision of A will affect the probability of random event B being true or false
    • The decision to do C given whether or not B is true has already been made
  • Case #1: We do action A
    • EU(A) = Σb,d P(b|A)P(d|c*)U(b,c*,d), where c* is the choice of C already fixed for each value of b
    • EU(A) = P(B|A)P(D|¬C)U(B,¬C,D) + P(B|A)P(¬D|¬C)U(B,¬C,¬D) + P(¬B|A)P(D|C)U(¬B,C,D) + P(¬B|A)P(¬D|C)U(¬B,C,¬D)
    • EU(A) = 0.4 × 0.5 × 0.2 + 0.4 × 0.5 × 0.8 + 0.6 × 0.8 × 0.4 + 0.6 × 0.2 × 1 = 0.512

  42. Lecture 10: Decision Making
  • Case #2: We don't do action A
    • EU(¬A) = Σb,d P(b|¬A)P(d|c*)U(b,c*,d)
    • EU(¬A) = P(B|¬A)P(D|¬C)U(B,¬C,D) + P(B|¬A)P(¬D|¬C)U(B,¬C,¬D) + P(¬B|¬A)P(D|C)U(¬B,C,D) + P(¬B|¬A)P(¬D|C)U(¬B,C,¬D)
    • EU(¬A) = 0.7 × 0.5 × 0.2 + 0.7 × 0.5 × 0.8 + 0.3 × 0.8 × 0.4 + 0.3 × 0.2 × 1 = 0.506
  • Rational decision is to do action A
  • Optimal policy:
    • π*A = T
    • π*C(B) = F
    • π*C(¬B) = T
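The whole two-stage policy computation can be checked with a short script using the probabilities and utilities from these slides (the dictionary encoding and all variable names are mine):

```python
P_B_given_A = {True: 0.4, False: 0.7}   # P(B=T|A), P(B=T|¬A)
P_D_given_C = {True: 0.8, False: 0.5}   # P(D=T|C), P(D=T|¬C)
# Utility table U[(b, c, d)], read off the EU expansions above.
U = {(True, True, True): 0.1,  (True, True, False): 0.7,
     (True, False, True): 0.2, (True, False, False): 0.8,
     (False, True, True): 0.4, (False, True, False): 1.0,
     (False, False, True): 0.2, (False, False, False): 0.6}

def eu_c(b, c):
    # Expected utility of choosing C=c once B=b is known.
    p = P_D_given_C[c]
    return p * U[(b, c, True)] + (1 - p) * U[(b, c, False)]

# Last decision first: the best C for each observed value of B.
best_c = {b: max((True, False), key=lambda c: eu_c(b, c))
          for b in (True, False)}

def eu_a(a):
    # Expected utility of A, assuming the optimal later choice of C.
    p = P_B_given_A[a]
    return (p * eu_c(True, best_c[True])
            + (1 - p) * eu_c(False, best_c[False]))

print(best_c, round(eu_a(True), 3), round(eu_a(False), 3))
```

This reproduces the policy above: C is skipped when B is true, taken when B is false, and A is worth doing (0.512 vs. 0.506).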

  43. Summary of Course
  • Lecture 11: Introduction to Learning
    • For all learning algorithms
      • Training data
      • Objective of learning
      • Evaluation
      • General algorithm
    • Precision and recall
    • Overfitting and n-fold cross-validation
    • K-Means
    • Q-Learning
      • Exploration function

  44. Summary of Course
  • Lecture 12: Introduction to Soft Computing
    • Fuzzy logic
      • Fuzzy sets, fuzzy membership functions, membership degree
      • Fuzzy rules
    • Artificial neural networks
      • Artificial neuron
      • Perceptron network
    • Genetic algorithms
      • Individuals
      • Operators: crossover, mutation, selection
      • Search algorithm

  45. Summary of Course
  • Lecture 13: Introduction to Ontologies
    • Objects, Categories, Relations, Attributes
    • Inheritance
    • Problems
