
CS 416 Artificial Intelligence

Presentation Transcript


  1. CS 416 Artificial Intelligence Lecture 5: Finish Uninformed Searches; Begin Informed Searches

  2. Uniform-cost search (review) • Always expand the lowest-path-cost node • Goal-test a node when it is expanded (popped off the fringe), not when it is generated as the result of an expansion

  3. Uniform-cost search (review) • Fringe = [S0] • Expand(S) → {A1, B5, C15}

  4. Uniform-cost search (review) • Fringe = [A1, B5, C15] • Expand(A) → {G11}

  5. Uniform-cost search (review) • Fringe = [B5, G11, C15] • Expand(B) → {G10}

  6. Uniform-cost search (review) • Fringe = [G10, C15] • Expand(G) → Goal
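
To make the review concrete, here is a minimal Python sketch of uniform-cost search run on the example above. The edge costs are assumptions reverse-engineered from the g-values on the slides; C's outgoing edge is never shown, so its cost is a placeholder.

```python
import heapq

# Edge costs reproducing the slides' g-values:
# g(A)=1, g(B)=5, g(C)=15, g(G)=11 via A, g(G)=10 via B.
GRAPH = {
    'S': [('A', 1), ('B', 5), ('C', 15)],
    'A': [('G', 10)],
    'B': [('G', 5)],
    'C': [('G', 1)],   # placeholder cost, not shown on the slides
    'G': [],
}

def uniform_cost_search(start, goal):
    """Always expand the lowest-path-cost node; goal-test on expansion."""
    fringe = [(0, start, [start])]           # (path cost g, state, path)
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if state == goal:                    # tested when expanded, not when generated
            return path, g
        for succ, cost in GRAPH[state]:
            heapq.heappush(fringe, (g + cost, succ, path + [succ]))
    return None

print(uniform_cost_search('S', 'G'))         # (['S', 'B', 'G'], 10)
```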

  7. Bidirectional search • Search from goal to start • Search from start to goal • Two b^(d/2) searches instead of one b^d

  8. Bidirectional search • Implementation • Each search checks nodes before expansion to see if they are on the fringe of the other search

  9. Bidirectional search • Example: bidirectional BFS with d = 6 and b = 10 • Worst case: both search trees must expand all but one element of the third level of the tree • 2 × (1 + 10 + 100 + 1,000 + 10,000 - 10) node expansions • versus (1 + 10 + 100 + … + 10,000,000) expansions for a single BFS

  10. Bidirectional search • Implementation • Checking fringe of other tree • At least one search tree must be kept in memory • Checking can be done in constant time (hash table) • Searching back from goal • Must be able to compute predecessors to node n: Pred (n) • Easy with 15-puzzle, but how about chess?
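
A sketch of the implementation idea: two BFS frontiers, with each newly generated node checked against the other search's hash table in constant time. The `neighbors` and `predecessors` callables are assumed interfaces; `predecessors` is the Pred(n) requirement from the slide.

```python
from collections import deque

def bidirectional_bfs(start, goal, neighbors, predecessors):
    """Alternate one BFS layer from each end; stop when the frontiers meet."""
    if start == goal:
        return [start]
    f_parents, b_parents = {start: None}, {goal: None}
    f_frontier, b_frontier = deque([start]), deque([goal])

    def expand_layer(frontier, parents, other_parents, step):
        for _ in range(len(frontier)):        # one full BFS layer
            node = frontier.popleft()
            for nxt in step(node):
                if nxt not in parents:
                    parents[nxt] = node
                    if nxt in other_parents:  # O(1) hash-table fringe check
                        return nxt
                    frontier.append(nxt)
        return None

    def stitch(meet):                         # meet is in both parent maps
        path, n = [], meet
        while n is not None:
            path.append(n); n = f_parents[n]
        path.reverse()
        n = b_parents[meet]
        while n is not None:
            path.append(n); n = b_parents[n]
        return path

    while f_frontier and b_frontier:
        meet = expand_layer(f_frontier, f_parents, b_parents, neighbors)
        if meet is None:
            meet = expand_layer(b_frontier, b_parents, f_parents, predecessors)
        if meet is not None:
            return stitch(meet)
    return None
```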

  11. Avoid repeated states • “Search algorithms that forget their history are doomed to repeat it.” (Russell and Norvig) • So remember where you’ve been… on a list • If you come upon a node you’ve visited before, don’t expand it • Let’s call this GRAPH-SEARCH

  12. GRAPH-SEARCH • Faster, with smaller space requirements, when there are many repeated states • Time and space requirements are a function of the state space, not of the depth/branching of the tree to the goal • At this point in the class, repeated states are thrown away – even if the new path to the state is better than the one explored previously (see the sketch below)
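
A minimal sketch of the idea, not the textbook pseudocode: tree search plus a closed set, with the pop discipline shown as FIFO (so it behaves like breadth-first).

```python
from collections import deque

def graph_search(start, goal_test, successors):
    """Tree search plus a closed set: as on the slide, a repeated state is
    thrown away even if the new path to it is cheaper."""
    closed = set()
    fringe = deque([(start, [start])])        # FIFO here => breadth-first
    while fringe:
        state, path = fringe.popleft()
        if goal_test(state):
            return path
        if state in closed:                   # seen before: do not expand
            continue
        closed.add(state)
        for succ in successors(state):
            fringe.append((succ, path + [succ]))
    return None
```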

  13. Interesting problems • Exercise 3.9: • 3 cannibals and 3 missionaries and a boat that can hold one or two people are on one side of the river. Get everyone across the river (early AI problem, 1968) • 8-puzzle and 15-puzzle, invented by Sam Loyd in good ol’ USA in 1870s. Think about search space. • Rubik’s cube • Traveling Salesman Problem (TSP)

  14. Chapter 4 – Informed Search • INFORMED? • Uses problem-specific knowledge beyond the definition of the problem itself • selecting the best lane in traffic • playing a sport • what’s the heuristic (or evaluation function)?

  15. Best-first search • BFS/DFS/UCS differ in how they select the node to pull off the fringe • We want to pull off the fringe the node that’s on the optimal path • But if we always knew the “best” node to explore, we wouldn’t have to search at all! In practice we’re never certain which node is best

  16. Best-first search • Use an evaluation function to select node to expand • f(n) = evaluation function = expected “cost” for a path from root to goal that goes through node n • Select the node n on fringe that minimizes f(n) • How do we build f(n)?
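
One way to see how f(n) drives everything: a generic best-first skeleton (an illustrative sketch, not the textbook pseudocode) that takes the evaluation function as a parameter. The greedy and A* snippets a few slides below plug different f's into this same function.

```python
import heapq
import itertools

def best_first_search(start, goal_test, successors, f):
    """Pop the fringe node minimizing the evaluation function f.
    f(state, g) sees the path cost so far, so the same skeleton yields
    greedy best-first (f = h) and A* (f = g + h)."""
    counter = itertools.count()               # tie-breaker for the heap
    fringe = [(f(start, 0), next(counter), 0, start, [start])]
    while fringe:
        _, _, g, state, path = heapq.heappop(fringe)
        if goal_test(state):
            return path, g
        for succ, step_cost in successors(state):
            g2 = g + step_cost
            heapq.heappush(fringe,
                           (f(succ, g2), next(counter), g2, succ, path + [succ]))
    return None
```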

  17. Evaluation function: f(n) • Combine two costs • f(n) = g(n) + h(n) • g(n) = cost to get to n from the start (we know this!) • h(n) = estimated cost to get from n to the goal

  18. Heuristics • A function, h(n), that estimates cost of cheapest path from node n to the goal • h(n) = 0 if n == goal node

  19. Greedy best-first search • Trust your heuristic and ignore path costs • expand the node that minimizes h(n) • f(n) = h(n) • Example: getting from A to B • Explore nodes with the shortest straight-line distance to B • Shortcomings of the heuristic? • Greedy can be bad
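
Plugging into the generic skeleton above, greedy best-first is just f(n) = h(n). Here `sld_to_B` and `expand` are hypothetical stand-ins for a straight-line-distance table and a successor function.

```python
# Greedy best-first: f(n) = h(n), ignoring the path cost g entirely.
result = best_first_search(
    start='A',
    goal_test=lambda s: s == 'B',
    successors=expand,                    # hypothetical successor function
    f=lambda state, g: sld_to_B[state],   # hypothetical straight-line distances
)
```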

  20. A* (A-star) Search • Don’t simply minimize the cost to goal… minimize the cost from start to goal… • f(n) = g(n) + h(n) • g(n) = cost to get to n from start • h(n) = cost to get from n to goal • Select node from fringe that minimizes f(n)
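
And A* is the same skeleton with f(n) = g(n) + h(n), using the same hypothetical helpers as the greedy snippet.

```python
# A*: f(n) = g(n) + h(n).
result = best_first_search(
    start='A',
    goal_test=lambda s: s == 'B',
    successors=expand,
    f=lambda state, g: g + sld_to_B[state],  # path cost so far + estimate to go
)
```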

  21. A* is Optimal? • A* can be optimal if h(n) satisfies conditions • h(n) never overestimates the cost to reach the goal • it is eternally optimistic • called an admissible heuristic • f(n) then never overestimates the cost of a solution through n • Proof of optimality?

  22. A* is Optimal • We must prove that A* will not return a suboptimal goal or a suboptimal path to a goal • Let G be a suboptimal goal node • f(G) = g(G) + h(G) • h(G) = 0 because G is a goal node and we cannot overestimate actual cost to reach G • f(G) = g(G) > C* (because G is suboptimal)

  23. A* is Optimal (cont.) • Let n be a node on the optimal path • because h(n) does not overestimate: f(n) = g(n) + h(n) <= C* • Therefore f(n) <= C* < f(G), so node n will be selected before node G • Conclusion: A* is optimal if h(n) is admissible

  24. Repeated States and GRAPH-SEARCH • GRAPH-SEARCH always ignores all but the first occurrence of a state during search • A lower-cost path may be tossed • So, don’t throw away subsequent occurrences • Or, ensure that the optimal path to any repeated state is always the first one followed • This requires an additional constraint on the heuristic: consistency

  25. Consistent heuristic: h(n) • The heuristic function must be monotonic • for every node n and successor n’ obtained with action a • the estimated cost of reaching the goal from n is no greater than the cost of getting to n’ plus the estimated cost of reaching the goal from n’ • h(n) <= c(n, a, n’) + h(n’) • This implies that f(n) along any path is nondecreasing

  26. Examples of consistent h(n) • h(n) <= c(n, a, n’) + h(n’) • recall that h(n) is admissible • “The quickest you can get there from here is 10 minutes” • It may take more than 10 minutes, but not fewer • After taking an action and learning its cost: • “It took you two minutes to get here and you still have nine minutes to go” is consistent, since 10 <= 2 + 9 • We cannot “learn”: “it took you two minutes to get here and you have seven minutes to go” is inconsistent, since 10 > 2 + 7
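
The minutes example reduces to a one-line check; here it is as a tiny illustrative helper with the slide's numbers plugged in.

```python
def edge_is_consistent(h_n, step_cost, h_n_prime):
    """Consistency on one edge: h(n) <= c(n, a, n') + h(n')."""
    return h_n <= step_cost + h_n_prime

print(edge_is_consistent(10, 2, 9))   # True: 10 <= 11, we can still be 9 away
print(edge_is_consistent(10, 2, 7))   # False: 10 > 9, the "cannot learn" case
```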

  27. Example of inconsistent h(n) • As a thought exercise for after class • Consider what happens when a heuristic is inconsistent • Consider how one could have a consistent but non-admissible heuristic

  28. Proof of monotonicity of f(n) • If h(n) is consistent (monotonic), then f(n) along any path is nondecreasing • let n’ be a successor of n • g(n’) = g(n) + c(n, a, n’) for some action a • f(n’) = g(n’) + h(n’) = g(n) + c(n, a, n’) + h(n’) >= g(n) + h(n) = f(n) • the inequality holds because monotonicity implies h(n) <= c(n, a, n’) + h(n’)

  29. Contours • Because f(n) is nondecreasing • we can draw contours • If we know C* • We only need to explore contours less than C*

  30. Properties of A* • A* expands all nodes with f(n) < C* • A* expands some (at least one) of the nodes on the C* contour before finding the goal • A* expands no nodes with f(n) > C* • these unexpanded nodes can be pruned

  31. A* is Optimally Efficient • Compared to other algorithms that search from the root • Compared to other algorithms using the same heuristic • No other optimal algorithm is guaranteed to expand fewer nodes than A* (except perhaps by eliminating consideration of ties at f(n) = C*)

  32. Pros and Cons of A* • A* is optimal and optimally efficient • A* is still slow and bulky (space kills first) • The number of nodes grows exponentially with the distance to the goal • This growth is really a function of the heuristic’s error, but all heuristics have errors • A* must search all nodes within the goal contour • Finding suboptimal goals is sometimes the only feasible option • Sometimes, better heuristics are non-admissible

  33. Memory-bounded Heuristic Search • Try to reduce memory needs • Take advantage of heuristic to improve performance • Iterative-deepening A* (IDA*) • Recursive best-first search (RBFS) • SMA*

  34. Iterative Deepening A* • Iterative deepening • Remember from uninformed search: this was a depth-first search where the max depth was iteratively increased • As an informed search, we again perform depth-first search, but only expand nodes with f-cost less than or equal to the current cutoff; the next cutoff is the smallest f-cost among nodes that exceeded the current one • What happens when f-cost is real-valued?
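
A minimal sketch of IDA* under the usual assumptions (a `successors(state)` iterable of (state, cost) pairs and an admissible h). The bound update in the last line is also the answer to the real-valued f-cost question: each iteration may raise the bound only slightly, adding many iterations.

```python
import math

def ida_star(start, goal_test, successors, h):
    """Depth-first probes bounded by f = g + h; each round raises the
    bound to the smallest f-cost that exceeded the previous bound."""
    bound = h(start)
    path = [start]

    def probe(node, g, bound):
        f = g + h(node)
        if f > bound:
            return f                      # report the overshoot upward
        if goal_test(node):
            return 'FOUND'
        next_bound = math.inf
        for succ, cost in successors(node):
            if succ not in path:          # avoid cycles on the current path
                path.append(succ)
                t = probe(succ, g + cost, bound)
                if t == 'FOUND':
                    return 'FOUND'
                next_bound = min(next_bound, t)
                path.pop()
        return next_bound

    while True:
        t = probe(start, 0, bound)
        if t == 'FOUND':
            return path
        if t == math.inf:
            return None                   # no solution
        bound = t                         # smallest f that exceeded old bound
```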

  35. Recursive best-first search • Depth-first search combined with the best alternative • Keep track of the options along the fringe • As soon as the current depth-first exploration becomes more expensive than the best fringe option • back up to the fringe, but update node costs along the way
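
A sketch in the spirit of Russell and Norvig's RBFS pseudocode, with the same assumed `successors` interface as the IDA* sketch. The key idea is the f_limit argument carrying the f-value of the best alternative on the fringe.

```python
import math

def rbfs(start, goal_test, successors, h):
    """Depth-first, but unwind when the current subtree's f exceeds the best
    alternative's, recording a backed-up f-value for later re-expansion."""
    def visit(node, g, f_node, f_limit, path):
        if goal_test(node):
            return path, f_node
        children = []
        for succ, cost in successors(node):
            g2 = g + cost
            # a child's f can never be lower than its parent's backed-up f
            children.append([max(g2 + h(succ), f_node), g2, succ])
        if not children:
            return None, math.inf
        while True:
            children.sort(key=lambda c: c[0])
            best = children[0]
            if best[0] > f_limit:
                return None, best[0]      # fail: back up our best f-value
            alt = children[1][0] if len(children) > 1 else math.inf
            result, best[0] = visit(best[2], best[1], best[0],
                                    min(f_limit, alt), path + [best[2]])
            if result is not None:
                return result, best[0]

    solution, _ = visit(start, 0, h(start), math.inf, [start])
    return solution
```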

  36. Recursive best-first search • [figure-only slide; image not included in the transcript]

  37. Recursive best-first search • [figure-only slide]

  38. Recursive best-first search • [figure-only slide]

  39. Quality of Iterative Deepening A* and Recursive best-first search • RBFS • O(b·d) space complexity [if h(n) is admissible] • Time complexity is hard to characterize • efficiency depends heavily on the quality of h(n) • the same states may be explored many times • IDA* and RBFS use too little memory • even if you wanted to use more than O(b·d) memory, these two could not take advantage of it

  40. Simple Memory-bounded A* • Use all available memory • Follow A* algorithm and fill memory with new expanded nodes • If new node does not fit • free() stored node with worst f-value • propagate f-value of freed node to parent • SMA* will regenerate a subtree only when it is needed • the path through deleted subtree is unknown, but cost is known

  41. Thrashing • Typically discussed in OS courses w.r.t. memory • The cost of repeatedly freeing and regenerating parts of the search tree dominates the cost of the actual search • time complexity grows significantly when thrashing occurs • So we saved space with SMA*, but if the problem is large, it becomes intractable from the point of view of computation time

  42. Meta-foo • What does “meta” mean in AI? • Frequently it means stepping back a level from foo • Metareasoning = reasoning about reasoning • These informed search algorithms have pros and cons regarding how they choose to explore new levels • a metalevel learning algorithm may learn how to combine techniques and parameterize the search

  43. Heuristic Functions • 8-puzzle problem • average solution depth = 22 • branching factor ≈ 3 • an exhaustive search would examine about 3^22 ≈ 3.1 × 10^10 states • graph search cuts this down by a factor of about 170,000: only 9!/2 = 181,440 distinct states are reachable

  44. Heuristics • The number of misplaced tiles (h1) • Admissible because at least n moves required to solve n misplaced tiles • The distance from each tile to its goal position (h2) • No diagonals, so use Manhattan Distance • As if walking around rectilinear city blocks • also admissible
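
Both heuristics fit in a few lines. This sketch assumes 8-puzzle states encoded as length-9 tuples read row by row, with 0 as the blank; the goal ordering is an arbitrary choice for illustration.

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal ordering; 0 is the blank

def h1_misplaced(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2_manhattan(state, goal=GOAL):
    """Sum of Manhattan (city-block) distances of each tile to its goal square."""
    where = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = divmod(i, 3)
            gr, gc = where[tile]
            total += abs(r - gr) + abs(c - gc)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
print(h1_misplaced(start), h2_manhattan(start))   # 6 14 for this pair
```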

  45. Compare these two heuristics • Effective Branching Factor, b* • If A* creates N nodes to find the goal at depth d • b* = the branching factor such that a uniform tree of depth d contains N + 1 nodes (add 1 to N to account for the root) • N + 1 = 1 + b* + (b*)^2 + … + (b*)^d • b* close to 1 is ideal • because this means the heuristic guided the A* search linearly • If b* were 100, on average, the heuristic had to consider 100 children for each node • Compare heuristics based on their b*
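
N + 1 = 1 + b* + (b*)^2 + … + (b*)^d has no closed-form solution for b*, but the sum is monotone in b*, so a few lines of bisection recover it (a sketch; the 52-node example is Russell and Norvig's).

```python
def effective_branching_factor(n_nodes, depth, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection."""
    target = n_nodes + 1
    def total(b):
        return sum(b ** i for i in range(depth + 1))  # 1 + b + ... + b^d
    lo, hi = 1.0, float(target)          # total(1) = d+1, total(N+1) > N+1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# R&N's example: a solution at depth 5 using 52 nodes gives b* ≈ 1.92
print(round(effective_branching_factor(52, 5), 2))
```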

  46. Compare these two heuristics • [table comparing search costs and effective branching factors for h1 and h2; not reproduced in the transcript]

  47. Compare these two heuristics • h2 is always better than h1 • for any node n, h2(n) >= h1(n) • h2 dominates h1 • Recall that all nodes with f(n) < C* will be expanded • This means all nodes with h(n) + g(n) < C* will be expanded • i.e., all nodes with h(n) < C* - g(n) will be expanded • Every node expanded under h2 will also be expanded under h1, and because h1 is smaller, h1 will expand other nodes as well

  48. Inventing admissible heuristic funcs • How can you create h(n)? • Simplify the problem by reducing the restrictions on actions • e.g., allow 8-puzzle pieces to sit atop one another • Call this a relaxed problem • The cost of an optimal solution to the relaxed problem is an admissible heuristic for the original problem • because the original problem is at least as expensive

  49. Examples of relaxed problems • A tile can move from square A to square B if • A is horizontally or vertically adjacent to B • and B is blank • A tile can move from A to B if A is adjacent to B (overlap) • A tile can move from A to B if B is blank (teleport) • A tile can move from A to B (teleport and overlap) • Solutions to these relaxed problems can be computed without search and therefore heuristic is easy to compute

  50. Multiple Heuristics • If multiple heuristics available: • h(n) = max {h1(n), h2(n), …, hm(n)}
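
As a sketch, the combination is one line, and it preserves admissibility: every component underestimates the true cost, so their max does too, and the max dominates each component.

```python
def h_max(*heuristics):
    """Pointwise max of admissible heuristics: still admissible, and
    dominates every individual component."""
    return lambda n: max(h(n) for h in heuristics)

# e.g., h = h_max(h1_misplaced, h2_manhattan) using the 8-puzzle
# heuristics sketched earlier (with their default goal argument)
```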
