
Presentation Transcript


  1. Notes • Dijkstra's Algorithm • Corrected syllabus

  2. Tree Search Implementation Strategies • Require data structure to model search tree • Tree Node: • State (e.g. Sibiu) • Parent (e.g. Arad) • Action (e.g. GoTo(Sibiu)) • Path cost or depth (e.g. 140) • Children (e.g. Fagaras, Oradea) (optional, helpful in debugging)
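
A minimal Python sketch of such a node, with an illustrative helper that recovers the path by following parent pointers (the names are assumptions, not from the slides):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """Search-tree node with the fields listed on the slide."""
    state: str                       # e.g. "Sibiu"
    parent: Optional["Node"] = None  # e.g. the node whose state is "Arad"
    action: Optional[str] = None     # e.g. "GoTo(Sibiu)"
    path_cost: float = 0.0           # e.g. 140
    children: List["Node"] = field(default_factory=list)  # optional, helpful in debugging

def solution_path(node: Optional[Node]) -> List[str]:
    """Follow parent pointers back to the root to recover the sequence of states."""
    path: List[str] = []
    while node is not None:
        path.append(node.state)
        node = node.parent
    return list(reversed(path))
```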

  3. Queue • Methods: • Empty(queue) • Returns true if there are no more elements • Pop(queue) • Remove and return the first element • Insert(queue, element) • Inserts element into the queue • InsertFIFO(queue, element) – inserts at the end • InsertLIFO(queue, element) – inserts at the front • InsertPriority(queue, element, value) – inserts sorted by value
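
A minimal Python sketch of that interface (an assumption, not the course's actual code). A real search would use only one insertion policy; the three are combined here purely to mirror the slide's method list.

```python
import heapq
from collections import deque

class Frontier:
    """Queue supporting the operations listed on the slide."""
    def __init__(self):
        self._deque = deque()  # backs FIFO / LIFO insertion
        self._heap = []        # backs priority insertion
        self._tie = 0          # tie-breaker so the heap never compares elements directly

    def empty(self) -> bool:
        """Returns True if there are no more elements."""
        return not self._deque and not self._heap

    def pop(self):
        """Remove and return the first element (lowest-value priority entries first)."""
        if self._heap:
            return heapq.heappop(self._heap)[2]
        return self._deque.popleft()

    def insert_fifo(self, element):
        self._deque.append(element)       # inserts at the end

    def insert_lifo(self, element):
        self._deque.appendleft(element)   # inserts at the front

    def insert_priority(self, element, value):
        self._tie += 1
        heapq.heappush(self._heap, (value, self._tie, element))  # kept sorted by value
```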

  4. Informed Search

  5. Search • [Figure: start-to-goal diagram contrasting uninformed search with informed search]

  6. Informed Search • What if we had an evaluation function h(n) that gave us an estimate of the cost of getting from n to the goal? • h(n) is called a heuristic

  7. Romania with step costs in km • [Figure: road map of Romania with the heuristic value h(n) listed for each city]
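
In code, this heuristic is just a lookup table of straight-line distances to Bucharest. The handful of values below follow the commonly cited figures and should be treated as illustrative:

```python
# Straight-line distances to Bucharest in km (illustrative subset, not the full table).
h_sld = {
    "Arad": 366,
    "Sibiu": 253,
    "Rimnicu Vilcea": 193,
    "Fagaras": 176,
    "Pitesti": 100,
    "Bucharest": 0,
}

def h(state: str) -> int:
    """Heuristic h(n): estimated cost from state n to the goal (Bucharest)."""
    return h_sld[state]
```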

  8. Greedy best-first search • Evaluation function f(n) = h(n) (heuristic) • e.g., f(n) = hSLD(n) = straight-line distance from n to Bucharest • Greedy best-first search expands the node that is estimated to be closest to goal

  9. Romania with step costs in km

  10. Best-First Algorithm
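
The algorithm figure on this slide is not reproduced in the transcript; below is a minimal, generic Python sketch. All names are assumptions, and successors(state) is assumed to yield (action, next_state, step_cost) triples. Passing f(state, g) = h(state) gives greedy best-first search; passing f(state, g) = g + h(state) gives A*, covered on the later slides.

```python
import heapq

def best_first_search(start, goal_test, successors, f):
    """Expand the frontier node with the smallest f value first.

    f(state, g) evaluates a node given its state and path cost g so far;
    successors(state) yields (action, next_state, step_cost) triples.
    Returns (list of states on the path, path cost), or (None, inf) on failure."""
    tie = 0                                     # tie-breaker for the heap
    frontier = [(f(start, 0.0), tie, start, 0.0, [start])]
    explored = set()
    while frontier:
        _, _, state, g, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        if state in explored:
            continue
        explored.add(state)
        for _action, nxt, cost in successors(state):
            if nxt not in explored:
                tie += 1
                new_g = g + cost
                heapq.heappush(frontier, (f(nxt, new_g), tie, nxt, new_g, path + [nxt]))
    return None, float("inf")
```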

  11. Performance of greedy best-first search • Complete? • Optimal?

  12. Failure case for best-first search

  13. Performance of greedy best-first search • Complete? • No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → … • Optimal? • No

  14. Complexity of greedy best-first search • Time? • O(b^m), but a good heuristic can give dramatic improvement • Space? • O(b^m) -- keeps all nodes in memory

  15. What can we do better?

  16. A* search • Ideas: • Avoid expanding paths that are already expensive • Consider • Cost to get here (known) – g(n) • Cost to get to goal (estimate from the heuristic) – h(n)

  17. A* Evaluation functions • Evaluation function f(n) = g(n) + h(n) • g(n) = cost so far to reach n • h(n) = estimated cost from n to goal • f(n) = estimated total cost of path through n to goal • [Figure: path from start to n, then n to goal, labelled with g(n), h(n) and f(n)]
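
Building on the generic best-first sketch and the illustrative h_sld table above, A* simply passes f(n) = g(n) + h(n) as the evaluation function. The tiny road map below covers only the Arad-to-Bucharest corridor; the step costs are the commonly cited ones and should be treated as illustrative.

```python
# Assumes best_first_search and h_sld from the earlier sketches.
road_map = {
    "Arad":           [("GoTo(Sibiu)", "Sibiu", 140)],
    "Sibiu":          [("GoTo(Fagaras)", "Fagaras", 99),
                       ("GoTo(Rimnicu Vilcea)", "Rimnicu Vilcea", 80)],
    "Rimnicu Vilcea": [("GoTo(Pitesti)", "Pitesti", 97)],
    "Fagaras":        [("GoTo(Bucharest)", "Bucharest", 211)],
    "Pitesti":        [("GoTo(Bucharest)", "Bucharest", 101)],
    "Bucharest":      [],
}

path, cost = best_first_search(
    start="Arad",
    goal_test=lambda s: s == "Bucharest",
    successors=lambda s: road_map[s],
    f=lambda state, g: g + h_sld[state],   # A*: cost so far plus heuristic estimate
)
# With these illustrative distances the result is
# Arad -> Sibiu -> Rimnicu Vilcea -> Pitesti -> Bucharest (cost 418),
# whereas greedy best-first (f = h_sld[state]) would detour through Fagaras.
```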

  18. A* Heuristics • A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n. • An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic • Example: hSLD(n) (never overestimates the actual road distance)

  19. What happens if the heuristic is not admissible? • Will still find a solution (complete) • But might not find the best solution (not optimal)

  20. Properties of A* (w/ admissible heuristic) • Complete? • Yes (unless there are infinitely many nodes with f ≤ f(G)) • Optimal? • Yes • Time? • Exponential, approximately O(b^d) in the worst case • Space? • O(b^m) -- keeps all nodes in memory

  21. The heuristic h(x) guides the performance of A* • Let d(x) be the actual distance from x to the goal G • h(x) = 0: • A* is equivalent to Uniform-Cost Search • h(x) ≤ d(x): • guaranteed to compute the shortest path; the lower the value of h(x), the more nodes A* expands • h(x) = d(x): • follows the best path and never expands anything else; but it is difficult to compute h(x) this way! • h(x) > d(x): • no guarantee of computing a best path, but very fast • h(x) >> g(x): • h(n) dominates -> A* behaves like greedy best-first search

  22. Admissible heuristics

  23. Admissible heuristics E.g., for the 8-puzzle:

  24. Admissible heuristics E.g., for the 8-puzzle: • h1(n) = number of misplaced tiles • h2(n) = summed Manhattan distance for all tiles (i.e., no. of squares from desired location of each tile) • h1(S) = ? • h2(S) = ?

  25. Admissible heuristics E.g., for the 8-puzzle: • h1(n) = number of misplaced tiles • h2(n) = total Manhattan distance (i.e., no. of squares from desired location of each tile) • h1(S) = ? 8 • h2(S) = ? 3+1+2+2+2+3+3+2 = 18 Which is better?
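
A hedged Python sketch of both heuristics, with a state written as a 9-tuple read row by row and 0 standing for the blank. The goal layout chosen here is one common convention and may differ from the slide's figure, so the values for the slide's state S are not reproduced.

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout; 0 is the blank

def h1(state):
    """h1: number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def h2(state):
    """h2: total Manhattan distance, i.e. squares each tile is from its goal position."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        g = GOAL.index(tile)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total
```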

  26. Dominance • If h2(n) ≥ h1(n) for all n (both admissible) • then h2 dominates h1 • h2 is better for search • What does better mean? • All searches we’ve discussed are exponential in time

  27. Comparison of algorithms (number of nodes expanded)

  28. Visually

  29. Where do heuristics come from? • From people • Knowledge of the problem • From computers • By considering a simpler version of the problem • Called a relaxation

  30. Relaxed problems • 8-puzzle • If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution • If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution • Consider the example of straight line distance (Romania navigation) • Is that a relaxation?

  31. Iterative-Deepening A* (IDA*) • Further reduce memory requirements of A* • Regular Iterative-Deepening: regulated by depth • IDA*: regulated by f(n)=g(n)+h(n)
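
A hedged sketch of the idea: depth-first iterative deepening, except that the cutoff raised between iterations is on f(n) = g(n) + h(n) rather than on depth. The successors(state) interface yielding (next_state, step_cost) pairs is an assumption.

```python
import math

def ida_star(start, goal_test, successors, h):
    """Iterative-deepening A*: memory-light search bounded by f = g + h."""
    bound = h(start)
    path = [start]                      # current path from the root, used as a stack

    def search(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                    # over the cutoff: candidate for the next bound
        if goal_test(node):
            return "FOUND"
        minimum = math.inf
        for nxt, cost in successors(node):
            if nxt in path:             # skip states already on the current path
                continue
            path.append(nxt)
            result = search(g + cost, bound)
            if result == "FOUND":
                return "FOUND"
            minimum = min(minimum, result)
            path.pop()
        return minimum

    while True:
        result = search(0.0, bound)
        if result == "FOUND":
            return list(path)
        if result == math.inf:
            return None                 # space exhausted: no solution
        bound = result                  # raise the cutoff to the smallest f seen above it
```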

  32. Questions?
