
CS 416 Artificial Intelligence, Lecture 5: Informed Searches



Presentation Transcript


  1. CS 416 Artificial Intelligence, Lecture 5: Informed Searches

  2. Administrivia
  • Visual Studio: if you've never been granted permission before, you should receive email shortly
  • Assign 1: set up your submit account password; visit www.cs.virginia.edu/~cs416/submit.html
  • Toolkit: you should see that you're enrolled via Toolkit access to the class gradebook

  3. Example
  • [Figure (reconstructed from transcript): a small search graph annotated with heuristic values h(n), edge costs c(n, n'), and f-values f(n). Start node A (h = 100) has successors B (edge cost 10, h(B) = 0, f(B) = 10) and C (edge cost 5, h(C) = 90, f(C) = 95); a Goal node lies beyond C]

  4. A* w/o Admissibility
  • B = goal, f(B) = 10. Are we done? No: without admissibility, h(C) = 90 may overestimate, so a path through C could still be cheaper than 10, and we must explore C
  • [Figure: the graph from slide 3, here with h(A) = 100]

  5. A* w/ Admissibility
  • B = goal, f(B) = 10. Are we done? Yes: h(C) is a lower bound, so f(C) = 95 indicates a best possible path cost of 95 through C, and B's cost of 10 cannot be beaten (see the sketch below)
  • [Figure: the graph from slide 3, here with h(A) = 10]
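A minimal Python sketch of A* run on the graph reconstructed above; the graph encoding and the successor/heuristic interfaces are my assumptions, not from the slides. It shows why the admissible h(C) lets A* stop at B without ever expanding C:

```python
import heapq

def a_star(start, goal_test, successors, h):
    """Minimal A*: returns (path, cost), or None if no goal is reachable."""
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if goal_test(node):
            return path, g        # first goal popped is optimal when h is admissible
        for child, cost in successors(node):
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h(child), g2, child, path + [child]))
    return None

# Graph and heuristic values as reconstructed from the slide's figure (assumed):
graph = {"A": [("B", 10), ("C", 5)], "B": [], "C": [("Goal", 90)], "Goal": []}
h = {"A": 10, "B": 0, "C": 90, "Goal": 0}        # admissible: never overestimates

path, cost = a_star("A", lambda n: n in ("B", "Goal"),
                    lambda n: graph[n], lambda n: h[n])
print(path, cost)    # ['A', 'B'] 10 -- C (f = 95) is never expanded
```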

  6. A* w/o Consistency
  • A heuristic is consistent when h(n) <= c(n, n') + h(n') for every successor n', which guarantees that f is non-decreasing along any path
  • [Figure (garbled in transcript): a four-node graph A, B, C, D with edge costs 5, 5, 4 and node labels 101, 100, 0; the label at C is 200, illustrating an inconsistent heuristic]

  7. A* w/ Consistency
  • [Figure: the same graph with a consistent heuristic; the label at C is now 105]

  8. Pros and Cons of A*
  • A* is optimal and optimally efficient
  • A* is still slow and bulky (space kills first)
  • The number of nodes grows exponentially with the distance to the goal
  • This growth is actually a function of the heuristic, but all heuristics make mistakes
  • A* must search all nodes within the goal contour
  • Finding suboptimal goals is sometimes the only feasible approach
  • Sometimes better heuristics are non-admissible

  9. Memory-bounded Heuristic Search
  • Try to reduce memory needs
  • Take advantage of the heuristic to improve performance
  • Iterative-deepening A* (IDA*)
  • Recursive best-first search (RBFS)
  • SMA*

  10. Iterative Deepening A*
  • Iterative deepening: recall that in uninformed search this was a depth-first search whose maximum depth was iteratively increased
  • As an informed search, we again perform depth-first search, but cut off on f-cost rather than depth: each iteration expands only nodes whose f-cost is at or below the current cutoff, and the next cutoff is the smallest f-cost of any node that exceeded the current one (see the sketch below)
  • What happens when f-costs are real-valued? In the worst case each iteration adds only a single new node
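A compact sketch of IDA* as just described, assuming a successors(n) function that yields (child, step_cost) pairs (an interface I'm supplying, not the lecture's):

```python
import math

def ida_star(start, goal_test, successors, h):
    """Sketch of IDA*: depth-first search bounded by an f-cost cutoff.

    Returns (path, cost) or None."""
    def dfs(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f, None            # prune; report f so the caller can pick the next cutoff
        if goal_test(node):
            return f, (list(path), g)
        next_bound = math.inf
        for child, cost in successors(node):
            if child in path:         # avoid cycles on the current path
                continue
            t, found = dfs(path + [child], g + cost, bound)
            if found:
                return t, found
            next_bound = min(next_bound, t)
        return next_bound, None

    bound = h(start)
    while True:
        bound, found = dfs([start], 0, bound)
        if found:
            return found
        if bound == math.inf:
            return None               # every branch exhausted: no solution
```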

  11. Recursive best-first search
  • Depth-first search combined with tracking the best alternative
  • Keep track of options along the fringe
  • As soon as the current depth-first exploration becomes more expensive than the best fringe option, back up to the fringe, updating node costs along the way (see the sketch after the next slide)

  12. Recursive best-first search
  • In the textbook's Romania route-finding example, the box beside each node contains the f-value of the best alternative path available from any ancestor
  • First, explore the path to Pitesti
  • Backtrack to Fagaras and update Fagaras's stored f-value
  • Backtrack to Pitesti and update Pitesti's stored f-value
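A sketch of RBFS along the lines of the standard Russell & Norvig formulation; the successors interface and the path-based cycle check are my additions:

```python
import math

def rbfs(start, goal_test, successors, h):
    """Recursive best-first search sketch. successors(n) yields
    (child, step_cost) pairs. Returns (path, cost) or None."""

    def search(node, g, f, f_limit, path):
        if goal_test(node):
            return (path, g), f
        children = []
        for child, cost in successors(node):
            if child in path:
                continue
            g2 = g + cost
            # A child's stored f is at least its parent's f (backed-up value)
            children.append([max(g2 + h(child), f), g2, child])
        if not children:
            return None, math.inf
        while True:
            children.sort(key=lambda c: c[0])
            best = children[0]
            if best[0] > f_limit:
                return None, best[0]   # too expensive: back up, reporting new best f
            # The best alternative bounds how long we may commit to this child
            alternative = children[1][0] if len(children) > 1 else math.inf
            result, best[0] = search(best[2], best[1], best[0],
                                     min(f_limit, alternative), path + [best[2]])
            if result is not None:
                return result, best[0]

    result, _ = search(start, 0, h(start), math.inf, [start])
    return result
```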

  13. Quality of Iterative Deepening A* and Recursive best-first search
  • RBFS has O(bd) space complexity [if h(n) is admissible]
  • Its time complexity is hard to characterize: efficiency depends heavily on the quality of h(n), and the same states may be explored many times
  • IDA* and RBFS use too little memory: even if more than O(bd) memory were available, neither could take advantage of it

  14. Simple Memory-bounded A*
  • Use all available memory
  • Follow the A* algorithm, filling memory with newly expanded nodes
  • If a new node does not fit: free the stored node with the worst f-value and propagate that f-value up to its parent
  • SMA* regenerates a subtree only when it is needed: the path through a deleted subtree is unknown, but its cost is known (see the sketch below)
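Full SMA* requires careful bookkeeping of backed-up values and forgotten successors; the snippet below is only a simplified sketch of the eviction step described above, with data structures I've invented for illustration:

```python
import math

class SMANode:
    """Node record for a simplified memory-bounded search (illustrative only)."""
    def __init__(self, state, parent, g, f):
        self.state, self.parent, self.g, self.f = state, parent, g, f
        self.children = []
        self.forgotten_f = math.inf    # best f among successors we dropped

def evict_worst_leaf(open_leaves):
    """The SMA*-style memory step: drop the leaf with the worst f-value and
    back its f up to the parent, so the parent remembers how promising the
    forgotten subtree was without storing its path."""
    worst = max(open_leaves, key=lambda n: n.f)
    open_leaves.remove(worst)
    parent = worst.parent
    if parent is not None:
        parent.children.remove(worst)
        parent.forgotten_f = min(parent.forgotten_f, worst.f)
        if not parent.children:             # parent becomes a leaf again,
            parent.f = parent.forgotten_f   # carrying the backed-up cost,
            open_leaves.append(parent)      # and may be re-expanded later
    return worst
```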

  15. Thrashing
  • Typically discussed in OS courses with respect to memory
  • Here, the cost of repeatedly freeing and regenerating parts of the search tree comes to dominate the cost of the actual search, so computation time grows sharply once thrashing sets in
  • We saved space with SMA*, but if the problem is large, it becomes intractable in terms of computation time

  16. Meta-foo
  • What does "meta" mean in AI? Frequently it means stepping back a level from foo
  • Metareasoning = reasoning about reasoning
  • These informed search algorithms have pros and cons in how they choose to explore new levels; a metalevel learning algorithm may learn how to combine techniques and parameterize the search

  17. Heuristic Functions
  • 8-puzzle problem: average solution depth = 22, branching factor ~ 3, so an exhaustive search to depth 22 considers about 3^22 = 3.1 x 10^10 states, even though only about 170,000 distinct states are reachable (most paths revisit repeated states)

  18. Heuristics
  • h1: the number of misplaced tiles. Admissible because at least n moves are required to fix n misplaced tiles
  • h2: the sum of the distances from each tile to its goal position. With no diagonal moves, use Manhattan distance, as if walking around rectilinear city blocks. Also admissible (both are sketched below)
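Both heuristics in Python; the tuple state representation (read row by row, 0 for the blank) is my assumption. The example start state is the familiar one from the Russell & Norvig text, for which h1 = 8 and h2 = 18:

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1_misplaced(state, goal=GOAL):
    """h1: number of tiles not in their goal position (blank excluded)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2_manhattan(state, goal=GOAL):
    """h2: sum of Manhattan (city-block) distances of tiles from their goals."""
    pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = i // 3, i % 3
            gr, gc = pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total

state = (7, 2, 4, 5, 0, 6, 8, 3, 1)               # example start state
print(h1_misplaced(state), h2_manhattan(state))   # 8 18
```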

  19. Compare these two heuristics
  • Effective branching factor, b*: if A* explores N nodes to find the goal at depth d, b* is the branching factor such that a uniform tree of depth d contains N + 1 nodes: N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d
  • b* close to 1 is ideal, because it means the heuristic guided the A* search nearly linearly to the goal; if b* were 100, the heuristic on average had to consider 100 children per node
  • Compare heuristics based on their b* (a numeric solver is sketched below)
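A small solver for b*, assuming nothing beyond the equation on the slide; bisection works because the tree size grows monotonically in b*:

```python
def effective_branching_factor(N, d, tol=1e-9):
    """Solve N + 1 = 1 + b + b**2 + ... + b**d for b by bisection.

    N: nodes generated by the search; d: solution depth."""
    def tree_size(b):
        return sum(b ** i for i in range(d + 1))   # 1 + b + ... + b^d

    lo, hi = 1.0, float(N + 1)                     # b* lies in [1, N + 1]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tree_size(mid) < N + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: 52 nodes generated for a solution at depth 5 gives b* of about 1.92
print(round(effective_branching_factor(52, 5), 2))
```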

  20. Compare these two heuristics
  • [Table of search costs and effective branching factors for h1 and h2 omitted from the transcript]

  21. Compare these two heuristics
  • h2 is always better than h1: for any node n, h2(n) >= h1(n), so h2 dominates h1
  • Recall that A* expands all nodes with f(n) < C*
  • That is, all nodes with h(n) + g(n) < C*, i.e. all nodes with h(n) < C* - g(n), are expanded
  • Every node expanded under h2 is therefore also expanded under h1, and because h1 is smaller, h1 may expand additional nodes as well

  22. Inventing admissible heuristic funcs
  • How can you create h(n)?
  • Simplify the problem by removing restrictions on actions, e.g. allow 8-puzzle pieces to sit atop one another; call this a relaxed problem
  • The cost of an optimal solution to the relaxed problem is an admissible heuristic for the original problem, since the original problem is at least as expensive to solve

  23. Examples of relaxed problems
  • Original rule: a tile can move from square A to square B if A is horizontally or vertically adjacent to B and B is blank
  • A tile can move from A to B if A is adjacent to B (overlap allowed): its exact solution cost is the Manhattan-distance heuristic h2
  • A tile can move from A to B if B is blank (teleport)
  • A tile can move from A to B (teleport and overlap): its exact solution cost is the misplaced-tiles heuristic h1
  • Solutions to these relaxed problems can be computed without search, so the heuristic is cheap to evaluate

  24. Multiple Heuristics
  • If multiple admissible heuristics are available, take their pointwise maximum: h(n) = max {h1(n), h2(n), ..., hm(n)} (a one-line sketch follows)
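A one-line sketch of this combination; the max of admissible heuristics is itself admissible and dominates each component:

```python
def combine(*heuristics):
    """Pointwise max of admissible heuristics: still admissible, and it
    dominates every component heuristic."""
    return lambda n: max(h(n) for h in heuristics)

# Toy usage; in practice the components would be h1 (misplaced tiles)
# and h2 (Manhattan distance) from the sketch above.
h = combine(lambda n: n // 2, lambda n: n - 3)
print(h(10))   # 7: the larger (better informed) estimate wins at each state
```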

  25. Use solution to subproblem as heuristic
  • What is the optimal cost of solving some portion of the original problem?
  • The subproblem's optimal solution cost is a heuristic for the original problem

  26. Pattern Databases
  • Store optimal solutions to subproblems in a database
  • Use an exhaustive search to solve every permutation of the 1-2-3-4-piece subproblem of the 8-puzzle
  • While solving the full 8-puzzle, look up the optimal cost of solving the current 1-2-3-4-piece configuration and use it as the heuristic (a construction sketch follows)
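A sketch of building such a database by breadth-first search backward from the goal; abstracting tiles 5 through 8 to an indistinguishable '*' is the standard pattern-projection idea, though the exact representation here is my own:

```python
from collections import deque

ABSTRACT_GOAL = (0, 1, 2, 3, 4, '*', '*', '*', '*')   # tiles 5-8 abstracted away

def neighbors(state):
    """All abstract states reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def build_pdb(goal=ABSTRACT_GOAL):
    """BFS from the goal; since moves are reversible, the BFS distance to a
    configuration is the optimal cost of solving it."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        state = queue.popleft()
        for nxt in neighbors(state):
            if nxt not in dist:
                dist[nxt] = dist[state] + 1
                queue.append(nxt)
    return dist

def abstract(state):
    """Project a full 8-puzzle state onto the 1-2-3-4 pattern space."""
    return tuple(t if t in (0, 1, 2, 3, 4) else '*' for t in state)

pdb = build_pdb()
print(len(pdb))                                     # 9*8*7*6*5 = 15120 patterns
print(pdb[abstract((7, 2, 4, 5, 0, 6, 8, 3, 1))])   # admissible h for this state
```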

  27. Learning
  • We could also build the pattern database while solving cases of the 8-puzzle: keep track of intermediate states and the true final cost of each solution
  • Inductive learning then builds a mapping from state -> cost
  • Because there are too many permutations of actual states, construct informative features to reduce the size of the space
