
Artificial Intelligence for Games Informed Search (2)


Presentation Transcript


  1. Artificial Intelligence for Games: Informed Search (2) Patrick Olivier p.l.olivier@ncl.ac.uk

  2. Heuristic functions
  • sample heuristics for the 8-puzzle:
    • h1(n) = number of misplaced tiles
    • h2(n) = total Manhattan distance
  • h1(S) = ?
  • h2(S) = ?

  3. Heuristic functions
  • sample heuristics for the 8-puzzle:
    • h1(n) = number of misplaced tiles
    • h2(n) = total Manhattan distance
  • h1(S) = 8
  • h2(S) = 3+1+2+2+2+3+3+2 = 18
  • dominance:
    • h2(n) ≥ h1(n) for all n (both admissible)
    • h2 is better for search (closer to the perfect heuristic)
    • fewer nodes need to be expanded

  4. Example of dominance
  • randomly generate 8-puzzle problems
  • 100 examples for each solution depth
  • contrast behaviour of heuristics & strategies

  5. A* enhancements & local search
  • Memory enhancements
    • IDA*: Iterative-Deepening A*
    • SMA*: Simplified Memory-Bounded A*
  • Other enhancements (next lecture)
    • Dynamic weighting
    • LRTA*: Learning Real-time A*
    • MTS: Moving Target Search
  • Local search (next lecture)
    • Hill climbing & beam search
    • Simulated annealing & genetic algorithms

  6. Improving A* performance
  • Improving the heuristic function
    • not always easy for path-planning tasks
  • Implementation of A*
    • a key aspect for large search spaces
  • Relaxing the admissibility condition
    • trading optimality for speed

  7. IDA*: iterative-deepening A*
  • reduces the memory requirements of A* without sacrificing optimality
  • cost-bounded iterative depth-first search with linear memory requirements
  • expands all nodes within an f-cost contour
  • stores the smallest f-cost that exceeded the limit as the cost limit for the next iteration
  • repeats with the next-highest f-cost
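The contour idea above can be sketched as follows. This is a minimal illustration, not the lecture's code: it assumes a `successors` function yielding (step-cost, child) pairs, runs a depth-first search bounded by the current f-cost, and uses the smallest f-value that exceeded the bound as the limit for the next iteration.

```python
def ida_star(start, goal, h, successors):
    """Sketch of IDA*: repeated cost-bounded depth-first search.

    Memory is linear in the depth of the current path; the bound for
    the next iteration is the smallest f-value that exceeded this one.
    """
    bound = h(start)
    path = [start]

    def dfs(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                # report the exceeded f-value upward
        if node == goal:
            return True
        minimum = float('inf')
        for step_cost, child in successors(node):
            if child in path:       # avoid cycles along the current path
                continue
            path.append(child)
            result = dfs(g + step_cost, bound)
            if result is True:
                return True
            minimum = min(minimum, result)
            path.pop()
        return minimum

    while True:
        result = dfs(0, bound)
        if result is True:
            return path             # path to the goal, optimal if h is admissible
        if result == float('inf'):
            return None             # no solution
        bound = result              # next f-cost contour
```

Each iteration re-expands earlier contours from scratch; that repeated work is the price paid for the linear memory footprint.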

  8. IDA*: exercise
  Start state:    Goal state:
  1 2 3           1 2 3
  6 X 4           8 X 4
  8 7 5           7 6 5
  • Order of expansion: move space up, move space down, move space left, move space right
  • Evaluation function: g(n) = number of moves; h(n) = misplaced tiles
  • Expand the state space to a depth of 3 and calculate the evaluation function

  9. IDA*: f-cost limit = 3
  [Search-tree diagram: the root is the start state with f = 0+3 = 3; its four successors (blank moved up, down, left, right) have f-values of 1+4 = 5, 1+3 = 4, 1+3 = 4 and 1+4 = 5, all above the limit, so this iteration ends. Next f-cost limit = 4.]

  10. IDA*: f-cost limit = 4
  [Search-tree diagram: the bounded depth-first search now expands the f = 4 nodes from the previous contour, generating deeper states with f-values including 2+2 = 4, 2+3 = 5, 3+1 = 4 and 3+3 = 6, and reaching the goal state at f = 4+0 = 4. Next f-cost limit = 5.]

  11. Simplified memory-bounded A* (SMA*)
  • When memory runs out, drop the costliest (highest-f) leaf nodes
  • Back their cost up to the parent (they may be needed later)
  • Properties:
    • utilises whatever memory is available
    • avoids repeated states (as far as memory allows)
    • complete (if there is enough memory to store the solution path)
    • optimal (or the best reachable within the memory limit)
    • optimally efficient (with memory caveats)

  12. Simple memory-bounded A* (diagram-only slide)

  13. Class exercise
  • Use the state space given in the example
  • Execute the SMA* algorithm over this state space
  • Be sure that you understand the algorithm!

  14.–21. Simple memory-bounded A* (eight diagram-only slides stepping through the SMA* worked example)

  22. Trading optimality for speed…
  • The admissibility condition guarantees that an optimal path is found
  • In path planning a near-optimal path can be satisfactory
  • Try to minimise search effort instead of minimising path cost:
    • i.e. find a near-optimal path (quickly)

  23. Weighting…
  fw(n) = (1 − w)·g(n) + w·h(n)
  • w = 0.0: uniform-cost search (breadth-first for unit step costs)
  • w = 0.5: same ordering as A* (f = g + h)
  • w = 1.0: greedy best-first search (f = h)
  • trades optimality for speed
  • weight towards h when confident in the estimate of h
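The weighting scheme can be sketched on top of an ordinary best-first loop. This is an illustrative sketch (the function name and the graph encoding are mine, not the lecture's): the frontier is ordered by fw(n) = (1 − w)·g(n) + w·h(n), so the same code shifts from uniform-cost search through A*-like behaviour to greedy best-first as w rises from 0 to 1.

```python
import heapq

def weighted_search(start, goal, h, successors, w):
    """Best-first search ordered by f_w(n) = (1 - w)*g(n) + w*h(n).

    w = 0.0 -> uniform-cost; w = 0.5 -> same ordering as A*;
    w = 1.0 -> greedy best-first (f = h). Hypothetical sketch:
    `successors(node)` yields (step-cost, child) pairs.
    """
    frontier = [(w * h(start), 0, start, [start])]
    best_g = {}                       # cheapest g seen per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue                  # already reached more cheaply
        best_g[node] = g
        for cost, child in successors(node):
            g2 = g + cost
            f2 = (1 - w) * g2 + w * h(child)
            heapq.heappush(frontier, (f2, g2, child, path + [child]))
    return None
```

On a graph where the heuristic underestimates a costly branch, w = 0.5 still returns the optimal path while w = 1.0 can commit to the near-looking but expensive one, which is exactly the optimality-for-speed trade the slide describes.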
