
“If I Only had a Brain” Search






Presentation Transcript


1. “If I Only had a Brain” Search
Lecture 5-1, October 26th, 1999
CS250: Intro to AI/Lisp

2. Blind Search
• No information except:
  • Initial state
  • Operators
  • Goal test
• If we want worst-case optimality, we need exponential time

3. “How long ‘til we get there?”
• Add a notion of progress to search
• Not just the cost so far, but how far we still have to go

4. Best-First Search
• The next node in General-Search is chosen by the queuing function
• Replace the queuing function with an evaluation function
• Always expand the most desirable path
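The queuing-function idea can be sketched as follows. The course uses Lisp; this is an illustrative Python sketch with hypothetical names, where a graph is given as a successors function and the evaluation function f scores whole paths:

```python
import heapq

def best_first_search(start, goal, successors, f):
    """Generic best-first search: always expand the frontier path
    with the lowest value of the evaluation function f(path)."""
    frontier = [(f([start]), [start])]   # priority queue ordered by f
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in successors(node):
            if child not in visited:
                new_path = path + [child]
                heapq.heappush(frontier, (f(new_path), new_path))
    return None
```

Greedy search and uniform-cost search are both instances: pass f(path) = h(last node of path) for greedy, or f(path) = cost of path so far for uniform cost.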

5. Heuristic Functions
• Estimate remaining cost with a heuristic function, h(n)
• Problem-specific (Why?)
• Information about getting to the goal, not just where we’ve been
• Examples:
  • Route finding?
  • 8-puzzle?
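For the 8-puzzle, two standard heuristics are the number of misplaced tiles and the total Manhattan distance. A minimal sketch, assuming states are 9-tuples read row by row with 0 for the blank (a representation choice, not from the slides):

```python
def misplaced_tiles(state, goal):
    """h1: number of tiles (ignoring the blank) not on their goal square."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """h2: sum over tiles of row + column distance to the goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue  # the blank does not count
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

Both are admissible, since a single move changes one tile's position by one square, and manhattan never scores lower than misplaced_tiles.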

6. Greedy Searching
• Take the path that looks best right now: lowest estimated cost h(n)
• Not optimal
• Not complete
• Complexity?
  • Time: O(b^m)
  • Space: O(b^m)

7. Best of Both Worlds?
• Greedy
  • Minimizes total estimated cost to goal, h(n)
  • Not optimal, not complete
• Uniform cost
  • Minimizes cost so far, g(n)
  • Optimal & complete, but inefficient

8. Greedy + Uniform Cost
• Evaluate with both criteria: f(n) = g(n) + h(n)
• What does this mean?
• Sounds good, but is it:
  • Complete?
  • Optimal?

9. Admissible Heuristics
• Optimistic: never overestimate the cost of reaching the goal
• A* Search = Best-first search + admissible h(n)

10. A* Search
• Complete
• Optimal, if the heuristic is admissible
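A* is best-first search with f(n) = g(n) + h(n). A sketch under the same assumptions as before (hypothetical names; successors yields (child, step-cost) pairs):

```python
import heapq

def a_star(start, goal, successors, h):
    """A*: expand the frontier node with the lowest f(n) = g(n) + h(n).
    With an admissible h, the first goal popped off the queue is optimal."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {}                                  # cheapest g seen per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                             # already expanded more cheaply
        best_g[node] = g
        for child, cost in successors(node):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h(child), g2, child, path + [child]))
    return None
```

On a graph where the greedy-looking first step is a trap, A* still returns the cheapest path, because the g term eventually outweighs an optimistic h.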

11. Monotonicity
• Monotonic heuristic functions give nondecreasing f-values along any path
• Why might this be an important feature?
• Non-monotonic? Use pathmax:
  • Given a node n and its child n’:  f(n’) = max(f(n), g(n’) + h(n’))
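The pathmax equation is a one-liner; this sketch just makes the slide's formula executable:

```python
def pathmax_f(f_parent, g_child, h_child):
    """Pathmax: if a non-monotonic heuristic makes a child's f drop
    below its parent's, prop the child's f up to the parent's value,
    so f never decreases along a path."""
    return max(f_parent, g_child + h_child)
```

With pathmax, the contours A* fans out to (next slide) stay properly nested even when h itself is erratic.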

12. A* in Action
• Contoured state space
• A* starts at the initial node
• Expands the leaf node of lowest f(n) = g(n) + h(n)
• Fans out to increasing contours

13. A* in Perspective
• What if h(n) = 0 everywhere? A* becomes uniform-cost search
• What if h(n) is an exact estimate of the remaining cost? A* runs in linear time!
• Different errors lead to different performance factors
• A* is the best (in nodes expanded) of the optimal best-first searches

14. A*’s Complexity
• Depends on the error of h(n)
  • Always 0: uniform-cost (breadth-first when step costs are equal)
  • Exactly right: time O(n)
  • Constant absolute error: time O(n), but with more nodes than the exact case
  • Constant relative error: time O(n^k), space O(n^k)
• See Figure 4.8

15. Branching Factors
• Where f ′ is the smallest f-cost greater than f

16. Inventing Heuristics
• Dominant heuristics: bigger is better, if you don’t overestimate
• How do you create heuristics?
  • Relaxed problem
  • Statistical approach
• Constraint satisfaction:
  • Most-constrained variable
  • Most-constraining variable
  • Least-constraining value

17. Improving on A*
• DFID gave the best of both worlds for blind search
• Can we repeat the trick with A*?
• Successive iterations of:
  • Increasing search depth (as with DFID)
  • Increasing total path cost

18. Iterative Deepening A*
• The good stuff from A*
• With limited (linear) memory
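IDA* runs a depth-first search bounded by an f-cost cutoff, then raises the cutoff to the smallest f-value that exceeded it. A sketch under the same assumptions as the A* example (hypothetical names; successors yields (child, step-cost) pairs):

```python
def ida_star(start, goal, successors, h):
    """IDA*: depth-first contour search. Each iteration raises the
    cutoff on f = g + h to the smallest f that exceeded the old cutoff,
    so memory stays linear in the current path length."""
    def dfs(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f, None                # report the overflowing f-value
        if node == goal:
            return f, list(path)
        minimum = float('inf')
        for child, cost in successors(node):
            if child in path:             # avoid cycles on the current path
                continue
            t, found = dfs(path + [child], g + cost, bound)
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = h(start)
    while True:
        bound, found = dfs([start], 0, bound)
        if found is not None:
            return found
        if bound == float('inf'):
            return None                   # no path exists
```

The price of the small memory footprint is re-expanding the shallow contours on every iteration.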

19. Iterative Improvements
• Loop through, trying to “zero in” on the solution
• Hill climbing: keep climbing higher
• Problems? Local maxima, plateaux, ridges
• Solution? Add a touch of randomness
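Hill climbing and one randomized fix, random restarts, can be sketched as follows (Python for illustration; all names are hypothetical):

```python
import random

def hill_climb(state, neighbors, value):
    """Plain hill climbing: move to the best neighbor until no neighbor
    improves on the current state, i.e. until a local maximum."""
    while True:
        best = max(neighbors(state), key=value)
        if value(best) <= value(state):
            return state
        state = best

def random_restart_hill_climb(random_state, neighbors, value, restarts=20):
    """A touch of randomness: restart from random states and keep the
    best local maximum found across all runs."""
    results = [hill_climb(random_state(), neighbors, value) for _ in range(restarts)]
    return max(results, key=value)
```

On a unimodal landscape one climb suffices; restarts matter only when local maxima exist.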

20. Annealing
an.neal vb [ME anelen to set on fire, fr. OE onaelan, fr. on + aelan to set on fire, burn, fr. al fire; akin to OE aeled fire, ON eldr] vt (1664) 1 a: to heat and then cool (as steel or glass) usu. for softening and making less brittle; also: to cool slowly usu. in a furnace b: to heat and then cool (nucleic acid) in order to separate strands and induce combination at lower temperature esp. with complementary strands of a different species 2: strengthen, toughen ~ vi: to be capable of combining with complementary nucleic acid by a process of heating and cooling

21. Simulated Annealing
(defun simulated-annealing-search (problem &optional
                                           (schedule (make-exp-schedule)))
  (let* ((current (create-start-node problem))
         (successors (expand current problem))
         (best current)
         next temp delta)
    (for time = 1 to infinity do
         (setf temp (funcall schedule time))
         (when (or (= temp 0) (null successors))
           (RETURN (values (goal-test problem best) best)))
         (when (< (node-h-cost current) (node-h-cost best))
           (setf best current))
         (setf next (random-element successors))
         (setf delta (- (node-h-cost next) (node-h-cost current)))
         (when (or (< delta 0.0)  ; < because we are minimizing
                   (< (random 1.0) (exp (/ (- delta) temp))))
           (setf current next
                 successors (expand next problem))))))

22. Let*
(let* ((current (create-start-node problem))
       (successors (expand current problem))
       (best current)
       next temp delta)
  BODY)

23. The Body
(for time = 1 to infinity do
     (setf temp (funcall schedule time))
     (when (or (= temp 0) (null successors))
       (RETURN (values (goal-test problem best) best)))
     (when (< (node-h-cost current) (node-h-cost best))
       (setf best current))
     (setf next (random-element successors))
     (setf delta (- (node-h-cost next) (node-h-cost current)))
     (when (or (< delta 0.0)  ; < because we are minimizing
               (< (random 1.0) (exp (/ (- delta) temp))))
       (setf current next
             successors (expand next problem))))
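The Lisp loop accepts a randomly chosen successor outright if it improves, and otherwise with probability exp(-delta / temp), where the temperature cools over time. A minimal Python rendering of the same acceptance rule, minimizing a toy cost function (all names here are illustrative, not from the course code):

```python
import math
import random

def simulated_annealing(start, neighbor, cost, schedule, seed=0):
    """Accept an improving move always; accept a worsening move with
    probability exp(-delta / temp). Track the best state seen."""
    rng = random.Random(seed)
    current, best = start, start
    for time in range(1, 10_000):
        temp = schedule(time)
        if temp <= 0:
            break                        # fully cooled: stop
        nxt = neighbor(current, rng)
        delta = cost(nxt) - cost(current)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            current = nxt
            if cost(current) < cost(best):
                best = current
    return best
```

Early on, high temperatures let the search accept uphill (worse) moves and escape local minima; as temp shrinks, the loop degenerates into pure greedy descent.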

24. Proof of A*’s Optimality
• Suppose G is an optimal goal state with path cost f*, and G2 is a suboptimal goal state, where g(G2) > f*
• Suppose A* selects G2 from the queue. Will A* terminate with a suboptimal solution?
• Consider a node n that is a leaf node on an optimal path to G
• Since h is admissible, f* >= f(n), and since G2 was chosen over n, f(n) >= f(G2)
• Together, these imply f* >= f(G2)
• But G2 is a goal, so h(G2) = 0 and f(G2) = g(G2)
• Therefore f* >= g(G2), contradicting the assumption that g(G2) > f*: A* never selects a suboptimal goal
