
Artificial Intelligence & Expert Systems


Presentation Transcript


  1. Artificial Intelligence & Expert Systems Heuristic Search

  2. Heuristics • Heuristic means “rule of thumb”. • To quote Judea Pearl, • “Heuristics are criteria, methods or principles for deciding which among several alternative courses of action promises to be the most effective in order to achieve some goal”. • In heuristic search or informed search, heuristics are used to identify the most promising search path.

  3. Heuristics . . . • “The study of the methods and rules of discovery and invention”, as defined by George Polya • The Greek root is heurisko, which means “I discover” • Archimedes emerged from his bath and shouted “Eureka!”, meaning “I have found it” • In state space search, heuristics are formalized as rules for choosing those branches in a state space that are most likely to lead to an acceptable solution

  4. AI problem solvers employ heuristics in two basic situations: (1) A problem may not have an exact solution because of inherent ambiguities in the problem statement or available data. A given set of symptoms may have several possible causes; doctors use heuristics to choose the most likely diagnosis and formulate a plan of treatment.

  5. (2) A problem may have an exact solution, but the computational cost of finding it may be prohibitive. In many problems (such as chess), state space growth is combinatorially explosive, with the number of possible states increasing exponentially or factorially with the depth of the search. In these cases, exhaustive, brute-force search techniques such as depth-first or breadth-first search may fail to find a solution within any practical length of time. Heuristics attack this complexity by guiding the search along the most “promising” path through the space. By eliminating unpromising states and their descendants from consideration, a heuristic algorithm can defeat this combinatorial explosion and find an acceptable solution.

  6. Inherent limitation of heuristic search • A heuristic is only an informed guess of the next step to be taken in solving a problem. • Because heuristics use limited information, they are seldom able to predict the exact behavior of the state space farther along in the search. • A heuristic can lead a search algorithm to a sub-optimal solution or fail to find any solution at all.

  7. Importance of heuristics • Heuristics and the design of algorithms to implement heuristic search have long been a core concern of artificial intelligence research • It is not feasible to examine every inference that can be made in a mathematics domain, so heuristic search is often the only practical answer • More recently, expert systems research has affirmed the importance of heuristics as an essential component of problem solving

  8. Algorithms for Heuristic Search • Hill Climbing • Best-First Search • A* Algorithm

  9. Hill Climbing • The simplest way to implement heuristic search is through a procedure called hill climbing • Hill climbing strategies expand the current state in the search and evaluate its children • The best child is selected for further expansion; neither its siblings nor its parent are retained • Search halts when it reaches a state that is better than any of its children • The search goes uphill along the steepest possible path until it can go no farther • Because it keeps no history, the algorithm cannot recover from failures of its strategy

  10. Hill Climbing . . . • The search moves to a new node only if that node is better than the current one. Algorithm: 1. Start with current-state (cs) = initial state 2. Until cs = goal-state or there is no change in cs do: (a) Get the successors of cs and use the EVALUATION FUNCTION to assign a score to each successor (b) If one of the successors has a better score than cs, then set cs to the successor with the best score.
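A minimal Python sketch of this loop (not from the original slides); the successors and score functions are hypothetical, problem-specific callables, with higher scores taken to be better:

    def hill_climbing(initial_state, successors, score):
        # Greedy hill climbing: repeatedly move to the best-scoring successor
        # until no successor improves on the current state. `successors` and
        # `score` are hypothetical problem-specific functions.
        current = initial_state
        while True:
            children = successors(current)
            if not children:
                return current                    # no moves left
            best = max(children, key=score)
            if score(best) <= score(current):     # no child is better: stop
                return current                    # possibly only a local maximum
            current = best                        # keep only the best child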

  11. Example-1: Tic-Tac-Toe In the tic-tac-toe problem, • the combinatorics for exhaustive search are high but not insurmountable • the total number of paths that need to be considered in an exhaustive search is 9 × 8 × 7 × … or 9! • Symmetry reduction decreases the search space • Symmetry reductions on the second level further reduce the number of paths through the space to 12 × 7!
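For concreteness: 9! = 362,880 paths for the naive exhaustive search, while 12 × 7! = 60,480 paths remain after the symmetry reductions described above.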

  12. First three levels of the tic-tac-toe state space reduced by symmetry.

  13. The “most wins” heuristic applied to the first children in tic-tac-toe.

  14. Which one to choose? Heuristic: • Calculate the number of winning lines open in each state and move to the state with the most winning lines.
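One way to code this “most wins” measure (a sketch, not the slides' own code; the 9-element board encoding is an assumption): count the winning lines that contain at least one of the player's marks and none of the opponent's, which gives the familiar first-move values of 4 for the centre, 3 for a corner and 2 for a side square.

    # Board: a 9-element list with 'X', 'O' or None for each square.
    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
             (0, 4, 8), (2, 4, 6)]               # diagonals

    def most_wins(board, player='X'):
        # Count winning lines containing at least one of the player's marks
        # and none of the opponent's.
        opponent = 'O' if player == 'X' else 'X'
        return sum(1 for line in LINES
                   if any(board[i] == player for i in line)
                   and all(board[i] != opponent for i in line))

    board = [None] * 9
    board[4] = 'X'                    # X takes the centre on the first move
    print(most_wins(board, 'X'))      # -> 4 (a corner would give 3, a side 2)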

  15. Heuristically reduced state space for tic-tac-toe.

  16. Example-2:

  17. Devise any heuristic to reach goal: • Start : Library • Goal : University

  18. Evaluation Function: • Distance between two places • Adopt the neighbouring place with the minimum distance from the goal Probable route will be: • Library → Hospital → Newsagent → University

  19. Suppose S2 < S1. What will happen then? The algorithm will always go to the park from the hospital instead of going to the newsagent, and it will get stuck there.

  20. Drawback of Hill Climbing

  21. • Ridge = a sequence of local maxima that is difficult for hill climbing to navigate • Plateau = an area of the state space where the evaluation function is flat • Hill climbing gets stuck 86% of the time

  22. Simulated Annealing • A method to escape the local minima/maxima problem. • It is an optimization method that mostly takes small steps in the direction indicated by the gradient, but occasionally takes large steps in the gradient direction or in some other direction.
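A minimal simulated-annealing sketch in Python (illustrative only; the successors and value functions, the starting temperature and the cooling rate are all assumptions). Worse moves are accepted with probability exp(delta / T), so the search can occasionally step downhill and escape a local maximum:

    import math
    import random

    def simulated_annealing(initial, successors, value,
                            temperature=10.0, cooling=0.95, steps=1000):
        # Maximisation: always accept a better random successor; accept a
        # worse one with probability exp(delta / T), which shrinks as T cools.
        current = initial
        for _ in range(steps):
            children = successors(current)
            if not children:
                break
            candidate = random.choice(children)
            delta = value(candidate) - value(current)
            if delta > 0 or random.random() < math.exp(delta / temperature):
                current = candidate            # occasionally a downhill step
            temperature *= cooling             # gradually cool the system
        return current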

  23. Best-First Search • A major problem of hill climbing strategies is their tendency to become stuck at local maxima • if they reach a state that has a better evaluation than any of its children, the algorithm halts • hill climbing can be used effectively if the evaluation function is sufficiently informative to avoid local maxima and infinite paths • heuristic search requires a more flexible algorithm: this is provided by best-first search, where, with a priority queue, recovery from local maxima is possible

  24. Heuristic search of a hypothetical state space

  25. A trace of the execution of best-first search

  26. Heuristic search of a hypothetical state space with open and closed highlighted

  27. The best-first search algorithm always selects the most promising state on open for further expansion • As it is using a heuristic that may prove erroneous, it does not abandon all the other states but maintains them on open • In the event a heuristic leads the search down a path that proves incorrect, the algorithm will eventually retrieve some previously generated, “next best” state from open and shift its focus to another part of the space • In best-first search, as in depth-first and breadth-first search algorithms, the open list allows backtracking from paths that fail to produce a goal
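A sketch of best-first search in Python (an illustration, not the slides' own code): open is a priority queue ordered by the heuristic h(state), closed is the set of already-expanded states, and goal_test, successors and h are hypothetical problem-specific callables:

    import heapq
    from itertools import count

    def best_first_search(start, goal_test, successors, h):
        tie = count()                             # tie-breaker so states are never compared
        open_list = [(h(start), next(tie), start)]   # OPEN: states awaiting expansion
        closed = set()                               # CLOSED: states already expanded
        parent = {start: None}                       # for reconstructing the solution path
        while open_list:
            _, _, state = heapq.heappop(open_list)   # most promising state on OPEN
            if goal_test(state):
                path = []
                while state is not None:
                    path.append(state)
                    state = parent[state]
                return list(reversed(path))
            closed.add(state)
            for child in successors(state):
                if child not in closed and child not in parent:
                    parent[child] = state
                    heapq.heappush(open_list, (h(child), next(tie), child))
        return None                               # OPEN exhausted: no solution found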

  28. Evaluation Function: 1. Count the no. of tiles out of place in each state when compared with the goal.
      Initial state        Goal
      2 8 3                1 2 3
      1 6 4                8 _ 4
      7 _ 5                7 6 5
      Out of place tiles: 2, 8, 1, 6        In place tiles: 3, 4, 5, 7

  29. Heuristic Function f(n) = h(n) • Drawback: The heuristic defined does not take into account the distance each tile has to be moved to bring it to the correct place.

  30. [Figure] Sum the no. of squares each tile has to be moved to reach its goal position; for example, a tile two squares away contributes 2, and a tile three squares away contributes 3.

  31. Current state:
      2 8 3
      1 6 4
      _ 7 5
      Total tiles out of place = 5, sum of distances out of place = 6
      Tile:     1  2  8  6  7
      Distance: 1  1  2  1  1    (total: 6)

  32. Another Scenario The first picture shows the current state n, and the second picture the goal state. h(n) = 5 because the tiles 2, 8, 1, 6 and 7 are out of place.

  33. Manhattan Distance Heuristic: Another heuristic for 8-puzzle is the Manhattan distance heuristic. • This heuristic sums the distance that the tiles are out of place. • The distance of a tile is measured by the sum of the differences in the x-positions and the y-positions. • For the above example, using the Manhattan distance heuristic, • h(n) = 1 + 1 + 0 + 0 + 0 + 1 + 1 + 2 = 6
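Both 8-puzzle heuristics can be written in a few lines of Python (a sketch; the tuple-of-tuples state encoding with 0 for the blank is an assumption). For the state reconstructed above they return 5 misplaced tiles and a Manhattan distance of 6:

    GOAL = ((1, 2, 3),
            (8, 0, 4),
            (7, 6, 5))
    GOAL_POS = {tile: (r, c) for r, row in enumerate(GOAL)
                for c, tile in enumerate(row)}

    def tiles_out_of_place(state):
        # Count tiles (ignoring the blank) that are not in their goal position.
        return sum(1 for r, row in enumerate(state) for c, tile in enumerate(row)
                   if tile != 0 and (r, c) != GOAL_POS[tile])

    def manhattan(state):
        # Sum of horizontal + vertical distances of each tile from its goal square.
        return sum(abs(r - GOAL_POS[tile][0]) + abs(c - GOAL_POS[tile][1])
                   for r, row in enumerate(state) for c, tile in enumerate(row)
                   if tile != 0)

    state = ((2, 8, 3),
             (1, 6, 4),
             (0, 7, 5))                       # the state of slides 31-33 (as reconstructed)
    print(tiles_out_of_place(state))          # -> 5
    print(manhattan(state))                   # -> 6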

  34. [Figure: another 8-puzzle state with tiles 3 5 6 1 4 2 8 7] Total tiles out of place = 7, sum of distances out of place = 15
      Tile:     3  1  2  8  7  6  5
      Distance: 2  1  2  2  2  3  3    (total: 15)

  35. Complete heuristic function (sum of two functions): f(n) = g(n) + h(n), where h(n) = sum of the distances and g(n) = number of tiles out of place. Another heuristic could be: g(n) = level of the search, h(n) = number of tiles out of place.

  36. In general, • Best-first search is a general algorithm for heuristically searching any state space graph (as were the breadth- and depth-first algorithms) • It is equally appropriate for data- and goal-driven searches and supports a variety of heuristic evaluation functions • Because of its generality, best-first search can be used with a variety of heuristics, ranging from subjective estimates of a state's “goodness” to sophisticated measures based on the probability of a state leading to a goal

  37. Drawbacks of Best-First Search • It reduces the cost to the goal, but • It is neither optimal nor complete • Uniform cost

  38. Algorithm A • Consider the evaluation function f(n) = g(n) + h(n), where n is any state encountered during the search, g(n) is the cost of n from the start state, and h(n) is the heuristic estimate of the distance from n to the goal • If this evaluation function is used with the best_first_search algorithm, the result is called algorithm A.

  39. Algorithm A* • If the heuristic function used with algorithm A is admissible, the result is called algorithm A* (pronounced A-star). • A heuristic is admissible if it never overestimates the cost to the goal. • The A* algorithm always finds the optimal solution path whenever a path from the start to a goal state exists.

  40. Monotonicity A heuristic function h is monotone if 1. For all states ni and nj, where nj is a descendant of ni, h(ni) − h(nj) ≤ cost(ni, nj), where cost(ni, nj) is the actual cost (in number of moves) of going from state ni to nj. 2. The heuristic evaluation of the goal state is zero, or h(Goal) = 0.
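As an illustration (not part of the slides), the monotone condition can be checked directly on an explicit graph; here h is assumed to be a dict of heuristic values and edges a dict mapping (ni, nj) pairs to their actual move costs:

    def is_monotone(h, edges, goal):
        # Monotonicity (consistency): h(ni) - h(nj) <= cost(ni, nj) on every
        # edge, and the heuristic value of the goal state is zero.
        return h[goal] == 0 and all(h[a] - h[b] <= c for (a, b), c in edges.items())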

  41. Informedness • For two A* heuristics h1 and h2, if h1(n) ≤ h2(n) for all states n in the search space, heuristic h2 is said to be more informed than h1.

  42. Condition for Admissible Search (this condition ensures the shortest path): h(n) ≤ h*(n), where h(n) is the estimated heuristic cost and h*(n) is the actual cost to the goal.

  43. A* Search Evaluation function: f(n) = g(n) + h(n) = (path cost to node n) + (heuristic cost at n). Constraints: h(n) ≤ h*(n) (admissibility), g(n) ≥ g*(n) (coverage).

  44. [Figure: g(n) vs g*(n)] Coverage: g(n) ≥ g*(n); otherwise the goal will never be reached.

  45. Example of A* [Graph: each node is labelled with its heuristic value, each edge with its cost] Edges: A(10)-B(8) cost 2, A-C(9) cost 2, B-D(6) cost 5, C-G(3) cost 3, D-E(4) cost 3, E-F(0) cost 4, G-F cost 2. Path P1 (best first / hill climbing): A-B-D-E-F, cost P1 = 14 (not optimal). A* algorithm: f(A) = 0 + 10 = 10; f(B) = 2 + 8 = 10, f(C) = 2 + 9 = 11, expand B; f(D) = (2 + 5) + 6 = 13, f(C) = 11, expand C; f(G) = (2 + 3) + 3 = 8, f(D) = 13, expand G; f(F) = (2 + 3 + 2) + 0 = 7, GOAL achieved. Path P2: A-C-G-F, cost P2 = 7 (optimal). Path admissibility: cost P2 < cost P1, hence P2 is the admissible path.
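A runnable Python version of this example (the heuristic values and edge costs below are reconstructed from the trace above, so treat the graph as an assumption). It reproduces the optimal path A-C-G-F with cost 7:

    import heapq

    # Heuristic values and directed edges of the hypothetical example graph.
    H = {'A': 10, 'B': 8, 'C': 9, 'D': 6, 'E': 4, 'G': 3, 'F': 0}
    EDGES = {'A': [('B', 2), ('C', 2)], 'B': [('D', 5)], 'C': [('G', 3)],
             'D': [('E', 3)], 'E': [('F', 4)], 'G': [('F', 2)]}

    def a_star(start, goal):
        # Expand the node on OPEN with the lowest f(n) = g(n) + h(n).
        open_list = [(H[start], 0, start, [start])]      # (f, g, node, path)
        best_g = {start: 0}
        while open_list:
            f, g, node, path = heapq.heappop(open_list)
            if node == goal:
                return path, g
            for child, cost in EDGES.get(node, []):
                new_g = g + cost
                if new_g < best_g.get(child, float('inf')):
                    best_g[child] = new_g
                    heapq.heappush(open_list,
                                   (new_g + H[child], new_g, child, path + [child]))
        return None, float('inf')

    print(a_star('A', 'F'))   # -> (['A', 'C', 'G', 'F'], 7), the optimal path found above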
