
Informed Search Methods


Presentation Transcript


  1. Informed Search Methods Artificial Intelligence Second Term Fourth Year (08 CS)

  2. Heuristic • Origin: from the Greek word ‘heuriskein’, meaning “to discover”. • Webster’s New World Dictionary defines heuristic as “helping to discover or learn”. • As an adjective, it means serving to discover. • As a noun, a heuristic is an aid to discovery. A heuristic is a method that helps solve a problem, and is commonly informal. The term is particularly used for a method that often leads rapidly to a solution that is usually reasonably close to the best possible answer. Heuristics are "rules of thumb", educated guesses, intuitive judgments or simply common sense. A heuristic contributes to the reduction of search in a problem-solving activity.

  3. Heuristic Search • Uses domain-dependent (heuristic) information in order to search the space more efficiently. Ways of using heuristic information: • Deciding which node to expand next, instead of doing the expansion in a strictly breadth-first or depth-first order; • In the course of expanding a node, deciding which successor or successors to generate, instead of blindly generating all possible successors at one time; • Deciding that certain nodes should be discarded, or pruned, from the search space.

  4. Heuristic Search • A moment's reflection shows that we constantly use heuristics in the course of our everyday lives. • If the sky is grey, we conclude that it would be better to put on a coat before going out. • We book our holidays in August because that is when the weather is best.

  5. Heuristic Function • It is a function that maps from problem state description to measure of desirability, usually represented as number. • Which aspects of the problem state are considered, how those aspects are evaluated, and the weights given to individual aspects are chosen in such a way that the value of the heuristic function at a given node in the search process gives as good an estimate as possible of whether that node is on the desired path.

  6. Major Benefits of Heuristics • Heuristic approaches have an inherent flexibility, which allows them to be used on ill-structured and complex problems. • By design, these methods may be simpler for the decision maker to understand, especially when they involve qualitative analysis; the chances of implementing the proposed solution are therefore much higher. • These methods may be used as part of an iterative procedure that guarantees the finding of an optimal solution.

  7. Major Disadvantages and Limitations of Heuristics • The inherent flexibility of heuristic methods can lead to misleading or even fraudulent manipulations and solutions. • Certain heuristics may contradict others applied to the same problem, which generates confusion and a lack of trust in heuristic methods. • Heuristics are not as general as algorithms.

  8. Generate and Test • Generate a possible solution. For some problems, this means generating a particular point in the problem space. For others, it means generating a path from a start state. • Test to see if this is actually a solution by comparing the chosen point or the endpoint of the chosen path to the set of acceptable goal states. • If a solution has been found, quit; otherwise return to step 1.
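The generate-and-test loop above can be sketched in a few lines of Python; the `candidates` generator and the product-equals-21 goal test are made-up illustrations, not from the slides:

```python
import itertools

def generate_and_test(generate, is_goal):
    """Try candidates one at a time until one passes the goal test."""
    for candidate in generate():
        if is_goal(candidate):
            return candidate
    return None  # search space exhausted without a solution

# Toy problem: find an (x, y) pair whose product is 21.
def candidates():
    # Generate the points of the problem space in some systematic order.
    return itertools.product(range(1, 10), repeat=2)

solution = generate_and_test(candidates, lambda p: p[0] * p[1] == 21)
print(solution)  # (3, 7)
```

Because generation is systematic and testing is exhaustive, this is complete on finite spaces but can be very slow; heuristics enter by ordering or pruning the candidate stream.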

  9. Best First Search • Depth first search is good because it allows a solution to be found without all competing branches having to be expanded. • Breadth first search is good because it does not get trapped on dead-end paths. • Best First Search combines the advantages of the two. • At each step of the best first search process, we select the most promising of the nodes we have generated so far. • This is done by applying an appropriate heuristic function to each of them.

  10. Best First Search: Algorithm • Start with OPEN containing just the initial state. • Until a goal is found or there are no nodes left on OPEN do: • Pick the best node on OPEN • Generate its successors. • For each successor do: • If it has not been generated before, evaluate it, add it to OPEN, and record its parent. • If it has been generated before, change the parent if this new path is better than the previous one. In that case, update the cost of getting to this node and to any successors that this node may already have.
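A minimal Python sketch of this algorithm, keeping OPEN as a priority queue ordered by heuristic value. The graph and h values below are hypothetical, and for brevity the sketch omits the re-parenting step for nodes reached again by a better path:

```python
import heapq

def best_first_search(start, goal, successors, h):
    """Best-first search: always expand the OPEN node with the lowest
    heuristic value. `successors` maps a state to its neighbours."""
    open_list = [(h(start), start)]          # priority queue ordered by h
    parent = {start: None}
    while open_list:
        _, node = heapq.heappop(open_list)   # pick the best node on OPEN
        if node == goal:
            path = []                        # reconstruct via parent links
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for succ in successors.get(node, []):
            if succ not in parent:           # not generated before
                parent[succ] = node
                heapq.heappush(open_list, (h(succ), succ))
    return None

# Hypothetical graph and heuristic values (not taken from the slides).
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['G']}
h_values = {'A': 5, 'B': 4, 'C': 2, 'D': 1, 'G': 0}
print(best_first_search('A', 'G', graph, h_values.get))  # ['A', 'C', 'D', 'G']
```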

  11. Best First Search: Example 1 [Figure: an OR graph rooted at A, with nodes B–J and heuristic values attached to the nodes.]

  12. Best First Search: Example 2 [Figure: a worked trace showing the OPEN and CLOSED lists over a tree with nodes A–T and heuristic values; node P is the goal.] H.W: Repeat the same example when T is the goal.

  13. Greedy Best First Search • This is like DFS, but it picks the path that gets you closest to the goal. • Needs a measure of distance from the goal: h(n) = estimated cost of the cheapest path from n to the goal. h(n) is a heuristic. • Analysis • Greed tends to work quite well (despite being one of the sins). • But it does not always find the shortest path. • Susceptible to false starts. • May go down an infinite path with no way to reach the goal. • The algorithm is incomplete without cycle checking, and it is also not optimal.

  14. Greedy Best First Search [Figure: a graph with a heuristic value h at each node, e.g. Start h = 1, A h = 10, B h = 20, C h = 8, D h = 7, E h = 5, F h = 20, G h = 12, Goal h = 0.] The path from start to goal is: Start  A  D  E  Goal.

  15. The A* Search • A search algorithm to find the shortest path through a search space to a goal state using a heuristic. f(n) = g(n) + h(n) • f(n) - function that gives an evaluation of the state • g(n) - the cost of getting from the initial state to the current state • h(n) - the cost of getting from the current state to a goal state
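A compact Python sketch of A*, with f(n) = g(n) + h(n) exactly as defined above. The weighted graph and heuristic below are invented for illustration, with h chosen to be admissible (it never overestimates the true remaining cost):

```python
import heapq

def a_star(start, goal, neighbours, h):
    """A* search: f(n) = g(n) + h(n). `neighbours` maps a state to
    (successor, step_cost) pairs; `h` estimates the cost to the goal."""
    open_list = [(h(start), 0, start, [start])]  # (f, g, state, path)
    best_g = {start: 0}
    while open_list:
        f, g, state, path = heapq.heappop(open_list)
        if state == goal:
            return g, path                       # cheapest cost and its path
        for succ, cost in neighbours.get(state, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):  # found a cheaper route
                best_g[succ] = g2
                heapq.heappush(open_list,
                               (g2 + h(succ), g2, succ, path + [succ]))
    return None

# Hypothetical weighted graph; h values chosen to be admissible.
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2), ('D', 5)],
         'C': [('D', 1)], 'D': []}
h = {'A': 4, 'B': 3, 'C': 1, 'D': 0}.get
print(a_star('A', 'D', graph, h))  # (4, ['A', 'B', 'C', 'D'])
```

With an admissible h, the first time the goal is popped from the queue its g value is guaranteed to be the cheapest path cost.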

  16. [Figure: a worked A* trace on a graph with nodes A–G, showing f = g + h at each step, e.g. 7 + 3 = 10, 6 + 2 = 8, 9 + 1 = 10. Heuristic distances to the destination: A 8, B 9, C 6, D 7, E 5, F 2, G 0.]

  17. A* Search: An Example Distance travelled = g; distance still to be covered = h; f = g + h. Heuristic distances to the destination: A 8, B 9, C 6, D 7, E 5, F 2, G 0.

  18. An 8-Puzzle Game [Figure: start and goal board configurations.] Let f(n) = g(n) + h(n), where g(n) = actual distance from the start state to n, and h(n) = number of tiles out of place.
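The tiles-out-of-place heuristic can be written directly from this definition. The board encoding (a tuple of 9 entries, 0 for the blank, which is conventionally not counted) and the sample start state are assumptions for illustration:

```python
def tiles_out_of_place(state, goal):
    """h(n) for the 8-puzzle: the number of tiles not in their goal
    position. The blank (0) is not counted as a tile."""
    return sum(1 for s, g in zip(state, goal)
               if s != 0 and s != g)

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start = (1, 2, 3, 4, 0, 6, 7, 5, 8)     # hypothetical start state
print(tiles_out_of_place(start, goal))  # 2  (tiles 5 and 8 are misplaced)
```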

  19. [Search tree for the 8-puzzle example, with f = g + h at each state:] f(A) = 4; f(B) = 1+5 = 6; f(C) = 1+3 = 4; f(D) = 1+5 = 6; f(E) = 2+3 = 5; f(F) = 2+3 = 5; f(G) = 2+4 = 6; f(H) = 3+3 = 6; f(I) = 3+4 = 7; f(J) = 3+2 = 5; f(K) = 3+4 = 7; f(L) = 4+1 = 5; f(M) = 5+0 = 5; f(N) = 5+2 = 7.

  20. Hill Climbing • Searching for a goal state = Climbing to the top of a hill • Generate-and-test + direction to move. • Heuristic function to estimate how close a given state is to a goal state.

  21. Simple Hill Climbing Algorithm • Evaluate the initial state. • Loop until a solution is found or there are no new operators left to be applied: - Select and apply a new operator. - Evaluate the new state: goal → quit; better than current state → new current state.
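A sketch of this loop in Python: simple hill climbing takes the first improving successor rather than the best one. The integer landscape used as the example is a toy assumption:

```python
def simple_hill_climbing(state, successors, value):
    """Apply the first successor that improves on the current state;
    stop when no operator yields an improvement."""
    while True:
        for succ in successors(state):
            if value(succ) > value(state):  # better than current state
                state = succ                # take it immediately
                break
        else:
            return state                    # no improving move: done

# Toy landscape: maximise f(x) = -(x - 5)**2 over the integers.
f = lambda x: -(x - 5) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(0, neighbours, f))  # 5
```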

  22. Simple Hill Climbing Uses an evaluation function as a way to inject task-specific knowledge into the control process.

  23. Steepest-Ascent Hill Climbing (Gradient Search) • Considers all the moves from the current state. • Selects the best one as the next state.

  24. Steepest-Ascent Hill Climbing (Gradient Search) Algorithm • Evaluate the initial state. • Loop until a solution is found or a complete iteration produces no change to the current state: - SUCC = a state such that any possible successor of the current state will be better than SUCC (the worst state). - For each operator that applies to the current state, evaluate the new state: goal → quit; better than SUCC → set SUCC to this state. - If SUCC is better than the current state → set the current state to SUCC.
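Steepest-ascent hill climbing differs from the simple variant only in evaluating every successor before moving. A sketch on a toy landscape (the objective and neighbour functions are illustrative assumptions):

```python
def steepest_ascent(state, successors, value):
    """Evaluate all moves from the current state and take the best one;
    stop when no successor beats the current state."""
    while True:
        succs = list(successors(state))
        if not succs:
            return state
        best = max(succs, key=value)     # SUCC = best successor found
        if value(best) <= value(state):  # no improvement: local optimum
            return state
        state = best

f = lambda x: -(x - 5) ** 2              # toy objective with its peak at x = 5
neighbours = lambda x: [x - 1, x + 1]
print(steepest_ascent(0, neighbours, f))  # 5
```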

  25. Hill Climbing: Disadvantages Local Maximum A state that is better than all of its neighbours, but not better than some other states far away.

  26. Hill Climbing: Disadvantages Plateau A flat area of the search space in which all neighbouring states have the same value.

  27. Hill Climbing: Disadvantages Ridge The orientation of the high region, compared to the set of available moves, makes it impossible to climb up. However, two moves executed serially may increase the height.

  28. Hill Climbing: Disadvantages Ways Out • Backtrack to some earlier node and try going in a different direction. • Make a big jump to try to get into a new section. • Move in several directions at once.

  29. Hill Climbing: Disadvantages • Hill climbing is a local method: Decides what to do next by looking only at the “immediate” consequences of its choices. • Global information might be encoded in heuristic functions.

  30. Hill Climbing: Example 1 [Figure: a search tree with nodes A–J and heuristic values such as A 8, B 13, C 11, D 4, E 3, F 7, G 5, H 2, with value 1 at the goal.]

  31. Hill Climbing: Example 2

  32. Simulated Annealing Annealing refers to a physical process that proceeds as follows: • A solid in a heat bath is heated by raising the temperature to a maximum value, at which all particles of the solid arrange themselves randomly in the liquid phase. • Then the temperature of the heat bath is lowered, letting all particles of the solid arrange themselves in the low-energy ground state of a corresponding lattice. It is presumed that the maximum temperature in phase 1 is sufficiently high, and that the cooling in phase 2 is carried out sufficiently slowly. However, if the cooling is too rapid – that is, if the solid is not allowed enough time to reach thermal equilibrium at each temperature value – the resulting crystal will have many defects.

  33. Simulated Annealing in AI Idea: escape local maxima by allowing some "bad" moves but gradually decreasing their frequency. • Picks random moves. • Keeps the move if the situation is actually improved; otherwise, makes the move with a probability less than 1. • The number of cycles in the search is determined according to probability. • The search behaves like hill climbing when approaching the end.
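These ideas can be sketched as follows. The temperature schedule (geometric cooling) and the toy objective are assumptions chosen for illustration, not parameters from the slides:

```python
import math
import random

def simulated_annealing(state, successors, value,
                        temp=10.0, cooling=0.95, min_temp=0.01):
    """Always accept improving moves; accept a worsening move with
    probability exp(delta / T), which shrinks as T is lowered."""
    best = state
    while temp > min_temp:
        succ = random.choice(successors(state))   # pick a random move
        delta = value(succ) - value(state)
        if delta > 0 or random.random() < math.exp(delta / temp):
            state = succ                          # take the (possibly bad) move
            if value(state) > value(best):
                best = state
        temp *= cooling                           # cooling schedule
    return best

# Toy objective with a single peak at x = 5 (hypothetical example).
f = lambda x: -(x - 5) ** 2
result = simulated_annealing(0, lambda x: [x - 1, x + 1], f)
print(result)
```

Because the acceptance of bad moves is random, a single run is not guaranteed to reach the global maximum; the slower the cooling, the more likely it is.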

  34. Problem Reduction: AND-OR GRAPHS • An AND-OR graph is useful for representing the solution of problems that can be solved by decomposing them into a set of smaller problems, all of which must then be solved. • This decomposition, or reduction, generates arcs that we call AND arcs. One AND arc may point to any number of successor nodes, all of which must be solved in order for the arc to point to a solution. • AND arcs are represented by a line connecting all the components or states. • In order to find solutions in an AND-OR graph, we need an algorithm similar to best first search but with the ability to handle the AND arcs appropriately.

  35. OR and AND Relations • OR graph: to solve P, solve any one of P1, P2 or P3. • AND graph: to solve Q, solve all of Q1, Q2 and Q3. Areas of usage: • Route finding • Design • Symbolic integration • Game playing • Theorem proving

  36. AND-OR Graphs: Example [Figure: an AND-OR graph rooted at a, with successors b and c, intermediate nodes d–g, and edge costs; the goal h can be achieved either via node b or via node c, the two solutions costing 9 and 8.]

  37. Route Finding Example: finding a route from a to z in a road map. To find a path between a and z, find either (1) a path from a to z via f, or (2) a path from a to z via g. [Figure: the road map with intermediate nodes b–k and edge costs, and subgoals such as a-f via d, f-z via i, and a-d via b.]

  38. AND-OR Graph • In the following tree, the number at each node represents the heuristic value. Also assume that every operation has a uniform cost of 1. [Figure: an AND-OR tree rooted at A, with children B, C and D and leaves E–J, annotated with heuristic values.]

  39. Decisions in Two-Persons Games A game may be formally defined as a kind of search problem with the following components: • Initial State: the board position and an indication of whose move it is. • Operators: These define the legal moves that a player can make. • Terminal Test: This test determines when the game is over. States where the game has ended are called terminal states. • Utility Function or Pay-off Function: It gives a numerical value for the outcome of a game. • Example: In chess, the outcome is a win, loss or draw, which we can represent by the values +1, -1 and 0 respectively.

  40. Two Player Game: Max and Min • The objective of both Max and Min is to optimize winnings. • Max must reach a terminal state with the highest utility. • Min must reach a terminal state with the lowest utility. • The game ends when either Max or Min has reached a terminal state. • Upon reaching a terminal state, points may be awarded or sometimes deducted.

  41. Two Player Game: Max and Min • The simple view of the problem is to reach a favorable terminal state. • The problem is not so simple... • Max must reach a terminal state with as high a utility as possible regardless of Min’s moves. • Max must develop a strategy that determines the best possible move for each move Min makes.

  42. Tic Tac Toe Game

  43. Min Max Algorithm • The Minmax Algorithm determines the optimum strategy for Max: • Generate the entire search tree. • Apply the utility function to the terminal states. • Use the utility values of the current layer to determine the utility of each node in the layer above. • Continue until the root node is reached. • Minmax Decision – maximizes the utility for Max based on the assumption that Min will attempt to minimize this utility.

  44. Minmax Algorithm: Example [Figure: a two-ply game tree; three MIN nodes cover the leaf utilities (3, 12, 8), (2, 4, 6) and (14, 5, 2), giving MIN values 3, 2 and 2, and a minimax value of 3 at the MAX root.]

  45. An Analysis • This algorithm is only good for games with a low branching factor. Why? • In general, the complexity is O(b^d), where: b = average branching factor, d = number of plies.

  46. Alpha-Beta Pruning • What is pruning? • The process of eliminating a branch of the search tree from consideration without examining it. • Why prune? • To avoid searching nodes that cannot affect the final decision. • To speed up the search process.

  47. Alpha-Beta Pruning • A particular technique to find the optimal solution according to a limited depth search using evaluation functions. • Returns the same choice as minimax cutoff decisions, but examines fewer nodes. • Gets its name from the two variables that are passed along during the search which restrict the set of possible solutions.

  48. Alpha-Beta Pruning: Definitions • Alpha – the value of the best choice (highest value) so far along the path for MAX. • Beta – the value of the best choice (lowest value) so far along the path for MIN.

  49. Implementation • Set the root node’s alpha to negative infinity and its beta to positive infinity. • Search depth first, propagating alpha and beta values down to all nodes visited, until reaching the desired depth. • Apply the evaluation function to get the utility of this node. • If the parent of this node is a MAX node, and the utility calculated is greater than the parent’s current alpha value, replace that alpha value with this utility.

  50. Implementation (Cont’d) • If the parent of this node is a MIN node, and the utility calculated is less than the parent’s current beta value, replace that beta value with this utility. • Based on these updated values, compare the alpha and beta values of this parent node to determine whether to look at any more children or to backtrack up the tree. • Continue the depth first search in this way until all potentially better paths have been evaluated.
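The implementation steps above can be sketched recursively; the list-based tree encoding (leaves are utilities, internal nodes are lists of children) is an assumption, and the cutoff test alpha >= beta performs the pruning:

```python
import math

def alpha_beta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning. Returns the same value as plain
    minimax but skips branches that cannot affect the decision."""
    if isinstance(node, (int, float)):
        return node                               # terminal: apply utility
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alpha_beta(child, False, alpha, beta))
            alpha = max(alpha, value)             # best choice so far for MAX
            if alpha >= beta:
                break                             # MIN will never allow this branch
        return value
    value = math.inf
    for child in node:
        value = min(value, alpha_beta(child, True, alpha, beta))
        beta = min(beta, value)                   # best (lowest) choice for MIN
        if alpha >= beta:
            break                                 # MAX will never choose this branch
    return value

# Same hypothetical game tree as in the minmax example.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alpha_beta(tree, maximizing=True))  # 3
```

On this tree the second MIN node is cut off after its first leaf (2), since MAX already has a guaranteed 3 from the first branch.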
