Artificial Intelligence Presentation

Presentation Transcript


  1. Artificial Intelligence Presentation Chapter 4 – Informed Search and Exploration

  2. Overview • Defining a problem • Types of solutions • The different algorithms to achieve these solutions • Conclusion • Questions and Answers Session

  3. Defining a problem A problem is well defined for an agent to solve if: • There exists a state space: the set of all possible states the agent can be in. • Within the state space there exist an initial state and a goal state. • There exists a set of actions which the agent can take to progress from one state to another. • There exists at least one path from the initial state to the goal state; that is, there exists a sequence of actions by which the agent, starting from the initial state, passes through a series of states that lead to the goal state. (Implicit from points 1 to 3) • There exists a goal test: a means by which the agent can tell whether or not it has reached the goal state. • There exists a cost associated with each path: a numeric value which allows the agent to compare the optimality of two or more paths to the goal state. • There exists a cost associated with each action; from these, over a sequence of actions, one derives the path cost. (For problems with more than one solution)
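The definition above can be captured directly in code. Below is a minimal Python sketch of such a problem formulation; the class and method names (Problem, actions, result, goal_test, step_cost) are illustrative choices, not taken from the slides.

```python
class Problem:
    """A search problem in the sense of slide 3: state space, initial state,
    goal test, actions, and per-action (step) costs."""

    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state
        self.goal_state = goal_state

    def actions(self, state):
        """Return the actions the agent can take in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Return the state reached by taking `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """The goal test: has the agent reached the goal state?"""
        return state == self.goal_state

    def step_cost(self, state, action, next_state):
        """Cost of one action; summing these along a path gives the path cost."""
        return 1
```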

  4. Types of solutions There are two types of solutions: • A solution in which, alongside the goal, the path is also a constituent of the solution. Ex: What is the shortest path between router A and router B in network X? • A solution which is only the goal, that is to say, the path which leads to it is irrelevant. Ex: What is the minimum number of moves needed to win a chess match?

  5. Types of solutions For solutions of the first kind, the ideal algorithms are path-finding algorithms: algorithms which explore the state space systematically, keeping the points along the path in memory. Solutions of the second kind are typically solutions to optimization problems, and their search algorithms work only from the current state. These use less memory and can, given enough time, find solutions that would be out of reach for path-finding algorithms due to memory constraints.

  6. Path Finding algorithms There are 2 types of path-finding algorithms: • Uninformed search algorithms These search strategies just generate successors and check whether or not the new state is the goal state. • Informed search algorithms These search strategies have prior knowledge of which non-goal states are more promising.

  7. Greedy Best-First Search This algorithm has the following basic process: • Each node n is evaluated by f(n) = h(n), the heuristic estimate of the cost from n to the goal. • Select the node with the lowest f(n). • If f(n) > 0, expand the node and repeat the process. • Else, if f(n) = h(n) = 0, the node is the goal node.
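A minimal Python sketch of this process, assuming a problem object with the illustrative interface sketched after slide 3 and a heuristic h(state) that is 0 exactly at the goal:

```python
import heapq
import itertools

def greedy_best_first_search(problem, h):
    """Repeatedly expand the frontier node with the lowest f(n) = h(n)."""
    counter = itertools.count()          # tie-breaker so states need not be comparable
    start = problem.initial_state
    frontier = [(h(start), next(counter), start, [start])]   # (f, tie, state, path)
    explored = set()
    while frontier:
        f, _, state, path = heapq.heappop(frontier)   # node with lowest f(n) = h(n)
        if problem.goal_test(state):                  # f(n) = h(n) = 0
            return path
        if state in explored:
            continue
        explored.add(state)
        for action in problem.actions(state):         # expand the node and repeat
            nxt = problem.result(state, action)
            if nxt not in explored:
                heapq.heappush(frontier, (h(nxt), next(counter), nxt, path + [nxt]))
    return None                                       # no path found
```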

  8. Greedy Best-First Search

  9. A* Search A* search is similar to greedy best-first search, however f(n) is not h(n) but g(n) + h(n), where: • g(n) is the cost to get to n • h(n) is the estimated cost from n to the goal
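The same priority-queue skeleton carries over; only the evaluation changes to f(n) = g(n) + h(n). A minimal sketch under the same illustrative assumptions:

```python
import heapq
import itertools

def a_star_search(problem, h):
    """Expand nodes in order of f(n) = g(n) + h(n)."""
    counter = itertools.count()
    start = problem.initial_state
    # Frontier entries are (f = g + h, tie-breaker, g, state, path).
    frontier = [(h(start), next(counter), 0, start, [start])]
    best_g = {start: 0}                       # cheapest known cost to each state
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                          # a cheaper route to `state` was already found
        if problem.goal_test(state):
            return path, g                    # the path and its cost
        for action in problem.actions(state):
            nxt = problem.result(state, action)
            g2 = g + problem.step_cost(state, action, nxt)   # g(n): cost so far
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier,
                               (g2 + h(nxt), next(counter), g2, nxt, path + [nxt]))
    return None, float("inf")
```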

  10. A* Search

  11. A* Search A* search is optimal if h(n) is an admissible heuristic, that is to say, one that never overestimates the cost to reach the goal.

  12. A* Search Disadvantages of A* search: • Exponential growth in the number of nodes (memory can fill up quickly) • A* must search all the nodes within the goal contour • Due to memory or time limitations, suboptimal goals may be the only solution • Sometimes a better heuristic may not be admissible

  13. Memory bounded heuristic search In order to reduce the memory footprint of the previous algorithms, some algorithms attempt to take further advantage of heuristics to improve performance: • Iterative-Deepening A* (IDA*) Search • Recursive Best-First Search (RBFS) • SMA* (Simplified Memory-bounded A*) Search

  14. Memory bounded heuristic search To deal with the issue of exponential memory growth in A*, Iterative Deepening A* (IDA*) was created. It is practically the same as the normal iterative deepening algorithm, except that it uses the f-cost (g + h) as the cutoff rather than the depth.

  15. IDA* Search IDA* is basically iterative deepening depth-first search, but with the cutoff at f = g + h instead of at a fixed depth.
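A minimal sketch of this cutoff idea, again assuming the illustrative problem interface: the depth-first search is cut off whenever f = g + h exceeds the current threshold, and the threshold is then raised to the smallest f-value that exceeded it.

```python
def ida_star_search(problem, h):
    """Iterative deepening on the f = g + h cost rather than on depth."""

    def dfs(state, g, threshold, path):
        f = g + h(state)
        if f > threshold:
            return None, f                  # cut off; report the f that exceeded the bound
        if problem.goal_test(state):
            return path, f
        next_threshold = float("inf")
        for action in problem.actions(state):
            nxt = problem.result(state, action)
            if nxt in path:                 # avoid cycles along the current path
                continue
            cost = problem.step_cost(state, action, nxt)
            found, t = dfs(nxt, g + cost, threshold, path + [nxt])
            if found is not None:
                return found, t
            next_threshold = min(next_threshold, t)
        return None, next_threshold

    threshold = h(problem.initial_state)
    while True:
        result, threshold = dfs(problem.initial_state, 0, threshold,
                                [problem.initial_state])
        if result is not None:
            return result
        if threshold == float("inf"):
            return None                     # no solution exists
```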

  16. SMA* Search It proceeds like A* search, however when memory reaches its limit, the algorithm drops the worst node (the leaf with the highest f-value).

  17. Recursive Best-First Search (RBFS) Recursive best-first search works by: • Keeping track of the alternative options along the fringe • If the current depth-first exploration becomes more expensive than the best fringe option, backing up to the fringe and updating the backed-up node costs along the way
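A minimal sketch of RBFS under the same illustrative assumptions: each recursive call is given an f-limit equal to the best alternative on the fringe, and on backing up it reports the best f-value it saw so the parent can revise its estimate.

```python
import math

def rbfs_search(problem, h):
    """Recursive best-first search: depth-first expansion of the best node,
    backed up whenever its f-value exceeds the best alternative on the fringe."""

    def rbfs(state, path, g, f, f_limit):
        if problem.goal_test(state):
            return path, f
        successors = []
        for action in problem.actions(state):
            nxt = problem.result(state, action)
            if nxt in path:                          # avoid cycles along the current path
                continue
            g2 = g + problem.step_cost(state, action, nxt)
            # Inherit the parent's backed-up f-value if it is larger.
            successors.append([max(g2 + h(nxt), f), g2, nxt])
        if not successors:
            return None, math.inf
        while True:
            successors.sort(key=lambda s: s[0])
            best = successors[0]
            if best[0] > f_limit:
                return None, best[0]                 # back up, reporting the best fringe f
            alternative = successors[1][0] if len(successors) > 1 else math.inf
            result, best[0] = rbfs(best[2], path + [best[2]], best[1], best[0],
                                   min(f_limit, alternative))
            if result is not None:
                return result, best[0]

    start = problem.initial_state
    result, _ = rbfs(start, [start], 0, h(start), math.inf)
    return result
```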

  18. Recursive Best-First Search (RBFS)

  19. Effective Branching Factor, b* The effective branching factor b* is such that if a uniform tree of depth d contains N + 1 nodes, then: N + 1 = 1 + b* + (b*)^2 + … + (b*)^d The closer b* is to 1, the better the heuristic.
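b* can be computed numerically from N and d, for example by bisection on the polynomial above; a minimal sketch:

```python
def effective_branching_factor(n_nodes, depth, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection."""

    def total(b):
        return sum(b ** i for i in range(depth + 1))   # 1 + b + b^2 + ... + b^d

    lo, hi = 1.0, float(n_nodes) + 1.0                 # b* lies between 1 and N + 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_nodes + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, effective_branching_factor(52, 5) returns roughly 1.92: a search that examines 52 nodes to find a goal at depth 5 behaves like a uniform tree with branching factor just under 2.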

  20. How to come up with new Admissible Heuristics • Simplify the problem by reducing the restrictions on actions. • This is called a relaxed problem. • The cost of the optimal solution to the relaxed problem is an admissible heuristic for the original problem, because it is never more expensive than the optimal solution to the original problem.
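As an illustration not drawn from the slides: in grid path finding, removing the restriction that the agent cannot enter blocked cells yields a relaxed problem whose optimal cost is the Manhattan distance, so that distance is an admissible heuristic for the original problem.

```python
def manhattan_distance(state, goal):
    """Optimal cost of the relaxed grid problem (blocked-cell restriction removed):
    an admissible heuristic for 4-connected grid path finding with obstacles."""
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)
```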

  21. Pattern Databases Pattern databases are built by storing patterns whose associated actions are statistically favorable. Ex: chess plays in certain states of the board.

  22. Local Search algorithms • They only keep track of the current solution (state) • They utilize methods to generate alternate solution candidates • They use a small amount of memory • They can find acceptable solutions in infinite search spaces

  23. Hill Climbing
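The transcript does not reproduce slide 23's figure; below is a minimal Python sketch of hill climbing in the local-search style of slide 22, assuming illustrative neighbours(state) and value(state) functions.

```python
def hill_climbing(initial_state, neighbours, value):
    """Keep only the current state; move to the best neighbour while it improves."""
    current = initial_state
    while True:
        candidates = neighbours(current)        # generate alternate solution candidates
        if not candidates:
            return current
        best = max(candidates, key=value)
        if value(best) <= value(current):       # local maximum (or plateau): stop
            return current
        current = best
```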

  24. Simulated Annealing • Select some initial guess of the evaluation function parameters: x0 • Evaluate the evaluation function: E(x0) = v • Compute a random displacement x'0 • This is the Monte Carlo event • Evaluate E(x'0) = v' • If v' < v, set the new state x1 = x'0 • Else set x1 = x'0 with probability Prob(E, T) • This is the Metropolis step • Repeat with the updated state and temperature
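A minimal sketch of this loop, assuming an evaluation function E, a random displacement generator displace, and a geometric cooling schedule; using exp(-ΔE/T) for Prob(E, T) is the usual Metropolis choice and is an assumption here, since the slide does not spell it out.

```python
import math
import random

def simulated_annealing(x0, E, displace, T0=1.0, cooling=0.99, steps=10_000):
    """Minimise E following the loop on slide 24; exp(-dE/T) is the assumed
    Metropolis acceptance probability Prob(E, T)."""
    x, v = x0, E(x0)                 # initial guess and its evaluation
    T = T0
    for _ in range(steps):
        x_new = displace(x)          # random displacement (the Monte Carlo event)
        v_new = E(x_new)
        if v_new < v or random.random() < math.exp((v - v_new) / T):
            x, v = x_new, v_new      # accept: better, or worse with Prob(E, T)
        T *= cooling                 # update the temperature
    return x, v
```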

  25. Genetic Algorithms • Reproduction • Reuse • Crossover • Mutation
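A minimal sketch of one generation built from the reproduction, crossover, and mutation operators listed above, assuming individuals encoded as strings and an illustrative fitness function:

```python
import random

def next_generation(population, fitness, mutation_rate=0.05):
    """One generation: fitness-proportional reproduction, crossover, and mutation."""
    weights = [fitness(ind) for ind in population]
    new_population = []
    for _ in range(len(population)):
        # Reproduction: pick two parents with probability proportional to fitness.
        mum, dad = random.choices(population, weights=weights, k=2)
        # Crossover: splice the parents at a random cut point.
        cut = random.randrange(1, len(mum))
        child = mum[:cut] + dad[cut:]
        # Mutation: occasionally replace one position with a random parental gene.
        if random.random() < mutation_rate:
            i = random.randrange(len(child))
            child = child[:i] + random.choice(mum + dad) + child[i + 1:]
        new_population.append(child)
    return new_population
```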

  26. Genetic Algorithms

  27. Online Searches • States and actions are unknown a priori • States are difficult to change • States can be difficult or impossible to reverse

  28. Learning in Online Search • Explore the world • Build a map • A map is a mapping of (state, action) to results, also called a model relating (state, action) to results
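A minimal sketch of the mapping described above: the agent records the outcome of each (state, action) pair it tries, building a model of the world as it explores. The dictionary representation is an illustrative choice.

```python
class OnlineAgentModel:
    """A map of the world learned online: (state, action) -> resulting state."""

    def __init__(self):
        self.results = {}                       # the learned model

    def record(self, state, action, outcome):
        """Store what actually happened when `action` was taken in `state`."""
        self.results[(state, action)] = outcome

    def predict(self, state, action):
        """Return the remembered outcome, or None if this pair is unexplored."""
        return self.results.get((state, action))
```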

  29. Conclusion

  30. Questions and Answers
