
Problem Solving and Search


Presentation Transcript


  1. Problem Solving and Search Andrea Danyluk September 9, 2013

  2. Announcements • Programming Assignment 0: Python Tutorial • Should have received my email about it • Due Wednesday at 10am • Should not take long to complete • Talk to me if the due date/time is a real problem • Turnin should be fixed today • Grading for this one is mainly a sanity check • Any trouble with accounts: let me know!

  3. Today’s Lecture • Agents • Goal-directed problem-solving and search • Uninformed search • Breadth-first • Depth-first and variants • Uniform-cost search (Wednesday)

  4. Rational Agents • An agent perceives and acts. • “Doing the right thing” captured by a performance measure that evaluates a given sequence of environment states. A rational agent: selects an action that is expected to maximize the performance measure, given evidence provided by the percept sequence and whatever built-in knowledge the agent has.

  5. Rational ≠ omniscient • Percepts may not supply all relevant information • Rational ≠ clairvoyant • Action outcomes may not be as expected • Rational ≠ successful [ Adapted from Russell]

  6. Reflex Agents • Act on the basis of the current percept (and possibly what they remember from the past). • May have memory or a model of the world’s state. • Do not consider future consequences of their actions. [Adapted from CS 188 UC Berkeley]

  7. Goal-based Agents • Plan ahead • Ask “what if” • Decisions based on (hypothesized) consequences of actions • Have a model of how the world evolves in response to actions [Adapted from CS 188 UC Berkeley]

  8. Building a goal-based agent • Determine the percepts available to the agent • Select/devise a representation for world states • Determine the task knowledge the agent will need • Clearly articulate the goal • Select/devise a problem-solving technique so that the agent can decide what to do

  9. Search as a Fundamental Problem-Solving Technique • Originated with Newell and Simon’s work on problem solving in the late 60s.

  10. Search Problems A search problem consists of • A state space • A set of states • A set of actions • A transition model that specifies results of applying actions to states • Successor function: Result(s, a) • An initial state • A goal test
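
To make these ingredients concrete, here is a minimal Python sketch of what such a problem definition might look like; the class and method names are illustrative choices, not something given on the slides.

```python
class SearchProblem:
    """Illustrative search-problem interface (names are assumptions)."""

    def initial_state(self):
        """Return the initial state."""
        raise NotImplementedError

    def actions(self, state):
        """Return the actions applicable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state reached by applying `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if `state` satisfies the goal."""
        raise NotImplementedError

    def step_cost(self, state, action):
        """Cost of taking `action` in `state` (unit cost by default)."""
        return 1
```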

  11. An Example: the 8-Puzzle [figure: an initial state and the goal state] # possible distinct states = 9! = 362,880 (but only 9!/2 = 181,440 reachable)

  12. Real world examples • Navigation • Vehicle parking • Parsing (natural and artificial languages) • The old dog slept on the porch • The old dog the footsteps of the young

  13. And back to the 8-Puzzle • States: • Puzzle configurations • Actions: • Move blank N, S, E, or W • Start state: • As given • Goal test: • Is current state = specified goal state?
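
One possible encoding of this formulation in Python, assuming a state is a 9-tuple read left to right, top to bottom, with 0 standing for the blank (this representation and these names are assumptions, not from the slides):

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)        # assumed goal configuration

# Index offsets for moving the blank N, S, E, W on a 3x3 board.
MOVES = {'N': -3, 'S': 3, 'E': 1, 'W': -1}

def actions(state):
    """Directions in which the blank (0) can legally move."""
    i = state.index(0)
    row, col = divmod(i, 3)
    legal = []
    if row > 0: legal.append('N')
    if row < 2: legal.append('S')
    if col < 2: legal.append('E')
    if col > 0: legal.append('W')
    return legal

def result(state, action):
    """Swap the blank with the tile in the chosen direction."""
    i = state.index(0)
    j = i + MOVES[action]
    board = list(state)
    board[i], board[j] = board[j], board[i]
    return tuple(board)

def goal_test(state):
    return state == GOAL

start = (1, 2, 0, 3, 4, 5, 6, 7, 8)        # an assumed start configuration
print(actions(start))                      # ['S', 'W']: blank is at row 0, col 2
```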

  14. State Space Graph [figure: state space graph with start state S and goal state G]

  15. Finding a solution in a problem graph • Solving the puzzle = finding a path through the graph from initial state to goal state • Simple graph search algorithms: • Breadth-first search • Depth-first search

  16. Breadth-first Graph Search: Review [figure: graph with vertices a through i; Start: a, Goal: i] Explore vertices (really edges) closest to the start state first. Fringe is a FIFO queue.
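
A compact sketch of breadth-first graph search on an adjacency-list graph; the small example graph at the bottom is only illustrative and is not the exact graph drawn on the slide.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first graph search: the fringe is a FIFO queue."""
    frontier = deque([start])          # FIFO queue of vertices to expand
    parents = {start: None}            # also serves as the explored set
    while frontier:
        v = frontier.popleft()         # take the oldest (shallowest) vertex
        if v == goal:
            # Reconstruct the path by walking parent pointers back to start.
            path = []
            while v is not None:
                path.append(v)
                v = parents[v]
            return list(reversed(path))
        for w in graph[v]:
            if w not in parents:       # skip already-seen vertices
                parents[w] = v
                frontier.append(w)
    return None                        # no path exists

# Illustrative graph (not the one on the slide).
graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d', 'e'], 'd': ['f'], 'e': ['f'], 'f': []}
print(bfs(graph, 'a', 'f'))            # ['a', 'b', 'd', 'f']
```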

  17. Depth-first Graph Search: Review [figure: same graph; Start: a, Goal: i] Explore the deepest vertex (really edge) first. Fringe is a LIFO stack.
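
The depth-first version differs only in the fringe: a LIFO stack instead of a FIFO queue. A sketch on the same illustrative graph:

```python
def dfs(graph, start, goal):
    """Depth-first graph search: the fringe is a LIFO stack."""
    frontier = [(start, [start])]      # stack of (vertex, path-so-far) pairs
    explored = set()
    while frontier:
        v, path = frontier.pop()       # take the most recently added vertex
        if v == goal:
            return path
        if v in explored:
            continue
        explored.add(v)
        for w in graph[v]:
            if w not in explored:
                frontier.append((w, path + [w]))
    return None

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d', 'e'], 'd': ['f'], 'e': ['f'], 'f': []}
print(dfs(graph, 'a', 'f'))            # one path, not necessarily a shortest one
```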

  18. State Space Search in the AI World • Rarely given a graph to begin with • We don’t build the graph before doing the search • Our search problems are BIG

  19. Search Trees [figure: search tree for BFS with actions {N, E, S, W}; white nodes = explored (i.e., expanded), orange nodes = frontier] Maintains an “explored” set of states to avoid redundant search

  20. Search Tree • A “what if” tree of plans and outcomes • Start state at the root node • Children correspond to successors • Nodes contain states and correspond to plans to those states • Aim to build as little as possible • Because we build the tree “on the fly”, the representations of states and actions matter! [Adapted from CS 188 UC Berkeley]

  21. Formalizing State Space Search • A state space is a graph (V, E), where V is the set of states and E is a set of directed edges between states. The edges may have associated weights (costs). • Our exploration of the state space using search generates a search tree. [Adapted from Eric Eaton]

  22. Formalizing State Space Search • A node in a search tree typically contains: • A state description • A reference to the parent node • The name of the operator that generated it from its parent • The cost of the path from the initial state to itself • The node that is the root of the search tree typically represents the initial state
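
One way to realize such a node in Python, roughly following the fields listed above (the field names are assumptions):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """A search-tree node: a state plus the bookkeeping listed above."""
    state: Any                          # state description
    parent: Optional["Node"] = None     # reference to the parent node
    action: Optional[str] = None        # operator that generated this node
    path_cost: float = 0.0              # cost from the initial state to here

# The root of the search tree represents the initial state.
root = Node(state="initial-state")
```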

  23. Formalizing State Space Search • Child nodes are generated by applying legal operators to a node • The process of expanding a node means to generate all of its successor nodes and to add them to the frontier. • A goal test is a function applied to a state to determine whether its associated node is a goal node
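
Expanding a node then just applies every legal operator and wraps the results in child nodes. A sketch, using a lightweight stand-in for the node structure above and taking the problem's `actions`, `result`, and step-cost functions as parameters (all names are illustrative):

```python
from collections import namedtuple

# Minimal node: state, parent, generating action, path cost so far.
Node = namedtuple("Node", "state parent action path_cost")

def expand(node, actions, result, step_cost=lambda s, a: 1):
    """Generate all successor nodes of `node`, to be added to the frontier."""
    children = []
    for a in actions(node.state):
        s = result(node.state, a)
        children.append(Node(s, node, a, node.path_cost + step_cost(node.state, a)))
    return children
```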

  24. Formalizing State Space Search • A solution is either • A sequence of operators that is associated with a path from start state to goal or • A state that satisfies the goal test • The cost of a solution is the sum of the arc costs on the solution path • If all arcs have the same (unit) cost, then the solution cost is just the length of the solution (i.e., the length of the path)
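
Once a goal node is reached, the operator sequence can be read off by walking the parent references back to the root; a sketch assuming the hypothetical node fields used above:

```python
def solution(goal_node):
    """Return the list of operators from the start state to the goal node."""
    ops = []
    node = goal_node
    while node.parent is not None:
        ops.append(node.action)
        node = node.parent
    return list(reversed(ops))

# With unit arc costs the solution cost is just len(solution(goal_node));
# in general it is goal_node.path_cost, the sum of the arc costs on the path.
```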

  25. Evaluating Search Strategies • Completeness • Is the strategy guaranteed to find a solution when there is one? • Optimality • Does the strategy find the highest-quality solution when there are several solutions? • Time Complexity • How long does it take (in the worst case) to find a solution? • Space Complexity • How much memory is required (in the worst case)?

  26. Evaluating BFS • Complete? • Yes • Optimal? • Not in general. When is it optimal? • Time Complexity? • How do we measure it? • Space Complexity?

  27. Time and Space Complexity Let • b = branching factor (i.e., max number of successors) • m = maximum depth of the search tree • d = depth of shallowest solution For BFS Time: O(b^d) If we check for goal as we generate a node! Not if we check as we get ready to expand! Space: O(b^d)
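
As a rough sense of scale, here is a small illustrative calculation (the branching factor b = 10 is an assumed value, not from the slides):

```python
b = 10                              # assumed branching factor, illustration only
for d in range(1, 6):
    # BFS generates and stores on the order of b**d nodes at depth d.
    print(f"d = {d}: on the order of {b**d:,} nodes")
```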

  28. Evaluating DFS • Complete? • Not in general • Can be if space is finite and we modify tree search to account for loops and redundant paths • Optimal? • No • Time Complexity? • O(b^m) • Space Complexity? • O(bm)

  29. Depth-Limited DFS? Let l be the depth limit • Complete? • No • Optimal? • No • Time Complexity? • O(b^l) • Space Complexity? • O(bl)
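
A sketch of depth-limited DFS, where the limit counts edges from the start vertex; the graph and limit below are illustrative assumptions:

```python
def depth_limited_dfs(graph, start, goal, limit):
    """Recursive DFS that never goes deeper than `limit` edges."""
    def recurse(v, depth):
        if v == goal:
            return [v]
        if depth == limit:
            return None                      # cut off: may miss deeper goals
        for w in graph[v]:
            path = recurse(w, depth + 1)
            if path is not None:
                return [v] + path
        return None
    return recurse(start, 0)

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d', 'e'], 'd': ['f'], 'e': ['f'], 'f': []}
print(depth_limited_dfs(graph, 'a', 'f', 2))   # None: the goal lies deeper than the limit
print(depth_limited_dfs(graph, 'a', 'f', 3))   # ['a', 'b', 'd', 'f']
```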

  30. Iterative Deepening? • Complete? • Yes (if b is finite) • Optimal? • Yes (if costs of all actions are the same) • Time Complexity? • O(b^d) • Space Complexity? • O(bd)
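
Iterative deepening simply reruns a depth-limited search with limits 0, 1, 2, ..., so the first solution found is a shallowest one. A self-contained sketch on the same illustrative graph:

```python
def iterative_deepening(graph, start, goal, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until a solution appears."""
    def dls(v, limit):
        if v == goal:
            return [v]
        if limit == 0:
            return None
        for w in graph[v]:
            path = dls(w, limit - 1)
            if path is not None:
                return [v] + path
        return None

    for limit in range(max_depth + 1):
        path = dls(start, limit)
        if path is not None:
            return path              # the first solution found is a shallowest one
    return None

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d', 'e'], 'd': ['f'], 'e': ['f'], 'f': []}
print(iterative_deepening(graph, 'a', 'f'))    # ['a', 'b', 'd', 'f']
```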
