
Computer Science CPSC 502 Lecture 2 Search (textbook Ch: 3)






Presentation Transcript


  1. Computer Science CPSC 502 Lecture 2 Search (textbook Ch: 3)

  2. Representational Dimensions for Intelligent Agents We will look at • Deterministic and stochastic domains • Static and sequential problems • Examples of representations using explicit states, features, or relations We would like the most general agents possible, but in this course we restrict ourselves to: • Flat representations (vs. hierarchical) • Knowledge given, with an introduction to knowledge learned • Goals and simple preferences (vs. complex preferences) • Single-agent scenarios (vs. multi-agent scenarios)

  3. Course Overview [Course-map diagram: problem types (Static: Constraint Satisfaction, Query; Sequential: Planning) crossed with the environment (Deterministic vs. Stochastic), pairing representations (Vars + Constraints, Logics, Belief Nets, STRIPS, Decision Nets, Markov Processes) with reasoning techniques (Search, Arc Consistency, Variable Elimination, Approximate Inference, Temporal Inference, Value Iteration); the first part of the course is highlighted.]

  4. Course Overview [Same course-map diagram as the previous slide: problem types and environments paired with representations and reasoning techniques; today's focus, Search, is highlighted.]

  5. (Adversarial) Search: Checkers • Early work in the 1950s by Arthur Samuel at IBM • Chinook program by Jonathan Schaeffer (UofA) • Search: explore the space of possible moves and their consequences • 1994: world champion • 2007: declared unbeatable Source: IBM Research

  6. (Adversarial) Search: Chess • In 1997, Garry Kasparov, the chess grandmaster and reigning world champion, played against Deep Blue, a program written by researchers at IBM Source: IBM Research

  7. (Adversarial) Search: Chess • Deep Blue won 3 games, lost 2, and tied 1 • 30 CPUs + 480 chess processors • Searched 126,000,000 nodes per second • Generated up to 30 billion positions per move, routinely reaching depth 14

  8. Today’s Lecture • Simple Search Agent • Uninformed Search • Informed Search

  9. Simple Search Agent • Deterministic, goal-driven agent • The agent is in a start state • The agent is given a goal (a subset of possible states) • The environment changes only when the agent acts • The agent knows perfectly: • which actions can be applied in any given state • which state it will end up in when an action is applied in a given state • The sequence of actions (in the appropriate order) taking the agent from the start state to a goal state is the solution

  10. Definition of a search problem • Initial state(s) • Set of actions (operators) available to the agent: for each state, define the successor state for each operator • i.e., which state the agent would end up in • Goal state(s) • Search space: the set of states that will be searched for a path from an initial state to a goal, given the available actions • states are nodes and actions are links between them • not necessarily given explicitly (the state space might be too large, or infinite) • Path cost (we ignore this for now)
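
To make this definition concrete, here is a minimal Python sketch of a search-problem interface (the class and method names are my own illustrative assumptions, not something given in the slides or the textbook):

```python
# Minimal sketch of the search-problem definition above (illustrative;
# SearchProblem and its method names are assumptions, not from the slides).

class SearchProblem:
    """A search problem: initial state(s), actions/successors, and a goal test."""

    def start_states(self):
        """Return the initial state(s)."""
        raise NotImplementedError

    def successors(self, state):
        """Return (action, next_state) pairs: the state each operator leads to."""
        raise NotImplementedError

    def is_goal(self, state):
        """Return True if state is a goal state."""
        raise NotImplementedError
```

The vacuum world, the 8-puzzle, and the delivery robot below can all be phrased as instances of such an interface; path cost would be a fourth ingredient, ignored here as in the slide.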

  11. Three examples • Vacuum cleaner world • Solving an 8-puzzle • The delivery robot planning the route it will take in a bldg. to get from one room to another (see textbook)

  12. Example: vacuum world • States • Two rooms: r1, r2 • Each room can be either dirty or not • The vacuuming agent can be in either r1 or r2 • Feature-based representation: Features? [Figures: a possible start state and a possible goal state]

  13. Example: vacuum world • States • Two rooms: r1, r2 • Each room can be either dirty or not • The vacuuming agent can be in either r1 or r2 • Features? • Feature-based representation: how many states?

  14. ….. • Suppose we have the same problem with k rooms • The number of states is k × 2^k: the agent can be in any of the k rooms, and each of the k rooms is independently dirty or clean
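
A quick sanity check of this count, as a sketch (the function name and state encoding are assumptions): a state pairs the robot's room with the dirty/clean status of every room.

```python
# Enumerate (robot position, dirt status of each room) states for k rooms,
# confirming the k * 2^k count. Illustrative sketch only.
from itertools import product

def vacuum_states(k):
    rooms = range(1, k + 1)
    return [(pos, dirt) for pos in rooms
            for dirt in product([True, False], repeat=k)]   # one dirty flag per room

print(len(vacuum_states(2)))  # 8  = 2 * 2^2, the eight states of the next slide
print(len(vacuum_states(3)))  # 24 = 3 * 2^3
```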

  15. Cleaning Robot • States – one of the eight states in the picture • Operators – left, right, suck • Possible Goal – no dirt

  16–20. Search Space (this text is repeated on slides 16–20) • Operators – left, right, suck • Successor states in the graph describe the effect of each action applied to a given state • Possible Goal – no dirt
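
A small sketch of the successor relation these search-space slides draw, assuming a state is encoded as (robot position, set of dirty rooms) with r1 the left room and r2 the right room; the encoding is my own choice for illustration.

```python
# Successor function for the two-room vacuum world (illustrative encoding:
# state = (position, frozenset of dirty rooms)).

def successors(state):
    pos, dirty = state                       # pos is 'r1' or 'r2'
    return {
        'left':  ('r1', dirty),              # moving left ends up in r1
        'right': ('r2', dirty),              # moving right ends up in r2
        'suck':  (pos, dirty - {pos}),       # sucking cleans the current room
    }

def is_goal(state):
    _pos, dirty = state
    return not dirty                         # goal: no dirt anywhere

print(successors(('r1', frozenset({'r1', 'r2'}))))
```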

  21. Eight Puzzle • States: each state specifies which number (or the blank) occupies each of the 9 tiles. How many states? 9! = 362,880 • Operators: the blank moves left, right, up, or down • Goal: the configuration with the numbers in the right sequence
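
For concreteness, a sketch of the 8-puzzle operators, assuming a state is a 9-tuple read row by row with 0 standing for the blank (the encoding and the particular goal configuration are assumptions for illustration):

```python
# 8-puzzle states and operators (illustrative encoding: tuple of 9 entries,
# 0 marks the blank, positions read row by row on a 3x3 board).

MOVES = {'up': -3, 'down': 3, 'left': -1, 'right': 1}

def successors(state):
    """Yield (move, next_state) pairs obtained by sliding the blank."""
    b = state.index(0)                        # index of the blank, 0..8
    row, col = divmod(b, 3)
    for move, delta in MOVES.items():
        if (move == 'up' and row == 0) or (move == 'down' and row == 2) or \
           (move == 'left' and col == 0) or (move == 'right' and col == 2):
            continue                          # this move would leave the board
        t = b + delta
        nxt = list(state)
        nxt[b], nxt[t] = nxt[t], nxt[b]       # swap the blank with the adjacent tile
        yield move, tuple(nxt)

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)            # one common "numbers in sequence" goal
```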

  22. Search space for the 8-puzzle

  23. How can we find a solution? • How can we find a sequence of actions, in the appropriate order, that leads to the goal? • We need smart ways to search the space (graph) • Search: abstract definition • Start at the start state • Evaluate where actions can lead us from states that have been encountered in the search so far • Stop when a goal state is encountered To make this more formal, we'll need to review the formal definition of a graph...

  24. Graphs • A directed graph consists of a set N of nodes (vertices) and a set A of ordered pairs of nodes, called edges (arcs). • Node n2 is a neighbor of n1 if there is an arc from n1 to n2, that is, if ⟨n1, n2⟩ ∈ A. • A path is a sequence of nodes n0, n1, …, nk such that ⟨ni−1, ni⟩ ∈ A. • A cycle is a non-empty path whose start node is the same as its end node. • Search graph • Nodes are search states • Edges correspond to actions • Given a set of start nodes and goal nodes, a solution is a path from a start node to a goal node: a plan of actions.
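
As a side note, such a directed graph is often written down simply as a map from each node to its list of neighbors. The tiny graph below is an illustrative example, not one taken from the slides:

```python
# A directed graph as a dictionary: node -> list of neighbors.
graph = {
    'S': ['A', 'B'],   # arcs <S,A> and <S,B>
    'A': ['C'],
    'B': ['C', 'G'],
    'C': ['G'],
    'G': [],           # no outgoing arcs
}

def neighbors(node):
    return graph.get(node, [])

# A path is then a sequence of nodes, e.g. ['S', 'B', 'G'], in which every
# consecutive pair of nodes is an arc of the graph.
```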

  25. Graph Searching • Generic search algorithm: • given a graph, start nodes, and goal nodes, incrementally explore paths from the start nodes. • Maintain a frontier of paths from the start node that have been explored. • As search proceeds, the frontier expands into the unexplored nodes until a goal node is encountered. • The way in which the frontier is expanded defines the search strategy.

  26. Problem Solving by Graph Searching

  27. Generic Search Algorithm • Input: a graph; a set of start nodes; a Boolean procedure goal(n) testing whether n is a goal node • frontier := [⟨s⟩ : s is a start node] • while frontier is not empty: • select and remove a path ⟨n0, …, nk⟩ from frontier • if goal(nk): return ⟨n0, …, nk⟩ • for every neighbor n of nk, add ⟨n0, …, nk, n⟩ to frontier • end
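
A Python rendering of this generic algorithm, staying close to the pseudocode (a sketch: the select_index parameter is my own device for leaving open how a path is selected from the frontier, which is exactly what defines the search strategy):

```python
# Generic graph search: explore paths from the start nodes until a goal is found.
def generic_search(neighbors, start_nodes, is_goal, select_index):
    """neighbors(n): iterable of n's neighbors; select_index(frontier): index of the path to expand."""
    frontier = [[s] for s in start_nodes]            # frontier = paths from start nodes
    while frontier:
        path = frontier.pop(select_index(frontier))  # select and remove a path
        if is_goal(path[-1]):                        # test the path's last node
            return path
        for n in neighbors(path[-1]):                # extend the path to every neighbor
            frontier.append(path + [n])
    return None                                      # frontier exhausted: no solution

# With the toy graph from the earlier sketch:
#   generic_search(neighbors, ['S'], lambda n: n == 'G', lambda f: -1)  # stack-like (DFS)
#   generic_search(neighbors, ['S'], lambda n: n == 'G', lambda f: 0)   # queue-like (BFS)
```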

  28. Branching Factor • The forward branching factor of a node is the number of arcs going out of the node • The backward branching factor of a node is the number of arcs going into the node • If the forward branching factor of every node is b and the graph is a tree, there are b^n nodes that are n steps away from a given node

  29. Comparing Searching Algorithms: Will it find a solution? The best one? Def.: A search algorithm is complete if, whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time. Def.: A search algorithm is optimal if, when it finds a solution, it is the best one.

  30. Comparing Searching Algorithms: Complexity The branching factor b of a node is the number of arcs going out of the node • Def.: The time complexity of a search algorithm is the worst-case amount of time it will take to run, expressed in terms of • the maximum path length m • the maximum branching factor b Def.: The space complexity of a search algorithm is the worst-case amount of memory that the algorithm will use (i.e., the maximum number of nodes on the frontier), also expressed in terms of m and b.

  31. Today’s Lecture • Simple Search Agent • Uninformed Search • Informed Search

  32. Illustrative Graph: DFS Shaded nodes represent the end of paths on the frontier • DFS explores each path on the frontier until its end (or until a goal is found) before considering any other path.

  33. DFS as an instantiation of the Generic Search Algorithm Input: a graph; a set of start nodes; a Boolean procedure goal(n) testing whether n is a goal node frontier := [⟨s⟩ : s is a start node]; while frontier is not empty: select and remove a path ⟨n0, …, nk⟩ from frontier; if goal(nk): return ⟨n0, …, nk⟩; for every neighbor n of nk, add ⟨n0, …, nk, n⟩ to frontier; end • In DFS, the frontier is a last-in-first-out stack. Let’s see how this works in AIspace.

  34. DFS in AI Space • Go to: http://www.aispace.org/mainTools.shtml • Click on “Graph Searching” to get to the Search applet • Select the “Solve” tab in the applet • Select one of the available examples via “File -> Load Sample Problem” (a good idea is to start with the “Simple Tree” problem) • Make sure that “Search Options -> Search Algorithms” in the toolbar is set to “Depth-First Search” • Step through the algorithm with the “Fine Step” or “Step” buttons in the toolbar • The panel above the graph panel verbally describes what is happening during each step • The panel at the bottom shows how the frontier evolves • See the available help pages and video tutorials for more details on how to use the Search applet (http://www.aispace.org/search/index.shtml)

  35. Depth-first Search: DFS • Example: • the frontier is [p1, p2, …, pr] - each pk is a path • the neighbors of the last node of p1 are {n1, …, nk} • What happens? • p1 is selected, and its last node is tested for being a goal. If it is not: • k new paths are created by adding each of {n1, …, nk} to p1 • these k new paths replace p1 at the beginning of the frontier • Thus, the frontier is now [(p1, n1), …, (p1, nk), p2, …, pr] • You can get a much better sense of how DFS works by looking at the Search applet in AI Space • NOTE: p2 is only selected when all paths extending p1 have been explored.
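
A sketch of DFS obtained by fixing the frontier to behave as a stack. Popping from the end of a Python list takes the most recently added path; the effect is the same as the slide's convention of putting the new paths at the beginning of the frontier, up to the order in which siblings are tried.

```python
# Depth-first search: the frontier is used as a last-in-first-out stack.
def dfs(neighbors, start_nodes, is_goal):
    frontier = [[s] for s in start_nodes]
    while frontier:
        path = frontier.pop()                # newest path first (stack behaviour)
        if is_goal(path[-1]):
            return path
        for n in neighbors(path[-1]):
            frontier.append(path + [n])      # extensions of this path are expanded next
    return None
```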

  36. Analysis of DFS • Is DFS complete? • Is DFS optimal? • What is the time complexity, if the maximum path length is m and the maximum branching factor is b? • What is the space complexity? • We will look at the answers in AISpace (but see the next few slides for a summary)

  37. Analysis of DFS Def.: A search algorithm is complete if, whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time. Is DFS complete? No • If there are cycles in the graph, DFS may get “stuck” in one of them • see this in AISpace by loading “Cyclic Graph Examples” or by adding a cycle to “Simple Tree” • e.g., click on the “Create” tab, create a new edge from N7 to N1, go back to “Solve” and see what happens

  38. Analysis of DFS Def.: A search algorithm is optimal if, when it finds a solution, it is the best one (e.g., the shortest). Is DFS optimal? No • It can “stumble” on longer solution paths before it gets to shorter ones • E.g., goal nodes: red boxes • see this in AISpace by loading “Extended Tree Graph” and setting N6 as a goal • e.g., click on the “Create” tab, right-click on N6 and select “set as a goal node”

  39. Analysis of DFS • Def.: The time complexity of a search algorithm is the worst-case amount of time it will take to run, expressed in terms of • maximum path length m • maximum forward branching factor b • What is DFS’s time complexity, in terms of m and b? O(b^m) • In the worst case, DFS must examine every node in the tree • E.g., single goal node -> red box

  40. Analysis of DFS • Def.: The space complexity of a search algorithm is the worst-case amount of memory that the algorithm will use (i.e., the maximum number of nodes on the frontier), expressed in terms of • maximum path length m • maximum forward branching factor b • What is DFS’s space complexity, in terms of m and b? O(bm) • for every node in the path currently being explored, DFS maintains a path to each of its unexplored siblings in the search tree • these are the alternative paths that DFS may still need to explore • the longest possible path is m, with a maximum of b − 1 alternative paths per node. See how this works in AIspace.

  41. Analysis of DFS: Summary • Is DFS complete? NO • Depth-first search isn't guaranteed to halt on graphs with cycles • However, DFS is complete for finite acyclic graphs • Is DFS optimal? NO • It can “stumble” on longer solution paths before it gets to shorter ones • What is the time complexity, if the maximum path length is m and the maximum branching factor is b? • O(b^m): in the worst case DFS must examine every node in the tree • the search is unconstrained by the goal until it happens to stumble on it • What is the space complexity? • O(bm) • the longest possible path is m, and for every node in that path DFS must maintain a fringe of size b

  42. Analysis of DFS (cont.) • DFS is appropriate when: • space is restricted • there are many solutions, with long paths • It is a poor method when: • there are cycles in the graph • there are sparse solutions at shallow depths • there is heuristic knowledge indicating when one path is better than another

  43. Why does DFS need to be studied and understood? • It is simple enough to let you learn the basic aspects of searching • It is the basis for a number of more sophisticated and useful search algorithms

  44. Breadth-first search (BFS) • BFS explores all paths of length l on the frontier before looking at any path of length l + 1

  45. BFS as an instantiation of the Generic Search Algorithm • Input: a graph; a set of start nodes; a Boolean procedure goal(n) testing whether n is a goal node • frontier := [⟨s⟩ : s is a start node]; • while frontier is not empty: • select and remove a path ⟨n0, …, nk⟩ from frontier; • if goal(nk): return ⟨n0, …, nk⟩; • else, for every neighbor n of nk, add ⟨n0, …, nk, n⟩ to frontier; • end In BFS, the frontier is a first-in-first-out queue • Let’s see how this works in AIspace • in the Search applet toolbar, set “Search Options -> Search Algorithms” to “Breadth-First Search”

  46. Breadth-first Search: BFS • Example: • the frontier is [p1, p2, …, pr] • the neighbors of the last node of p1 are {n1, …, nk} • What happens? • p1 is selected, and its end node is tested for being a goal. If it is not: • k new paths are created by attaching each of {n1, …, nk} to p1 • these follow pr at the end of the frontier • Thus, the frontier is now [p2, …, pr, (p1, n1), …, (p1, nk)] • p2 is selected next • As with DFS, you can get a much better sense of how BFS works by looking at the Search applet in AI Space
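
The corresponding BFS sketch: the frontier is a first-in-first-out queue (collections.deque gives an efficient queue), so all paths of length l are expanded before any path of length l + 1. Otherwise the code is the DFS sketch with the pop changed.

```python
# Breadth-first search: the frontier is used as a first-in-first-out queue.
from collections import deque

def bfs(neighbors, start_nodes, is_goal):
    frontier = deque([s] for s in start_nodes)
    while frontier:
        path = frontier.popleft()            # oldest (and hence shortest) path first
        if is_goal(path[-1]):
            return path
        for n in neighbors(path[-1]):
            frontier.append(path + [n])      # longer extensions join the back of the queue
    return None
```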

  47. Analysis of BFS • Is BFS complete? • Is BFS optimal? • What is the time complexity, if the maximum path length is m and the maximum branching factor is b? • What is the space complexity?

  48. Analysis of BFS Def.: A search algorithm is complete if, whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time. Is BFS complete? Yes • If a solution exists at level l, the path to it will be explored before any other path of length l + 1 • it is impossible to fall into an infinite cycle • see this in AISpace by loading “Cyclic Graph Examples” or by adding a cycle to “Simple Tree”

  49. Analysis of BFS Def.: A search algorithm is optimal if, when it finds a solution, it is the best one. Is BFS optimal? Yes • E.g., two goal nodes: red boxes • Any goal at level l (e.g., red box N7) will be reached before goals at deeper levels

  50. Analysis of BFS • Def.: The time complexity of a search algorithm is the worst-case amount of time it will take to run, expressed in terms of • maximum path length m • maximum forward branching factor b • What is BFS’s time complexity, in terms of m and b? O(b^m) • Like DFS, in the worst case BFS must examine every node in the tree • E.g., single goal node -> red box
