
IAIP Week 2 Search I: Uninformed and Adversarial Search



  1. IAIP Week 2 Search I: Uninformed and Adversarial Search

  2. Last Week • AI overview • Agent Architectures • (Timeline figure, 1960s-2000s: heydays in which many "cognitive" tasks seemed "solved" and AI was close to cognitive science; subfields founded: planning, vision, constraint satisfaction; first industrial applications: expert systems, neural nets; machine learning; agents; widespread application; maturing field)

  3. This Week: Search I • Uninformed search [9:00-9:50, 10-10:50, 11-?] • State spaces • Search trees • General tree search • BFS, Uniform cost search, DFS, DLS, IDS, bidirectional search • General graph search • Adversarial search [?-11:50] • Game trees • Minimax • Alpha-beta pruning

  4. Uninformed search (RN Chapter 3, except 3.6)

  5. Search as a problem solving technique • Search is a last resort; if we have an analytical solution, we should apply it. Examples: • find the minima of f(x,y) = x² + y² - x for -1 ≤ x ≤ 1 and -1 ≤ y ≤ 1 • solve an instance of Rubik's cube • What can we learn from humans?
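
  For the first example, the analytical solution is immediate (a worked check, not on the original slide):

      ∂f/∂x = 2x - 1 = 0  ⇒  x = 1/2
      ∂f/∂y = 2y = 0      ⇒  y = 0

  Both values lie inside the box -1 ≤ x, y ≤ 1, and f is convex, so the global minimum is f(1/2, 0) = 1/4 + 0 - 1/2 = -1/4; no search is needed.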

  6. Problem-space theory [Newell & Simon 72] • People's problem solving behavior can be viewed as the production of knowledge states by the application of mental operators, moving from an initial state to a goal state • Operators encode legal moves • For a given problem there are a large number of alternative paths from initial state to goal state; the total set of such states is called a basic problem space

  7. Missionaries and Cannibals Problem • Task: Transfer 3 missionaries and 3 cannibals across a river in a boat • Max 2 people in the boat at any time • Someone must always accompany the boat to the other side • At no time may the cannibals outnumber the missionaries on either bank (when any missionaries are present there) • (Figure: MMMCCC on the near bank of the river)

  8. (Figure: the state space of the missionaries and cannibals problem. Source: Eysenck & Keane 90)

  9. (The same state-space figure, annotated: the hard steps are those offering many options and those that require moving away from the goal state. Source: Eysenck & Keane 90)

  10. Domain Independent Heuristics • Means-ends analysis: • note the difference between the current state and the goal state; • create a sub-goal to reduce this difference; • select an operator that will solve this sub-goal (continue recursively) • Anti-looping • AI exploits such heuristics in automated problem solving

  11. Informed and Uninformed search • Uninformed search algorithms (the topic of this week): • are only given the problem definition as input • Informed search algorithms (the topic of next week): • are given information about the problem in addition to its definition (typically an estimate of the distance to a goal state)

  12. Search Problem Definition • A search problem consists of: • An initial state s0 • A successor function SUCCESSOR-FN(s) that for a state s returns a set of action-state pairs <a,s'> where a is applicable in s and s' is reached from s by applying a • A goal test G(s) that returns true iff s is a goal state • A step cost function c(s,a,s') that returns the step cost of taking action a to go from state s to state s' (must be additive to define path cost) • A solution is a path from the initial state s0 to a goal state g • An optimal solution is a solution with minimum path cost • Items 1 and 2 (initial state and successor function) define the state space
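
  To make the four components concrete, here is a minimal Python sketch of a search-problem interface (the class and method names are illustrative, not from the slides); later sketches in this transcript assume this interface:

      class SearchProblem:
          """Abstract search problem: initial state, successors, goal test, step cost."""

          def initial_state(self):
              raise NotImplementedError

          def successors(self, s):
              """Return an iterable of (action, next_state) pairs applicable in s."""
              raise NotImplementedError

          def is_goal(self, s):
              raise NotImplementedError

          def step_cost(self, s, action, s_next):
              return 1  # unit cost by default; override for weighted problems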

  13. Search Problem Definition • How can we define the state space as a graph G = (V,E)? • Why not use an explicitly defined search graph as input to the algorithms?

  14. Search Problem Definition • Notice • For most problems, the size of the state space grows exponentially with the size of the problem (e.g., with the number of objects that can be manipulated) • For these problems, it is intractable to compute an explicit graph representation of the search space • For that reason, search spaces are defined implicitly in AI • Further, the complexity of an AI algorithm is normally given in terms of the size of the problem description rather than the size of the search space

  15. Selecting a state space 1) Choose abstraction • Real world is absurdly complex • state space must be abstracted for problem solving • (Abstract) state = set of real states • (Abstract) action = complex combination of real actions • e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc. • For guaranteed realizability of actions, any real state "in Arad" must get to some real state "in Zerind" • (Abstract) solution = set of real paths that are solutions in the real world • Each abstract action should be "easier" than the original problem 2) Define the elements of the search problem

  16. Ex: Romania Route Planning

  17. Ex: Romania Route Planning • Initial state: e.g., s0 = "at Arad" • Goal test function: G(s): s = "at Bucharest" • Successor function: SUCCESSOR-FN(Arad) = {<Arad → Zerind, Zerind>, <Arad → Timisoara, Timisoara>, …} • Size of state space: 20 • Edge cost function: e.g., c(s,a,s') = dist(s,s')

  18. Ex: Missionaries and Cannibals • Initial state? • Goal test function? • Successor function definition? • Size of state space? • Cost function? • (Figure: MMMCCC on the near bank of the river)

  19. Ex: Missionaries and Cannibals • (Diagram: the initial state, MMMCCC on the near bank, and its successor states reached by Sail(M,M), Sail(M,C), Sail(C,C), Sail(M), and Sail(C))

  20. Ex: Missionaries and Cannibals • Initial state: <[{M,M,M,C,C,C,B},{}]> • Goal test function: G(s): s = <[{},{M,M,M,C,C,C,B}]> (everyone, including the boat, on the far bank) • Successor function: • Actions: Sail(M,M), Sail(M,C), Sail(C,C), Sail(M), Sail(C) • SUCCESSOR-FN(s) is defined by the rules: • Max 2 people in the boat at any time • Someone must always accompany the boat to the other side • At no time may the cannibals outnumber the missionaries on either bank • Size of state space: 4^2 * 2 = 32 (the number of missionaries and of cannibals on the near bank, 0-3 each, times the two boat positions; only some of these states are reachable) • Cost function: we want solutions with a minimum number of steps, so c(s,a,s') = 1
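
  A minimal Python sketch of this successor function, using my own encoding of a state as (m, c, b): the number of missionaries on the near bank, the number of cannibals on the near bank, and whether the boat is on the near bank (not the slide's set notation):

      ACTIONS = [(2, 0), (1, 1), (0, 2), (1, 0), (0, 1)]   # Sail(M,M), Sail(M,C), Sail(C,C), Sail(M), Sail(C)

      def legal(m, c):
          # cannibals may not outnumber missionaries on a bank where missionaries are present
          return m == 0 or m >= c

      def successors(state):
          m, c, b = state           # missionaries, cannibals on the near bank; b = 1 if the boat is there
          sign = -1 if b else 1     # the boat carries people away from its current bank
          result = []
          for dm, dc in ACTIONS:
              m2, c2 = m + sign * dm, c + sign * dc
              if 0 <= m2 <= 3 and 0 <= c2 <= 3 and legal(m2, c2) and legal(3 - m2, 3 - c2):
                  result.append(((dm, dc), (m2, c2, 1 - b)))
          return result

      print(successors((3, 3, 1)))   # the three legal first crossings: Sail(M,C), Sail(C,C), Sail(C)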

  21. Ex: 8-puzzle • Initial state? • Goal test function? • Successor function definition? • Size of state space? • Cost function? • (Figure: 8-puzzle boards)

  22. Ex: 8-puzzle • (Diagram: a board state and the four successor states produced by Up, Down, Left, Right)

  23. Ex: 8-puzzle • Initial state: any reachable state • Goal test function: s = <1,2,3,4,5,6,7,8,*> • Action set: {Up, Down, Left, Right} • Successor function, given by the rules: 1: Up (Down) is applicable if some tile t is above (below) * 2: Left (Right) is applicable if some tile t is on the left (right) side of * 3: The effect of an action is to swap t and * • Size of state space: 9!/2 • Cost function: c(s,a,s') = 1
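
  A minimal Python sketch of these rules, with my own encoding of a state as a length-9 tuple in row-major order and 0 standing for the blank (* on the slide):

      def successors(state):
          # state: tuple of 9 entries in row-major order, 0 marks the blank (* on the slide)
          blank = state.index(0)
          row, col = divmod(blank, 3)
          moves = []                                    # (action, position of the tile t to swap with the blank)
          if row > 0: moves.append(("Up", blank - 3))       # some tile t is above the blank
          if row < 2: moves.append(("Down", blank + 3))     # some tile t is below the blank
          if col > 0: moves.append(("Left", blank - 1))     # some tile t is left of the blank
          if col < 2: moves.append(("Right", blank + 1))    # some tile t is right of the blank
          result = []
          for action, t in moves:
              s = list(state)
              s[blank], s[t] = s[t], s[blank]           # effect: swap t and the blank
              result.append((action, tuple(s)))
          return result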

  24. Ex: 8-puzzle • Q: why is the size of the state space 9!/2 and not 9!? • A: Only half of the possible configurations can reach the goal state • If the tiles are read from top to bottom, left to right, they form a permutation • e.g. the permutation of the state below is <1,2,3,4,5,7,8,6> • (Board, read row by row with the blank omitted: 1 2 3 4 5 7 8 6)

  25. Ex: 8-puzzle • Inversion: a pair of numbers in the permutation for which the bigger one comes before the smaller one • Number of inversions in <1,2,3,4,5,7,8,6>: 2, namely (7,6) and (8,6) • A permutation with an even (odd) number of inversions is called an even (odd) permutation [or is said to have even (odd) parity]

  26. Ex: 8-puzzle • The actions in the 8-puzzle preserve the parity of the permutation • Left, Right: obvious, no change to the permutation • Down (Up): the moved tile shifts 2 positions to the right (left) in the permutation, passing over two other tiles: • if both are smaller or both are larger than it, the number of inversions changes by 2 • if one is smaller and one is larger, the number of inversions is unchanged • Since the goal permutation has zero inversions (even parity), only the even permutations, i.e. half of all configurations, can reach the goal • (Illustration: generic permutation <A,B,C,D,E,F,G,H>)
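
  A small Python sketch of the inversion count and the resulting solvability test (the function names are my own; the test follows directly from the parity argument above):

      def inversions(perm):
          # count pairs that appear in the wrong order (bigger number before a smaller one)
          return sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])

      def solvable(board):
          # board: length-9 tuple in row-major order, 0 for the blank; solvable iff the parity is even
          return inversions([t for t in board if t != 0]) % 2 == 0

      print(inversions((1, 2, 3, 4, 5, 7, 8, 6)))    # 2: the pairs (7,6) and (8,6)
      print(solvable((1, 2, 3, 4, 5, 7, 8, 6, 0)))   # True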

  27. Ex: 4-queens • Initial state? • Goal test function? • Successor function definition? • Size of state space? • Cost function? • (Figure: a 4x4 board with four queens)

  28. Ex: 4-queens • Initial state: no queens on the board • Goal test function: All queens on the board, none can attack each other • Successor function: add a queen to an empty square • Size of state space: 16*15*14*13 / 4! = 1820 • Cost function: irrelevant! Can we do better?
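
  A quick check of this count (my own arithmetic, not on the slide): 16*15*14*13 = 43,680 and 43,680 / 4! = 43,680 / 24 = 1,820, i.e. the number of ways to choose 4 of the 16 squares for indistinguishable queens, C(16,4).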

  29. Tree Search Algorithms • Basic idea: • Assumes the state-space forms a tree (otherwise already-explored states may be regenerated) • Builds a search tree from the initial state and the successor function

  30. Tree search ex.: 4-queens • A state is given by the assignment of 4 variables r1, r2, r3, and r4 denoting the row number of the 4 queens placed in columns 1 to 4 • ri = 0 indicates that no row number has been assigned to queen i (i.e., queen i has not been placed on the board yet) • The queens are assigned in order r1 to r4

  31. Tree search ex.: 4-queens • Initial state s0: the root of the search tree, [0,0,0,0] • The fringe (also called the frontier or open list) initially contains only this root

  32. Tree search ex.: 4-queens • Expansion of [0,0,0,0] produces the children [1,0,0,0], [2,0,0,0], [3,0,0,0], [4,0,0,0] • The given search strategy chooses one of these leaf nodes on the fringe to expand next

  33. Tree search ex.: 4-queens • Expansion of [2,0,0,0] produces the children [2,1,0,0], [2,2,0,0], [2,3,0,0], [2,4,0,0] • Etc.

  34. Tree search ex.: 4-queens • Expansion of [2,0,0,0] with forward checking: children that place a queen on an attacked square (marked X in the diagram) are pruned • Etc.
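
  A minimal Python sketch of a successor function for this [r1,r2,r3,r4] encoding that already prunes rows attacked by previously placed queens, roughly the effect of the forward-checking step illustrated on the slide (names are my own):

      N = 4

      def attacks(r1, c1, r2, c2):
          # same row or same diagonal (the columns are distinct by construction)
          return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

      def successors(state):
          # state: tuple of N row numbers, 0 = not yet assigned; queens are placed column by column
          if 0 not in state:
              return []                                  # all queens placed, nothing to expand
          col = state.index(0)
          children = []
          for row in range(1, N + 1):
              if all(not attacks(row, col, state[c], c) for c in range(col)):
                  children.append(state[:col] + (row,) + state[col + 1:])
          return children

      print(successors((2, 0, 0, 0)))   # only the non-attacked row survives: [(2, 4, 0, 0)]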

  35. Tree search ex.: Routing

  36. Tree search ex.: Routing

  37. Tree search ex.: Routing

  38. Implementation: general tree search
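
  The pseudocode on this slide did not survive extraction; the following is a minimal Python sketch of the general tree-search loop it refers to, assuming the problem interface sketched under "Search Problem Definition" above (the fifo flag is my own shorthand for the choice of fringe discipline):

      from collections import deque

      def tree_search(problem, fifo=True):
          # Generic tree search: the fringe discipline determines the search strategy
          # (FIFO queue -> breadth-first, LIFO stack -> depth-first).
          fringe = deque([(problem.initial_state(), [])])    # entries: (state, list of actions so far)
          while fringe:
              state, path = fringe.popleft() if fifo else fringe.pop()
              if problem.is_goal(state):
                  return path                                # the action sequence that reaches a goal
              for action, child in problem.successors(state):
                  fringe.append((child, path + [action]))
          return None                                        # fringe exhausted: no solution found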

  39. States versus Nodes • A state is a (representation of) a physical configuration • A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(s), and depth • The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states
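
  A minimal sketch of such a node data structure and of Expand in Python, assuming the problem interface sketched earlier (field and function names are illustrative):

      from dataclasses import dataclass
      from typing import Any, Optional

      @dataclass
      class Node:
          state: Any
          parent: Optional["Node"] = None
          action: Any = None
          path_cost: float = 0.0       # g(s): cost of the path from the root to this node
          depth: int = 0

      def expand(node, problem):
          # create one child node per successor of node.state, filling in the fields
          return [Node(state=s2,
                       parent=node,
                       action=a,
                       path_cost=node.path_cost + problem.step_cost(node.state, a, s2),
                       depth=node.depth + 1)
                  for a, s2 in problem.successors(node.state)]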

  40. Search strategies • A search strategy is defined by picking the order of node expansion (= the sorting criterion of the fringe priority queue) • Strategies are evaluated along the following dimensions: • completeness: does it always find a solution if one exists? • time complexity: number of nodes generated • space complexity: maximum number of nodes in memory • optimality: does it always find a least-cost solution? • What about soundness of a strategy? • Time and space complexity are measured in terms of • b: maximum branching factor of the search tree • d: depth of the least-cost solution • m: maximum depth of the state space (may be infinite)

  41. Uninformed search strategies • Uninformed search strategies use only the information available in the problem definition • Breadth-first search (BFS) • Uniform-cost search • Depth-first search (DFS) • Backtracking search • Depth-limited search (DLS) • Iterative deepening search (IDS) • Bidirectional search

  42. Breadth-first search • Expand shallowest unexpanded node • Implementation: • fringe is a FIFO queue, i.e., new successors go at end

  43.-45. Breadth-first search • (Same text repeated for successive expansion steps: at each step the shallowest unexpanded node is expanded and its successors go at the end of the FIFO fringe)
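
  A minimal Python sketch of this FIFO-fringe breadth-first search, reusing the Node and expand sketches from the "States versus Nodes" slide above (the solution helper is my own addition):

      from collections import deque

      def breadth_first_search(problem):
          # expand the shallowest unexpanded node: FIFO fringe, new successors go at the end
          fringe = deque([Node(problem.initial_state())])
          while fringe:
              node = fringe.popleft()                    # shallowest node first
              if problem.is_goal(node.state):
                  return solution(node)
              fringe.extend(expand(node, problem))       # successors go at the end of the queue
          return None

      def solution(node):
          # recover the action sequence by following parent links back to the root
          actions = []
          while node.parent is not None:
              actions.append(node.action)
              node = node.parent
          return list(reversed(actions))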

  46. Properties of breadth-first search • Complete? • Time? • Space? • Optimal?

  47. Properties of breadth-first search • (Diagram: the levels of the search tree contain b^0, b^1, …, b^d nodes, plus up to b^(d+1) - b extra nodes generated at the last level)

  48. Properties of breadth-first search • Complete? Yes (if b is finite) • Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d - 1) = (1 - b^(d+2))/(1 - b) - b = O(b^(d+1)) • Space? O(b^(d+1)) (keeps every node in memory) • Optimal? Yes (if cost = 1 per step); all nodes at depth i are expanded before nodes at depth i+1, so an optimal solution will not be overlooked • Space is the bigger problem (more than time)
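
  As a quick numeric check of the closed form (the figures b = 10, d = 5 are my own example, not from the slides): 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111 nodes down to depth d, plus b(b^d - 1) = 10 * 99,999 = 999,990 extra generated children, giving 1,111,101 nodes in total; the closed form gives the same value, (1 - 10^7)/(1 - 10) - 10 = 1,111,111 - 10 = 1,111,101, which is on the order of b^(d+1) = 10^6.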

  49. Uniform-cost search • Expand least-cost unexpanded node • Implementation: • fringe = queue ordered by increasing path cost • Equivalent to breadth-first if step costs all equal

  50. Uniform-cost search • (Diagram: expanding the start node (g = 0) with step costs 75, 118, and 140 puts successors with g = 75, g = 118, and g = 140 on the fringe; the least-cost node, g = 75, is expanded next)
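
  A minimal uniform-cost search sketch in Python, assuming the Node, expand, and solution helpers from the sketches above; the fringe is a binary heap ordered by path cost g (the tie-breaking counter is only there to keep heap entries comparable):

      import heapq
      import itertools

      def uniform_cost_search(problem):
          # expand the least-cost unexpanded node: fringe ordered by increasing path cost g
          counter = itertools.count()                    # tie-breaker so the heap never compares Node objects
          root = Node(problem.initial_state())
          fringe = [(root.path_cost, next(counter), root)]
          while fringe:
              g, _, node = heapq.heappop(fringe)         # cheapest node first
              if problem.is_goal(node.state):
                  return solution(node)
              for child in expand(node, problem):
                  heapq.heappush(fringe, (child.path_cost, next(counter), child))
          return None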
