
Artificial Intelligence





Presentation Transcript


  1. Artificial Intelligence 3. Solving Problems By Searching

  2. Goal Formulation Given a situation, the agent should adopt a goal: the first step in problem solving. A goal is a set of world states, namely those in which the goal is satisfied. Actions cause transitions between world states. Problem Formulation The process of deciding what actions and states to consider; it follows goal formulation.

  3. Formulation of a simple problem-solving agent:

  function SIMPLE-PROBLEM-SOLVING-AGENT(p) returns an action
      inputs: p                 // percept
      static: seq               // action sequence, initially empty
              state             // description of the current world
              g                 // goal, initially null
              problem           // problem formulation

      state <- UPDATE-STATE(state, p)
      if seq is empty then
          g <- FORMULATE-GOAL(state)
          problem <- FORMULATE-PROBLEM(state, g)
          seq <- SEARCH(problem)
      action <- FIRST(seq)
      seq <- REST(seq)
      return action
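The control flow above can be rendered as runnable Python. This is a minimal sketch, not the slides' implementation: the four helpers (update-state, formulate-goal, formulate-problem, search) are passed in by the caller, and any concrete versions of them are hypothetical.

```python
def simple_problem_solving_agent(update_state, formulate_goal,
                                 formulate_problem, search):
    """Returns an agent function: plan once, then execute step by step."""
    state = None
    seq = []                              # action sequence, initially empty

    def agent(percept):
        nonlocal state, seq
        state = update_state(state, percept)
        if not seq:                       # no plan left: formulate and search
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            seq = list(search(problem))
        action, seq = seq[0], seq[1:]     # FIRST(seq) / REST(seq)
        return action

    return agent
```

Each call consumes one action from the plan; when the plan runs out, the agent formulates a new goal and searches again.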

  4. Examples (1): traveling. On holiday in Taiif • Formulate goal: be in Paris after two days • Formulate problem: states are the various cities; actions are driving/flying between cities • Find a solution: a sequence of cities, e.g., Taiif, Jeddah, Riyadh, Paris

  5. Examples (2): Vacuum World • 8 possible world states • 3 possible actions: Left / Right / Suck • Goal: clean up all the dirt = state 7 or state 8 • If the world is accessible, the agent's sensors give enough information to know exactly which state it is in (so it knows what each of its actions does), and it can calculate exactly which state it will be in after any sequence of actions: a single-state problem • If the world is inaccessible, the agent has only limited access to the world state (it may have no sensors at all); it knows only that the initial state is one of the set {1,2,3,4,5,6,7,8}: a multiple-state problem

  6. Problem Definition • Initial state • Operators: descriptions of the available actions • State space: all states reachable from the initial state by any sequence of actions • Path: a sequence of actions leading from one state to another • Goal test: a test the agent can apply to a single state description to determine whether it is a goal state • Path cost function: assigns a cost to a path, namely the sum of the costs of the individual actions along the path.
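The components above can be bundled into a small Python class. This is a sketch with names of my own choosing, not a standard API:

```python
class Problem:
    """A search problem: initial state, operators, goal test, step costs."""

    def __init__(self, initial, successors, is_goal,
                 step_cost=lambda s, a, t: 1):
        self.initial = initial
        self.successors = successors  # state -> iterable of (action, next_state)
        self.is_goal = is_goal        # goal test applied to a single state
        self.step_cost = step_cost    # c(state, action, next_state)

    def path_cost(self, path):
        """Cost of a path = sum of the individual step costs along it."""
        total = 0
        for (s, a, t) in path:
            total += self.step_cost(s, a, t)
        return total
```

The state space is then implicit: everything reachable from `initial` by repeatedly applying `successors`.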

  7. Vacuum World • States: S1, S2, S3, S4, S5, S6, S7, S8 • Operators: Go Left, Go Right, Suck • Goal test: no dirt left in either square • Path cost: each action costs 1.

  8. Real-world problems • Route finding • Routing in computer networks • Automated travel advisory systems • Airline travel planning systems • Goal: the best path between the origin and the destination • Travelling Salesperson Problem (TSP) • A famous touring problem in which each city must be visited exactly once • Goal: the shortest tour

  9. [Figure: partial search trees for a route-finding problem: (a) the initial state, Mecca; (b) after expanding Mecca into Jeddah, Taiif, Medina, Al-Qassim, and Riyadh]

  10. Data Structure for Search Tree • DataType Node: data structure with 5 components • Components: • State • Parent-node • Operator • Depth • Path-cost
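The five-component node type can be written as a small Python dataclass. The `child_node` helper is my addition, showing how Depth and Path-cost are filled in from the parent:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """Search-tree node with the five components listed on the slide."""
    state: Any
    parent: Optional["Node"] = None
    operator: Optional[str] = None   # the action that generated this node
    depth: int = 0
    path_cost: float = 0.0

def child_node(parent, operator, state, step_cost=1):
    """Build a child node, deriving depth and path cost from the parent."""
    return Node(state, parent, operator,
                depth=parent.depth + 1,
                path_cost=parent.path_cost + step_cost)
```

Following `parent` pointers from a goal node back to the root recovers the solution path.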

  11. Search Strategies Strategies are evaluated along 4 criteria: • Completeness: does it always find a solution when one exists? • Time complexity: how long does it take to find a solution? • Space complexity: how much memory does it need to perform the search? • Optimality: does the strategy find the highest-quality solution?

  12. Example: vacuum world • Single-state, start in #5. Solution?

  13. Example: vacuum world • Single-state, start in #5. Solution? [Right, Suck] • Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution?

  14. Example: vacuum world • Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck] • Contingency • Nondeterministic: Suck may dirty a clean carpet • Partially observable: location, dirt at current location • Percept: [L, Clean], i.e., start in #5 or #7. Solution?

  15. Example: vacuum world • Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck] • Contingency • Nondeterministic: Suck may dirty a clean carpet • Partially observable: location, dirt at current location • Percept: [L, Clean], i.e., start in #5 or #7. Solution? [Right, if dirt then Suck]
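The sensorless trace above can be checked mechanically. Below is one possible encoding of the 8 vacuum states as (location, set-of-dirty-squares) pairs; the numbering follows the slides' conventions (7 and 8 are the clean states, Right maps everything into {2,4,6,8}), but the exact odd/even assignment and helper names are my reconstruction:

```python
# Encode the 8 vacuum-world states as (location, frozenset of dirty squares).
# Odd-numbered states have the agent on the Left, even-numbered on the Right;
# states 7 and 8 are the two goal (all-clean) states.
STATES = {
    1: ("L", frozenset("LR")), 2: ("R", frozenset("LR")),
    3: ("L", frozenset("R")),  4: ("R", frozenset("R")),
    5: ("L", frozenset("L")),  6: ("R", frozenset("L")),
    7: ("L", frozenset()),     8: ("R", frozenset()),
}
NUM = {v: k for k, v in STATES.items()}

def step(state, action):
    """Deterministic transition for one of the 8 numbered states."""
    loc, dirt = STATES[state]
    if action == "Left":
        loc = "L"
    elif action == "Right":
        loc = "R"
    elif action == "Suck":
        dirt = dirt - {loc}          # remove dirt at the current square
    return NUM[(loc, dirt)]

def belief_update(belief, action):
    """Apply an action to every state the agent might be in."""
    return {step(s, action) for s in belief}

belief = set(STATES)                 # sensorless: could be in any state
for a in ["Right", "Suck", "Left", "Suck"]:
    belief = belief_update(belief, a)
print(belief)                        # prints {7}: clean, agent on the Left
```

Running the plan [Right, Suck, Left, Suck] shrinks the belief state from all 8 states down to the single goal state 7, confirming the slide's answer.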

  16. Vacuum world state space graph • states? • actions? • goal test? • path cost?

  17. Vacuum world state space graph • States? Integer dirt and robot locations • Actions? Left, Right, Suck • Goal test? No dirt at all locations • Path cost? 1 per action

  18. Example: The 8-puzzle • states? • actions? • goal test? • path cost?

  19. Example: The 8-puzzle • States? Locations of tiles • Actions? Move blank left, right, up, down • Goal test? = goal state (given) • Path cost? 1 per move [Note: finding an optimal solution for the n-puzzle family is NP-hard]
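The 8-puzzle formulation above can be sketched directly. The state encoding (a tuple of 9 entries, 0 for the blank) and the helper names are my own choices:

```python
# Actions move the blank; a delta is the index shift on the flattened 3x3 board.
MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def successors(state):
    """Yield (action, next_state) pairs for the legal blank moves."""
    b = state.index(0)                    # position of the blank
    row, col = divmod(b, 3)
    for action, delta in MOVES.items():
        if action == "Up" and row == 0:
            continue
        if action == "Down" and row == 2:
            continue
        if action == "Left" and col == 0:
            continue
        if action == "Right" and col == 2:
            continue
        t = b + delta
        s = list(state)
        s[b], s[t] = s[t], s[b]           # slide the tile into the blank
        yield action, tuple(s)

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def is_goal(state):
    return state == GOAL
```

With a path cost of 1 per move, plugging these into any of the search strategies below gives an 8-puzzle solver.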

  20. Example: robotic assembly • States? Real-valued coordinates of robot joint angles and parts of the object to be assembled • Actions? Continuous motions of robot joints • Goal test? Complete assembly • Path cost? Time to execute

  21. Tree search algorithms • Basic idea: • offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states)

  22. Example: Romania

  23. Single-state problem formulation A problem is defined by four items: • initial state, e.g., "at Arad" • actions or successor function S(x) = set of action–state pairs • e.g., S(Arad) = {<Arad → Zerind, Zerind>, …} • goal test, which can be • explicit, e.g., x = "at Bucharest" • implicit, e.g., Checkmate(x) • path cost (additive) • e.g., sum of distances, number of actions executed, etc. • c(x,a,y) is the step cost, assumed to be ≥ 0 • A solution is a sequence of actions leading from the initial state to a goal state
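As a concrete instance, the four items can be written down for a fragment of the Romania map (distances follow the usual AIMA figure; the helper names are mine):

```python
# Road distances for a fragment of the Romania map (AIMA figure values).
ROADS = {
    ("Arad", "Zerind"): 75, ("Arad", "Sibiu"): 140,
    ("Arad", "Timisoara"): 118, ("Sibiu", "Fagaras"): 99,
    ("Sibiu", "Rimnicu Vilcea"): 80, ("Rimnicu Vilcea", "Pitesti"): 97,
    ("Fagaras", "Bucharest"): 211, ("Pitesti", "Bucharest"): 101,
}
GRAPH = {}
for (a, b), d in ROADS.items():       # roads are two-way
    GRAPH.setdefault(a, {})[b] = d
    GRAPH.setdefault(b, {})[a] = d

initial_state = "Arad"

def actions(x):
    """S(x): the set of action-state pairs available in city x."""
    return [("Go(%s)" % y, y) for y in GRAPH[x]]

def goal_test(x):
    """Explicit goal test: x = "at Bucharest"."""
    return x == "Bucharest"

def step_cost(x, a, y):
    """c(x, a, y): road distance, always >= 0."""
    return GRAPH[x][y]
```

A solution such as Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest then has an additive path cost of 140 + 80 + 97 + 101 = 418.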

  24. Selecting a state space • The real world is very complex, so the state space must be abstracted for problem solving • (Abstract) state = set of real states • (Abstract) action = complex combination of real actions • e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc. • For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind" • (Abstract) solution = set of real paths that are solutions in the real world • Each abstract action should be "easier" than the original problem

  25. Tree search example


  28. Implementation: general tree search

  29. Implementation: states vs. nodes • A state is a (representation of) a physical configuration • A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), and depth • The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.

  30. Search strategies • A search strategy is defined by picking the order of node expansion • Strategies are evaluated along the following dimensions: • completeness: does it always find a solution if one exists? • time complexity: number of nodes generated • space complexity: maximum number of nodes in memory • optimality: does it always find a least-cost solution? • Time and space complexity are measured in terms of • b: maximum branching factor of the search tree • d: depth of the least-cost solution • m: maximum depth of the state space (may be ∞)

  31. Uninformed search strategies • Uninformed search strategies use only the information available in the problem definition • Breadth-first search • Uniform-cost search • Depth-first search • Depth-limited search • Iterative deepening search

  32. Breadth-first search • Expand shallowest unexpanded node • Implementation: • fringe is a FIFO queue, i.e., new successors go at end


  36. Properties of breadth-first search • Complete? Yes (if b is finite) • Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d - 1) = O(b^(d+1)) • Space? O(b^(d+1)) (keeps every node in memory) • Optimal? Yes (if cost = 1 per step) • Space is the bigger problem (more than time)

  37. Breadth-first search tree sample Branching factor: the number of child nodes generated by a parent node (here called "b"). [Figure: sample tree with b = 2, shown after 0, 1, 2, and 3 expansions]

  38. Breadth First Complexity • The root generates b new nodes • Each of those generates b more nodes • So the maximum number of nodes expanded before finding a solution at level d is: 1 + b + b^2 + b^3 + … + b^d • The complexity is exponential: O(b^d)

  39. Breadth First Algorithm

  void breadth() {
      queue = [];                       // initialize the empty queue
      state = root_node;                // initialize the start state
      while (!is_goal(state)) {
          if (!visited(state)) {
              add_to_back_of_queue(successors(state));
              mark_visited(state);
          }
          if (queue is empty) return FAILURE;
          state = queue[0];             // state = first item in queue
          remove_first_item_from(queue);
      }
      return SUCCESS;
  }
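For comparison, here is a runnable Python version of the same idea. It differs slightly from the pseudocode above in that it queues whole paths rather than single states, so it can return the solution path; the function names are mine:

```python
from collections import deque

def breadth_first(root, successors, is_goal):
    """Breadth-first search: returns the path of states to a goal, or None.

    The fringe is a FIFO queue (new successors go at the end), and
    already-visited states are skipped, as in the pseudocode above.
    """
    frontier = deque([[root]])        # queue of paths, shallowest first
    visited = {root}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                       # queue exhausted: FAILURE
```

On the routing example below, this returns the first goal path found, S → A → G, which breadth-first search then has to improve on by continuing the search if a cheaper solution is wanted.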

  40. Time and memory requirements in breadth-first search. Assume branching factor b = 10, 1000 nodes explored per second, and 100 bytes per node. [The per-depth time/memory table was shown as a figure.]
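The entries of such a table can be reproduced with a few lines of arithmetic. This is my own sketch, using the geometric sum 1 + b + b^2 + … + b^d from the previous slide:

```python
def bfs_cost(b, d, nodes_per_sec=1000, bytes_per_node=100):
    """Nodes generated up to depth d, plus the time and memory they need."""
    nodes = sum(b**i for i in range(d + 1))   # 1 + b + b^2 + ... + b^d
    return nodes, nodes / nodes_per_sec, nodes * bytes_per_node

nodes, secs, mem = bfs_cost(10, 4)
print(nodes)       # 11111 nodes at depth 4
print(secs)        # about 11 seconds
print(mem)         # about 1.1 megabytes
```

Pushing d higher makes the exponential blow-up vivid: at depth 8 the same parameters give over 10^8 nodes, more than a day of search and about 11 gigabytes of memory, which is why space is the bigger problem.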

  41. Example (Routing Problem) [Figure: graph with start S, goal G, and intermediate nodes A, B, C; edge costs S→A = 1, S→B = 5, S→C = 15, A→G = 10, B→G = 5, C→G = 5]

  42. Solution of the Routing problem using Breadth-first • Start: Sol = empty and Cost = infinity • Expanding S gives A (cost 1), B (cost 5), and C (cost 15) • Via A, G is reached with cost 11: Sol = {S,A,G} and Cost = 11 • Via B, G is reached with cost 10; a new solution is found, better than the current one: Sol = {S,B,G} and Cost = 10 • Via C, G is reached with cost 20; a new solution is found but not better than what we have • Final: Sol = {S,B,G} and Cost = 10

  43. Solution of the Routing problem using Uniform Cost search • Start: Sol = empty and Cost = infinity • Nodes are expanded in order of path cost: S (0), then A (1), then B (5) • Via A, G is reached with cost 11: Sol = {S,A,G} and Cost = 11 • Via B, G is reached with cost 10; a new solution is found, better than the current one: Sol = {S,B,G} and Cost = 10 • C will not be expanded, as its cost (15) is greater than that of the current solution

  44. Uniform-cost search • Expand the least-cost unexpanded node • Implementation: • fringe = queue ordered by path cost • Equivalent to breadth-first if step costs are all equal • Complete? Yes, if step cost ≥ ε • Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution • Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉) • Optimal? Yes: nodes are expanded in increasing order of g(n)
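A runnable sketch of uniform-cost search on the routing example. The edge costs, including C→G = 5, are inferred from the traces above; the `best` table (cheapest known cost per state) and function names are my own:

```python
import heapq

def uniform_cost(start, goal, edges):
    """Uniform-cost search: expand the least-g unexpanded node first.

    `edges` maps a state to {successor: step_cost}.
    Returns (cost, path) for the cheapest route, or (inf, None).
    """
    frontier = [(0, start, [start])]      # priority queue ordered by g
    best = {}                             # cheapest known g per state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if g > best.get(state, float("inf")):
            continue                      # a cheaper route superseded this one
        for nxt, cost in edges.get(state, {}).items():
            new_g = g + cost
            if new_g < best.get(nxt, float("inf")):
                best[nxt] = new_g
                heapq.heappush(frontier, (new_g, nxt, path + [nxt]))
    return float("inf"), None

EDGES = {"S": {"A": 1, "B": 5, "C": 15},
         "A": {"G": 10}, "B": {"G": 5}, "C": {"G": 5}}
print(uniform_cost("S", "G", EDGES))      # prints (10, ['S', 'B', 'G'])
```

As in the slide's trace, the search returns S → B → G with cost 10 and never expands C, whose path cost of 15 already exceeds the cost of the solution found.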

  45. Depth-first search • Expand deepest unexpanded node • Implementation: • fringe = LIFO queue, i.e., put successors at front
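A minimal Python sketch of depth-first search with the LIFO fringe described above. The optional `limit` parameter turns it into the depth-limited search listed earlier; the names and the cycle check along the current path are mine:

```python
def depth_first(root, successors, is_goal, limit=None):
    """Depth-first search; returns a path of states to a goal, or None.

    The fringe is a LIFO stack, so the deepest (most recently generated)
    node is always expanded next. `limit` caps the search depth.
    """
    stack = [[root]]                      # stack of paths
    while stack:
        path = stack.pop()                # deepest path first
        state = path[-1]
        if is_goal(state):
            return path
        if limit is not None and len(path) - 1 >= limit:
            continue                      # cut off: do not expand deeper
        for nxt in successors(state):
            if nxt not in path:           # avoid cycling along this path
                stack.append(path + [nxt])
    return None
```

Unlike breadth-first search, this keeps only the current branch (plus siblings) in memory, but it is neither complete on infinite state spaces nor guaranteed to find the shallowest solution.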

