State-Space Representation and Production Systems
• Introduction: what is state-space representation? (E. Rich, Chapt. 2)
• Basic search methods. (Winston, Chapt. 4 + Russell & Norvig)
• Optimal-path search methods. (Winston, Chapt. 5 + Russell & Norvig)
• Advanced variants. (Rich, Chapt. 3 + Russell & Norvig + Nilsson)
• Game playing. (Winston, Chapt. 6 + Rich + Russell & Norvig)
State-Space Representation: Introduction
• What is state-space representation?
• What technical issues arise in that context?
Example: the 8-puzzle
• Given: a board situation for the 8-puzzle:

  1 3 8
  2   7
  5 4 6

• Problem: find a sequence of moves (allowed under the rules of the 8-puzzle game) that transforms this board situation into a desired goal situation:

  1 2 3
  8   4
  7 6 5
State-space representation: general outline
• Select some way to represent states in the problem in an unambiguous way.
• Formulate all actions that can be performed in states:
  • including their preconditions and effects
  • == PRODUCTION RULES
• Represent the initial state(s).
• Formulate precisely when a state satisfies the goal of our problem.
• Activate the production rules on the initial state and its descendants, until a goal state is reached (see the sketch below).
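To make the outline concrete, here is a minimal Python sketch of such a production-system loop (the function and parameter names are illustrative, not from the course notes); it uses a simple breadth-first control strategy, one of the options discussed later.

  from collections import deque

  def run_production_system(initial_state, rules, is_goal):
      # rules: a list of (precondition, effect) pairs, where precondition(state)
      # tests whether the rule applies and effect(state) returns the successor.
      # States are assumed to be hashable.
      frontier = deque([initial_state])      # states waiting to be expanded
      seen = {initial_state}                 # avoid revisiting states
      while frontier:
          state = frontier.popleft()
          if is_goal(state):                 # goal criterion satisfied?
              return state
          for precondition, effect in rules:
              if precondition(state):        # matching: is the rule applicable?
                  successor = effect(state)  # apply the production rule
                  if successor not in seen:
                      seen.add(successor)
                      frontier.append(successor)
      return None                            # no goal state reachable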
Initial issues to solve
• How to represent states? (repr. 1)
  • Ex.: using a 3 x 3 matrix:

    1 3 8
    2   7
    5 4 6

• How to formulate production rules? (repr. 2)
  • Ex.: express how/when squares may be moved? Or: express how/when the blank space is moved?
• When is a rule applicable to a state? (matching)
• How to formulate when the goal criterion is satisfied, and how to verify that it is?
• How/which rules to activate? (control)
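As an illustration (a sketch, not the notes' own code): the board can be stored as a 3 x 3 tuple of tuples with 0 for the blank square, and the production rules can all be phrased as "slide a neighbouring tile into the blank".

  GOAL = ((1, 2, 3),
          (8, 0, 4),    # 0 represents the blank square
          (7, 6, 5))

  def successors(state):
      # Yield every board reachable by one legal move (moving the blank).
      r, c = next((i, j) for i in range(3) for j in range(3) if state[i][j] == 0)
      for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):    # up, down, left, right
          nr, nc = r + dr, c + dc
          if 0 <= nr < 3 and 0 <= nc < 3:                  # precondition: stay on the board
              board = [list(row) for row in state]
              board[r][c], board[nr][nc] = board[nr][nc], board[r][c]   # effect: swap
              yield tuple(tuple(row) for row in board)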
The (implicit) search tree
• Each state-space representation defines a search tree.
• [Figure: part of the 8-puzzle search tree, from the initial board down to the goal; the full state space contains 9!/2 = 181,440 nodes.]
• But this tree is only IMPLICITLY available!!
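The "implicit" nature of the tree can be shown in a small sketch (reusing the successors function above; again only an illustration): nodes exist only as the results of successor calls, never as a stored data structure.

  def implicit_tree(state, depth):
      # Enumerate the nodes of the implicit search tree down to a given depth,
      # generating each node on demand rather than storing the whole tree.
      yield state
      if depth > 0:
          for child in successors(state):
              yield from implicit_tree(child, depth - 1)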
A second example: chess
• Problem: develop a program that plays chess (well).
• 1. A way to represent board situations in an unambiguous way:
  • Ex.: a list of (piece, rank, file) triples:
    ((king_black, 8, C), (knight_black, 7, B), (pawn_black, 7, G), (pawn_black, 5, F), (pawn_white, 2, H), (king_white, 1, E))
  • [Figure: the corresponding board position, ranks 1-8, files A-H.]
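In Python this list representation might look as follows (a sketch; reading each triple as (piece, rank, file) is an assumption based on the slide):

  state = [("king_black",   8, "C"), ("knight_black", 7, "B"),
           ("pawn_black",   7, "G"), ("pawn_black",   5, "F"),
           ("pawn_white",   2, "H"), ("king_white",   1, "E")]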
Chess (2)
• 2. Describe the rules that represent allowed moves:
  • Ex.: the rule that lets a white pawn advance two squares from its starting rank:
    precondition: ((pawn_white, 2, x), (blank, 3, x), (blank, 4, x))
    effects: add((pawn_white, 4, x)), remove((pawn_white, 2, x))
Chess (3)
• 3. Provide a way to check whether a rule is applicable to some state: the matching mechanism!!
  • Ex.: matching the precondition ((pawn_white, 2, x), (blank, 3, x), (blank, 4, x)) against the state ((king_black, 8, C), (knight_black, 7, B), (pawn_black, 7, G), (pawn_black, 5, F), (pawn_white, 2, H), (king_white, 1, E)) succeeds with x = H, so add((pawn_white, 4, x)), remove((pawn_white, 2, x)) can be applied.
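A hedged sketch of how the pawn rule and the matching step could be coded against the list representation above (the helper names occupied and pawn_double_step are my own, not from the slides):

  def occupied(state, rank, file):
      # True if some piece in the state occupies the square (rank, file).
      return any(r == rank and f == file for _, r, f in state)

  def pawn_double_step(state):
      # Match the precondition ((pawn_white,2,x), (blank,3,x), (blank,4,x))
      # against the state; for every binding of x, apply the add/remove effects.
      for piece, rank, file in state:
          if (piece == "pawn_white" and rank == 2
                  and not occupied(state, 3, file)
                  and not occupied(state, 4, file)):
              successor = [p for p in state if p != (piece, rank, file)]   # remove(...)
              successor.append(("pawn_white", 4, file))                    # add(...)
              yield file, successor

  # For the example state above, matching succeeds exactly once, with x = "H":
  # list(pawn_double_step(state)) yields the state with the pawn moved to H4.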
Chess (4)
• 4. How to specify a state in which the goal is reached (= a winning state):
  • Ex.: win(black) ← attacked(king_white) and no_legal_move(king_white)
  • + similar definitions for: attacked(Piece), ..., no_legal_move(Piece), ...
Chess (5)
• 5. A way to verify whether a winning state is reached.
  • Ex.: pose the query ?- win(black). against rules such as
    win(black) ← attacked(king_white) and no_legal_move(king_white)
    ...
  • Need a theorem prover (e.g. Prolog) to verify that the state is a winning one.
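A minimal sketch of the winning-state test in the same Python style (the two sub-predicates are left as stubs; deriving them from the chess rules is exactly the theorem-proving work mentioned above):

  def attacked(state, piece):
      # Stub: should hold when some opposing piece can capture `piece` in one move.
      raise NotImplementedError

  def no_legal_move(state, piece):
      # Stub: should hold when `piece` has no move that escapes the attack.
      raise NotImplementedError

  def win_black(state):
      # win(black)  <-  attacked(king_white) and no_legal_move(king_white)
      return attacked(state, "king_white") and no_legal_move(state, "king_white")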
Chess (6)
• 6. The initial state.
• 7. A mechanism that selects in each state an appropriate rule to apply: the control problem! Main focus of this entire chapter of the course!!
• The result: a program that is ABLE to play chess.
Chess (7)
• The implicit search tree: ~15 possible moves in each position, so ~15 nodes after move 1, ~(15)^2 after move 2, ~(15)^3 after move 3, ...
• Need very efficient search techniques to find good paths in such combinatorial trees.
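For a sense of scale (my own back-of-the-envelope figures, not from the slides): with roughly 15 legal moves per position, the implicit tree already holds 15^3 = 3,375 nodes after three moves and about 15^10 ≈ 5.8 x 10^11 after ten, which is why exhaustive enumeration is hopeless.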
Very many issues and trade-offs:
• 1. How to choose the rules?
• 2. Should we search through the implicit tree or through an implicit graph?
• 3. Do we need an optimal solution, or just any solution? ('optimal path problems')
• 4. Can we decompose states into components on which simple rules can operate in an independent way? (problem reduction or decomposability)
• 5. Should we search forwards from the initial state, or backwards from a goal state?