
Review & Finish CSPs Begin Logical Agents






Presentation Transcript


  1. Review & Finish CSPs Begin Logical Agents

  2. Constraint Satisfaction Problems • What is a CSP? • Finite set of variables X1, X2, …, Xn • Nonempty domain of possible values for each variable D1, D2, …, Dn • Finite set of constraints C1, C2, …, Cm • Each constraint Ci limits the values that variables can take, • e.g., X1 ≠ X2 • Each constraint Ci is a pair <scope, relation> • Scope = tuple of variables that participate in the constraint. • Relation = list of allowed combinations of variable values. May be an explicit list of allowed combinations, or an abstract relation that supports membership testing and listing. • CSP benefits • Standard representation pattern • Generic goal and successor functions • Generic heuristics (no domain-specific expertise).

  3. CSPs --- what is a solution? • A state is an assignment of values to some or all variables. • An assignment is complete when every variable has a value. • An assignment is partial when some variables have no values. • Consistent assignment • assignment does not violate the constraints • A solution to a CSP is a complete and consistent assignment. • Some CSPs require a solution that maximizes an objective function.

  4. CSP as a standard search problem • A CSP can easily be expressed as a standard search problem. • Incremental formulation • Initial State: the empty assignment {} • Actions (3rd ed.), Successor function (2nd ed.): Assign a value to an unassigned variable provided that it does not violate a constraint • Goal test: the current assignment is complete (by construction it is consistent) • Path cost: constant cost for every step (not really relevant) • Can also use complete-state formulation • Local search techniques (Chapter 4) tend to work well

  5. Improving CSP efficiency • Previous improvements on uninformed search → introduce heuristics • For CSPs, general-purpose methods can give large gains in speed, e.g., • Which variable should be assigned next? • In what order should its values be tried? • Can we detect inevitable failure early? • Can we take advantage of problem structure? Note: CSPs are somewhat generic in their formulation, and so the heuristics are more general compared to the methods in Chapter 4

  6. Minimum remaining values (MRV) var ← SELECT-UNASSIGNED-VARIABLE(VARIABLES[csp], assignment, csp) • A.k.a. most constrained variable heuristic • Heuristic Rule: choose the variable with the fewest legal values • e.g., will immediately detect failure if X has no legal values

  7. Degree heuristic for the initial variable • Heuristic Rule: select variable that is involved in the largest number of constraints on other unassigned variables. • Degree heuristic can be useful as a tie breaker. • In what order should a variable’s values be tried?
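The two ordering rules above (MRV, with the degree heuristic as a tie-breaker) can be sketched in a few lines. This is an illustrative helper, not from the slides: `domains` maps each variable to its remaining legal values, and `neighbors` maps each variable to the variables it shares constraints with.

```python
# Variable ordering sketch: minimum remaining values (MRV), breaking ties
# by the degree heuristic (most constraints on other unassigned variables).

def select_unassigned_variable(domains, neighbors, assignment):
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned,
               key=lambda v: (len(domains[v]),                # MRV: fewest legal values
                              -sum(1 for n in neighbors[v]    # degree: most unassigned
                                   if n not in assignment)))  #   constraint neighbors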

  8. Least constraining value for value-ordering • Least constraining value heuristic • Heuristic Rule: given a variable choose the least constraining value • leaves the maximum flexibility for subsequent variable assignments
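The least-constraining-value rule can be sketched as an ordering function: prefer the value that eliminates the fewest options from neighboring domains. This assumes not-equal constraints as in map coloring; the helper name is an assumption.

```python
# Value ordering sketch: least constraining value first, where a value's
# "cost" is the number of neighbor domain entries it would rule out
# (assuming a not-equal constraint on every arc).

def order_domain_values(var, domains, neighbors):
    def conflicts(value):
        return sum(1 for n in neighbors[var] if value in domains[n])
    return sorted(domains[var], key=conflicts)
```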

  9. Forward checking • Assign {Q=green} • Effects on other variables connected by constraints with Q • NT can no longer be green • NSW can no longer be green • SA can no longer be green • MRV heuristic would automatically select NT or SA next

  10. Forward checking • If V is assigned blue • Effects on other variables connected by constraints with V • NSW can no longer be blue • SA is empty • FC has detected that the partial assignment is inconsistent with the constraints and backtracking can occur.
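The pruning step on these two slides can be sketched directly: after assigning `var = value`, delete that value from every unassigned neighbor's domain; an emptied domain is the wipeout that triggers backtracking. This again assumes not-equal constraints, and the function name is illustrative.

```python
# Forward checking sketch: prune the assigned value from unassigned
# neighbors; report failure if any neighbor's domain becomes empty.

def forward_check(var, value, domains, neighbors, assignment):
    pruned = {}
    for n in neighbors[var]:
        if n not in assignment and value in domains[n]:
            domains[n] = domains[n] - {value}
            pruned[n] = value
            if not domains[n]:          # domain wipeout: inconsistency detected
                return False, pruned
    return True, pruned
```

On the slide's example, assigning V=blue when SA's domain is already just {blue} empties SA and returns failure.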

  11. Arc consistency • An arc X → Y is consistent if for every value x of X there is some value y of Y consistent with x (note that this is a directed property) • Consider the state of the search after WA and Q are assigned: SA → NSW is consistent if SA=blue and NSW=red

  12. Arc consistency • X → Y is consistent if for every value x of X there is some value y consistent with x • NSW → SA is consistent if NSW=red and SA=blue, but for NSW=blue there is no remaining value of SA

  13. Arc consistency • Can enforce arc consistency: the arc can be made consistent by removing blue from NSW • Continue to propagate constraints… • Check V → NSW • Not consistent for V = red • Remove red from V

  14. Arc consistency • Continue to propagate constraints… • SA → NT is not consistent • and cannot be made consistent • Arc consistency detects failure earlier than FC

  15. Arc consistency algorithm (AC-3)
  function AC-3(csp) returns the CSP, possibly with reduced domains
    inputs: csp, a binary CSP with variables {X1, X2, …, Xn}
    local variables: queue, a queue of arcs, initially all the arcs in csp
    while queue is not empty do
      (Xi, Xj) ← REMOVE-FIRST(queue)
      if REMOVE-INCONSISTENT-VALUES(Xi, Xj) then
        for each Xk in NEIGHBORS[Xi] do
          add (Xk, Xi) to queue

  function REMOVE-INCONSISTENT-VALUES(Xi, Xj) returns true iff we remove a value
    removed ← false
    for each x in DOMAIN[Xi] do
      if no value y in DOMAIN[Xj] allows (x, y) to satisfy the constraint between Xi and Xj
        then delete x from DOMAIN[Xi]; removed ← true
    return removed
  (from Mackworth, 1977)
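A direct Python transcription of the AC-3 pseudocode might look as follows. It is a sketch under the assumption that one binary relation `constraint(x, y)` applies to every arc (true for map coloring); it returns False if some domain is wiped out.

```python
from collections import deque

def ac3(domains, neighbors, constraint):
    """Enforce arc consistency; mutates domains in place."""
    queue = deque((xi, xj) for xi in domains for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, constraint):
            if not domains[xi]:
                return False                  # domain wipeout: no solution
            for xk in neighbors[xi]:
                if xk != xj:
                    queue.append((xk, xi))    # re-check arcs into Xi
    return True

def revise(domains, xi, xj, constraint):
    """REMOVE-INCONSISTENT-VALUES: drop values of Xi with no support in Xj."""
    removed = False
    for x in set(domains[xi]):
        if not any(constraint(x, y) for y in domains[xj]):
            domains[xi].discard(x)
            removed = True
    return removed
```

For example, with domains A = {1, 2}, B = {2} and a not-equal constraint, AC-3 reduces A to {1}.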

  16. K-consistency • Arc consistency does not detect all inconsistencies: • Partial assignment {WA=red, NSW=red} is inconsistent. • Stronger forms of propagation can be defined using the notion of k-consistency. • A CSP is k-consistent if for any set of k-1 variables and for any consistent assignment to those variables, a consistent value can always be assigned to any kth variable. • E.g. 1-consistency = node-consistency • E.g. 2-consistency = arc-consistency • E.g. 3-consistency = path-consistency • Strongly k-consistent: • k-consistent for all values {k, k-1, …, 2, 1}

  17. Local search for CSPs
  function MIN-CONFLICTS(csp, max_steps) returns solution or failure
    inputs: csp, a constraint satisfaction problem
            max_steps, the number of steps allowed before giving up
    current ← an initial complete assignment for csp
    for i = 1 to max_steps do
      if current is a solution for csp then return current
      var ← a randomly chosen, conflicted variable from VARIABLES[csp]
      value ← the value v for var that minimizes CONFLICTS(var, v, current, csp)
      set var = value in current
    return failure
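The MIN-CONFLICTS pseudocode above translates almost line for line into Python. This sketch assumes the caller supplies a `conflicts(var, val, current)` function counting violated constraints; the seed parameter is an addition for reproducibility.

```python
import random

def min_conflicts(variables, domains, conflicts, max_steps, seed=0):
    """Local search: repeatedly move a conflicted variable to its
    minimum-conflict value. Returns a solution dict or None on failure."""
    rng = random.Random(seed)
    # start from a random complete assignment
    current = {v: rng.choice(sorted(domains[v])) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if conflicts(v, current[v], current) > 0]
        if not conflicted:
            return current                    # no conflicts: solution found
        var = rng.choice(conflicted)
        current[var] = min(sorted(domains[var]),
                           key=lambda val: conflicts(var, val, current))
    return None                               # gave up after max_steps
```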

  18. Graph structure and problem complexity • Solving disconnected subproblems • Suppose each subproblem has c variables out of a total of n. • Worst case solution cost is O((n/c) · d^c), i.e. linear in n • Instead of O(d^n), exponential in n • E.g. n = 80, c = 20, d = 2 • 2^80 ≈ 4 billion years at 10 million nodes/sec • 4 · 2^20 ≈ 0.4 seconds at 10 million nodes/sec

  19. Tree-structured CSPs • Theorem: • if a constraint graph has no loops then the CSP can be solved in O(n d^2) time • linear in the number of variables! • Compare with the general CSP, where the worst case is O(d^n)

  20. Algorithm for Solving Tree-structured CSPs • Choose some variable as root, order variables from root to leaves such that every node’s parent precedes it in the ordering. • Label the variables X1 to Xn • Every variable except the root now has exactly 1 parent • Backward Pass • For j from n down to 2, apply arc consistency to the arc (Parent(Xj), Xj) • Remove values from Parent(Xj) if needed • Forward Pass • For j from 1 to n, assign Xj consistently with Parent(Xj)
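The backward and forward passes above can be sketched as one function. This is an illustrative implementation, not from the slides: it assumes the tree is given as a topological order plus parent pointers, that every domain is nonempty, and that a single binary test `constraint(x, y)` applies to every arc.

```python
def solve_tree_csp(order, parent, domains, constraint):
    """Solve a tree-structured CSP in O(n d^2): backward arc-consistency
    pass from leaves to root, then a forward assignment pass."""
    # Backward pass: make each arc (Parent(Xj), Xj) consistent, j = n..2.
    for x in reversed(order[1:]):
        p = parent[x]
        domains[p] = {vp for vp in domains[p]
                      if any(constraint(vp, vx) for vx in domains[x])}
        if not domains[p]:
            return None                  # a domain emptied: no solution
    # Forward pass: every remaining value is guaranteed a consistent child.
    assignment = {order[0]: next(iter(domains[order[0]]))}
    for x in order[1:]:
        assignment[x] = next(v for v in domains[x]
                             if constraint(assignment[parent[x]], v))
    return assignment
```

If the backward pass succeeds, the `next(...)` in the forward pass can never fail, which is exactly the correctness argument given on the complexity slide below.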

  21. Tree CSP Example [figure: tree-structured constraint graph with color domains (G, B) at each node]

  22. Tree CSP Example • Backward Pass (constraint propagation) [figure: domains over {B, R, G} pruned from the leaves toward the root]

  23. Tree CSP Example • Backward Pass (constraint propagation) [figure: pruned domains over {B, R, G}] • Forward Pass (assignment) [figure: consistent assignment chosen from root to leaves]

  24. Tree CSP complexity • Backward pass • n arc checks • Each has complexity at worst d^2 • Forward pass • n variable assignments, O(nd) • Overall complexity is O(n d^2) The algorithm works because if the backward pass succeeds, then every variable by definition has a legal assignment in the forward pass

  25. What about non-tree CSPs? • General idea is to convert the graph to a tree • 2 general approaches • Assign values to specific variables (Cycle Cutset method) • Construct a tree decomposition of the graph: connected subproblems (subgraphs) form a tree structure

  26. Cycle-cutset conditioning • Choose a subset S of variables from the graph so that the graph without S is a tree • S = “cycle cutset” • For each possible consistent assignment for S • Remove from the remaining variables any values that are inconsistent with S • Use the tree-structured CSP algorithm to solve the remaining tree • If it has a solution, return it along with the assignment for S • If not, continue to try other assignments for S

  27. Finding the optimal cutset • If the cutset size c is small, this technique works very well • However, finding the smallest cycle cutset is NP-hard • But there are good approximation algorithms

  28. Tree Decompositions

  29. Rules for a Tree Decomposition • Every variable appears in at least one of the subproblems • If two variables are connected in the original problem, they must appear together (with the constraint) in at least one subproblem • If a variable appears in two subproblems, it must appear in each node on the path connecting those subproblems.

  30. Summary • CSPs • special kind of problem: states defined by values of a fixed set of variables, goal test defined by constraints on variable values • Backtracking = depth-first search with one variable assigned per node • Heuristics • Variable ordering and value selection heuristics help significantly • Constraint propagation does additional work to constrain values and detect inconsistencies • Works effectively when combined with heuristics • Iterative min-conflicts is often effective in practice. • Graph structure of CSPs determines problem complexity • e.g., tree-structured CSPs can be solved in linear time.

  31. Logical Agents (Part I) Chapter 7 Propositional Logic

  32. Why Do We Need Logic? • Problem-solving agents were very inflexible: hard-code every possible state. • Search is almost always exponential in the number of states. • Problem-solving agents cannot infer unobserved information. • We want an algorithm that reasons in a way that resembles reasoning in humans.

  33. Knowledge-Based Agents • KB = knowledge base • A set of sentences or facts • e.g., a set of statements in a logic language • Inference • Deriving new sentences from old • e.g., using a set of logical statements to infer new ones • A simple model for reasoning • Agent is told or perceives new evidence • E.g., A is true • Agent then infers new facts to add to the KB • E.g., KB = { A -> (B OR C) }, then given A and not C we can infer that B is true • B is now added to the KB even though it was not explicitly asserted, i.e., the agent inferred B

  34. Types of Logics • Propositional logic deals with specific objects and concrete statements that are either true or false • E.g., John is married to Sue. • Predicate logic (first order logic) allows statements to contain variables, functions, and quantifiers • For all X, Y: If X is married to Y then Y is married to X. • Fuzzy logic deals with statements that are somewhat vague, such as this paint is grey, or the sky is cloudy. • Probability deals with statements that are possibly true, such as whether I will win the lottery next week. • Temporal logic deals with statements about time, such as John was a student at UC Irvine for four years. • Modal logic deals with statements about belief or knowledge, such as Mary believes that John is married to Sue, or Sue knows that search is NP-complete.

  35. Wumpus World PEAS description • Performance measure: gold +1000, death -1000, -1 per step, -10 for using the arrow • Environment: squares adjacent to the wumpus are smelly; squares adjacent to a pit are breezy; glitter iff gold is in the same square; shooting kills the wumpus if you are facing it; shooting uses up the only arrow; grabbing picks up the gold if in the same square; releasing drops the gold in the same square • Sensors: Stench, Breeze, Glitter, Bump, Scream • Actuators: Left turn, Right turn, Forward, Grab, Release, Shoot • Would DFS work well? A*?

  36. Exploring a wumpus world

  37. Exploring a wumpus world

  38. Exploring a wumpus world

  39. Exploring a wumpus world

  40. Exploring a Wumpus world • If the Wumpus were here, stench should be here. Therefore it is here. • Since there is no breeze here, the pit must be there, and it must be OK here • We need rather sophisticated reasoning here!

  41. Exploring a wumpus world

  42. Exploring a wumpus world

  43. Exploring a wumpus world

  44. Logic • We used logical reasoning to find the gold. • Logics are formal languages for representing information such that conclusions can be drawn • Syntax defines the sentences in the language • Semantics defines the "meaning” or interpretation of sentences: • connects symbols to real events in the world, • i.e., defines truth of a sentence in a world • E.g., the language of arithmetic • Syntax: x+2 ≥ y is a sentence; x2+y > {} is not a sentence • Semantics: x+2 ≥ y is true in a world where x = 7, y = 1; x+2 ≥ y is false in a world where x = 0, y = 6

  45. Entailment • Entailment means that one thing follows from another: KB ╞ α • Knowledge base KB entails sentence α if and only if α is true in all worlds where KB is true • E.g., the KB containing “the Giants won and the Reds won” entails “The Giants won”. • E.g., x+y = 4 entails 4 = x+y • E.g., “Mary is Sue’s sister and Amy is Sue’s daughter” entails “Mary is Amy’s aunt.”

  46. Models • Logicians typically think in terms of models, which are formally structured worlds with respect to which truth can be evaluated • We say m is a model of a sentence α if α is true in m • M(α) is the set of all models of α • Then KB ╞ α iff M(KB) ⊆ M(α) • E.g. KB = Giants won and Reds won, α = Giants won • Think of KB and α as collections of constraints and of models m as possible states. M(KB) are the solutions to KB and M(α) the solutions to α. Then, KB ╞ α when all solutions to KB are also solutions to α.
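For propositional symbols, the M(KB) ⊆ M(α) check can be sketched by brute-force model enumeration, which is exactly the model checking the Wumpus slides below rely on. This is an illustrative sketch: sentences are represented as Python predicates over a model dictionary.

```python
from itertools import product

def entails(symbols, kb, alpha):
    """KB ╞ α by enumeration: α must hold in every model where KB holds."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False                 # found a model of KB that is not a model of α
    return True
```

On the slide's example, with G = "Giants won" and R = "Reds won", the KB "G and R" entails G, but it does not entail "not R".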

  47. Wumpus models All possible models in this reduced Wumpus world.

  48. Wumpus models • KB = all possible wumpus-worlds consistent with the observations and the “physics” of the Wumpus world.

  49. Wumpus models α1 = "[1,2] is safe", KB ╞ α1, proved by model checking
