
Heuristics for Efficient SAT Solving



1. Heuristics for Efficient SAT Solving As implemented in GRASP, Chaff and GSAT.

2. Formulation of famous problems as SAT: Bounded Model Checking (1/4) Given a property p (e.g. "always signal_a = signal_b"): is there a state reachable within k cycles which satisfies ¬p? [Figure: an execution path s0 → s1 → s2 → ... → sk-1 → sk.]

3. Formulation of famous problems as SAT: Bounded Model Checking (2/4) The states reachable in k steps are captured by: I(s0) ∧ T(s0,s1) ∧ ... ∧ T(sk-1,sk). The property p fails in one of the cycles 1..k: ¬p(s1) ∨ ¬p(s2) ∨ ... ∨ ¬p(sk).

4. Formulation of famous problems as SAT: Bounded Model Checking (3/4) The safety property p is valid up to cycle k iff W(k) is unsatisfiable, where W(k) = I(s0) ∧ T(s0,s1) ∧ ... ∧ T(sk-1,sk) ∧ (¬p(s1) ∨ ... ∨ ¬p(sk)). [Figure: the path s0 → s1 → ... → sk-1 → sk with p required at every state.]

5. Formulation of famous problems as SAT: Bounded Model Checking (4/4) Example: a two-bit counter with bits l and r. Initial state: 00, i.e. ¬l ∧ ¬r. Transition: the counter increments (e.g. l' = l ⊕ r, r' = ¬r). Property: always ¬(l ∧ r). For k = 2, W(k) is unsatisfiable. For k = 4, W(k) is satisfiable. [Figure: the four counter states 00, 01, 10, 11.]
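To make the counter example concrete, here is a minimal Python sketch that unrolls W(k) for such a counter and checks it by brute force instead of handing it to a SAT solver. The increment transition l' = l xor r, r' = not r and the property "always not(l and r)" are assumptions about the slide's counter, not taken from it verbatim.

    from itertools import product

    def w_k_satisfiable(k):
        """True iff W(k) = I(s0) & T(s0,s1) & ... & T(sk-1,sk) & (OR of not p(si)) has a model."""
        n = k + 1                                    # states s0..sk, bits l[i], r[i]
        for bits in product([False, True], repeat=2 * n):
            l, r = bits[:n], bits[n:]
            if l[0] or r[0]:                         # I(s0): the counter starts at 00
                continue
            if any(l[i + 1] != (l[i] != r[i]) or r[i + 1] != (not r[i]) for i in range(k)):
                continue                             # T(si, si+1): assumed increment relation
            if any(l[i] and r[i] for i in range(1, n)):
                return True                          # some cycle 1..k violates p = not(l and r)
        return False

    print(w_k_satisfiable(2))   # False: W(2) is unsatisfiable, p holds up to cycle 2
    print(w_k_satisfiable(4))   # True:  W(4) is satisfiable, state 11 is reached within 4 cycles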

6. What is SAT? Given a propositional formula in CNF, find an assignment to the Boolean variables that makes the formula true: φ1 = (x2 ∨ x3), φ2 = (¬x1 ∨ ¬x4), φ3 = (¬x2 ∨ x4). A = {x1=0, x2=1, x3=0, x4=1} is a SATisfying assignment!
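A quick sanity check of this claim in Python, under the clause polarities reconstructed above (the negation signs were lost in the transcript, so they are recovered from the implications on slide 11):

    A = {1: 0, 2: 1, 3: 0, 4: 1}              # the assignment from the slide
    clauses = [[2, 3], [-1, -4], [-2, 4]]     # φ1, φ2, φ3 as reconstructed above
    print(all(any(A[abs(l)] == (l > 0) for l in c) for c in clauses))   # True: A satisfies every clause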

7. Why SAT? • A fundamental problem from a theoretical point of view • Numerous applications: • CAD, VLSI • Optimization • Bounded Model Checking and other types of formal verification • AI, planning, automated deduction

8. A Basic SAT algorithm Given φ in CNF: (x,y,z), (-x,y), (-y,z), (-x,-y,-z). Three basic routines: Decide(), Deduce(), Resolve_Conflict(). [Figure: decision tree over assignments to x, y, z.]

9. A Basic SAT algorithm
   while (true) {
     if (!Decide()) return (SAT);
     while (!Deduce())
       if (!Resolve_Conflict()) return (UNSAT);
   }
Decide(): choose the next variable and value; return False if all variables are assigned.
Deduce(): apply the unit clause rule; return False if a conflict is reached.
Resolve_Conflict(): backtrack until no conflict remains; return False if impossible.
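As a concrete illustration of this loop, here is a minimal runnable Python sketch using a naive "first unassigned variable" decision rule and chronological backtracking; the function names mirror the slide, not GRASP's or Chaff's actual routines.

    def solve(clauses, num_vars):
        """Clauses are lists of signed ints (e.g. -2 means ¬x2). Returns a model or None."""
        assignment = {}                        # var -> bool
        trail = []                             # (var, is_decision) in assignment order

        def deduce():
            """Unit-clause rule to fixpoint; returns False when some clause is falsified."""
            changed = True
            while changed:
                changed = False
                for clause in clauses:
                    unassigned, satisfied = [], False
                    for lit in clause:
                        var = abs(lit)
                        if var not in assignment:
                            unassigned.append(lit)
                        elif assignment[var] == (lit > 0):
                            satisfied = True
                            break
                    if satisfied:
                        continue
                    if not unassigned:
                        return False           # conflict: every literal is false
                    if len(unassigned) == 1:   # unit clause: force the remaining literal
                        lit = unassigned[0]
                        assignment[abs(lit)] = lit > 0
                        trail.append((abs(lit), False))
                        changed = True
            return True

        def decide():
            """Assign the next free variable (True first); False when none is left."""
            for v in range(1, num_vars + 1):
                if v not in assignment:
                    assignment[v] = True
                    trail.append((v, True))
                    return True
            return False

        def resolve_conflict():
            """Chronological backtracking: undo up to the last decision and flip it."""
            while trail:
                var, is_decision = trail.pop()
                del assignment[var]
                if is_decision:                # the True branch failed, so try False
                    assignment[var] = False
                    trail.append((var, False)) # the flipped value now counts as forced
                    return True
            return False                       # nothing left to flip: UNSAT

        while True:
            if not decide():
                return assignment              # SAT: all variables assigned
            while not deduce():
                if not resolve_conflict():
                    return None                # UNSAT

    # The CNF from slide 8, with x, y, z encoded as 1, 2, 3:
    print(solve([[1, 2, 3], [-1, 2], [-2, 3], [-1, -2, -3]], 3))   # e.g. {1: False, 2: True, 3: True}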

10. Basic Backtracking Search • Organize the search in the form of a decision tree • Each node corresponds to a decision • The depth of a node in the decision tree is its decision level • Notation: x = v@d means that x is assigned v ∈ {0,1} at decision level d

11. Backtracking Search in Action φ1 = (x2 ∨ x3), φ2 = (¬x1 ∨ ¬x4), φ3 = (¬x2 ∨ x4). One branch: x1 = 0@1, x2 = 0@2 ⇒ x3 = 1@2, giving {(x1,0), (x2,0), (x3,1)}. The other branch: x1 = 1@1 ⇒ x4 = 0@1 ⇒ x2 = 0@1 ⇒ x3 = 1@1, giving {(x1,1), (x2,0), (x3,1), (x4,0)}. No backtrack in this example!

12. Backtracking Search in Action: add a clause. φ1 = (x2 ∨ x3), φ2 = (¬x1 ∨ ¬x4), φ3 = (¬x2 ∨ x4), φ4 = (x1 ∨ x2 ∨ ¬x3). Now x1 = 0@1, x2 = 0@2 ⇒ x3 = 1@2 gives {(x1,0), (x2,0), (x3,1)}, which falsifies φ4: conflict, backtrack. The branch x1 = 1@1 ⇒ x4 = 0@1 ⇒ x2 = 0@1 ⇒ x3 = 1@1 satisfies all clauses.

13. Decision heuristics: DLIS (Dynamic Largest Individual Sum) For a given variable x: • Cx,p – # of unresolved clauses in which x appears positively • Cx,n – # of unresolved clauses in which x appears negatively • Let x be the literal for which Cx,p is maximal • Let y be the literal for which Cy,n is maximal • If Cx,p > Cy,n, choose x and assign it TRUE • Otherwise choose y and assign it FALSE • Requires l (# of literals) queries for each decision. • (Implemented in e.g. GRASP)
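A small illustration of the DLIS counts in Python (a sketch, not GRASP's implementation):

    from collections import Counter

    def dlis_choice(unresolved_clauses):
        """Return (variable, value) according to DLIS; clauses are lists of signed ints."""
        pos, neg = Counter(), Counter()
        for clause in unresolved_clauses:
            for lit in clause:
                (pos if lit > 0 else neg)[abs(lit)] += 1
        x, cxp = max(pos.items(), key=lambda kv: kv[1], default=(None, 0))
        y, cyn = max(neg.items(), key=lambda kv: kv[1], default=(None, 0))
        return (x, True) if cxp > cyn else (y, False)

    # With the clauses of slide 11 still unresolved:
    print(dlis_choice([[2, 3], [-1, -4], [-2, 4]]))   # (1, False): all counts tie, so the negative side is taken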

14. Decision heuristics: Jeroslow-Wang method For every literal l (in each phase) compute J(l) := Σ over the clauses ω containing l of 2^(-|ω|). • Choose a literal l that maximizes J(l). • This gives an exponentially higher weight to literals in shorter clauses.
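In Python the score and the choice might look like this (illustrative sketch):

    from collections import defaultdict

    def jeroslow_wang_choice(clauses):
        """Return the literal (signed int) with the maximal J score."""
        score = defaultdict(float)
        for clause in clauses:
            for lit in clause:
                score[lit] += 2.0 ** -len(clause)   # shorter clauses weigh exponentially more
        return max(score, key=score.get)

    print(jeroslow_wang_choice([[2, 3], [-1, -4], [-2, 4], [-2]]))   # -2: boosted by the unit clause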

15. Decision heuristics: MOM (Maximum Occurrence of clauses of Minimum size) • Let f*(x) be the # of unresolved smallest clauses containing x. Choose the x that maximizes (f*(x) + f*(¬x)) * 2^k + f*(x) * f*(¬x). • k is chosen heuristically. • The idea: • Give preference to satisfying small clauses. • Among those, give preference to balanced variables (e.g. f*(x) = 3, f*(¬x) = 3 is better than f*(x) = 1, f*(¬x) = 5).
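A sketch of the MOM score in Python; k = 10 is an arbitrary illustrative choice:

    from collections import Counter

    def mom_choice(unresolved_clauses, k=10):
        """Return the variable maximizing (f*(x) + f*(¬x)) * 2^k + f*(x) * f*(¬x)."""
        min_len = min(len(c) for c in unresolved_clauses)
        pos, neg = Counter(), Counter()
        for clause in unresolved_clauses:
            if len(clause) == min_len:               # only the smallest clauses count
                for lit in clause:
                    (pos if lit > 0 else neg)[abs(lit)] += 1
        variables = set(pos) | set(neg)
        return max(variables,
                   key=lambda v: (pos[v] + neg[v]) * 2 ** k + pos[v] * neg[v])

    print(mom_choice([[2, 3], [-1, -4], [-2, 4], [-2, 3]]))   # 2: frequent in small clauses and balanced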

16. Decision heuristics: VSIDS (Variable State Independent Decaying Sum) 1. Each variable in each polarity has a counter, initialized to 0. 2. When a clause is added, the counter of every literal in it is incremented. 3. The unassigned variable with the highest counter is chosen. 4. Periodically, all the counters are divided by a constant. (Implemented in Chaff)

17. Decision heuristics: VSIDS (cont'd) • Chaff holds a list of unassigned variables sorted by counter value. • Updates are needed only when adding conflict clauses. • Thus, a decision is made in constant time.

18. Decision heuristics VSIDS is a 'quasi-static' strategy: static because it does not depend on the current assignment, and dynamic because it gradually changes. Variables that appear in recent conflicts have higher priority. This is a conflict-driven decision strategy. "...employing this strategy dramatically (i.e. an order of magnitude) improved performance..."
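The bookkeeping described in slides 16-18 can be sketched in a few lines of Python (illustrative only; Chaff keeps a sorted list rather than recomputing the maximum):

    from collections import defaultdict

    class VSIDS:
        def __init__(self, decay=0.5):
            self.counter = defaultdict(float)    # literal (signed int) -> activity
            self.decay = decay

        def on_conflict_clause(self, clause):
            for lit in clause:                   # bump every literal in the learned clause
                self.counter[lit] += 1.0

        def periodic_decay(self):
            for lit in self.counter:             # divide all counters by a constant
                self.counter[lit] *= self.decay

        def decide(self, unassigned_vars):
            candidates = [l for l in self.counter if abs(l) in unassigned_vars]
            if not candidates:
                return next(iter(unassigned_vars))   # arbitrary unassigned variable, positive phase
            return max(candidates, key=self.counter.get)

    v = VSIDS()
    v.on_conflict_clause([-1, 9, 11, 10])        # e.g. the clause learned on slide 22
    print(v.decide({1, 9, 10, 11, 12}))          # picks a literal from the recent conflict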

19. Variable ordering (abstract dependency graphs) A (CNF) dependency graph D(V,E): the vertices V are the variables, with an edge between two variables that occur together in some clause. A partitioning C1..Cn of the variables. An abstract dependency graph D'(V',E'): one vertex per class Ci, with an edge (Ci,Cj) whenever some edge of D connects a variable of Ci to a variable of Cj.

20. Variable ordering (the natural order of W(k)) For W(k) there exists a partition C1..Cn (class Ci holding the variables Vi of cycle i) such that the abstract dependency graph is linear: C0 - C1 - C2 - ... - Ck-1 - Ck.

21. Variable ordering (simple static ordering) Decide the variables following the linear order of W(k). Going from the I0 side forward, W(k) still has to satisfy ¬Pk: the search is riding on legal executions. Going from the Pk side backward, W(k) still has to satisfy I0: the search is riding on unreachable states.

22. Implication graphs and learning Current truth assignment: {x9=0@1, x10=0@3, x11=0@3, x12=1@2, x13=1@2}. Current decision assignment: {x1=1@6}.
ω1 = (¬x1 ∨ x2)
ω2 = (¬x1 ∨ x3 ∨ x9)
ω3 = (¬x2 ∨ ¬x3 ∨ x4)
ω4 = (¬x4 ∨ x5 ∨ x10)
ω5 = (¬x4 ∨ x6 ∨ x11)
ω6 = (¬x5 ∨ ¬x6)
ω7 = (x1 ∨ x7 ∨ ¬x12)
ω8 = (x1 ∨ x8)
ω9 = (¬x7 ∨ ¬x8 ∨ ¬x13)
[Implication graph: x1=1@6 implies x2=1@6 (ω1) and x3=1@6 (ω2); these imply x4=1@6 (ω3), which implies x5=1@6 (ω4) and x6=1@6 (ω5); ω6 is then falsified: conflict.]
We learn the conflict clause ω10 = (¬x1 ∨ x9 ∨ x11 ∨ x10).
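A short Python sketch of Boolean constraint propagation on these clauses, starting from the slide's partial assignment and the decision x1 = 1@6; it prints each implication and the conflicting clause (illustrative, not GRASP's code):

    clauses = {
        1: [-1, 2], 2: [-1, 3, 9], 3: [-2, -3, 4], 4: [-4, 5, 10],
        5: [-4, 6, 11], 6: [-5, -6], 7: [1, 7, -12], 8: [1, 8], 9: [-7, -8, -13],
    }
    assignment = {9: False, 10: False, 11: False, 12: True, 13: True, 1: True}

    def value(lit):
        """True/False if the literal is assigned, None otherwise."""
        var = abs(lit)
        return None if var not in assignment else assignment[var] == (lit > 0)

    def bcp():
        changed = True
        while changed:
            changed = False
            for name, clause in clauses.items():
                vals = [value(l) for l in clause]
                if True in vals:
                    continue                             # clause already satisfied
                unknown = [l for l, v in zip(clause, vals) if v is None]
                if not unknown:
                    print(f"conflict on clause {name}")  # every literal is false
                    return
                if len(unknown) == 1:                    # unit clause: the last literal is forced
                    lit = unknown[0]
                    assignment[abs(lit)] = lit > 0
                    print(f"clause {name} implies x{abs(lit)} = {int(lit > 0)}")
                    changed = True

    bcp()   # clause 1 implies x2=1, ..., clause 5 implies x6=1, then conflict on clause 6;
            # analysing that conflict yields the learned clause ω10 = (¬x1 ∨ x9 ∨ x11 ∨ x10)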

23. Implication graph, flipped assignment ω1..ω9 as before, plus the learned conflict clause ω10 = (¬x1 ∨ x9 ∨ x11 ∨ x10). Due to the conflict clause, x1 = 0@6 is implied (by ω10, since x9=0@1, x10=0@3, x11=0@3). Then ω8 implies x8=1@6, ω7 (with x12=1@2) implies x7=1@6, and ω9 (with x13=1@2) is falsified: a new conflict ω9'.

24. Non-chronological backtracking Which assignments caused the conflicts? x9=0@1, x10=0@3, x11=0@3, x12=1@2, x13=1@2. These assignments are sufficient for causing the conflicts ω and ω', regardless of the decisions made at levels 4, 5 and 6. Therefore backtrack to decision level 3: non-chronological backtracking.

25. More engineering aspects of SAT solvers Observation: more than 90% of the time, SAT solvers perform Deduction(). Deduction() finds the newly implied variables and conflicts. How can this be done efficiently?

26. GRASP implements Deduction() with counters Hold two counters for each clause ω: val1(ω) = # of negative literals assigned 0 in ω + # of positive literals assigned 1 in ω (i.e. literals assigned to true). val0(ω) = # of negative literals assigned 1 in ω + # of positive literals assigned 0 in ω (i.e. literals assigned to false).

27. GRASP implements Deduction() with counters ω is satisfied iff val1(ω) > 0. ω is unsatisfied iff val0(ω) = |ω|. ω is unit iff val1(ω) = 0 and val0(ω) = |ω| - 1. ω is unresolved iff val1(ω) = 0 and val0(ω) < |ω| - 1. Every assignment to a variable x results in updating the counters of all the clauses that contain x. Backtracking: same complexity.
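A toy version of this bookkeeping in Python (a sketch under the definitions above, not GRASP's data structures; un-assignment on backtracking is omitted for brevity):

    class CountedClause:
        def __init__(self, literals):
            self.literals = literals       # signed ints, e.g. -4 means ¬x4
            self.val0 = 0                  # literals currently assigned to false
            self.val1 = 0                  # literals currently assigned to true

        def assign(self, var, value):
            for lit in self.literals:
                if abs(lit) == var:
                    if (lit > 0) == value:
                        self.val1 += 1
                    else:
                        self.val0 += 1

        def state(self):
            n = len(self.literals)
            if self.val1 > 0:
                return "satisfied"
            if self.val0 == n:
                return "unsatisfied"
            if self.val0 == n - 1:
                return "unit"
            return "unresolved"

    w = CountedClause([-4, 5, 10])         # ω4 from slide 22
    w.assign(10, False); w.assign(4, True)
    print(w.state())                       # "unit": x5 is implied to 1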

28. Chaff implements Deduction() with a pair of observers • Observation: during Deduction(), we are only interested in newly implied variables and conflicts. • These occur only when the number of literals in ω with value 'false' is greater than |ω| - 2. • Conclusion: no need to visit a clause unless val0(ω) > |ω| - 2. • How can this be implemented?

29. Chaff implements Deduction() with a pair of observers • Define two 'observers': O1(ω), O2(ω). • O1(ω) and O2(ω) point to two distinct literals of ω which are not 'false'. • ω becomes unit if updating one observer leads to O1(ω) = O2(ω). • Visit clause ω only if O1(ω) or O2(ω) becomes 'false'.
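A compact Python sketch of this two-observer (two-watched-literal) rule; it illustrates only the per-clause update, whereas Chaff additionally indexes the clauses by their watched literals:

    def on_literal_false(clause, watches, assignment):
        """watches is a pair of indices into clause; returns 'ok', 'unit' or 'conflict'."""
        def is_false(lit):
            var = abs(lit)
            return var in assignment and assignment[var] != (lit > 0)

        for w in (0, 1):
            if not is_false(clause[watches[w]]):
                continue
            # try to move this observer to another non-false, non-watched literal
            for i, lit in enumerate(clause):
                if i not in watches and not is_false(lit):
                    watches[w] = i
                    break
            else:
                other = clause[watches[1 - w]]
                return "conflict" if is_false(other) else "unit"   # the other observer is forced
        return "ok"

    clause = [-4, 5, 10]                   # ω4 from slide 22
    watches = [0, 1]                       # initially observe ¬x4 and x5
    print(on_literal_false(clause, watches, {10: False, 4: True}))   # "unit": x5 is implied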

30. Chaff implements Deduction() with a pair of observers [Figure: an example clause over v[1]..v[5]; the observers move as literals become false, until the clause becomes unit, and stay in place when the assignments are backtracked.] Both observers of an implied clause are on the highest decision level present in the clause, so backtracking will un-assign them first. Conclusion: when backtracking, the observers stay in place. Backtracking: no updating; complexity = constant.

31. Chaff implements Deduction() with a pair of observers The choice of observed literals is important. The best strategy is to observe the least frequently updated variables. The observers method has a learning curve in this respect: 1. The initial observers are chosen arbitrarily. 2. The process shifts the observers away from variables that were recently updated (these variables will most probably be reassigned within a short time). In our example: the next time v[5] is updated, it will point to a significantly smaller set of clauses.

32. GSAT: a different approach to SAT solving Given a CNF formula φ, choose max_tries and max_flips.
   for i = 1 to max_tries {
     T := randomly generated truth assignment
     for j = 1 to max_flips {
       if T satisfies φ return TRUE
       choose v s.t. flipping v's value gives the largest increase in the # of satisfied clauses (break ties randomly)
       T := T with v's assignment flipped
     }
   }
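A runnable Python sketch of this loop (the parameter values and the random tie-break are illustrative choices, not taken from the original GSAT implementation):

    import random

    def num_satisfied(clauses, t):
        """Number of clauses satisfied by assignment t (a dict var -> bool)."""
        return sum(any(t[abs(l)] == (l > 0) for l in c) for c in clauses)

    def gsat(clauses, num_vars, max_tries=10, max_flips=100):
        for _ in range(max_tries):
            t = {v: random.choice([True, False]) for v in range(1, num_vars + 1)}
            for _ in range(max_flips):
                if num_satisfied(clauses, t) == len(clauses):
                    return t                                   # T satisfies the formula
                scores = {}
                for v in t:                                    # score every possible flip
                    t[v] = not t[v]
                    scores[v] = num_satisfied(clauses, t)
                    t[v] = not t[v]
                best = max(scores.values())
                v = random.choice([u for u in scores if scores[u] == best])
                t[v] = not t[v]                                # flip, ties broken randomly
        return None                                            # no satisfying assignment found

    # The CNF from slide 8 again:
    print(gsat([[1, 2, 3], [-1, 2], [-2, 3], [-1, -2, -3]], 3))   # e.g. {1: False, 2: True, 3: True}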

33. Improvement #1: clause weights The initial weight of each clause is 1. Increase by k the weight of the clauses that remain unsatisfied. Choose v according to the maximal increase in the total weight of satisfied clauses. Clause weights are another example of a conflict-driven decision strategy.
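Two helper functions sketch how the weighted variant might plug into the GSAT loop above (the weight-bump timing and the value of k are assumptions, not the slide's exact scheme):

    def weighted_gain(clauses, weights, t, v):
        """Change in the total satisfied weight if variable v were flipped under assignment t."""
        def sat_weight():
            return sum(w for c, w in zip(clauses, weights)
                       if any(t[abs(l)] == (l > 0) for l in c))
        before = sat_weight()
        t[v] = not t[v]
        after = sat_weight()
        t[v] = not t[v]                        # undo the trial flip
        return after - before

    def bump_weights(clauses, weights, t, k=1):
        """Increase by k the weight of every clause unsatisfied under t."""
        for i, c in enumerate(clauses):
            if not any(t[abs(l)] == (l > 0) for l in c):
                weights[i] += k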

34. Improvement #2: averaging-in Q: Can we reuse information gathered in previous tries in order to speed up the search? A: Yes! Rather than choosing T randomly each time, repeat the 'good assignments' and choose the rest randomly.

35. Improvement #2: averaging-in (cont'd) Let X1, X2 and X3 be equally wide bit vectors. Define a function bit_average: X1 × X2 → X3 as follows: b3i := b1i if b1i = b2i, and a random bit otherwise (where bji is the i-th bit of Xj, j ∈ {1,2,3}).

36. Improvement #2: averaging-in (cont'd) Let Ti^init be the initial assignment (T) in cycle i. Let Ti^best be the assignment with the highest # of satisfied clauses in cycle i. T1^init := random assignment. T2^init := random assignment. For i > 2, Ti^init := bit_average(T(i-1)^best, T(i-2)^best).
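A few lines of Python illustrating the scheme (bit vectors as lists of 0/1; the example vectors are hypothetical):

    import random

    def bit_average(x1, x2):
        """Keep the bits on which x1 and x2 agree; pick the rest at random."""
        return [b1 if b1 == b2 else random.randint(0, 1) for b1, b2 in zip(x1, x2)]

    t_best_prev = [1, 0, 1, 1, 0]      # hypothetical T(i-1)^best
    t_best_prev2 = [1, 1, 1, 0, 0]     # hypothetical T(i-2)^best
    print(bit_average(t_best_prev, t_best_prev2))   # e.g. [1, ?, 1, ?, 0] with random ? bits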
