
The Evergreen Project: The Promise of Polynomials to Boost CSP/SAT Techniques*


Presentation Transcript


  1. The Evergreen Project: The Promise of Polynomials to Boost CSP/SAT Techniques* Karl J. Lieberherr Northeastern University Boston joint work with Ahmed Abdelmeged, Christine Hang and Daniel Rinehart Title inspired by a paper by Carla Gomes / David Shmoys

  2. Where we are • Introduction • Look-forward (look-ahead polynomials) • Look-backward (superresolution) • SPOT: how to use the look-ahead polynomials (look-forward) together with superresolution (look-backward).

  3. Problem Snapshot • SAT: classic problem in complexity theory • SAT & MAX-SAT Solvers: working on CNFs (a multi-set of disjunctions). • CSP: constraint satisfaction problem • Each constraint uses a Boolean relation. • e.g. a Boolean relation 1in3(x y z) is satisfied iff exactly one of its parameters is true. • CSP & MAX-CSP Solvers: working on CSP instances (a multi-set of constraints).
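The relation-number convention used throughout these slides can be sketched as follows (my own illustration, not code from the talk): a rank-d relation is an integer whose bit i records the relation's value on the argument tuple whose binary value is i, so 1in3 gets the number 22.

```python
# Sketch: a rank-3 Boolean relation encoded as an integer truth table.
# Bit i of the relation number is 1 iff the relation holds for the
# argument tuple whose binary value is i.

def holds(relation: int, args: tuple) -> bool:
    """Check a relation (truth-table integer) against Boolean arguments."""
    index = 0
    for a in args:                 # build the truth-table index, MSB first
        index = (index << 1) | int(a)
    return bool((relation >> index) & 1)

def one_in_three(x: bool, y: bool, z: bool) -> bool:
    """1in3(x y z): satisfied iff exactly one argument is true."""
    return int(x) + int(y) + int(z) == 1

# 1in3 is true exactly on 001, 010, 100 -> bits 1, 2, 4 -> 2 + 4 + 16 = 22
assert all(holds(22, (x, y, z)) == one_in_three(x, y, z)
           for x in (0, 1) for y in (0, 1) for z in (0, 1))
```

With this encoding there are 2^(2^d) relation numbers of rank d, matching the bound in the slides.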

  4. Related Work • C. P. Gomes and D. B. Shmoys. The Promise of LP to Boost CSP Techniques for Combinatorial Problems. In Proceedings of the 4th International Workshop on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems (CP-AI-OR'02), pages 291--305, 2002.

  5. Gomes/Shmoys • They use LP relaxation to derive probabilities for how to set the variables. • We use averaging relaxation to derive probabilities for how to set the variables.

  6. Gomes/Shmoys • A central feature of their algorithm is that they maintain two different formulations: the CSP formulation and the LP formulation. • A central feature of our algorithm (SPOT) is that it maintains two different formulations: the CSP formulation and the polynomial formulation.

  7. Gomes/Shmoys • The hybrid nature of their algorithm results from the combination of strategies for variable and value assignment. • The hybrid nature of our algorithm (SPOT) results from the combination of strategies: the polynomial formulation is used for variable and value ordering and the CSP formulation for propagation and clause learning.

  8. Gomes/Shmoys • They use randomized restarts to reduce the variance in the search behavior. • We restart after each conflict.

  9. Gomes/Shmoys differences • The CSP and LP formulations are comparable in length. • The polynomial formulation is significantly (logarithmically) shorter than the CSP formulation.

  10. Gomes/Shmoys differences • The LP formulation must be suitably manually constructed from the CSP formulation. • The polynomial formulation is derived automatically from the CSP formulation.

  11. Introduction • Boolean MAX-CSP(G) for rank d, G = set of relations of rank d • Input • Input = Bag of Constraint = CSP(G) instance • Constraint = Relation + Set of Variable • Relation = int. // Relation number < 2 ^ (2 ^ d) in G • Variable = int • Output • (0,1) assignment to variables which maximizes the number of satisfied constraints. • Example Input: G = {22} of rank 3. H = • 22:1 2 3 0 • 22:1 2 4 0 • 22:1 3 4 0 1in3 has number 22 M = {1 !2 !3 !4} satisfies all
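The input format above can be sketched in a few lines (a hypothetical encoding of my own; the slides only fix the relation numbers), verifying that M = {1 !2 !3 !4} satisfies all three constraints of the example instance H.

```python
# Sketch of the MAX-CSP input format: a bag of constraints, each a
# relation number plus a variable list; an assignment is the set of
# variables set to true.

def satisfied(relation: int, variables: list, assignment: set) -> bool:
    index = 0
    for v in variables:            # truth-table index, MSB first
        index = (index << 1) | (1 if v in assignment else 0)
    return bool((relation >> index) & 1)

H = [(22, [1, 2, 3]), (22, [1, 2, 4]), (22, [1, 3, 4])]  # 22 = 1in3
M = {1}  # M = {1 !2 !3 !4}: variable 1 true, the rest false

count = sum(satisfied(r, vs, M) for r, vs in H)
print(count, "of", len(H), "constraints satisfied")  # 3 of 3
```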

  12. Variation MAX-CSP(G,f): Given a CSP(G) instance H expressed in n variables which may assume only the values 0 or 1, find an assignment to the n variables which satisfies at least the fraction f of the constraints in H. Example: G = {22} of rank 3, MAX-CSP({22},f):
H =
22: 1 2 3 0
22: 1 2 4 0
22: 1 3 4 0
22: 2 3 4 0
What is the highest value of f for which H is in MAX-CSP({22},f)?

  13. The Game Evergreen(r,m) for Boolean MAX-CSP(G), r>1,m>0 Two players: They agree on a protocol P1 to choose a set of m relations of rank r. • The players use P1 to choose a set G of m relations of rank r. • Player 1 constructs a CSP(G) instance H with 1000 variables and gives it to player 2 (1 second limit). • Player 2 gets paid the fraction of constraints she can satisfy in H (100 seconds limit). • Take 10 turns (go to 1). How would you play this game intelligently?

  14. Our approach by Example: SAT, rank 2 example
14: 1 2 0
14: 3 4 0
14: 5 6 0
7: 1 3 0
7: 1 5 0
7: 3 5 0
7: 2 4 0
7: 2 6 0
7: 4 6 0
14: 1 2 = or(1 2)
7: 1 3 = or(!1 !3)

  15. Blurry vision, excellent peripheral vision [figure: appmean plotted against k = number of true variables (0..6); levels 8/9 and 7/9 marked] • What do we learn from the abstract representation? • Set 1/3 of the variables to true (maximize). • The best assignment will satisfy at least 7/9 of the constraints. • Very useful, but the vision is blurry in the “middle”. appmean = approximation of the mean (k variables true)
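Assuming appmean(k) is the average satisfied fraction over all assignments with exactly k variables true (my reading of the slide), it can be computed by brute force for the 9-constraint instance of the previous slide; the curve peaks near k = 2, i.e. a third of the 6 variables set true.

```python
from fractions import Fraction
from itertools import combinations

# The instance from the previous slide: 14 = or(a b), 7 = or(!a !b).
pos = [(1, 2), (3, 4), (5, 6)]                       # 14: false only if both false
neg = [(1, 3), (1, 5), (3, 5), (2, 4), (2, 6), (4, 6)]  # 7: false only if both true

def appmean(k: int) -> Fraction:
    """Average satisfied fraction over all assignments with exactly k vars true."""
    total, count = Fraction(0), 0
    for trues in combinations(range(1, 7), k):
        t = set(trues)
        sat = sum(1 for a, b in pos if a in t or b in t)
        sat += sum(1 for a, b in neg if not (a in t and b in t))
        total += Fraction(sat, 9)
        count += 1
    return total / count

for k in range(7):
    print(k, appmean(k))   # peaks at k = 2, above 7/9
```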

  16. Our approach by Example • Given a CSP(G) instance H and an assignment N which satisfies fraction f in H. • Is there an assignment that satisfies more than f? • YES (we are done): absH(mb) > f. • MAYBE: the closer absH(mb) comes to f, the better. • Is it worthwhile to set a certain literal k to 1, so that we can reach an assignment which satisfies more than f? • YES (we are done): H1 = Hk=1, absH1(mb1) > f. • MAYBE: the closer absH1(mb1) comes to f, the better. • NO: UP or clause learning. absH = abstract representation of H

  17. abstract representation
H:
14: 1 2 0
14: 3 4 0
14: 5 6 0
7: 1 3 0
7: 1 5 0
7: 3 5 0
7: 2 4 0
7: 2 6 0
7: 4 6 0
H0 (variable 1 set to 0; 7: 1 3 0 and 7: 1 5 0 are already satisfied):
14: 2 0
14: 3 4 0
14: 5 6 0
7: 3 5 0
7: 2 4 0
7: 2 6 0
7: 4 6 0
[figure: appmean for H over k = 0..6 with marks 8/9, 7/9, 3/9; appmean for H0 over k = 0..5 with marks 6/7 (= 8/9 of H), 5/7 (= 7/9) and 3/7 (= 5/9); maximum assignment away from max bias: blurry]

  18. H:
14: 1 2 0
14: 3 4 0
14: 5 6 0
7: 1 3 0
7: 1 5 0
7: 3 5 0
7: 2 4 0
7: 2 6 0
7: 4 6 0
H1 (variable 1 set to 1; 14: 1 2 0 is already satisfied):
14: 3 4 0
14: 5 6 0
7: 3 0
7: 5 0
7: 3 5 0
7: 2 4 0
7: 2 6 0
7: 4 6 0
[figure: appmean for H over k = 0..6 with marks 8/9, 7/9, 3/8; appmean for H1 over k = 0..5 with marks 7/8 (= 8/9 of H), 6/8 (= 7/9) and 2/7 (= 3/8), clearly above 3/4; maximum assignment away from max bias: blurry]

  19. H:
14: 1 2 0
14: 3 4 0
14: 5 6 0
7: 1 3 0
7: 1 5 0
7: 3 5 0
7: 2 4 0
7: 2 6 0
7: 4 6 0
[figure: the abstract representation guarantees 7/9 for H; for H0 it guarantees 7/9 (6/7 = 8/9, 5/7 = 7/9); for H1 it guarantees 8/9 (7/8 = 8/9, 6/8 = 7/9)] NEVER GOES DOWN: DERANDOMIZATION

  20. The effect of n-map
rank 2: 10: 1 = or(1), 7: 1 2 = or(!1 !2)
10: 1 0
10: 2 0
10: 3 0
7: 1 2 0
7: 1 3 0
7: 2 3 0
rank 2 after n-mapping variable 1: 5: 1 = or(!1), 13: 1 2 = or(1 !2)
5: 1 0
10: 2 0
10: 3 0
13: 1 2 0
13: 1 3 0
7: 2 3 0
[figure: appmean over k = 0..3 for both instances, marks at 3/6 and 4/6] abstract representation guarantees 0.625 * 6 = 3.75: 4 satisfied.

  21. First Impression • The abstract representation (the look-ahead polynomials) seems useful for guiding the search. • The look-ahead polynomials give us averages: the guidance can be misleading because of outliers. • But how can we compute the look-ahead polynomials?

  22. Where we are • Introduction • Look-forward • Look-backward • SPOT: how to use the look-ahead polynomials together with superresolution.

  23. Look Forward • Why? • To make informed decisions • How? • Abstract representation based on look-ahead polynomials

  24. Look-ahead Polynomial(Intuition) • The look-ahead polynomial computes the expected fraction of satisfied constraints among all random assignments that are produced with bias p.

  25. Consider an instance: 40 variables, 1000 constraints (1in3) over variables 1, … , 40:
22: 6 7 9 0
22: 12 27 38 0
Abstract representation: reduce the instance to the look-ahead polynomial 3p(1-p)^2 = B1,3(p) (Bernstein)
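A quick check (illustrative code, not from the talk) that the probability a 1in3 constraint is satisfied, when each of its three variables is independently true with bias p, is exactly 3p(1-p)^2, the Bernstein polynomial B1,3(p):

```python
from fractions import Fraction
from itertools import product
from math import comb

def la_1in3(p: Fraction) -> Fraction:
    """P(exactly one of three independent p-biased variables is true)."""
    return sum(p**sum(bits) * (1 - p)**(3 - sum(bits))
               for bits in product((0, 1), repeat=3) if sum(bits) == 1)

def bernstein(k: int, n: int, p: Fraction) -> Fraction:
    """Bernstein basis polynomial B_{k,n}(p) = C(n,k) p^k (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

for p in (Fraction(1, 4), Fraction(1, 3), Fraction(1, 2)):
    assert la_1in3(p) == 3 * p * (1 - p)**2 == bernstein(1, 3, p)
```

By linearity of expectation, the same polynomial describes the expected satisfied fraction of the whole 1000-constraint instance, which is why the abstract representation is so much shorter than the instance itself.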

  26. 3p(1-p)^2 for MAX-CSP({22})

  27. Look-ahead Polynomial(Definition) • H is a CSP(G) instance. • N is an arbitrary assignment. • The look-ahead polynomial laH,N(p) computes the expected fraction of satisfied constraints of H when each variable in N is flipped with probability p.
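This definition can be turned into a brute-force sketch (function names are mine, and the toy instance is hypothetical) that computes laH,N(p) exactly by enumerating every flip pattern of N, weighted by its probability:

```python
from fractions import Fraction
from itertools import product

# Toy instance in the slides' format: relation 22 = 1in3.
H = [(22, [1, 2, 3]), (22, [1, 2, 4]), (22, [1, 3, 4])]
N = {1: 0, 2: 0, 3: 0, 4: 0}   # arbitrary starting assignment, all false

def satisfied(relation, variables, assignment):
    index = 0
    for v in variables:
        index = (index << 1) | assignment[v]
    return (relation >> index) & 1

def lookahead(H, N, p: Fraction) -> Fraction:
    """Expected satisfied fraction when each variable of N flips with prob p."""
    vars_ = sorted(N)
    total = Fraction(0)
    for flips in product((0, 1), repeat=len(vars_)):
        weight = Fraction(1)
        A = {}
        for v, f in zip(vars_, flips):
            weight *= p if f else (1 - p)
            A[v] = N[v] ^ f
        sat = sum(satisfied(r, vs, A) for r, vs in H)
        total += weight * Fraction(sat, len(H))
    return total

print(lookahead(H, N, Fraction(1, 3)))   # -> 4/9, since each 1in3 gives 3p(1-p)^2
```

A real solver would of course evaluate the polynomial form rather than enumerate 2^n flip patterns; this sketch only pins down the semantics.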

  28. The general case MAX-CSP(G): G = {R1, … }, tR(F) = fraction of constraints in F that use R. The set of polynomials appSATR(x) over all R is a superset of the Bernstein polynomials (used in computer graphics as weighted sums of Bernstein polynomials), with x = p.

  29. Rational Bezier Curves

  30. Bernstein Polynomials http://graphics.idav.ucdavis.edu/education/CAGDNotes/Bernstein-Polynomials.pdf

  31. all the appSATR(x) polynomials

  32. Look-ahead Polynomial in Action • Focus on purely mathematical question first • Algorithmic solution will follow • Mathematical question: Given a CSP(G) instance. For which fractions f is there always an assignment satisfying fraction f of the constraints? In which constraint systems is it impossible to satisfy many constraints?

  33. Remember? MAX-CSP(G,f): Given a CSP(G) instance H expressed in n variables which may assume only the values 0 or 1, find an assignment to the n variables which satisfies at least the fraction f of the constraints in H. Example: G = {22} of rank 3 MAX-CSP({22},f): 22:1 2 3 0 22:1 2 4 0 22:1 3 4 0 22: 2 3 4 0

  34. Mathematical Critical Transition Point MAX-CSP({22},f): For f ≤ u the problem always has a solution. For f ≥ u + e it does not always have a solution, for any e > 0. [figure: interval from 0 to 1; below u = critical transition point: always (fluid); above: not always (solid)]

  35. The Magic Number • u = 4/9

  36. 3p(1-p)^2 for MAX-CSP({22})

  37. Produce the Magic Number • Use an optimally biased coin • 1/3 in this case • In general: min max problem
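The recipe can be checked numerically (a sketch under the assumption, derived earlier, that the look-ahead polynomial for {22} is 3p(1-p)^2): setting the derivative 3(1-p)(1-3p) to zero gives p = 1/3, and the value there is the magic number 4/9.

```python
from fractions import Fraction

def la(p: Fraction) -> Fraction:
    return 3 * p * (1 - p)**2    # look-ahead polynomial for 1in3

# Scan a fine rational grid; the true maximizer p = 1/3 lies on the grid
# (d/dp [3p(1-p)^2] = 3(1-p)(1-3p) = 0 at p = 1/3).
grid = [Fraction(i, 300) for i in range(301)]
best = max(grid, key=la)
print(best, la(best))   # 1/3 4/9 -- the optimally biased coin and the magic number
```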

  38. The 22 reductions: Needed for implementation [diagram: reduction tree of relation 22, edges labeled (variable, value) pairs 1,0 / 2,0 / 3,0 / 1,1 / 2,1 / 3,1; 22 is expanded into 6 additional relations: 60, 240, 3, 15, 255, 0]

  39. The 22 N-Mappings: Needed for implementation [diagram: N-mapping graph of relation 22; 22 is expanded into 7 additional relations: 41, 73, 97, 104, 134, 146, 148]

  40. The 22 N-Mappings: Needed for implementation
N-mapped vars | Relation#
2 1 0 |
------|---------
0 0 0 | 22
0 0 1 | 41
0 1 0 | 73
1 0 0 | 97
0 1 1 | 134
1 0 1 | 146
1 1 0 | 148
1 1 1 | 104
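Assuming that N-mapping a relation with flip pattern F yields the relation r'(args) = r(args XOR F), i.e. a permutation of the truth-table bits (my reading of the slides), the table above can be reproduced mechanically:

```python
# Sketch: N-mapping a relation number means pre-flipping some of its
# arguments, which permutes the truth table. Reproduces the slide's table.

def n_map(relation: int, flips: tuple, rank: int = 3) -> int:
    mask = 0
    for bit, f in enumerate(reversed(flips)):   # flips given MSB-first
        if f:
            mask |= 1 << bit
    out = 0
    for index in range(2 ** rank):
        if (relation >> (index ^ mask)) & 1:    # new bit i = old bit (i XOR mask)
            out |= 1 << index
    return out

table = {(0, 0, 0): 22, (0, 0, 1): 41, (0, 1, 0): 73, (1, 0, 0): 97,
         (0, 1, 1): 134, (1, 0, 1): 146, (1, 1, 0): 148, (1, 1, 1): 104}
for flips, expected in table.items():
    assert n_map(22, flips) == expected
```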

  41. General Dichotomy Theorem MAX-CSP(G,f): For each finite set G of relations there exists an algebraic number tG. For f ≤ tG, MAX-CSP(G,f) has a polynomial solution; for f ≥ tG + e, MAX-CSP(G,f) is NP-complete, for any e > 0. [figure: interval from 0 to 1; below tG = critical transition point: easy (fluid), polynomial solution — use an optimally biased coin, derandomize, P-optimal; above: hard (solid), NP-complete] Due to Lieberherr/Specker (1979, 1982).

  42. Context • Ladner [Lad 75]: if P != NP, then there are decision problems in NP that are neither NP-complete nor in P. • It is conceivable that MAX-CSP(G,f) contains problems of intermediate complexity.

  43. General Dichotomy Theorem (Discussion) MAX-CSP(G,f): For each finite set G of relations there exists an algebraic number tG. For f ≤ tG, MAX-CSP(G,f) has a polynomial solution; for f ≥ tG + e, MAX-CSP(G,f) is NP-complete, for any e > 0. [figure: below tG = critical transition point: easy (fluid), polynomial (finding an assignment), constant proofs (done statically using look-ahead polynomials), no clause learning; above: hard (solid), NP-complete, exponential/super-polynomial proofs(?), relies on clause learning]

  44. The Game Evergreen(r,m) for Boolean MAX-CSP(G), r>1,m>0 Two players: They agree on a protocol P1 to choose a set of m relations of rank r. • The players use P1 to choose a set G of m relations of rank r. • Player 1 constructs a CSP(G) instance H with 1000 variables and gives it to player 2 (1 second limit). • Player 2 gets paid the fraction of constraints she can satisfy in H (100 seconds limit). • Take turns (go to 1).

  45. Evergreen(3,2) • Rank 3: represent relations by the integer corresponding to the truth table in standard sorted order 000 – 111. • Choose relations between 1 and 254 (exclude 0 and 255). • Don’t choose two odd numbers: all-false would satisfy all constraints. • Don’t choose two numbers above 128: all-true would satisfy all constraints.
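Both selection rules follow from the truth-table encoding: bit 0 of a rank-3 relation number is its value on input 000, and bit 7 its value on 111. A small sketch (predicate names are mine) that flags trivially winnable relation pairs:

```python
# Bit 0 of a rank-3 relation number is its value on 000 (odd numbers),
# bit 7 its value on 111 (numbers with bit 7 set, i.e. >= 128).

def all_false_satisfies(r: int) -> bool:
    return r % 2 == 1            # odd <=> true on 000

def all_true_satisfies(r: int) -> bool:
    return r >= 128              # bit 7 set <=> true on 111

def bad_pair(r1: int, r2: int) -> bool:
    """Pairs player 2 would trivially win with a constant assignment."""
    return ((all_false_satisfies(r1) and all_false_satisfies(r2)) or
            (all_true_satisfies(r1) and all_true_satisfies(r2)))

assert bad_pair(23, 41)      # both odd: all-false satisfies every constraint
assert bad_pair(150, 200)    # both with bit 7 set: all-true satisfies everything
assert not bad_pair(22, 22)  # 1in3 is neither
```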

  46. For Evergreen(3,2)

  47. min max problem. sat(H,M) = fraction of satisfied constraints in CSP(G)-instance H by assignment M.
tG = min over all CSP(G) instances H of max over all (0,1) assignments M of sat(H,M)

  48. Problem reductions are the key • Solution to simpler problem implies solution to original problem.

  49. min max problem. sat(H,M,n) = fraction of satisfied constraints in CSP(G)-instance H with n variables by assignment M.
tG = lim (n to infinity) of min over all SYMMETRIC constraint systems H with n variables of max over all (0,1) assignments M to the n variables of sat(H,M,n)
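For G = {22} this reduction can be checked numerically (a sketch, not from the talk). Assuming the symmetric system on n variables is the bag of all C(n,3) possible 1in3 constraints, only the number k of true variables matters, a constraint being satisfied iff its single true variable is among the k; the best achievable fraction then tends to the magic number 4/9.

```python
from fractions import Fraction
from math import comb

# Symmetric system for G = {22}: all C(n,3) 1in3 constraints on n variables.
# With k variables true, k * C(n-k, 2) constraints are satisfied.

def best_fraction(n: int) -> Fraction:
    return max(Fraction(k * comb(n - k, 2), comb(n, 3)) for k in range(n + 1))

for n in (9, 30, 300, 3000):
    print(n, float(best_fraction(n)))   # decreases toward 4/9 = 0.444...
```

The maximizing k sits near n/3, i.e. the optimally biased coin with p = 1/3 realized exactly, which is why the limit reproduces t{22} = 4/9.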

  50. Reduction achieved • Instead of minimizing over all constraint systems it is sufficient to minimize over the symmetric constraint systems.
