
Exact Model Counting: limitations of SAT-solver based methods



  1. Exact Model Counting: limitations of SAT-solver based methods — Paul Beame, Jerry Li, Sudeepa Roy, Dan Suciu (University of Washington) [UAI 13], [ICDT 14]

  2. Model Counting • Model Counting Problem: Given a Boolean formula/circuit F, compute #F = the number of models (satisfying assignments) of F • Traditional cases of interest: F is CNF or DNF • Recent: F is given by a small circuit from a class of simple circuits • Probability Computation Problem: Given F and independent probabilities Pr(x), Pr(y), Pr(z), …, compute Pr(F)
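To make the problem statement concrete, here is a minimal brute-force model counter. The formula and variable names below are illustrative (not from the talk), and the enumeration is exponential in the number of variables — exactly the naive baseline the practical counters on the next slides improve on:

```python
from itertools import product

def model_count(f, variables):
    """#F: count satisfying assignments by enumerating all 2^n worlds."""
    return sum(
        1
        for bits in product([False, True], repeat=len(variables))
        if f(dict(zip(variables, bits)))
    )

# Illustrative formula: F = (x OR y) AND (NOT x OR z)
F = lambda a: (a["x"] or a["y"]) and (not a["x"] or a["z"])
print(model_count(F, ["x", "y", "z"]))  # 4 models out of 8
```

Dividing #F by 2^n gives Pr(F) under the uniform distribution; the probability-computation variant weights each world by the given tuple probabilities instead.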

  3. Model Counting • #P-hard • Even for formulas where satisfiability is easy to check • Practical model counters can nonetheless compute #F or Pr(F) for many CNF formulas with hundreds to tens of thousands of variables

  4. Exact Model Counters for CNF • Search-based/DPLL-based (explore the assignment space and count the satisfying assignments): CDP [Birnbaum et al. ’99], Relsat [Bayardo Jr. et al. ’97, ’00], Cachet [Sang et al. ’05], SharpSAT [Thurley ’06], … • Knowledge-compilation-based (compile F into a “computation-friendly” form): c2d [Darwiche ’04], Dsharp [Muise et al. ’12], … • [Survey by Gomes et al. ’09] • Both techniques explicitly or implicitly use DPLL-based algorithms and produce FBDD or Decision-DNNF compiled forms [Huang-Darwiche ’05, ’07]

  5. Compiled size vs Search time • Desiderata: • Compiled format makes model counting simple • Compiled format is concise • Compiled format is easy to find • Compiled size ≤ Search time, even if construction of the compiled form is only implicit • The gap can be exponential in the number of variables, e.g. an UNSAT formula has constant compiled size

  6. Model Counters Use Extensions to DPLL • Caching subformulas: Cachet, SharpSAT, c2d, Dsharp • Component analysis: Relsat, c2d, Cachet, SharpSAT, Dsharp • Conflict-directed clause learning: Cachet, SharpSAT, c2d, Dsharp • Traces: DPLL + caching (+ clause learning) → FBDD; DPLL + caching + components (+ clause learning) → Decision-DNNF • How much does component analysis help? I.e., how much smaller are Decision-DNNFs than FBDDs?

  7. Outline • Review of DPLL-based algorithms for #SAT • Extensions (Caching & Component Analysis) • Knowledge Compilation (FBDD & Decision-DNNF) • Decision-DNNF to FBDD conversion theorem • Implications of the conversion • Applications • Probabilistic databases • Separation between Lifted vs Grounded Inference • Proof sketch for Conversion Theorem • Open Problems

  8. DPLL Algorithms • F = (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z) • // basic DPLL: Function Pr(F): if F = false then return 0; if F = true then return 1; select a variable x, return ½ Pr(F|x=0) + ½ Pr(F|x=1) • [Figure: the recursion tree. Branching on x gives F|x=1 = u ∨ w ∨ z with Pr = 7/8 and F|x=0 = y ∧ (u ∨ w) with Pr = 3/8, so Pr(F) = ½(7/8 + 3/8) = 5/8; the subformula u ∨ w (Pr = 3/4) appears on both sides.] • Assume uniform distribution for simplicity
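The pseudocode on this slide can be turned into a small runnable sketch. The CNF encoding (clauses as lists of (variable, polarity) literals) is my own illustrative choice, not from the talk, and the formula is a reading of the running example consistent with the value Pr(F) = 5/8 shown on the slide:

```python
from fractions import Fraction

def condition(cnf, var, value):
    """Condition a CNF on var = value: drop satisfied clauses,
    remove falsified literals from the rest."""
    return [
        [lit for lit in clause if lit[0] != var]
        for clause in cnf
        if (var, value) not in clause  # literal (var, value) satisfies the clause
    ]

def pr(cnf):
    """Basic DPLL: Pr(F) under the uniform distribution."""
    if any(clause == [] for clause in cnf):   # an empty clause: F = false
        return Fraction(0)
    if not cnf:                               # no clauses left: F = true
        return Fraction(1)
    x = cnf[0][0][0]                          # select a variable
    return Fraction(1, 2) * pr(condition(cnf, x, False)) + \
           Fraction(1, 2) * pr(condition(cnf, x, True))

# F = (x ∨ y)(x ∨ u ∨ w)(¬x ∨ u ∨ w ∨ z)
F = [[("x", True), ("y", True)],
     [("x", True), ("u", True), ("w", True)],
     [("x", False), ("u", True), ("w", True), ("z", True)]]
print(pr(F))  # 5/8
```

Counting models instead of probabilities just means multiplying the result by 2^n.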

  9. DPLL Algorithms • F = (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z) • [Figure: the same recursion tree, each branching step drawn as a decision node.] • The trace is a decision tree for F

  10. Caching • F = (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z) • // DPLL with caching: cache F and Pr(F); look the cache up before computing • [Figure: the recursion with the shared subformula u ∨ w computed once and reused.]

  11. Caching & FBDDs • F = (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z) • The trace is a decision-DAG for F • Every variable is tested at most once on any path • This is an FBDD (Free Binary Decision Diagram), also called a 1-BP (Read-Once Branching Program)

  12. Component Analysis • F = (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z) • // DPLL with component analysis (and caching): if F = G ∧ H where G and H have disjoint sets of variables, then Pr(F) = Pr(G) × Pr(H)
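A sketch of DPLL extended with both caching (via functools.lru_cache) and component analysis, in the spirit of counters like Cachet or SharpSAT but in no way their actual implementation; the clause representation and helper names are mine:

```python
from fractions import Fraction
from functools import lru_cache

def condition(cnf, var, value):
    """Condition on var = value; tuples keep everything hashable for caching."""
    return tuple(
        tuple(l for l in clause if l[0] != var)
        for clause in cnf if (var, value) not in clause
    )

def components(cnf):
    """Split a CNF into sub-CNFs over disjoint variable sets.
    (A union-find would be more efficient; this sketch merges greedily.)"""
    comps = []
    for clause in cnf:
        vs = {l[0] for l in clause}
        merged, rest = [clause], []
        for cvars, cclauses in comps:
            if cvars & vs:
                vs |= cvars
                merged += cclauses
            else:
                rest.append((cvars, cclauses))
        comps = rest + [(vs, merged)]
    return [tuple(cclauses) for _, cclauses in comps]

@lru_cache(maxsize=None)
def pr(cnf):
    """DPLL + caching + component analysis, uniform distribution."""
    if any(clause == () for clause in cnf):
        return Fraction(0)
    if not cnf:
        return Fraction(1)
    comps = components(cnf)
    if len(comps) > 1:                 # F = G ∧ H over disjoint variables
        p = Fraction(1)
        for comp in comps:
            p *= pr(comp)
        return p
    x = cnf[0][0][0]
    return (pr(condition(cnf, x, False)) + pr(condition(cnf, x, True))) / 2

# F = (x ∨ y)(x ∨ u ∨ w)(¬x ∨ u ∨ w ∨ z): under x = 0 it splits into
# the disjoint components y and (u ∨ w), which are multiplied.
F = ((("x", True), ("y", True)),
     (("x", True), ("u", True), ("w", True)),
     (("x", False), ("u", True), ("w", True), ("z", True)))
print(pr(F))  # 5/8
```

The cache makes the trace a DAG (FBDD); the component split adds the decomposable AND-nodes that make it a Decision-DNNF.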

  13. Components & Decision-DNNF • F = (x ∨ y) ∧ (x ∨ u ∨ w) ∧ (¬x ∨ u ∨ w ∨ z) • [Figure: the trace with decision nodes and a decomposable AND-node splitting y ∧ (u ∨ w).] • The trace is a Decision-DNNF [Huang-Darwiche ’05, ’07]: an FBDD plus “decomposable” AND-nodes (the two sub-DAGs of an AND-node do not share variables) • How much power do they add?

  14. New Conversion Theorem • Theorem: a decision-DNNF for F of size N ⇒ an FBDD for F of size N^(log N + 1) • If F is a k-DNF or k-CNF, the FBDD is of size N^k • The conversion algorithm runs in time linear in the size of its output

  15. Decomposable Logic Decision Diagrams (DLDDs) • Generalization of Decision-DNNFs: not just decomposable AND-nodes • Also NOT nodes, and decomposable binary OR, XOR, etc. (the sub-DAGs of each such node are labelled by disjoint sets of variables) • Theorem: the conversion works even for DLDDs

  16. Implications • Many previous exponential lower bounds for 1-BPs/FBDDs • 2^Ω(n) lower bounds for certain 2-DNF formulas based on combinatorial designs [Bollig-Wegener 00] [Wegener 02] • Our conversion theorem implies 2^Ω(n) bounds for decision-DNNF size and hence for SAT-solver based exact model counters

  17. Outline • Review of DPLL-based algorithms for #SAT • Extensions (Caching & Component Analysis) • Knowledge Compilation (FBDD & Decision-DNNF) • Decision-DNNF to FBDD conversion theorem • Implications of the conversion • Applications • Probabilistic databases • Separation between Lifted vs Grounded Inference • Proof sketch for Conversion Theorem • Open Problems

  18. Applications of exact model counters • Finite model theory: • First order formulas over finite domains • Bayesian inference • Statistical relational models • Combinations of logic and probability • Probabilistic databases • Monotone restrictions of statistical relational models

  19. Relational Databases • Boolean query Q: ∃x ∃y AsthmaPatient(x) ∧ Friend(x, y) ∧ Smoker(y)

  20. Probabilistic Databases • Tuples are probabilistic (and independent), e.g. “Ann” is present with probability 0.3 • Boolean query Q: ∃x ∃y AsthmaPatient(x) ∧ Friend(x, y) ∧ Smoker(y) • Assign unique variables to tuples, e.g. Pr(x1) = 0.3 • [Figure: tables AsthmaPatient, Friend, Smoker with tuple variables x1, x2, y1, y2, y3, z1, z2 and tuple probabilities 0.3, 0.5, 0.9, 1.0, 0.5, 0.1, 0.7] • Boolean formula F_Q,D = (x1 ∧ y1 ∧ z1) ∨ (x1 ∧ y2 ∧ z2) ∨ (x2 ∧ y3 ∧ z2) • Q is true on D ⇔ F_Q,D is true • What is the probability that Q is true on D?

  21. Probabilistic Databases • Query probability computation = model counting: compute Pr(F_Q,D) given Pr(x1), Pr(x2), … • A monotone database query Q yields a monotone k-DNF F_Q,D • Boolean formula F_Q,D = (x1 ∧ y1 ∧ z1) ∨ (x1 ∧ y2 ∧ z2) ∨ (x2 ∧ y3 ∧ z2) • Q is true on D ⇔ F_Q,D is true
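Computing Pr(F_Q,D) by direct summation over possible worlds makes the connection to model counting explicit. Pr(x1) = 0.3 is from the slides; the other tuple probabilities below are hypothetical stand-ins for the values in the example tables:

```python
from itertools import product

def formula_probability(f, probs):
    """Pr(f) over independent tuple variables, by summing the probability
    of every possible world (exponential; for illustration only)."""
    total = 0.0
    names = list(probs)
    for bits in product([False, True], repeat=len(names)):
        world = dict(zip(names, bits))
        weight = 1.0
        for v in names:
            weight *= probs[v] if world[v] else 1.0 - probs[v]
        if f(world):
            total += weight
    return total

# Lineage of Q on D: F_Q,D = (x1 ∧ y1 ∧ z1) ∨ (x1 ∧ y2 ∧ z2) ∨ (x2 ∧ y3 ∧ z2)
F = lambda a: (a["x1"] and a["y1"] and a["z1"]) or \
              (a["x1"] and a["y2"] and a["z2"]) or \
              (a["x2"] and a["y3"] and a["z2"])

# Pr(x1) = 0.3 is from the slide; the remaining values are hypothetical
probs = {"x1": 0.3, "x2": 0.5, "y1": 0.5, "y2": 1.0, "y3": 0.7,
         "z1": 0.9, "z2": 0.1}
print(formula_probability(F, probs))  # ≈ 0.176 for these numbers
```

The same value falls out of inclusion-exclusion over the three conjuncts — which is exactly the kind of “lifted” computation the dichotomy on the next slide exploits.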

  22. A class of DB examples • H1(x,y) = R(x)∧S(x,y) ∨ S(x,y)∧T(y) • Hk(x,y) = R(x)∧S1(x,y) ∨ … ∨ Si(x,y)∧Si+1(x,y) ∨ … ∨ Sk(x,y)∧T(y); call its disjuncts h_k^0, …, h_k^k • Dichotomy Theorem [Dalvi, Suciu 12]: model counting a Boolean combination of h_k^0, …, h_k^k is either • #P-hard, e.g. Hk, or • polynomial-time computable using “lifted inference” (inclusion-exclusion), e.g. (h_3^0 ∨ h_3^2) ∧ (h_3^0 ∨ h_3^3) ∧ (h_3^1 ∨ h_3^3) • and there is a simple condition to tell which case holds

  23. New Lower Bounds • Theorem: any Boolean function f of h_k^0, …, h_k^k that depends on all of them requires FBDD(f) = 2^Ω(n), which implies decision-DNNF(f) = 2^Ω(√n), and decision-DNNF(f) = 2^Ω(n/k) if f is monotone • Corollary: SAT-solver based exact model counting requires 2^Ω(n) time even on probabilistic DB instances that have polynomial-time algorithms using “lifted inference”

  24. “Lifted” vs “Grounded” Inference • “Grounded” inference: work with propositional groundings of the first-order expressions given by the model • “Lifted” inference: work with the first-order formulas and do higher-level calculations • Folklore sentiment: lifted inference is strictly stronger than grounded inference • Our examples give the first clear proof of this

  25. Outline • Review of DPLL-based algorithms for #SAT • Extensions (Caching & Component Analysis) • Knowledge Compilation (FBDD & Decision-DNNF) • Decision-DNNF to FBDD conversion theorem • Implications of the conversion • Applications • Probabilistic databases • Separation between Lifted vs Grounded Inference • Proof sketch for Conversion Theorem • Open Problems

  26. Proof of Simulation • Efficient construction: decision-DNNF of size N → FBDD of size N^(log N + 1), or of size N^k if the decision-DNNF represents a k-DNF

  27. Decision-DNNF → FBDD • Convert decomposable AND-nodes to decision-nodes while representing the same formula F

  28. First attempt • [Figure: an AND-node with sub-DAGs G and H becomes an FBDD in which the 1-sinks of G are rewired to the root of H.] • G and H do not share variables, so every variable is still tested at most once on any path

  29. But, what if sub-DAGs are shared? • [Figure: two AND-nodes share the sub-DAG H; rewiring both of their other sub-DAGs (G and g′) into H creates a conflict.]

  30. Obvious Solution: Replicate Nodes • Give each AND-node its own copy of the shared sub-DAG: no conflict, so the original idea applies • But replication may need to be recursive — and can cause exponential blowup!

  31. Main Idea: Replicate the Smaller Sub-DAG • [Figure: an AND-node with a smaller and a larger sub-DAG; edges from other nodes in the decision-DNNF enter the sub-DAGs from outside.] • Each AND-node creates a private copy of its smaller sub-DAG

  32. Light and Heavy Edges • At an AND-node, the edge to the smaller sub-DAG is the light edge; the edge to the larger sub-DAG is the heavy edge • Each AND-node creates a private copy of its smaller sub-DAG • Recursively, each node u is replicated once for each time it lies in a smaller sub-DAG: #copies of u = #sequences of light edges leading to u

  33. Quasipolynomial Conversion • L = max #light edges on any path • Each light edge enters the smaller side of a split, so the size at least halves along every light edge: N = N_small + N_big ≥ 2·N_small ≥ … ≥ 2^L, hence L ≤ log N • #Copies of each node ≤ N^L ≤ N^(log N) ⇒ #nodes in the FBDD ≤ N · N^(log N) = N^(log N + 1) • We also show that our analysis is tight

  34. Polynomial Conversion for k-DNFs • L = max #light edges on any path ≤ k − 1 • #Nodes in the FBDD ≤ N · N^L ≤ N^k

  35. Summary • Quasi-polynomial conversion of any decision-DNNF or DLDD into an FBDD (polynomial for k-DNF or k-CNF) • Exponential lower bounds on model counting algorithms • Applications in probabilistic databases involving simple 2-DNF formulas where lifted inference is exponentially better than propositional model counting

  36. Separation Results • FBDD: decision-DAG; each variable is tested at most once along any path • Decision-DNNF: FBDD + decomposable AND-nodes (disjoint sub-DAGs) • AND-FBDD: FBDD + AND-nodes (not necessarily decomposable) [Wegener ’00] • d-DNNF: decomposable AND-nodes + OR-nodes whose sub-DAGs are not simultaneously satisfiable [Darwiche ’01, Darwiche-Marquis ’02] • [Figure: containment diagram — FBDD ⊆ Decision-DNNF, and Decision-DNNF ⊆ d-DNNF and ⊆ AND-FBDD] • Exponential separation: there are functions with poly-size AND-FBDDs or d-DNNFs but an exponential lower bound on decision-DNNF size

  37. Open Problems • A polynomial conversion of decision-DNNFs to FBDDs? • We have some examples we believe require quasipolynomial blow-up • What about SDDs [Darwiche 11] ? • Other syntactic subclasses of d-DNNFs? • Approximate model counting?

  38. Thank You Questions?
