A SAT-Based Approach to Abstraction Refinement in Model Checking


Presentation Transcript


  1. A SAT-Based Approach to Abstraction Refinement in Model Checking Bing Li, Chao Wang and Fabio Somenzi University of Colorado at Boulder BMC 2003

  2. Background
  • Symbolic Model Checking
    • BDD-based fix-point computation, good for both proving and disproving [Burch et al. 1990]
    • CNF-based BMC, only good for disproving [Biere et al. 1999]
  • Using CNF/SAT to prove properties
    • Replace BDDs with CNF in the fix-point computation [Abdulla et al. 2000] [Williams et al. 2000] [McMillan 2002]
    • Develop better termination criteria for BMC: simple path [Sheeran et al. 2000], reverse sequential depth [McMillan CAV'03]
  • Abstraction and Refinement [Kurshan 1994]
    • BDD + BDD … [Clarke et al. 2000] [Barner et al. 2002] …
    • BDD + ATPG/SAT [Wang et al. 2001] [Clarke et al. 2002] [Chauhan et al. 2002] [Wang et al. 2003] [McMillan and Amla 2003]
    • SAT + SAT? This paper.

  3. An Example
  • BMC is good at bug-hunting, but not good at proving.
  • Termination bounds on the concrete model:
    • Longest simple path [Sheeran et al.]: (n+1) for forward, (n/2) for backward
    • Reverse sequential depth [McMillan]: (n/2)
  • How may abstraction help? On the abstract model of this example the longest simple path is 3 and the reverse sequential depth is 1 (no guarantee in general, though!).
  (Figure: a concrete state graph with states A, C, B0…Bn-1, D0…Dn-1 and its abstraction with states a, c, b0, b1, d0, d1.)
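  The point of the example can be made concrete with a small script (an illustrative toy, not the paper's circuit): the depth to which the simple-path criterion must unroll is bounded by the longest simple path, and a good abstraction can shrink that bound from something proportional to n down to a small constant.

    # Toy illustration: longest simple path in a concrete chain-like state
    # graph vs. its abstraction.  Graphs are adjacency dicts; brute-force DFS
    # is fine at this size.  Names and graphs are illustrative only.

    def longest_simple_path(graph, start):
        """Length, in edges, of the longest simple path starting at `start`."""
        best = 0
        def dfs(node, visited, length):
            nonlocal best
            best = max(best, length)
            for succ in graph.get(node, []):
                if succ not in visited:
                    dfs(succ, visited | {succ}, length + 1)
        dfs(start, {start}, 0)
        return best

    n = 8
    concrete = {i: [i + 1] for i in range(n)}         # chain of n+1 states
    abstract = {"a": ["b0"], "b0": ["b1"], "b1": []}  # 3-state abstraction
    print(longest_simple_path(concrete, 0))    # 8: grows with n
    print(longest_simple_path(abstract, "a"))  # 2: independent of n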

  4. Our Approach – PureSAT
  • What to expect?
    • Win on large/complex abstract models
    • Complement BDD+SAT (not beat it)
    • (These conjectures are supported by our experimental results)
  • Eventually: switch between PureSAT and BDD+SAT, based on what kind of model we are dealing with and what stage of the proof we are in.
  (Flowchart: start → initial abstraction → SAT on the abstraction; if no simple path exists, return True; if a counterexample is found, run SAT on the concrete model; a concrete counterexample returns False, otherwise refine the abstraction and repeat.)

  5. Preliminaries
  • Model as an open system M = ⟨V, W, I, T⟩
    • I(V): initial state predicate
    • T(V, W, V'): transition relation (a conjunction of gate relations)
    • P(V): invariant property (a linear-time safety property)
  • Important concepts
    • S is reachable in k steps from S' iff S'(V_0) \wedge \bigwedge_{i=0}^{k-1} T(V_i, W_i, V_{i+1}) \wedge S(V_k) is satisfiable.
    • S and S' are connected by a simple path of length k iff, in addition, the unrolled states are pairwise distinct: \bigwedge_{0 \le i < j \le k} V_i \ne V_j.
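  To make the two predicates concrete, here is a brute-force evaluation of the k-step reachability and simple-path formulas on a toy 2-bit counter (assumed for illustration only); a real tool would encode exactly the same constraints into CNF and hand them to a SAT solver.

    # Brute-force check of the reachability / simple-path formulas on a toy
    # 2-bit counter (illustrative model, not from the paper).
    from itertools import product

    def I(v):                      # initial states: the counter starts at 00
        return v == (0, 0)

    def T(v, w, v2):               # increment modulo 4 when input w is 1
        step = (v[0] * 2 + v[1] + w) % 4
        return v2 == (step // 2, step % 2)

    def reachable_in_k(target, k, simple=False):
        states = list(product((0, 1), repeat=2))
        for path in product(states, repeat=k + 1):
            if not I(path[0]) or not target(path[-1]):
                continue
            if simple and len(set(path)) != len(path):   # pairwise distinct states
                continue
            for inputs in product((0, 1), repeat=k):
                if all(T(path[i], inputs[i], path[i + 1]) for i in range(k)):
                    return True
        return False

    print(reachable_in_k(lambda v: v == (1, 1), 3))               # True: 00 -> 01 -> 10 -> 11
    print(reachable_in_k(lambda v: v == (1, 1), 5, simple=True))  # False: only 4 distinct states exist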

  6. Prove/Disprove Invariants
  • For each k ∈ ℕ, try to
    • Disprove → find a path of length k from I to ¬P
    • Prove → termination criteria, by checking the longest simple path (states are pairwise distinct)
  (Figure: unrolled paths starting in I and ending in ¬P.)

  7. Prove/Disprove Invariants (cont'd)
  • For each k ∈ ℕ, try to
    • Disprove → find such a path [Biere et al.]: a path of length k from I to ¬P exists iff
      I(V_0) \wedge \bigwedge_{i=0}^{k-1} T(V_i, W_i, V_{i+1}) \wedge \neg P(V_k) is satisfiable.
    • Prove → termination criteria, by checking the longest simple path [Sheeran et al.]:
      • forward: a simple path of length k starting in I exists iff I(V_0) \wedge \bigwedge_{i=0}^{k-1} T(V_i, W_i, V_{i+1}) \wedge \bigwedge_{0 \le i < j \le k} V_i \ne V_j is satisfiable;
      • backward: a simple path of length k ending in ¬P is characterized analogously, with \neg P(V_k) in place of I(V_0).
    • When no such simple path exists and no counterexample of length up to k has been found, the property is proved.
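  The "all unrolled states pairwise distinct" side condition is the only non-standard part of these queries. A common way to put it into CNF (a sketch under assumed variable numbering, not the paper's encoder) introduces one auxiliary "differs" literal per latch per pair of time frames:

    def distinct_states_clauses(state_vars, fresh):
        """state_vars[i][b]: DIMACS variable of latch b at time step i.
        fresh(): returns a new auxiliary variable number.
        Returns CNF clauses forcing every pair of time steps to differ in at
        least one latch (the 'simple path' side condition)."""
        clauses = []
        steps = len(state_vars)
        for i in range(steps):
            for j in range(i + 1, steps):
                diff_bits = []
                for a, b in zip(state_vars[i], state_vars[j]):
                    d = fresh()                    # d <-> (a XOR b)
                    clauses += [[-d, a, b], [-d, -a, -b], [d, -a, b], [d, a, -b]]
                    diff_bits.append(d)
                clauses.append(diff_bits)          # steps i and j differ somewhere
        return clauses

    # Example: 3 time frames of a 2-latch design using variables 1..6.
    counter = iter(range(7, 1000))
    cnf = distinct_states_clauses([[1, 2], [3, 4], [5, 6]], lambda: next(counter))
    print(len(cnf))   # 27 clauses: 9 per pair of time frames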

  8. Abstraction
  • Bounded concrete model: I(V_0) \wedge \bigwedge_{i=0}^{k-1} T(V_i, W_i, V_{i+1})
  • Bounded abstract model: the same unrolling with T replaced by \hat{T}, the conjunction of a subset of the gate relations; the latches dropped from the abstraction become unconstrained inputs.
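  Since T is a conjunction of per-gate relations (slide 5), the bounded abstract model can be obtained by simply omitting the conjuncts of the abstracted-away latches. The sketch below shows the idea on hand-written toy clauses; the names and clauses are illustrative assumptions, not the tool's data structures.

    def abstract_transition(gate_clauses, kept_latches):
        """gate_clauses: dict mapping each latch/gate name to the CNF clauses
        of its transition conjunct (one unrolled time frame).
        kept_latches: the set of names retained in the abstract model."""
        kept = []
        for name, clauses in gate_clauses.items():
            if name in kept_latches:
                kept.extend(clauses)
            # else: dropped, so the variable it drives becomes a free input
        return kept

    # Illustrative toy: latch 'a' kept, latch 'b' abstracted away.
    toy = {"a": [[1, -3], [-1, 3]],      # encodes 3 <-> 1 (a' <-> a), say
           "b": [[2, -4], [-2, 4]]}
    print(abstract_transition(toy, {"a"}))   # only a's clauses survive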

  9. Abstraction (cont'd)
  • (Over-approximated) abstraction: since \hat{T} consists of a subset of the conjuncts of T, every behavior of the concrete model is also a behavior of the abstract model.
  • Conservative results:
    • True positive: if the property holds on the abstract model, it holds on the concrete model.
    • False negative: an abstract counterexample may be spurious, i.e., have no concrete counterpart.
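  One way to spell out why the results are conservative (assuming, as above, that \hat{T} keeps a subset of T's conjuncts): every satisfying assignment of the concrete bounded instance also satisfies the abstract one,

    I(V_0) \wedge \bigwedge_{i=0}^{k-1} T(V_i, W_i, V_{i+1}) \wedge \neg P(V_k)
    \;\Longrightarrow\;
    I(V_0) \wedge \bigwedge_{i=0}^{k-1} \hat{T}(V_i, W_i, V_{i+1}) \wedge \neg P(V_k)

  so an unsatisfiable abstract instance (property proved on the abstraction) forces the concrete instance to be unsatisfiable as well, while a satisfiable abstract instance may still have no concrete counterpart.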

  10. PureSAT Algorithm
    boolean PureSAT(M, P) {
      L = 0;
      A = CreateInitialAbstraction(M, P);
      while (true) {
        if (!ExistSimplePath(A, L))
          return TRUE;
        if (ExistCex(A, P, L)) {
          if (ExistCex(M, P, L))
            return FALSE;
          refinement = GetRefinementFromCA(…);
          A = AddRefinementToAbsModel(A, refinement);
        }
        L = L + 1;
      }
    }
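  The same control flow, restated as a Python sketch with the four SAT queries passed in as callables. This is a structural restatement for readability, not the authors' implementation.

    def pure_sat(concrete, make_initial_abstraction, exist_simple_path,
                 exist_cex, get_refinement_from_core, add_refinement):
        """Returns True if the property holds, False if a real counterexample
        of some length L is found.  Each callable wraps one SAT query."""
        abstract = make_initial_abstraction(concrete)
        length = 0
        while True:
            if not exist_simple_path(abstract, length):
                return True            # no simple path this long: property proved
            if exist_cex(abstract, length):
                if exist_cex(concrete, length):
                    return False       # the counterexample is concrete: property fails
                # all length-L abstract counterexamples are spurious: refine
                refinement = get_refinement_from_core(concrete, abstract, length)
                abstract = add_refinement(abstract, refinement)
            length += 1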

  11. Refinement – Problem Statement
  • The length-L BMC instance is satisfiable on the abstract model, but unsatisfiable on the concrete model (the abstract counterexamples are spurious).
  • Find a refinement set of state variables whose addition makes the refined abstract instance unsatisfiable as well.

  12. Refinement – UNSAT Proof
  • Related algorithms
    • Compute an UNSAT proof/core [Goldberg and Novikov 2003] [Zhang and Malik 2003]
    • Traverse the conflict dependency graph [Chauhan et al. 2002]
  • Our approach
    • Find the state variables appearing in the conflict dependency graph of the unsatisfiable concrete BMC run.
    • Be cautious: not all of them are necessary for the refinement. In the example, add "v4" but don't add "v11"!
  (Figure: fragment of a conflict dependency graph containing v4 and v11.)
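  The paper obtains candidate variables by traversing zChaff's conflict dependency graph. A rough stand-in with today's off-the-shelf tools (an assumption on my part, not the authors' code) is to guard each candidate latch's transition conjunct with a selector literal and read the solver's assumption-based UNSAT core; the sketch assumes the PySAT package.

    # Assumes the PySAT package (pip install python-sat); selector-based cores
    # approximate, but do not reproduce, a conflict-dependency-graph traversal.
    from pysat.solvers import Glucose3

    def core_latches(base_clauses, latch_clauses):
        """latch_clauses: dict latch_name -> CNF clauses of its transition
        conjunct.  Returns the latches whose conjuncts the UNSAT core uses."""
        every = list(base_clauses) + [c for cs in latch_clauses.values() for c in cs]
        top = max(abs(lit) for clause in every for lit in clause)
        solver = Glucose3(bootstrap_with=base_clauses)
        selectors = {}
        for name, clauses in latch_clauses.items():
            top += 1
            selectors[top] = name
            for clause in clauses:
                solver.add_clause(clause + [-top])   # selector=True enables the clause
        assert not solver.solve(assumptions=list(selectors))  # instance must be UNSAT
        return {selectors[abs(lit)] for lit in solver.get_core()}

    # Tiny demo: clause {1} conflicts with latch 'a' (clause {-1}); 'b' is innocent.
    print(core_latches([[1]], {"a": [[-1]], "b": [[2]]}))   # typically {'a'} (cores need not be minimal)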

  13. Refinement – Gradually Adding Variables
  • The UNSAT core is neither minimum nor minimal, but we want the refinement set to be as small as possible (heuristics).
  • Gradually add variables to the refinement set until it becomes "sufficient", i.e., no length-L abstract counterexample remains; see the sketch below.
    • Add v4 and v5; if still not sufficient, add v6.
  • Then, greedily minimize the refinement set.
  (Figure: candidate variables v4, v5, v6.)
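  A control-flow sketch of the "grow until sufficient" step referred to above; the SAT check is hidden behind a callable, and the batch size and names are illustrative.

    def grow_refinement(candidates, abstract_cex_exists, batch=2):
        """candidates: ranked list of candidate latches (e.g., [v4, v5, v6, ...]).
        abstract_cex_exists(refinement): True iff the abstract model refined
        with `refinement` still has a length-L counterexample."""
        refinement = []
        i = 0
        while abstract_cex_exists(refinement):
            if i >= len(candidates):
                raise RuntimeError("candidates exhausted; the core was insufficient")
            refinement.extend(candidates[i:i + batch])   # e.g., add v4 and v5, then v6, ...
            i += batch
        return refinement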

  14. Refinement Minimization
  • Greedily drop redundant variables, one at a time:
    • Drop a variable v and check again whether abstract counterexamples exist.
    • If the instance is still UNSAT, v is redundant; otherwise, add v back.
  • The order of this testing is important; we rank the variables first by the relative correlation of v to the abstract model,
      corr(v) = Ncommon / Nv,
    where Ncommon is the number of gates under v that are also in the abstract model and Nv is the total number of gates under v.
  (Figure: variables V4 and V5 within one time step.)
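  A matching sketch of the greedy minimization; the ranking uses corr(v) = Ncommon / Nv as defined above, but which end of the ranking is tried for removal first is a heuristic guess on my part rather than the paper's exact rule.

    def minimize_refinement(refinement, corr, still_unsat_without):
        """refinement: a sufficient set of latches.
        corr: dict latch -> Ncommon / Nv (share of the gates under the latch
        that already belong to the abstract model).
        still_unsat_without(latch, rest): True iff the length-L abstract BMC
        instance stays UNSAT when `latch` is left out of the refinement."""
        kept = set(refinement)
        # Heuristic order: try to drop the least-correlated variables first.
        for v in sorted(refinement, key=lambda x: corr[x]):
            if v in kept and still_unsat_without(v, kept - {v}):
                kept.discard(v)        # v is redundant; leave it out
        return kept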

  15. Comparison to Existing Methods
  • Comparison to [Chauhan et al. 2002]
    • Common: traversal of the conflict dependency graph; refinement minimization.
    • Differences:
      • all length-L counterexamples vs. the prefix of a single counterexample (up to the "failure index" step);
      • refinement variables taken from all time steps vs. only from the failure-index time step;
      • minimization guided by the "relative correlation" of each variable vs. no such ranking.
  • Comparison to [McMillan and Amla 2003]
    • Common: both kill all the counterexamples in the (unconstrained) BMC instance.
    • Differences:
      • an incremental refinement set vs. a whole new abstraction built from scratch;
      • length-L counterexamples vs. counterexamples of (potentially) multiple lengths (≤ L);
      • refinement minimization to control the abstraction size vs. none;
      • SAT + SAT vs. BDD + SAT.

  16. Experimental Setup
  • We compared PureSAT to the following algorithms:
    • BMC: an implementation of BMC [Biere et al. 1999]
    • SSS: BMC extended with the simple-path checks [Sheeran et al. 2000]
    • Grab: an abstraction refinement algorithm with BDD+SAT [Wang et al. 2003]
  • All are implemented in VIS-2.0, with CUDD and zChaff, and run on a 1.7 GHz Pentium 4 with 2 GB of RAM.
  • 26 test cases (Verilog models + safety properties): 19 from industry, 6 from the VIS verification benchmarks [http://vlsi.colorado.edu/~vis], and 1 model called "lsp" (with a true property).
    • lsp has 12 latches and 1057 reachable states; its longest simple path has length 1056.
    • BMC and SSS failed to prove it (as expected); Grab proved it in 1 second (as expected); PureSAT also proved it in 1 second.

  17. Experimental Results (results table/plots not captured in this transcript)

  18. Conclusions and Future Work
  • Conclusions
    • PureSAT is competitive and promising:
      • for passing properties, PureSAT is better than both BMC and SSS;
      • for failing properties, BMC is the best, and PureSAT is better than Grab;
      • PureSAT tends to win on large/complex abstract models.
    • For PureSAT and Grab, the two sets of failures are disjoint.
  • Future Work
    • The major problem is still termination detection.
    • Use an incremental SAT solver to carry more information from the abstraction to the concrete model.
    • Adopt techniques like [Kang and Park 2003] [McMillan CAV'03] on the abstraction.
