
# Optimized L*-based Assume-Guarantee Reasoning


##### Presentation Transcript

1. Optimized L*-based Assume-Guarantee Reasoning. Sagar Chaki, Ofer Strichman. March 27, 2007.

2. Motivation: Reasoning by Decomposition • Let M1 and M2 be two NFAs • Let p be a property expressed as an NFA • Is L(M1 × M2) ⊆ L(p)? • (Our notation: M1 × M2 ⊨ p) • Q: What if this is too hard to compute? • A: Decompose.

3. Assume-Guarantee Reasoning • An Assume-Guarantee rule: from A × M1 ≼ p and M2 ≼ A, conclude M1 × M2 ≼ p • M1 and M2 are NFAs with alphabets Σ1 and Σ2 • This rule is sound and complete for ≼ being trace containment, simulation, etc. • There always exists such an assumption A (e.g., M2 itself) • We need to find an A such that M1 × A is easier to compute than M1 × M2 • Equivalently, with the negated property: from the emptiness of A × (M1 × ¬p) and M2 ≼ A, conclude the emptiness of (M1 × ¬p) × M2.

4. Learning the Assumption • Q: How can we find such an assumption A? • A: Learn it with L* • The L* algorithm is due to Angluin • It was later improved by Rivest & Schapire – this is the version we use.

5. The L* Algorithm [Diagram: L* querying a teacher for an unknown regular language U] • Membership query: is s ∈ U? – answered yes/no • Candidate query: is L(A) = U? • yes – done: a DFA A s.t. L(A) = U • no – negative feedback: some σ ∈ L(A) − U, or positive feedback: some σ ∈ U − L(A) • L* finds the minimal A such that L(A) = U.
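To make the query protocol concrete, here is a minimal Python sketch of such a teacher. All names (`Dfa`, `Teacher`, `probes`) are illustrative assumptions, not the paper's code, and the candidate query uses a finite probe set as a stand-in for a real equivalence check:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional, Set, Tuple

@dataclass
class Dfa:
    delta: Dict[Tuple[str, str], str]   # (state, letter) -> state
    start: str
    accepting: Set[str]

    def accepts(self, word: Tuple[str, ...]) -> bool:
        state = self.start
        for letter in word:
            state = self.delta.get((state, letter))
            if state is None:           # missing transition: reject
                return False
        return state in self.accepting

class Teacher:
    """Answers the two kinds of queries L* asks about U."""

    def __init__(self, in_u: Callable[[Tuple[str, ...]], bool], probes):
        self.in_u = in_u                # membership oracle for U
        self.probes = probes            # finite stand-in for equivalence checking

    def membership(self, word: Tuple[str, ...]) -> bool:
        return self.in_u(word)          # is word in U?

    def candidate(self, dfa: Dfa) -> Optional[Tuple[str, ...]]:
        # None means "L(A) = U as far as we can tell"; otherwise the returned
        # word is negative feedback (in L(A) - U) or positive feedback
        # (in U - L(A)).
        for w in self.probes:
            if dfa.accepts(w) != self.in_u(w):
                return w
        return None
```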

6. • M1 × M2 ⊨ p is the same as the emptiness of (M1 × ¬p) × M2 • Trying to distinguish between: [Venn diagrams: L(M1 × ¬p) and L(M2) disjoint when M1 × M2 ⊨ p; overlapping when M1 × M2 ⊭ p]

7. On the way we can … • Find an assumption A such that L(M2) ⊆ L(A) ⊆ Σ* − L(M1 × ¬p) [Venn diagram over Σ*: L(A) contains L(M2) and is disjoint from L(M1 × ¬p); such an A is an 'acceptable' assumption] • Our HOPE: A is 'simpler' to represent than M2, i.e., |M1 × ¬p × A| << |M1 × ¬p × M2|.

8. How? • Learn the language U = Σ* − L(M1 × ¬p) • This is well defined, and we can construct a teacher for it • A membership query on a string σ is answered by simulating σ on M1 × ¬p (see the sketch below) [Venn diagram: the target A, with L(A) = U, contains L(M2) and excludes L(M1 × ¬p)]
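The membership query can be answered exactly as the slide says. The sketch below, with an illustrative `Nfa` class that is an assumption of this transcript rather than the paper's code, simulates σ on M1 × ¬p and answers σ ∈ U iff the simulation rejects:

```python
from typing import Dict, Set, Tuple

class Nfa:
    def __init__(self, start: Set[str],
                 delta: Dict[Tuple[str, str], Set[str]],
                 accepting: Set[str]):
        self.start = start              # set of initial states
        self.delta = delta              # (state, letter) -> successor states
        self.accepting = accepting

    def accepts(self, word: Tuple[str, ...]) -> bool:
        current = set(self.start)
        for letter in word:             # subset construction on the fly
            current = {q for s in current
                       for q in self.delta.get((s, letter), set())}
            if not current:
                return False
        return bool(current & self.accepting)

def membership_query(m1_not_p: Nfa, sigma: Tuple[str, ...]) -> bool:
    """sigma is in U = Sigma* - L(M1 x ¬p) iff M1 x ¬p rejects sigma."""
    return not m1_not_p.accepts(sigma)
```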

9. L* – when M1 × M2 ⊨ p • A candidate query: is A acceptable? [Venn diagram: the candidate A overlaps L(M1 × ¬p); a string σ witnesses the overlap] • Check σ ∈ L(M2) …

10. L* – when M1 × M2 ⊨ p • Check σ ∈ L(M2) … [Venn diagram: is σ inside L(M2)?] • If yes, σ is a real counterexample. Otherwise …

11. L* – when M1 × M2 ⊨ p [Venn diagram: σ lies in L(A) ∩ L(M1 × ¬p) but outside L(M2)] • L* receives negative feedback: σ should be removed from A. A matter of luck!

12. A-G with Learning [Flowchart; a code sketch follows below:]
• L* conjectures an assumption A
• Model check A × M1 ⊨ p:
  • false, with counterexample σ – check σ ∈ L(M2): if yes, conclude M1 × M2 ⊭ p; if no, σ is negative feedback to L*
  • true – check M2 ⊨ A:
    • true – conclude M1 × M2 ⊨ p
    • false, with counterexample σ – check σ ∈ L(M1 × ¬p): if yes, conclude M1 × M2 ⊭ p; if no, σ is positive feedback to L*
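A structural sketch of this loop, assuming every checker is injected as a callable; none of these names come from the paper, so this is a shape of the algorithm rather than its implementation:

```python
from typing import Callable, Optional, Tuple

# (ok, counterexample): ok=True means the premise holds.
Check = Callable[[object], Tuple[bool, Optional[tuple]]]

def ag_with_learning(conjecture: Callable[[], object],
                     holds_p1: Check,                       # A x M1 |= p ?
                     holds_p2: Check,                       # M2 |= A ?
                     in_m2: Callable[[tuple], bool],        # sigma in L(M2) ?
                     in_m1_not_p: Callable[[tuple], bool],  # sigma in L(M1 x ¬p) ?
                     negative_feedback: Callable[[tuple], None],
                     positive_feedback: Callable[[tuple], None]):
    """Returns (True, None) if M1 x M2 |= p, else (False, counterexample)."""
    while True:
        a = conjecture()                   # L* proposes an assumption A
        ok, sigma = holds_p1(a)            # premise 1
        if not ok:
            if in_m2(sigma):
                return False, sigma        # sigma also in M2: real violation
            negative_feedback(sigma)       # remove sigma from L(A)
            continue
        ok, sigma = holds_p2(a)            # premise 2
        if ok:
            return True, None              # both premises hold
        if in_m1_not_p(sigma):
            return False, sigma            # real violation
        positive_feedback(sigma)           # add sigma to L(A)
```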

13. This work • In this work we improve the A-G framework with three optimizations: • Feedback reuse – reduce the number of candidate queries • Lazy learning – reduce the number of membership queries • Incremental alphabet – reduce the size of A, the number of membership queries, and the number of conjectures • As a result: reduced overall verification time of component-based systems • We will discuss only the third optimization in detail.

14. Optimization 3: Incremental Alphabet • Choosing Σ = (Σ1 ∪ Σp) ∩ Σ2 always works • We call (Σ1 ∪ Σp) ∩ Σ2 the "full interface alphabet" • But there may be a smaller Σ that also works • We wish to find such a small Σ by iterative refinement (sketched below): • Start with Σ = ∅ • Is the current Σ adequate? • no – update Σ and repeat • yes – continue as usual
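The refinement loop itself is tiny. In this sketch, `adequate` is an assumed callable (not from the paper) that runs the A-G loop over the current alphabet and reports either success or the letters needed to eliminate the spurious counterexamples it saw:

```python
from typing import Callable, Set, Tuple

def refine_alphabet(full_alphabet: Set[str],
                    adequate: Callable[[Set[str]], Tuple[bool, Set[str]]]
                    ) -> Set[str]:
    sigma = set()                        # start with the empty alphabet
    while True:
        ok, needed = adequate(sigma)     # is the current alphabet adequate?
        if ok:
            return sigma                 # a sufficient, hopefully small, alphabet
        sigma |= needed & full_alphabet  # update and repeat
```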

15. Optimization 3: Incremental Alphabet • Claim: removing letters from the global alphabet over-approximates the product, i.e., decreasing Σ can only enlarge L(M) [Figure: two automata A and B with a- and b-transitions] • Example: if Σ = {a,b} then 'bb' ∉ L(A × B), but if Σ = {b} then 'bb' ∈ L(A × B) • A demonstration follows below.
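The sketch below illustrates the claim on a pair of hypothetical two-transition automata (the slide's own automata are not fully recoverable from the transcript). Components synchronize on letters in Σ and move independently on all other letters, so shrinking Σ can only make more joint states, and hence more traces, reachable:

```python
from typing import Dict, Set, Tuple

def reachable_pairs(d1: Dict[Tuple[str, str], str], s1: str,
                    d2: Dict[Tuple[str, str], str], s2: str,
                    letters: Set[str], sync: Set[str]) -> Set[Tuple[str, str]]:
    """Joint states reachable when the automata synchronize on `sync`
    and interleave on every other letter."""
    frontier, seen = [(s1, s2)], {(s1, s2)}
    while frontier:
        u, v = frontier.pop()
        for a in letters:
            succs = []
            if a in sync:
                if (u, a) in d1 and (v, a) in d2:       # joint move
                    succs.append((d1[u, a], d2[v, a]))
            else:
                if (u, a) in d1:                        # left moves alone
                    succs.append((d1[u, a], v))
                if (v, a) in d2:                        # right moves alone
                    succs.append((u, d2[v, a]))
            for pair in succs:
                if pair not in seen:
                    seen.add(pair)
                    frontier.append(pair)
    return seen

# Hypothetical example: A reads 'a' then 'b'; B reads a single 'b'.
A = {('p', 'a'): 'q', ('q', 'b'): 'r'}
B = {('x', 'b'): 'y'}
full = reachable_pairs(A, 'p', B, 'x', {'a', 'b'}, {'a', 'b'})
reduced = reachable_pairs(A, 'p', B, 'x', {'a', 'b'}, {'b'})
print(('r', 'y') in full)     # False: full alphabet blocks the joint run
print(('r', 'y') in reduced)  # True: the smaller alphabet over-approximates
```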

16. A-G with Learning [Flowchart: the loop of slide 12, with the feedback edges made explicit: a spurious σ from the M2 ⊨ A check is added to L(A); a spurious σ from the A × M1 ⊨ p check is removed from L(A)]

17. A-G with Learning [Same flowchart, now parameterized by the assumption alphabet: start with ΣA = ∅ and run L*(ΣA)]

18. Optimization 3: Check if σ ∈ L(M1 × ¬p) • We first check σ with the full alphabet Σ: [Venn diagram: σ falls inside L(M1 × ¬p)] • If it does – a real counterexample!

19. Optimization 3: Check if σ ∈ L(M1 × ¬p) • We first check σ with the full alphabet Σ • Then with the reduced alphabet ΣA: [Venn diagram: σ, over ΣA, falls outside L(M1 × ¬p)] • Positive feedback; proceed as usual.

20. Optimization 3: Check if σ ∈ L(M1 × ¬p) • We first check σ with the full alphabet Σ • Then with the reduced alphabet ΣA: [Venn diagram: σ, over ΣA, still falls inside L(M1 × ¬p)] • No positive feedback: σ is spurious, and we must refine ΣA.

21. Optimization 3: Refinement • There are various letters we can add to ΣA in order to eliminate σ • But adding a letter for each spurious counterexample is wasteful • Better: find a small set of letters that eliminates all the spurious counterexamples seen so far.

22. Optimization 3: Refinement • So we face the following problem: "Given a set of sets of letters, find the smallest set of letters that intersects all of them." • This is a minimum-hitting-set problem (see the sketch below).
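A brute-force sketch of the hitting-set step; it is exponential and merely stands in for the 0-1 ILP formulation the slides actually use:

```python
from itertools import chain, combinations
from typing import Iterable, Set

def minimum_hitting_set(sets: Iterable[Set[str]]) -> Set[str]:
    """Smallest set of letters intersecting every set in `sets`."""
    sets = [set(s) for s in sets]
    universe = sorted(set(chain.from_iterable(sets)))
    for k in range(len(universe) + 1):   # try sizes 0, 1, 2, ...
        for cand in combinations(universe, k):
            if all(set(cand) & s for s in sets):
                return set(cand)
    return set()

# Letters that would eliminate each of three spurious counterexamples:
print(minimum_hitting_set([{'a', 'b'}, {'b', 'c'}, {'c'}]))  # {'a', 'c'}
```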

23. Optimization 3: Refinement • A naïve solution: • For each counterexample, find the set of letters that eliminates it, by explicit traversal of M1 × ¬p • Then formulate the problem "find the smallest set of letters that intersects all these sets" as a 0-1 ILP problem.

24. Optimization 3: Incremental Alphabet • Alternative solution: integrate the two stages • Formulate the problem "find the smallest set of letters that eliminates all these counterexamples" directly as a 0-1 ILP problem.

25. Optimization 3: Incremental Alphabet [Figure: M1 × ¬p with states p, q, r, and a counterexample automaton with states x, y, z; transitions over letters a and b] • Introduce a variable for each state pair: (p,x), (p,y), … • Introduce a choice variable for each letter: A(a) and A(b) • Initial constraint: (p,x) – the initial state pair is always reachable • Final constraint: ¬(r,z) – the final state pair must be unreachable.

26. Optimization 3: Incremental Alphabet [Same figure as slide 25] • Some sample transition constraints: • (p,x) ∧ ¬A(a) ⟹ (q,x) • (p,x) ∧ ¬A(b) ⟹ (p,y) • (q,x) ⟹ (r,y), and (q,x) ∧ ¬A(b) ⟹ (r,x) ∧ (q,y) • Find a solution that minimizes A(a) + A(b) • In this case the minimum sets A(a) = A(b) = TRUE • Updated alphabet Σ = {a, b} • A brute-force sketch of this search follows below.
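These constraints can be handed to any 0-1 ILP solver; the sketch below instead brute-forces the same minimization over subsets of letters, using the interleaving/synchronization semantics from slide 15's sketch. The two automata are a hypothetical reconstruction of the figure, chosen so that, as on the slide, the minimum needs both letters:

```python
from itertools import combinations
from typing import Dict, Optional, Set, Tuple

Delta = Dict[Tuple[str, str], str]

def reachable(d1: Delta, s1: str, d2: Delta, s2: str,
              letters: Set[str], sync: Set[str]) -> Set[Tuple[str, str]]:
    """State pairs reachable when the automata synchronize on `sync`
    and interleave on every other letter."""
    frontier, seen = [(s1, s2)], {(s1, s2)}
    while frontier:
        u, v = frontier.pop()
        for a in letters:
            if a in sync:
                succs = ([(d1[u, a], d2[v, a])]
                         if (u, a) in d1 and (v, a) in d2 else [])
            else:
                succs = ([(d1[u, a], v)] if (u, a) in d1 else []) + \
                        ([(u, d2[v, a])] if (v, a) in d2 else [])
            for pair in succs:
                if pair not in seen:
                    seen.add(pair)
                    frontier.append(pair)
    return seen

def smallest_alphabet(d1: Delta, s1: str, d2: Delta, s2: str,
                      letters: Set[str],
                      bad: Tuple[str, str]) -> Optional[Set[str]]:
    """Minimize how many letters are added to the assumption alphabet so
    that the bad state pair becomes unreachable (the ILP's objective)."""
    for k in range(len(letters) + 1):
        for sync in combinations(sorted(letters), k):
            if bad not in reachable(d1, s1, d2, s2, letters, set(sync)):
                return set(sync)
    return None

# Hypothetical reconstruction of the figure: M1 x ¬p reads 'a' then 'b';
# the counterexample automaton reads 'b' then 'a'.
m1_not_p = {('p', 'a'): 'q', ('q', 'b'): 'r'}
cex = {('x', 'b'): 'y', ('y', 'a'): 'z'}
print(smallest_alphabet(m1_not_p, 'p', cex, 'x', {'a', 'b'}, ('r', 'z')))
# -> {'a', 'b'}: both choice variables are set, matching the slide
```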

27. Experimental Results: Overall

28. Experimental Results: Optimization 3

29. Experimental Results: Optimization 2

30. Experimental Results: Optimization 1

31. Related Work • NASA – the original work – Cobleigh, Giannakopoulou, Pasareanu et al. • Applications to simulation & deadlock • A symbolic approach – Alur et al. • A heuristic approach to optimization 3 – Gheorghiu et al.