
Learning Assumptions for Compositional Verification

Learning Assumptions for Compositional Verification. J. M. Cobleigh, D. Giannakopoulou and C. S. Pasareanu. Presented by: Sharon Shoham. Main limitation of model checking: the state explosion problem: the number of states in the system model grows exponentially with the number of variables.


Presentation Transcript


  1. Learning Assumptions for Compositional Verification. J. M. Cobleigh, D. Giannakopoulou and C. S. Pasareanu. Presented by: Sharon Shoham

  2. Main Limitation of Model Checking: The state explosion problem • The number of states in the system model grows exponentially with • the number of variables • the number of components in the system • One solution to the state explosion problem: Abstraction • Another: Compositional Verification

  3. Compositional Verification • Inputs: • composite system M1 ║ M2 • property P • Goal: check if M1 ║ M2 ⊨ P • First attempt: "divide and conquer" approach • Problem: it is usually impossible to verify each component separately • a component is typically designed to satisfy its requirements in specific environments (contexts)

  4. Compositional Verification • Assume-Guarantee (AG) paradigm: introduces assumptions representing a component's environment. Instead of: Does the component satisfy the property? Ask: Under assumption A on its environment, does the component guarantee the property? ⟨A⟩ M ⟨P⟩: whenever M is part of a system satisfying the assumption A, then the system must also guarantee P

  5. Useful AG Rule for Safety Properties • check that a component M1 guarantees P when it is part of a system satisfying assumption A: ⟨A⟩ M1 ⟨P⟩ • discharge the assumption: show that the remaining component M2 (the environment) satisfies A: ⟨true⟩ M2 ⟨A⟩ • conclude: ⟨true⟩ M1 ║ M2 ⟨P⟩, i.e. M1 ║ M2 ⊨ P

  6. Assume-Guarantee. The rule: ⟨A⟩ M1 ⟨P⟩ and ⟨true⟩ M2 ⟨A⟩ imply ⟨true⟩ M1 ║ M2 ⟨P⟩ • Crucial element: the assumption A • It has to be strong enough to eliminate violations of P, but also general enough to reflect the environment M2 appropriately • This requires non-trivial human input in defining assumptions. How can assumptions be constructed automatically?

  7. Outline • Motivation • Setting • Automatic Generation of Assumptions for the AG Rule • Learning algorithm • Assume-Guarantee with Learning • Example

  8. Labeled Transition Systems (LTS) • Act – set of observable actions • LTSs communicate using observable actions • τ – local / internal action. LTS M = (Q, q0, αM, δ) • Q : finite non-empty set of states • q0 ∈ Q : initial state • αM ⊆ Act : the alphabet of observable actions • δ ⊆ Q × (αM ∪ {τ}) × Q : transition relation
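The LTS definition above can be sketched as a small Python structure. This is an illustrative encoding of our own, not the paper's tooling; the example instance is the Input component used on the later example slides.

```python
# A minimal sketch of an LTS M = (Q, q0, alphabet, delta); names are ours.
from dataclasses import dataclass, field

TAU = "tau"  # the local/internal action; by convention not in the alphabet

@dataclass
class LTS:
    states: set      # Q  : finite non-empty set of states
    init: object     # q0 : initial state
    alphabet: set    # observable actions, a subset of Act
    delta: set = field(default_factory=set)  # transitions (q, a, q')

# The Input component from the slides: 0 -in-> 1 -send-> 2 -ack-> 0
Input = LTS(states={0, 1, 2}, init=0, alphabet={"in", "send", "ack"},
            delta={(0, "in", 1), (1, "send", 2), (2, "ack", 0)})
```

With this encoding, a trace such as ⟨in, send⟩ is just a sequence of actions that can be stepped through delta from the initial state.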

  9. Labeled Transition Systems (LTS) [Figure: an LTS with states 0, 1, 2 and transitions labeled in, send, ack] Trace of an LTS M : finite sequence of observable actions that M can perform starting at the initial state. Traces of the example: ⟨in⟩, ⟨in, send⟩, … L(M) = the Language of M : the set of all traces of M.

  10. Parallel Composition M1 ║ M2 • Components synchronize on common observable actions (communication). • The remaining actions are interleaved. M1 = (Q1, q01, αM1, δ1), M2 = (Q2, q02, αM2, δ2) ⟹ M1 ║ M2 = (Q, q0, αM, δ) • Q = Q1 × Q2 • q0 = (q01, q02) • αM = αM1 ∪ αM2

  11. Transition Relation δ. Synchronization on a ∈ αM1 ∩ αM2 : if (q1, a, q1′) ∈ δ1 and (q2, a, q2′) ∈ δ2, then ((q1, q2), a, (q1′, q2′)) ∈ δ. Interleaving on a ∈ (αM \ (αM1 ∩ αM2)) ∪ {τ} : • (q1, a, q1′) ∈ δ1 and a ∉ αM2 : ((q1, q2), a, (q1′, q2)) ∈ δ for any q2 ∈ Q2 • (q2, a, q2′) ∈ δ2 and a ∉ αM1 : ((q1, q2), a, (q1, q2′)) ∈ δ for any q1 ∈ Q1
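The two composition rules above can be sketched directly in Python. This is our own plain-tuple encoding (states, init, alphabet, delta), not the paper's tool; the usage example is the Input/Output pair from the next slides.

```python
# Compose two LTSs m = (states, init, alphabet, delta); delta holds (q, a, q').
def compose(m1, m2):
    q1s, i1, a1, d1 = m1
    q2s, i2, a2, d2 = m2
    shared = a1 & a2
    delta = set()
    for (q1, a, q1p) in d1:
        if a in shared:  # synchronization: both components move together
            delta.update({((q1, q2), a, (q1p, q2p))
                          for (q2, b, q2p) in d2 if b == a})
        else:            # local to m1 (or tau): interleave over every q2
            delta.update({((q1, q2), a, (q1p, q2)) for q2 in q2s})
    for (q2, a, q2p) in d2:
        if a not in shared:  # local to m2 (or tau)
            delta.update({((q1, q2), a, (q1, q2p)) for q1 in q1s})
    states = {(q1, q2) for q1 in q1s for q2 in q2s}
    return states, (i1, i2), a1 | a2, delta

# Input and Output from the example slides; shared actions: send, ack
Input = ({0, 1, 2}, 0, {"in", "send", "ack"},
         {(0, "in", 1), (1, "send", 2), (2, "ack", 0)})
Output = ({"a", "b", "c"}, "a", {"send", "out", "ack"},
          {("a", "send", "b"), ("b", "out", "c"), ("c", "ack", "a")})
states, init, alph, delta = compose(Input, Output)
```

Here send and ack are synchronized (one product transition each), while in and out are interleaved over the other component's three states.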

  12. Example [Figure: the Input LTS (states 0, 1, 2; actions in, send, ack) to be composed with the Output LTS (states a, b, c; actions send, out, ack)]

  13. [Figure: constructing Input ║ Output: the nine pair states (0,a) … (2,c), with the in, send, out and ack transitions induced by the rules of slide 11]

  14. [Figure: the complete Input ║ Output: interleaved in-transitions (0,q2) -in-> (1,q2), interleaved out-transitions (q1,b) -out-> (q1,c), and synchronized send (1,a) -send-> (2,b) and ack (2,c) -ack-> (0,a)]

  15. Safety Properties. Also expressed as LTSs, but of a special kind, a safety LTS: • Deterministic: • does not contain τ-transitions, and • every state has at most one outgoing transition for each action: (q, a, q′), (q, a, q″) ∈ δ ⟹ q′ = q″

  16. Safety Properties. For a safety LTS P: • L(P) describes the set of legal (acceptable) behaviors over αP. • For Σ ⊆ Act, σ↾Σ is the trace obtained from σ by removing all occurrences of actions a ∉ Σ. Example: ⟨in, send, ack⟩↾{in, ack} = ⟨in, ack⟩ • M ⊨ P iff ∀σ ∈ L(M) : (σ↾αP) ∈ L(P), i.e. language containment: L(M)↾αP ⊆ L(P)
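The projection operator and the per-trace satisfaction check can be sketched in a few lines (an illustration of ours; for simplicity it checks one trace at a time rather than whole languages):

```python
# sigma-projection: drop from the trace every action outside sigma
def project(trace, sigma):
    return tuple(a for a in trace if a in sigma)

# one trace of M satisfies P iff its projection onto alphaP is legal in P
def trace_satisfies(trace, alpha_p, legal_traces):
    return project(trace, alpha_p) in legal_traces

# The example from the slide:
print(project(("in", "send", "ack"), {"in", "ack"}))  # -> ('in', 'ack')
```

M ⊨ P then amounts to running trace_satisfies over every trace of M, which is exactly the language-containment reading above.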

  17. Example. Does Input ║ Output ⊨ Order, where Order is the property LTS with alphabet {in, out} and L(Order) = Pref(⟨in, out⟩*)? The traces of Input ║ Output are Pref(⟨in, send, out, ack⟩*); projected onto {in, out} they give Pref(⟨in, out⟩*), so the property holds.

  18. Model Checking M ⊨ P. Turn the safety LTS P into an error LTS Perr: • it "traps" violations with a special error state π. P = (Q, q0, αP, δ) ⟹ Perr = (Q ∪ {π}, q0, αP, δ′), where: • δ′ = δ ∪ {(q, a, π) | a ∈ αP and ∄q′ ∈ Q : (q, a, q′) ∈ δ} • The error LTS is complete. • π is a dead-end state: it has no outgoing transitions.
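The completion of P into Perr can be sketched as follows. PI is our name for the error state π, and the example property is the in/out alternation property Order from the next slide (state names s0, s1 are ours):

```python
PI = "pi"  # the special error state (a dead end: no outgoing transitions)

def to_error_lts(states, init, alphabet, delta):
    """Complete the safety LTS: every missing (q, a) move is trapped in PI."""
    delta_err = set(delta)
    for q in states:
        for a in alphabet:
            if not any(s == q and act == a for (s, act, _) in delta):
                delta_err.add((q, a, PI))
    return states | {PI}, init, alphabet, delta_err

# Order: in and out must strictly alternate, starting with in
order = ({"s0", "s1"}, "s0", {"in", "out"},
         {("s0", "in", "s1"), ("s1", "out", "s0")})
order_err = to_error_lts(*order)
```

The two added transitions, out from s0 and in from s1, are exactly the illegal moves that Orderr traps in π.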

  19. Example [Figure: the property Order (strictly alternating in, out) and its error LTS Orderr, in which the illegal out- and in-moves lead to the error state π]

  20. Model Checking M ⊨ P. In automata terms: M ⊨ P iff L(A_M ∩ A_¬P) = ∅. Theorem: • M ⊨ P iff π is unreachable in M ║ Perr. Remark: the composition M ║ Perr is defined as before, with a small exception: if the target state of a transition in Perr is π, so is the target state of the corresponding transitions in the composed system M ║ Perr.

  21. Transition Relation δ of M ║ Perr (δ1 : M, δ2 : Perr). Synchronization on a ∈ αM1 ∩ αM2 : (q1, a, q1′) ∈ δ1 and (q2, a, q2′) ∈ δ2 ⟹ ((q1, q2), a, (q1′, q2′)) ∈ δ, except that when q2′ = π the target of the composed transition is π itself. Interleaving on a ∈ (αM \ (αM1 ∩ αM2)) ∪ {τ} : • (q1, a, q1′) ∈ δ1 and a ∉ αM2 : ((q1, q2), a, (q1′, q2)) ∈ δ for any q2 ∈ Q2 • (q2, a, q2′) ∈ δ2 and a ∉ αM1 : ((q1, q2), a, (q1, q2′)) ∈ δ for any q1 ∈ Q1

  22. Example [Figure: Input ║ Output to be composed with the error LTS Orderr]

  23. Example [Figure: the reachable part of Input ║ Output, the cycle (0,a) -in-> (1,a) -send-> (2,b) -out-> (2,c) -ack-> (0,a), composed with Orderr]

  24. [Figure: the product Input ║ Output ║ Orderr over states (0,a,i) … (2,c,ii); the error state π is unreachable, so the property holds]

  25. [Figure: a variant of the composition in which the error state π is reachable: a counterexample!]

  26. Assume-Guarantee Reasoning. The rule: A ║ M1 ⊨ P and M2 ⊨ A imply M1 ║ M2 ⊨ P, i.e. ⟨A⟩ M1 ⟨P⟩ and ⟨true⟩ M2 ⟨A⟩ imply ⟨true⟩ M1 ║ M2 ⟨P⟩ • M1, M2 : LTSs; P, A : safety LTSs • Assumptions are also expressed as safety LTSs. • ⟨A⟩ M ⟨P⟩ is true iff A ║ M ⊨ P, i.e. π is unreachable in A ║ M ║ Perr
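Checking a premise ⟨A⟩ M ⟨P⟩ thus reduces to a reachability question on the product A ║ M ║ Perr. A minimal sketch of that check, as a plain BFS over a transition set in our own encoding:

```python
from collections import deque

def error_reachable(init, delta, error_state="pi"):
    """BFS over transitions (q, a, q'): can the error state be reached?"""
    seen, frontier = {init}, deque([init])
    while frontier:
        q = frontier.popleft()
        if q == error_state:
            return True
        for (s, _a, t) in delta:
            if s == q and t not in seen:
                seen.add(t)
                frontier.append(t)
    return False

# tiny illustration: from state 1, an 'in' falls into the error state
d = {(0, "in", 1), (1, "out", 0), (1, "in", "pi")}
```

Scanning the whole transition set per state is inefficient but keeps the sketch short; a real checker would index transitions by source state.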

  27. Outline • Motivation ✓ • Setting ✓ • Automatic Generation of Assumptions for the AG Rule • Learning algorithm • Assume-Guarantee with Learning • Example

  28. Observation 1. The rule: A ║ M1 ⊨ P and M2 ⊨ A imply M1 ║ M2 ⊨ P • The AG rule will return conclusive results with the weakest assumption Aw under which M1 satisfies P: • Under assumption Aw on M1's environment, M1 satisfies P, i.e. Aw ║ M1 ⊨ P • Weakest: for all environments M2′ : M1 ║ M2′ ⊨ P IFF M2′ ⊨ Aw • Aw is thus a sufficient and necessary assumption on M1's environment. ⟹ Given Aw, to check if M1 ║ M2 ⊨ P, check if M2 meets the assumption Aw.

  29. How to Obtain Aw? (M1 ║ M2′ ⊨ P IFF M2′ ⊨ Aw) • Given module M1, property P and M1's interface with its environment Σ = (αM1 ∪ αP) ∩ αM2 : Aw describes exactly all traces σ over Σ such that in the context of σ, M1 satisfies P • Aw is expressible as a safety LTS • It can be computed algorithmically. Drawback of using Aw : if the computation runs out of memory, no assumption is obtained.

  30. Observation 2 • There is no need to use the weakest environment assumption Aw. • The AG rule might be applicable with a stronger (less general) assumption. ⟹ Instead of computing Aw: • Use a learning algorithm to learn Aw. • Use the candidates Ai produced by the learning algorithm as candidate assumptions: try to apply the AG rule with Ai.

  31. Given a Candidate Assumption Ai. If Ai ║ M1 ⊨ P does not hold: assumption Ai is not tight enough ⟹ we need to strengthen Ai.

  32. Given a Candidate Assumption Ai. Suppose Ai ║ M1 ⊨ P holds: • If M2 ⊨ Ai also holds ⟹ M1 ║ M2 ⊨ P is verified! • Otherwise we get a counterexample σ ∈ L(M2) with (σ↾Σ) ∉ L(Ai). Is P violated by M1 in the context of cex = σ↾Σ, i.e. does cex ║ M1 ⊭ P? • Yes: real violation! ⟹ M1 ║ M2 ⊨ P is falsified! • No: spurious violation ⟹ we need a better approximation of Aw.

  33. Checking "cex ║ M1 ⊨ P?" (equivalently, "cex ∈ L(Aw)?"). Compose M1 with the LTS Acex over αΣ built from cex = a1, …, ak: Acex has Q = {q0, …, qk}, initial state q0, and δ = {(qi, ai+1, qi+1) | 0 ≤ i < k}. Model check Acex ║ M1 ⊨ P: is π reachable in Acex ║ M1 ║ Perr? If yes: real violation; if no: spurious cex. Alternatively: simulate cex on M1 ║ Perr; cex ║ M1 ⊨ P iff the simulation of cex on M1 ║ Perr cannot lead to the π (error) state.
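The second option above, simulating cex directly on M1 ║ Perr, can be sketched as follows (an illustration of ours, assuming for simplicity a deterministic product without internal moves):

```python
def cex_is_real(cex, init, delta, error_state="pi"):
    """Run the counterexample on M1 || Perr; real violation iff it hits pi."""
    q = init
    for a in cex:
        succs = [t for (s, act, t) in delta if s == q and act == a]
        if not succs:         # cex cannot even be executed here: spurious
            return False
        q = succs[0]          # deterministic by assumption
        if q == error_state:  # reached the error state: real violation
            return True
    return False

# tiny illustration: two consecutive 'in's violate in/out alternation
d = {(0, "in", 1), (1, "out", 0), (1, "in", "pi")}
```

Building Acex and model checking Acex ║ M1 ║ Perr gives the same verdict; the simulation just avoids constructing the extra product.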

  34. Assume-Guarantee Framework. The loop combines Learning (which produces candidates Ai) with Model Checking: 1. Check Ai ║ M1 ⊨ P. If false: use the counterexample to strengthen the assumption and return to Learning. If true: 2. Check M2 ⊨ Ai. If true: P holds in M1 ║ M2. If false, with counterexample cex ∉ L(Ai): is it a real error, i.e. does cex ║ M1 ⊭ P? If yes: P is violated in M1 ║ M2. If no: use the counterexample to weaken the assumption and return to Learning. For Aw, conclusive results are guaranteed ⟹ termination!

  35. Outline • Motivation ✓ • Setting ✓ • Automatic Generation of Assumptions for the AG Rule ✓ • Learning algorithm • Assume-Guarantee with Learning • Example

  36. Learning Algorithm for DFA – L* • L*: by Angluin, improved by Rivest & Schapire • learns an unknown regular language U • produces a Deterministic Finite state Automaton (DFA) C such that L(C) = U. DFA M = (Q, q0, αM, δ, F) : • Q, q0, αM, δ : as in a deterministic LTS • F ⊆ Q : accepting states • L(M) = {σ | δ(q0, σ) ∈ F}

  37. Learning Algorithm for DFA – L* • L* interacts with a Teacher that answers two types of questions: • Membership queries: is the string σ in U? • Conjectures (equivalence queries): for a candidate DFA Ci, is L(Ci) = U? • answers are true, or false together with a counterexample • The conjectures C1, C2, … converge to C.

  38. Learning Algorithm for DFA – L*. Myhill-Nerode Theorem: for every regular set U ⊆ Σ* there exists a unique minimal deterministic automaton whose states are isomorphic to the set of equivalence classes of the relation: w ≈ w′ iff ∀u ∈ Σ* : wu ∈ U ⇔ w′u ∈ U. Basic idea: learn the equivalence classes • Two prefixes are not in the same class iff there is a distinguishing suffix u.

  39. General Method • L* maintains a table that records whether strings in (S ∪ S·Σ)·E belong to U • it makes membership queries to update the table • Once all currently distinguishable strings are in different classes (the table is closed), it uses the table to build a candidate Ci and makes a conjecture: • if the Teacher replies true, done! • if the Teacher replies false, it uses the counterexample to update the table.

  40. Learning Algorithm for DFA – L*. Observation Table (S, E, T): • S ⊆ Σ* – prefixes (representatives of the equivalence classes / states) • E ⊆ Σ* – suffixes (distinguishing) • T : a mapping (S ∪ S·Σ)·E → {true, false}, recording for each string whether it is ∈ U or ∉ U.

  41. L* Algorithm. T : (S ∪ S·Σ)·E → {true, false}. (S, E, T) is closed if ∀s ∈ S, ∀a ∈ Σ, ∃s′ ∈ S, ∀e ∈ E : T(sae) = T(s′e), i.e. every row sa of S·Σ has a matching row s′ in S: sa is indistinguishable from s′ by any of the suffixes, and s′ represents the next state reached from s after seeing a.

  42. Learning Algorithm for DFA – L*. Initially: S = {λ}, E = {λ}. Loop { update T using membership queries; while (S, E, T) is not closed { add sa to S, where sa has no matching row (representative prefix) s′ in S; update T using membership queries } from the closed (S, E, T) construct a candidate DFA C; present an equivalence query: L(C) = U? }

  43. Learning Algorithm for DFA – L*. Candidate DFA from a closed (S, E, T): • States: S • Initial state: λ • Accepting states: s ∈ S such that T(s) [= T(s·λ)] = true • Transition relation: δ(s, a) = s′ where ∀e ∈ E : T(sae) = T(s′e) [such an s′ exists since the table is closed: ∀s ∈ S, ∀a ∈ Σ, ∃s′ ∈ S, ∀e ∈ E : T(sae) = T(s′e)]
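The closedness test and the candidate construction can be sketched over an observation table keyed by whole strings (tuples over Σ; λ is the empty tuple). This is our own illustrative encoding, not a full L* implementation:

```python
def row(s, E, T):
    """The row of prefix s: its T-values under every suffix in E."""
    return tuple(T[s + e] for e in E)

def unclosed_witness(S, E, T, sigma):
    """Return some s·a whose row matches no row of S, or None if closed."""
    rows = {row(s, E, T) for s in S}
    for s in S:
        for a in sigma:
            if row(s + (a,), E, T) not in rows:
                return s + (a,)
    return None

def candidate_dfa(S, E, T, sigma):
    """From a closed table: states are rows, accepting iff T(s) is true."""
    init = row((), E, T)
    states = {row(s, E, T) for s in S}
    accepting = {row(s, E, T) for s in S if T[s]}
    delta = {(row(s, E, T), a): row(s + (a,), E, T)
             for s in S for a in sigma}
    return states, init, accepting, delta

# toy target U = strings over {a} of even length
S, E, sigma = {(), ("a",)}, {()}, {"a"}
T = {(): True, ("a",): False, ("a", "a"): True}
```

On this toy table the two rows (true) and (false) become the two states of the usual even-length automaton.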

  44. Learning Algorithm for DFA – L*. Initially: S = {λ}, E = {λ}. Loop { update T using membership queries; while (S, E, T) is not closed { add sa to S where sa has no representative prefix s′; update T using membership queries } from the closed (S, E, T) construct a candidate DFA C; present an equivalence query: L(C) = U?; if C is correct return C, else add an e ∈ Σ* that witnesses the cex to E }

  45. Choosing e • e must be such that adding it to E will eliminate the current cex in the next conjecture. • e is a suffix of cex that witnesses a difference between L(Ci) and U. • Adding e to E will cause the addition of a prefix (state) to S, splitting states.

  46. Characteristics of L* • Terminates with the minimal automaton C for U • Each candidate Ci is smallest: • any DFA consistent with the table has at least as many states as Ci • |C1| < |C2| < … < |C| • Produces at most n candidates, where n = |C| • # membership queries: O(kn² + n log m) • m: size of the largest counterexample • k: size of Σ

  47. Outline • Motivation ✓ • Setting ✓ • Automatic Generation of Assumptions for the AG Rule ✓ • Learning algorithm ✓ • Assume-Guarantee with Learning • Example

  48. Assume-Guarantee with Learning. Reminder: • Use the learning algorithm to learn Aw. • Use the candidates produced by learning as candidate assumptions Ai for the AG rule. In order to use L* to produce assumptions Ai: • Show that L(Aw) is regular • Translate the DFA into a safety LTS (the assumption) • Implement the Teacher

  49. L(Aw) is Regular • Aw is expressible as a safety LTS • Translation of a safety LTS A into a DFA C: A = (Q, q0, αM, δ) ⟹ C = (Q, q0, αM, δ, Q), i.e. all states accepting ⟹ There exists a DFA C s.t. L(C) = L(Aw) ⟹ L* can be used to learn a DFA that accepts L(Aw)

  50. DFA to Safety LTS (the weakest assumption is prefix-closed) • The DFAs Ci returned by L* are complete, minimal and, in this setting, also prefix-closed: • if σ ∈ L(Ci), then every prefix of σ is also in L(Ci). ⟹ Ci contains a single non-accepting state qnf, and no accepting state is reachable from qnf. ⟹ To get a safety LTS, simply remove the non-accepting state and all its ingoing transitions: Ci = (Q ∪ {qnf}, q0, αM, δ, Q) ⟹ Ai = (Q, q0, αM, δ ∩ (Q × αM × Q))
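The conversion above can be sketched directly; in this setting the only non-accepting state is the sink qnf, so dropping it and every transition touching it yields the safety LTS. The example candidate (in/out alternation with sink "qnf") is our own:

```python
def dfa_to_safety_lts(states, init, alphabet, delta, accepting):
    """Drop the non-accepting sink state(s) and every transition into them."""
    keep = states & accepting
    lts_delta = {(q, a, qp) for (q, a, qp) in delta
                 if q in keep and qp in keep}
    return keep, init, alphabet, lts_delta

# a prefix-closed, complete candidate: in/out alternation, with sink 'qnf'
C = ({0, 1, "qnf"}, 0, {"in", "out"},
     {(0, "in", 1), (1, "out", 0), (0, "out", "qnf"), (1, "in", "qnf"),
      ("qnf", "in", "qnf"), ("qnf", "out", "qnf")},
     {0, 1})
A = dfa_to_safety_lts(*C)
```

Because the candidate is prefix-closed and qnf is a sink, removing it loses no accepted trace: L(A) = L(C).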
