
Verifica Automatica via Model Checking





Presentation Transcript


  1. Verifica Automatica via Model Checking. Enrico Tronci, Dipartimento di Informatica, Università di Roma “La Sapienza”, Via Salaria 113, 00198 Roma, Italy. tronci@di.uniroma1.it, http://www.dsi.uniroma1.it/~tronci. May 2006

  2. TRAMP • Verification Goals. Give evidence of the following: • The interaction between the system (trains, vehicles, etc.), the communication infrastructure (TLC), the control policies in the DSS and the Operator never brings the system to an UNSAFE state; • The system efficiency is not decreased (is increased) by the supervisory control proposed by the project. (Diagram: System -> Communication Channel -> DSS + Operator + Policy -> System. The channel gets measures with delay, noise, losses, etc.; the DSS computes the system state and action; commands are sent back, according to policy, again subject to delay, noise and losses.)

  3. Model Checking Game. Inputs: Sys (VHDL, Verilog, C, C++, Java, Matlab, Simulink, UML, …) and BAD (CTL, CTL*, LTL, PSL, …). The Model Checker (equivalent to exhaustive testing) answers either PASS, i.e. no sequence of events (states) can possibly lead to an undesired state, or FAIL with a counterexample showing what went wrong, i.e. a sequence of events (states) leading to an undesired state.
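The PASS/FAIL game above boils down to exhaustive reachability analysis. A minimal sketch in Python (the toy system and BAD predicate below are illustrative, not from the slides): breadth-first search over the state space, returning either PASS or a shortest counterexample trace.

```python
from collections import deque

def model_check(init, next_states, is_bad):
    """Exhaustive (breadth-first) reachability: returns ("PASS", None)
    if no reachable state satisfies is_bad, otherwise ("FAIL", trace)
    with a shortest counterexample from init to a bad state."""
    parent = {init: None}
    frontier = deque([init])
    while frontier:
        s = frontier.popleft()
        if is_bad(s):
            trace = []                 # rebuild the counterexample
            while s is not None:
                trace.append(s)
                s = parent[s]
            return "FAIL", trace[::-1]
        for t in next_states(s):
            if t not in parent:        # visit each state once
                parent[t] = s
                frontier.append(t)
    return "PASS", None

# Toy Sys: a counter modulo 8; BAD: the counter reaches 5.
verdict, trace = model_check(0, lambda s: [(s + 1) % 8], lambda s: s == 5)
# verdict == "FAIL", trace == [0, 1, 2, 3, 4, 5]
```

Explicit-state model checkers such as Murphi follow this scheme; the practical difficulty is that the set of visited states can be astronomically large, hence hash compaction, disk-based storage and symbolic (BDD) representations.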

  4. Examples. A few examples from different domains to illustrate the approach.

  5. A Control System. (Diagram: the FG102 Controller receives settings and the measured outputs Vrot, Texh, Pel, Pmc, and drives the fuel valve opening of the Gas Turbine (Turbogas), which is subject to disturbances: electric users, parameter variations, etc.) Vrot: turbine rotation speed; Texh: exhaust gas temperature; Pel: generated electric power; Pmc: compressor pressure.

  6. Experimental Results. Results on an Intel Pentium 4, 2 GHz Linux PC with 512 MB RAM. Murphi options: -b, -c, --cache, -m350

  7. Fail trace: MAX_D_U = 2500 KW/sec, 10 ms time step (100 Hz sampling frequency). (Plot: electric user demand (KW) and rotation speed (percentage of max = 22500 rpm); allowed range for rotation speed: 40-120.)

  8. Fail trace: MAX_D_U = 5000 KW/sec, 10 ms time step (100 Hz sampling frequency). (Plot: electric user demand (KW) and rotation speed (percentage of max = 22500 rpm); allowed range for rotation speed: 40-120.)

  9. Example: NRL Pump. The NRL Pump is a special-purpose device that forwards data from a low (security) level system LS (e.g. public info) to a high (security) level system HS (e.g. private info), but not conversely. Idea: the LS ACK delay is probabilistically based on a moving average of HS ACK delays (statistically modulated ACKs). (Protocol diagram: LS, ready to send, sends data to the pump's buffer and waits for an ACK; HS, ready to receive, reads the data, sends its ACK and is done; the pump then ACKs LS.) Properties: minimize information flow from HS to LS, while enforcing reasonable performance, i.e.: <average ACK delay as seen from LS> = <average HS ACK delay>.
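The statistically modulated ACK idea can be sketched as follows. The window size, the exponential delay distribution and all names here are assumptions for illustration, not the actual NRL Pump algorithm.

```python
import random
from collections import deque

class PumpAckModulator:
    """Sketch of the NRL Pump ACK idea: the delay of the ACK returned
    to the low system is drawn from a distribution whose mean is a
    moving average of recent high-system ACK delays. Window size and
    the exponential distribution are illustrative assumptions."""

    def __init__(self, window=5, seed=0):
        self.hs_delays = deque(maxlen=window)
        self.rng = random.Random(seed)

    def record_hs_ack(self, delay):
        """Observe one HS ACK delay (updates the moving average)."""
        self.hs_delays.append(delay)

    def ls_ack_delay(self):
        """Delay to apply to the next LS ACK."""
        if not self.hs_delays:
            return 0.0
        mean = sum(self.hs_delays) / len(self.hs_delays)
        # Randomizing around the moving average hides any individual
        # HS delay from LS while keeping the averages equal.
        return self.rng.expovariate(1.0 / mean)
```

This is why the residual covert channel is probabilistic: HS can still try to bias the moving average, which is exactly what the experiments on the following slides quantify.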

  10. Covert Channel Experimental Results (1). Pdec(h): probability of making a decision within h time units. Pwrong(h): probability of making the wrong decision within h time units. We can compute the probability of making the right decision within h time units as: Pright(h) = Pdec(h)(1 - Pwrong(h)). Of course we want Pright(h) to be small. We studied the previous probabilities for various settings of our model parameters: BUFFER SIZE in {3, 5}, WINDOW SIZE in {3, 5}, OBS WINDOW SIZE in {3, 5}. About 2 days of computation for each setting on a 2 GHz Intel Pentium PC with Linux OS and 512 MB of RAM.
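A small worked instance of the formula above (the probabilities 0.6 and 0.25 are hypothetical numbers, not from the experiments):

```python
def p_right(p_dec, p_wrong):
    """Pright(h) = Pdec(h) * (1 - Pwrong(h))."""
    return p_dec * (1.0 - p_wrong)

# e.g. if the low system decides within h with probability 0.6
# and a decision, once made, is wrong with probability 0.25:
round(p_right(0.6, 0.25), 2)   # -> 0.45
```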

  11. Covert Channel Exp: Pdec, Pwrong as a function of the number of steps h

  12. Covert Channel Exp: Pright as a function of the number of steps h. Our time unit is about the time needed to transfer messages from/to the pump (about 1 ms). Our experimental results show that the high system can send bits to the low system at a rate of about 1 bit every 10 seconds, i.e. 0.1 bits/sec. This is secure enough for many applications.

  13. Reliability Analysis: Probabilistic Model Checking (1). Sometimes we can associate a probability with each transition. In such cases reachability analysis becomes the task of computing the stationary distribution of a Markov Chain. This can be done using a Probabilistic Model Checker (the state space is too big for explicit matrices). (Diagram: a 3-state Markov chain, states 0, 1, 2, with labelled transition probabilities.)
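For a chain small enough to write down, the stationary distribution can be computed by power iteration; a probabilistic model checker does essentially this without ever building the matrix explicitly. The 3-state chain below is hypothetical, since the slide's diagram is not fully recoverable:

```python
# Hypothetical 3-state chain (an assumption, not the slide's exact
# diagram); each row of P sums to 1.
P = [[0.3, 0.7, 0.0],
     [0.4, 0.2, 0.4],
     [0.0, 0.8, 0.2]]

def step(pi, P):
    """One step of the chain: pi' = pi * P (vector-matrix product)."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

def stationary(P, iters=5000):
    """Power iteration: for an ergodic chain, pi_0 * P^k converges to
    the unique distribution satisfying pi = pi * P."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = step(pi, P)
    return pi

pi = stationary(P)   # fixed point: applying step() leaves pi unchanged
```

Tools like PRISM and FHP-Murphi replace the explicit matrix with a symbolic or on-the-fly state representation, which is what makes this feasible when the chain has billions of states.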

  14. A Control System. (Diagram, as on slide 5: the FG102 Controller receives settings and the measured outputs Vrot, Texh, Pel, Pmc, and drives the fuel valve opening of the Gas Turbine (Turbogas), which is subject to disturbances: electric users, parameter variations, etc.) Vrot: turbine rotation speed; Texh: exhaust gas temperature; Pel: generated electric power; Pmc: compressor pressure.

  15. User Demand Distribution. Let u(t) be the user demand at time t. We can define the (stochastic) dynamics of the user demand as follows:
  u(t + 1) = min(u(t) + a, M) with probability p(u(t), 1)
  u(t + 1) = u(t) with probability p(u(t), 0)
  u(t + 1) = max(u(t) - a, 0) with probability p(u(t), -1)
  where M = max user demand (MAX_U), a = speed of variation of user demand (MAX_D_U), and
  p(v, 1) = 0.4 + b*(v - M)*|v - M| / M^2
  p(v, 0) = 0.2
  p(v, -1) = 0.4 + b*(M - v)*|M - v| / M^2
  with -0.4 <= b <= 0.4. The further u(t) is from u0 (the nominal user demand), the higher the probability that u(t) returns towards u0, i.e. decreases when u(t) > u0 and increases when u(t) < u0.
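The dynamics above can be simulated directly. A sketch, with illustrative values for M, a and b (the numbers echo the MAX_D_U of the earlier fail traces but are assumptions here); note the three probabilities always sum to 1 because (v - M)*|v - M| = -(M - v)*|M - v|:

```python
import random

M, a, b = 25_000, 2_500, 0.3   # illustrative values (assumptions)

def p_up(v):
    """p(v, 1) = 0.4 + b*(v - M)*|v - M| / M^2"""
    return 0.4 + b * (v - M) * abs(v - M) / M**2

def p_down(v):
    """p(v, -1) = 0.4 + b*(M - v)*|M - v| / M^2"""
    return 0.4 + b * (M - v) * abs(M - v) / M**2

def step_user_demand(u, rng):
    """One step of the stochastic user-demand dynamics."""
    r = rng.random()
    if r < p_up(u):
        return min(u + a, M)       # demand increases, capped at M
    if r < p_up(u) + 0.2:          # p(v, 0) = 0.2
        return u
    return max(u - a, 0)           # demand decreases, floored at 0

# The probabilities form a distribution at every point:
assert abs(p_up(10_000) + 0.2 + p_down(10_000) - 1.0) < 1e-12

rng = random.Random(42)
u = M // 2
for _ in range(1_000):
    u = step_user_demand(u, rng)
    assert 0 <= u <= M             # demand stays in [0, MAX_U]
```

In the finite-horizon Markov chain analysis of the next slide, this per-step distribution is what drives the probabilistic transitions of the turbogas model.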

  16. Finite Horizon Markov Chain Analysis… of our turbogas

  17. Mutual Exclusion (Mutex). Model: two processes with states S1 in {n1, t1, c1} and S2 in {n2, t2, c2} (non-critical, trying, critical) plus a turn variable T in {1, 2}. (State diagram: each process cycles n -> t -> c -> n; moves into and out of the critical section are guarded by conditions on the other process's state and on T, e.g. S1=t1 & T=2, S2=t2 & T=1, S2=n2 & S1=t1. The product automaton has the 18 states (S1, S2, T).) SPEC — Mutual exclusion: AG (S1 != c1 | S2 != c2) … true. No starvation S1: AG (S1 = t1 --> AF (S1 = c1)) … true
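The mutual-exclusion property can be checked by brute-force reachability on a small turn-based mutex. The guards below are a simplification of the slide's diagram (an assumption), not the exact SMV model:

```python
from collections import deque

# Turn-based mutex sketch: a process may start trying at any time,
# may enter its critical section only when it holds the turn T,
# and passes the turn to the other process on exit.
def successors(state):
    s1, s2, t = state
    out = []
    if s1 == "n":                out.append(("t", s2, t))
    elif s1 == "t" and t == 1:   out.append(("c", s2, t))
    elif s1 == "c":              out.append(("n", s2, 2))
    if s2 == "n":                out.append((s1, "t", t))
    elif s2 == "t" and t == 2:   out.append((s1, "c", t))
    elif s2 == "c":              out.append((s1, "n", 1))
    return out

# Exhaustive reachability check of AG (S1 != c1 | S2 != c2):
init = ("n", "n", 1)
seen, frontier = {init}, deque([init])
while frontier:
    s = frontier.popleft()
    assert not (s[0] == "c" and s[1] == "c"), "mutual exclusion violated"
    for nxt in successors(s):
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)
```

Only 12 of the 18 product states turn out to be reachable here. Checking the no-starvation property (an AF formula) would additionally require fairness assumptions, which this safety-only sketch omits; that is exactly what a CTL model checker like SMV handles.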

  18. Mutex (~ arbitrary initial state). Same model as on slide 17, but checked from an (almost) arbitrary initial state. Mutual exclusion: AG (S1 != c1 | S2 != c2) … No starvation S1: AG (S1 = t1 --> AF (S1 = c1)) …

  19. SMV output (mutex)
  -- specification AG (S1 != c1 | S2 != c2) is true
  -- specification AG (S1 = t1 -> AF S1 = c1) is true
  resources used:
  user time: 0.02 s, system time: 0.04 s
  BDD nodes allocated: 635
  Bytes allocated: 1245184
  BDD nodes representing transition relation: 31 + 6

  20. Algorithms and Tools for Research Activity • Automatic Verification and Validation (Model Checking) of Hardware Systems: SMV, VIS, BMC. • Automatic Verification and Validation of Protocols and Software Systems: Murphi, SPIN. • Automatic Requirements Validation (via Model Checking). • Automatic Verification of Hybrid Systems: Murphi, HyTech. • Automatic Verification of Probabilistic Systems (Reliability Analysis): PRISM, FHP-Murphi. • Covert Channel Analysis, Security Analysis: FHP-Murphi. • Automatic Synthesis of Optimal (Supervisory) Controllers for Finite State Systems. • Optimization of Complex Systems (via Mixed Integer Linear Programming and SAT). • Planning (via Model Checking). • Data Mining Core Algorithms: Graph Exploration, SAT, ILP, OBDDs

  21. Algorithms and Tools for Research Activity (2) • WIP: Automatic Synthesis of Controllers for PCM systems (e.g. DC-DC converters, digital amplifiers, etc.). • WIP: Automatic Design and Verification of Autonomous Systems.

  22. TOOLS. Caching Murphi (CMurphi): http://www.dsi.uniroma1.it/~tronci/cached.murphi.html (also http://www.stanford.edu/). CMurphi (Rome “La Sapienza”, L’Aquila) is a disk-based extension of the Stanford Murphi verifier. CMurphi has been used successfully by Intel to verify cache coherence protocols that, because of state explosion, could not be verified using Murphi. FHP-Murphi: allows finite-horizon analysis of Markov Chains modelling stochastic hybrid systems. Unlike PRISM, FHP-Murphi can handle real numbers.

  23. Automatic Verification: A Money Saver. Testing without automation tends to discover errors towards the end of the design flow. Error fixing is very expensive at that point and may delay product release. Methods to discover errors as soon as possible are needed. (Chart: errors caught (percent) vs. number of times more expensive to fix, from early development to implementation. Source: Mercury Interactive, Siebel, Siemens.)

  24. Open Source Model Checkers. Here are a few examples of open source model checkers.
  SMV, NuSMV (Carnegie Mellon University, IRST) [SMV, VHDL / CTL]
  SPIN (Bell Labs) [PROMELA (C like) / LTL]
  Murphi (Stanford, Roma “La Sapienza”, L’Aquila) [Pascal like / assert() style]
  VIS (Berkeley, Stanford, University of Colorado) [BLIF, Verilog / CTL, LTL]
  PVS (Stanford) [PVS/PVS]
  TVLA (Tel-Aviv) [TVLA/TVLA]
  Java PathFinder (NASA) [Java Bytecode / LTL]
  BLAST (Berkeley) [C / assert()]

  25. Java Verification (BANDERA). SAnToS Group at Kansas State University.

  26. Some Commercial Model Checkers. Here are a few examples of commercial model checkers.
  Cadence (Verilog, VHDL)
  Synopsys (Verilog, VHDL)
  Innologic (Verilog)
  Telelogic (inside SDL suite)
  Esterel
  Coverity (C, C++)

  27. In House Model Checkers. Here are a few examples of in-house model checkers.
  FORTE (Intel) [Verilog, VHDL / Temporal Logic]
  SLAM (Microsoft) [C / assert()]
  BEBOP (Microsoft) [C / assert()]
  RuleBase (IBM) [Verilog, VHDL / CTL, LTL]
  CANVAS (IBM) [Java / constraints-guarantees]
  VeriSoft (Bell Labs) [C/C]

  28. Summing Up. Automatic Verification (reachability analysis) is a very useful tool for the design and analysis of complex systems such as digital hardware, embedded software and hybrid systems. Automatic Verification allows us to: decrease the probability of leaving undetected bugs in our design, thus increasing design quality; speed up the testing/simulation process, thus decreasing costs and time-to-market; detect errors early, thus decreasing design costs; support exploration of more complex, hopefully more efficient, solutions by supporting their debugging.
