
Evaluation of Safety Critical Software

David L. Parnas, Communications of the ACM, June 1990.


Presentation Transcript


  1. Evaluation of Safety Critical Software, David L. Parnas, Communications of the ACM, June 1990

  2. Software Reliability • Nonetheless, our practical experience is that software appears to exhibit stochastic properties. It is quite useful to associate reliability figures such as MTBF (Mean Time Between Failures) with an operating system or other software product. Some software experts attribute the apparently random behavior to our ignorance. They believe that all software failures would be predictable if we fully understood the software, but our failure to understand our own creations justifies the treatment of software failures as random.

  3. MTTF (Mean Time To Failure) • The mean is an average. • What is the relationship to the expected value?
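
One standard way to connect the two notions (a reminder added here, not part of the original slide): for a non-negative time-to-failure T with density f and reliability function R(t) = P(T > t), the MTTF is precisely the expected value of T; under the common exponential model with failure rate λ it reduces to 1/λ.

```latex
\mathrm{MTTF} \;=\; E[T] \;=\; \int_0^{\infty} t\, f(t)\, dt \;=\; \int_0^{\infty} R(t)\, dt,
\qquad
R(t) = e^{-\lambda t} \;\Rightarrow\; \mathrm{MTTF} = \frac{1}{\lambda}.
```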

  4. Table I • Table I shows that, if our design target is for the probability of failure to be less than 1 in 1000, then performing between 4500 and 5000 tests (randomly chosen from the appropriate test-case distribution) without failure means that the probability of an unacceptable product passing the test is less than 1 in 100.
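
A quick numeric check of the slide's figures (a minimal sketch; the exact counts in Parnas's Table I may differ slightly): if an unacceptable product fails each random test independently with probability at least 1/1000, the chance it survives n failure-free tests is at most (1 − 1/1000)^n, and the smallest n pushing this below 1/100 is about 4600.

```python
import math

theta = 1.0 / 1000   # design target: unacceptable products fail >= 1 in 1000 runs
alpha = 1.0 / 100    # acceptable risk that such a product passes the whole test

# (1 - theta)**n < alpha  =>  n > ln(alpha) / ln(1 - theta)
n = math.ceil(math.log(alpha) / math.log(1.0 - theta))
print(n)                    # 4603, i.e. between 4500 and 5000 failure-free tests
print((1.0 - theta) ** n)   # ~0.0099, just under 1 in 100
```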

  5. Table II

  6. Practical ultra-reliability for abstract data types, Borislav Nikolik and Dick Hamlet, Softw. Test. Verif. Reliab. 2007; 17:183–203

  7. Term Redundancy Method (TRM)

  8. Boolean Stack ADT • r1: pop(push(s, b)) → s • r2: top(push(s, b)) → b • Figure 1. Stack of boolean values TRS.
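
A minimal sketch (illustrative, not the paper's implementation) of a boolean stack for which the two rewrite rules hold as executable identities:

```python
from typing import Tuple

Stack = Tuple[bool, ...]   # a stack of booleans as an immutable tuple; () is empty

def push(s: Stack, b: bool) -> Stack:
    return s + (b,)

def pop(s: Stack) -> Stack:
    return s[:-1]          # r1: pop(push(s, b)) -> s

def top(s: Stack) -> bool:
    return s[-1]           # r2: top(push(s, b)) -> b

# Check the rewrite rules on a sample stack and value.
s, b = (True, False), True
assert pop(push(s, b)) == s
assert top(push(s, b)) == b
```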

  9. Rewriting • pop(push(pop(push(x, y)), b)) = pop(push(x, y)), by one application of rule r1 with s = pop(push(x, y)); a further application of r1 reduces the right-hand side to x.
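
The same reduction can be mechanized. A toy term rewriter (hypothetical helper names, not the paper's code) that applies r1 and r2 bottom-up to terms written as nested tuples:

```python
# A term is a variable name (string) or a nested tuple such as ("push", s, b).
def rewrite(t):
    if not isinstance(t, tuple):
        return t                          # variables: "x", "y", "b"
    t = tuple(rewrite(arg) for arg in t)  # rewrite subterms first
    if t[0] == "pop" and isinstance(t[1], tuple) and t[1][0] == "push":
        return t[1][1]                    # r1: pop(push(s, b)) -> s
    if t[0] == "top" and isinstance(t[1], tuple) and t[1][0] == "push":
        return t[1][2]                    # r2: top(push(s, b)) -> b
    return t

term = ("pop", ("push", ("pop", ("push", "x", "y")), "b"))
print(rewrite(term))                      # 'x', the normal form of the slide's term
```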

  10. Post-release testing • The relatively poor reliability estimate from the pre-release testing phase can now be used to obtain ultra-reliable term evaluations. In the self-checking phase the additional equivalent terms are drawn from the test-phase distribution for which the 10^−4 bound was obtained. Therefore, if three randomly chosen terms agree, the probability that all of them are failures is less than (10^−4)^3 = 10^−12.
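
A sketch of the self-checking step under the independence assumption made on the slide (the function and its names are illustrative, not the paper's API): evaluate several randomly chosen equivalent terms and accept the result only when they all agree.

```python
import random

def self_check(evaluate, equivalent_terms, k=3):
    """Evaluate k randomly chosen equivalent terms and accept only on agreement.

    If each evaluation independently fails with probability at most 1e-4,
    the chance that all k agree and are all wrong is at most (1e-4) ** k,
    i.e. 1e-12 for k = 3.
    """
    chosen = random.sample(equivalent_terms, k)
    results = [evaluate(t) for t in chosen]
    if all(r == results[0] for r in results):
        return results[0]
    raise RuntimeError("self-check failed: equivalent terms disagree")
```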

  11. Reliability • Suppose a constant failure rate θ of π, and n random terms drawn from π, executed on δ without failure. The probability that δ fails on a randomly chosen term from π is θ, and 1 − θ that it succeeds. Given that the n terms are independent, the probability that δ succeeds on all n terms is (1 − θ)^n. The confidence bound α on θ is defined as the probability that the failure rate of δ is below θ. The confidence bound is related to the test-set size n and the failure rate θ by α ≤ 1 − (1 − θ)^n (1)
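
Equation (1) expressed directly as a function (a sketch; the numbers in the example are only illustrative):

```python
def confidence_bound(theta: float, n: int) -> float:
    """Equation (1): confidence that delta's failure rate is below theta,
    after n independent, failure-free random tests."""
    return 1.0 - (1.0 - theta) ** n

print(confidence_bound(1e-3, 4603))   # about 0.99, matching the Table I discussion
```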

  12. Failure rate • The confidence bound of Equation (1) is used to quantify the probability of failure of a majority of the values in a self-check. Equation (1) can be used to estimate the confidence bound on the failure rate of δ on a majority of N random terms generated by RBTR. Suppose a successful test (no failures occurred during the test) of δ on n terms is conducted at test time. Half or more of the N terms (a majority) falsely agreeing at run time gives a failure rate of at least N/(2n). Therefore, substituting N/(2n) for θ in Equation (1) yields α ≤ 1 − (1 − N/(2n))^n

  13. Equation 2 • Equation (1) yields α ≤ 1 − (1 − N/(2n))^n (2) The confidence bound here is the probability that the failure rate is below N/(2n) for a repetition of the test. For example, 1 − α = 6.0 × 10^−8 with N = 33 and n = 10^4.
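
A direct evaluation of the right-hand side of Equation (2) for the quoted N and n (a sketch; this straightforward computation gives about 6.7 × 10^−8, the same order of magnitude as the 6.0 × 10^−8 on the slide):

```python
def majority_failure_confidence(N: int, n: int) -> float:
    """1 - alpha from Equation (2): theta = N/(2n) substituted into Equation (1)."""
    theta = N / (2 * n)
    return (1.0 - theta) ** n

print(majority_failure_confidence(33, 10**4))   # roughly 6.7e-8
```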

  14. How do we evaluate Hamlet?
