
Model-Validation in Model-Based Development


Presentation Transcript


1. Model-Validation in Model-Based Development
Kurt Woodham, L-3 Communications
Ajitha Rajan and Mats Heimdahl, University of Minnesota
OSMA SAS ’08, September 8-12

2. Problem: Model Validation
• Model-Based Development (MBD) is here to stay
• Use of MBD is accelerating: an estimated 50% of NASA development projects use some form of MBD
• Many advantages: model checking, code generation, desktop testing, closed-loop simulation
• Enhances early detection of requirement, design, or implementation defects
• “Executable Specifications” enable evaluation of behavior that might otherwise be relegated to inspections and testing
• How do we know the models are “right”?
  • Manually develop black-box tests
• When have we validated enough?
  • Measure test coverage on an implementation/model

3. Problem: Current Practice
• Measure black-box test coverage over the model
• Indirect measure: defects of omission in the model are not exposed
• An executable artifact is necessary, so adequacy can only be determined late in the development process
(Figure: an incomplete model paired with a weak black-box test set)

4. Goals of Project
• Define metrics for an objective, implementation-independent measure of the adequacy of a black-box test suite
• Develop tools to measure validation adequacy based on the defined metrics
• Provide the capability for auto-generation of black-box test suites

5. Testing – What Does It Mean?
(Figure: a chain from assertions to model to source code, asking “does it implement?” at each step)
• Assertion-Based Testing (ABT): does the model implement the assertions? Used to validate the model.
• Model-Based Testing (MBT): does the source code implement the model? Used to verify the code.
Our contribution is in providing novel ABT capabilities.

6. What Are Assertions?
(Figure: properties/formal assertions defined over a system with inputs in1..ink and outputs out1..outm)
• Assertions can also be defined over components, interfaces, ...

7. Contributions – ABT
(Figure: assertions and model connected by three numbered activities: (1) measure adequacy, (2) auto-generate black-box tests, (3) assess model and assertion completeness)
We provide the following contributions in the assertion-based testing domain (numbered in the figure above):
• An objective, implementation-independent measure of the adequacy of a black-box test suite
• Auto-generation of black-box validation tests directly from assertions
• Objective assessment of the completeness of the model as well as of the assertions

8. The Idea
Write assertions in a formal notation (temporal logic or synchronous observers), then define structural coverage metrics to directly and objectively describe coverage of the assertions. For example, in temporal logic:
G(FD_On → Cues_On)
G((¬Onside_FD_On ∧ ¬Is_AP_Engaged) → X(Is_AP_Engaged → Onside_FD_On))
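As an illustration (not from the presentation), a synchronous observer for the second assertion above can be sketched as a small state machine that runs in lockstep with the model and flags violations; the class and variable names here are hypothetical.

```python
# Minimal sketch of a synchronous observer for
# G((!Onside_FD_On & !Is_AP_Engaged) -> X(Is_AP_Engaged -> Onside_FD_On)).
# Hypothetical illustration; not the project's actual tooling.

class FDObserver:
    def __init__(self):
        self.pending = False   # antecedent held on the previous step
        self.violated = False

    def step(self, onside_fd_on: bool, is_ap_engaged: bool) -> None:
        # Check the X(...) obligation raised on the previous step.
        if self.pending and is_ap_engaged and not onside_fd_on:
            self.violated = True
        # Raise an obligation for the next step if the antecedent holds now.
        self.pending = (not onside_fd_on) and (not is_ap_engaged)

# Run the observer in lockstep with a simulated model trace.
trace = [(False, False), (False, True)]   # (Onside_FD_On, Is_AP_Engaged)
obs = FDObserver()
for fd, ap in trace:
    obs.step(fd, ap)
print("violated:", obs.violated)  # True: AP engaged without onside FD cues
```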

9. LTL Temporal Operators
(Figure: example traces over states S0, S1, S2, S3, ..., Si illustrating the LTL temporal operators, with A holding along the trace and B at a later state)
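As a reference for the operators on this slide, here is a minimal finite-trace evaluator (an illustrative sketch, not the project's tooling; real LTL is defined over infinite traces, so this treats the end of a test trace as the end of time).

```python
# Hypothetical finite-trace semantics for the core LTL operators.
# States are dicts of boolean variables; formulas are nested tuples.

def holds(f, trace, i=0):
    op = f[0]
    if op == 'ap':  return trace[i][f[1]]               # atomic proposition
    if op == 'not': return not holds(f[1], trace, i)
    if op == 'and': return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == 'or':  return holds(f[1], trace, i) or holds(f[2], trace, i)
    if op == 'G':   return all(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == 'F':   return any(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == 'X':   return i + 1 < len(trace) and holds(f[1], trace, i + 1)
    if op == 'U':   # A U B: B eventually holds, and A holds until then
        return any(holds(f[2], trace, j) and
                   all(holds(f[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(op)

trace = [{'A': True, 'B': False}, {'A': True, 'B': True}]
print(holds(('U', ('ap', 'A'), ('ap', 'B')), trace))  # True
```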

10. Formalizing Assertions
“If the onside FD cues are off, the onside FD cues shall be displayed when the AP is engaged”
G((¬Onside_FD_On ∧ ¬Is_AP_Engaged) → X(Is_AP_Engaged → Onside_FD_On))
Possible coverage metrics:
• Assertion coverage: a single test case that demonstrates that the assertion is satisfied
• Prone to “dumb” tests, e.g., an execution in which the AP is never engaged
• A more rigorous metric is necessary
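The “dumb test” problem can be made concrete with a small sketch (hypothetical, not from the slides): a trace in which the AP is never engaged satisfies the assertion without ever exercising the behavior it describes.

```python
# Hypothetical illustration: a trace where the AP is never engaged
# satisfies G((!FD & !AP) -> X(AP -> FD)) vacuously.
trace = [{'FD': False, 'AP': False}] * 5   # AP never engages

def assertion_holds(tr):
    for i in range(len(tr) - 1):
        if not tr[i]['FD'] and not tr[i]['AP']:          # antecedent holds
            if tr[i + 1]['AP'] and not tr[i + 1]['FD']:  # X(...) violated
                return False
    return True

print(assertion_holds(trace))  # True, yet the behavior of interest never ran
```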

11. Task 1
• Define a collection of assertion coverage criteria
• Formalize the assertion coverage obligations

12. Antecedent Coverage
• Many of the assertions in the FGS are of the form “globally, if A occurs then B will occur”: G(A → B)
• Two ways of satisfying A → B:
  • A is false
  • A is true and B is true
• Antecedent coverage: test cases must exercise the antecedent (see the sketch below)
(Figure: a trace with ¬A in the early states, then A and B in the final state)
• But what if the antecedent is compound, e.g., A ∨ C ∨ D → B?
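A minimal sketch of the antecedent-coverage check (hypothetical code, assuming test traces represented as lists of state dictionaries):

```python
# Hypothetical check for antecedent coverage of G(A -> B): beyond the
# assertion passing, some test must reach a state where A actually holds.

def antecedent_covered(tests, antecedent):
    """tests: list of traces; each trace is a list of state dicts."""
    return any(antecedent(state) for trace in tests for state in trace)

tests = [
    [{'A': False, 'B': False}, {'A': False, 'B': True}],   # never exercises A
    [{'A': True,  'B': True}],                             # exercises A
]
print(antecedent_covered(tests, lambda s: s['A']))  # True
```

Note that for a compound antecedent such as A ∨ C ∨ D, a single test with only A true already satisfies this metric; that weakness motivates the finer-grained criteria on the following slides.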

13. Modified Condition/Decision Coverage (MC/DC)
To satisfy MC/DC:
• Every point of entry and exit in the model should be invoked at least once,
• Every basic condition in a decision in the model should take on all possible outcomes at least once, and
• Each basic condition should be shown to independently affect the decision’s outcome
(Figure: a decision over basic conditions, with test pairs demonstrating the independent effect of A and of B; see the sketch below)
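To make “independent effect” concrete, the sketch below searches a truth table for MC/DC independence pairs; the two-condition decision A ∧ B is an assumed stand-in for the decision in the slide’s figure.

```python
# Hypothetical sketch: find MC/DC independence pairs for a decision by
# truth-table search. Each basic condition must flip the outcome while
# all other conditions are held fixed.
from itertools import product

def mcdc_pairs(decision, names):
    pairs = {}
    rows = list(product([False, True], repeat=len(names)))
    for i, name in enumerate(names):
        for row in rows:
            flipped = list(row)
            flipped[i] = not row[i]
            if decision(*row) != decision(*flipped):
                pairs[name] = (dict(zip(names, row)), dict(zip(names, flipped)))
                break
    return pairs

# Assumed example decision: A and B.
for cond, (t1, t2) in mcdc_pairs(lambda A, B: A and B, ['A', 'B']).items():
    print(cond, t1, t2)
```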

14. Unique First Cause (UFC) Coverage
Requirement: “System shall eventually generate an Ack (A) or a Time Out (B)”; LTL property: F(A ∨ B)
(Figure: a path S0, S1, ..., Si with ¬A, ¬B in the early states and A, ¬B in the final state satisfies the UFC obligation for A but not for B; showing the independence of B requires a path whose first satisfying state has B, ¬A)
Formal UFC obligations:
• for A: ¬(A ∨ B) U (A ∧ ¬B)
• for B: ¬(A ∨ B) U (B ∧ ¬A)

15. UFC Coverage
• G(A)+ = {A U (a ∧ G(A)) | a ∈ A+}
  G(A)− = {A U a | a ∈ A−}
• F(A)+ = {¬A U a | a ∈ A+}
  F(A)− = {¬A U (a ∧ G(¬A)) | a ∈ A−}
• (A U B)+ = {(A ∧ ¬B) U ((a ∧ ¬B) ∧ (A U B)) | a ∈ A+} ∪ {(A ∧ ¬B) U b | b ∈ B+}
  (A U B)− = {(A ∧ ¬B) U (a ∧ ¬B) | a ∈ A−} ∪ {(A ∧ ¬B) U (b ∧ ¬(A U B)) | b ∈ B−}
• X(A)+ = {X(a) | a ∈ A+}
  X(A)− = {X(a) | a ∈ A−}
Michael Whalen, Ajitha Rajan, Mats Heimdahl, and Steven Miller. Coverage Metrics for Requirements-Based Testing. In Proceedings of ISSTA 2006.
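The rules above can be read as a recursive definition over the formula structure. The following sketch (simplified and hypothetical; it treats atomic propositions as the base case and omits the U and boolean-operator rules) shows the shape of such an obligation generator.

```python
# Sketch of the slide's UFC rules as recursive generators over a small
# LTL AST of nested tuples. pos/neg return the sets A+ / A- of positive
# and negative obligations. Simplified: only G, F, X, and atoms.

def pos(f):
    op = f[0]
    if op == 'ap': return [f]                                   # a in A+
    if op == 'G':  return [('U', f[1], ('and', a, f)) for a in pos(f[1])]
    if op == 'F':  return [('U', ('not', f[1]), a) for a in pos(f[1])]
    if op == 'X':  return [('X', a) for a in pos(f[1])]
    raise ValueError(op)

def neg(f):
    op = f[0]
    if op == 'ap': return [('not', f)]                          # !a in A-
    if op == 'G':  return [('U', f[1], n) for n in neg(f[1])]
    if op == 'F':  return [('U', ('not', f[1]),
                            ('and', n, ('G', ('not', f[1]))))
                           for n in neg(f[1])]
    if op == 'X':  return [('X', n) for n in neg(f[1])]
    raise ValueError(op)

# G(A)+ yields A U (A & G(A)), matching the first rule above.
print(pos(('G', ('ap', 'A'))))
```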

16. Task 2 – Validation Adequacy Measurement Tool
We currently support the following coverage metrics:
• Assertion Coverage
• Assertion Antecedent Coverage
• Assertion UFC Coverage

17. Task 3
Automatically generate requirements-based tests that provide the defined assertion coverage, from:
• Formal assertions
• An abstract model, the Assertion Model, created from the assertions and environmental constraints (specified as invariants)

18. Automatically Generating Requirements-Based Tests
(Figure: test-generation tool chain; the front end, which takes the assertions and environmental constraints specified as invariants, is common with the Adequacy Measurement Tool)

19. What Are Model Checkers?
• Breakthrough technology of the 1990s, widely used in hardware verification (Intel, Motorola, IBM, ...)
• Several different types of model checkers: explicit, symbolic, bounded, infinite bounded, ...
• Exhaustive search of the global state space
  • Considers all combinations of inputs and states; equivalent to exhaustive testing of the model
  • Produces a counterexample if a property is not true
• Easy to use: “push button” formal methods requiring very little human effort, unless you’re at the tool’s limits
• Limitation: state-space explosion
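Counterexample generation is what makes a model checker usable as a test generator: negating a coverage obligation into a “trap” property forces the checker to exhibit a trace on which the obligation holds. A hypothetical sketch for an SMV-style checker such as NuSMV (file names and the model contents are assumptions):

```python
# Hypothetical sketch: turn a coverage obligation into a "trap" property
# for an SMV-style model checker (NuSMV syntax assumed). A counterexample
# to the negated obligation is a concrete test satisfying the obligation.
import subprocess

def trap_property(obligation_ltl: str) -> str:
    # Negate the obligation; the checker must then exhibit a trace
    # on which the obligation itself holds.
    return f"LTLSPEC !({obligation_ltl})"

def generate_test(model_file: str, obligation_ltl: str) -> str:
    with open(model_file) as f:
        model = f.read()
    with open("trap.smv", "w") as f:
        f.write(model + "\n" + trap_property(obligation_ltl) + "\n")
    # The counterexample trace in stdout, if any, is the generated test.
    result = subprocess.run(["NuSMV", "trap.smv"],
                            capture_output=True, text=True)
    return result.stdout

# e.g. generate_test("fgs.smv", "!(A | B) U (A & !B)")  # UFC obligation for A
```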

20. Preliminary Evaluation
Interested in determining:
• Feasibility of generating assertion-based tests from a set of assertions
  • Generated assertion-based tests to provide UFC coverage over the assertions
• Effectiveness of these test sets in validating the system model
  • Measured the MC/DC achieved by the test sets over the system model
Used three realistically sized examples:
• the Flight Guidance System (FGS), and
• two models related to the Display Window Manager system (DWM1 and DWM2)

21. Results
Ajitha Rajan, Michael Whalen, and Mats Heimdahl. Model Validation using Automatically Generated Requirements-Based Tests. In Proceedings of the 10th IEEE High Assurance Systems Engineering Symposium, Nov 2007.

22. Results and Analysis
• UFC test suites achieved high MC/DC coverage over the DWM models, which have a well-defined set of assertions
• The test suite generated for UFC achieved very low MC/DC over the FGS model
Example: “When the FGS is in independent mode, it shall be active.”
G(m_Independent_Mode_Condition.result → X(Is_This_Side_Active = 1))
The structure of the RSML-e macro Independent_Mode_Condition is not captured in the property:
Independent_Mode_Condition = ((Is_LAPPR_Active & Is_VAPPR_Active & Is_Offside_LAPPR_Active & Is_Offside_VAPPR_Active) | (Is_VGA_Active & Is_Offside_VGA_Active))
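A rough illustration of why the macro hides structure from UFC (hypothetical code; the count of basic conditions is a crude proxy for the number of UFC obligations a property generates):

```python
# Hypothetical illustration of slide 22's point: UFC obligations scale
# with the basic conditions visible in the property, so hiding a complex
# condition behind a macro output leaves most of its structure untested.

opaque   = "m_Independent_Mode_Condition.result"
expanded = ("(Is_LAPPR_Active & Is_VAPPR_Active & "
            "Is_Offside_LAPPR_Active & Is_Offside_VAPPR_Active) | "
            "(Is_VGA_Active & Is_Offside_VGA_Active)")

def basic_conditions(expr: str) -> int:
    # Crude count: strip parentheses and split on the boolean connectives.
    for ch in '()&|':
        expr = expr.replace(ch, ' ')
    return len(expr.split())

print(basic_conditions(opaque))    # 1 condition  -> few UFC obligations
print(basic_conditions(expanded))  # 6 conditions -> many more obligations
```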

23. Benefits of ABT
• Saves time and effort in generating validation test suites from assertions
• An effective method for generating model validation tests when the assertions are well defined
• Helps in identifying missing assertions and over-constrained models

24. Bonus Task – Adequacy of Conformance Testing
(Figure: conformance tests are run on the code, and their adequacy is measured against both the model and the assertions; is the assertion-based measure useful?)
• Direct assessment of how well the tests exercise the assertions
• Will expose defects of omission
• Assertion coverage could necessitate longer test cases than model coverage

25. Assertion Coverage as an Adequacy Measure for Conformance Testing
Hypothesis 1 (H1): Conformance tests providing assertion UFC coverage are more effective than conformance tests providing MC/DC over the model.
Hypothesis 2 (H2): Conformance tests providing assertion UFC coverage in addition to MC/DC over the model are more effective than conformance tests providing only MC/DC over the model.

26. Experiment
• Used four industrial systems:
  • Two models from the display window manager
  • Two models representing the mode logic of a flight guidance system
• Assessed the effectiveness of test suites in terms of their fault-finding ability
Ajitha Rajan, Michael Whalen, Matt Staats, and Mats Heimdahl. Requirements Coverage as an Adequacy Measure for Conformance Testing. To appear in Proceedings of the 10th International Conference on Formal Engineering Methods, Oct 2008.

27. Results – Hypothesis 1
Hypothesis 1 rejected at 5% statistical significance on all but the Latctl system

28. Analysis – Hypothesis 1
• Model coverage is better than assertion coverage for measuring the adequacy of conformance test suites
• Assertion UFC coverage is heavily dependent on the nature and completeness of the assertions
• The rigor and robustness of the assertion coverage metric used is important
• The UFC metric gets cheated when assertions are structured to hide the complexity of conditions

29. Results – Hypothesis 2
Hypothesis 2 accepted at 5% statistical significance on all but the DWM2 system

30. Analysis – Hypothesis 2
Does UFC really help in revealing additional faults?

31. Summary – Bonus Task
• UFC > MC/DC: FALSE (in 3 of the 4 case examples, at 5% statistical significance)
• UFC + MC/DC > MC/DC: TRUE (in 3 of the 4 case examples, at 5% statistical significance)
• Combine rigorous metrics for assertion coverage and model coverage to measure the adequacy of conformance test suites
• The UFC metric is sensitive to the structure of the assertions; we need assertion coverage metrics that are robust to assertion structure

32. Technology Readiness Level
• “Requirements-Based Test Generation Tool”: TRL = 6 (system/subsystem model or prototype demonstration in a relevant environment)
• “Validation Adequacy Measurement Tool”: TRL = 6 (system/subsystem model or prototype demonstration in a relevant environment)

33. Relevance to NASA
• MBD is here: an estimated one-half of all NASA missions in development or on the books will use model-based subsystem development
• Extensive use in the avionics industry
• How do we know the models are right? This is the model validation problem.
• We provide the capability to:
  • Objectively measure the “quality” of assertion-based black-box validation tests
  • Objectively assess the completeness of a model (does the model address all assertions?)
  • Objectively assess the adequacy of a set of assertions (are there enough assertions to adequately describe the model?)
  • Automatically generate truly assertion-based tests

34. Achievements to Date
• Formal assertion notation identified
  • Most work with LTL; extended to work with Live Sequence Charts (LSC)
• Objective validation metrics defined
  • Requirements, Antecedent, Unique First Cause, and Unique Cause
• Test-case generation tool developed
  • Developed a tool generating tests from LTL, capable of generating tests for all of the defined metrics
  • Prototype tool working on LSC developed
• Developed a test-adequacy measurement tool for the defined validation metrics
• Evaluation of metrics and tools
• 12 papers and one PhD dissertation (Ajitha Rajan)

35. Next Steps
• Investigate requirements notations alternative to LTL
• Complete the empirical evaluation of effectiveness in model validation
  • Flight Guidance System (FGS) evaluation complete
  • Display Manager (DM) evaluation in progress
• Coordinate evaluation on a NASA IV&V project
• Coordinate technology transfer

36. Discussion
