
Galit Friedman, Alan Hartman, Kenneth Nagin, Tomer Shiran IBM Haifa Research Laboratory






Presentation Transcript


  1. Projected State Machine Coverage
  Galit Friedman, Alan Hartman, Kenneth Nagin, Tomer Shiran
  IBM Haifa Research Laboratory
  hartman@il.ibm.com
  ISSTA 2002

  2. Outline
  • Specification-based testing
  • EFSM models and test generation
  • Projected State Machine
  • Coverage Criteria
  • Test Generation Algorithms
  • Experimental & Industrial Experience

  3. Specification-based testing
  [Diagram: Modeling → Test Generation → Test Suite]
  • Build a model based on the specifications
  • Derive test cases from the model
  • Test cases generated based on some coverage criterion
  • Test cases contain stimuli and expected responses

  4. EFSM Models
  [Diagram: buffer EFSM — Start enters state 0; arcs labeled "deposit 1 OK", "deposit 2 OK", "deposit 2 fail", "withdraw 0 fail", "withdraw 0 OK", "withdraw 1 OK"; stimuli: deposit, withdraw]
  • Labeled directed graph
  • Nodes (states) labeled with both control and data
  • Arcs (transitions) labeled by stimuli to application
  • Includes expected responses to stimuli
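The buffer EFSM on this slide is small enough to sketch directly. A minimal illustration (the names here are mine, not the authors' modeling language): the state is the buffer level 0–2, the stimuli are deposit/withdraw, and every transition carries the expected response.

```python
# Hypothetical sketch of the slide's buffer EFSM: state = buffer level
# (0, 1, 2), stimuli = deposit/withdraw, expected responses = OK/fail.

def step(level, stimulus, capacity=2):
    """Apply a stimulus in a state; return (next_level, expected_response)."""
    if stimulus == "deposit":
        if level < capacity:
            return level + 1, "OK"
        return level, "fail"          # depositing into a full buffer fails
    if stimulus == "withdraw":
        if level > 0:
            return level - 1, "OK"
        return level, "fail"          # withdrawing from an empty buffer fails
    raise ValueError(stimulus)

# The labeled arcs of the EFSM: (state, stimulus) -> (next_state, response)
transitions = {(s, a): step(s, a)
               for s in range(3) for a in ("deposit", "withdraw")}
```

Three states and two stimuli give the six labeled arcs shown in the diagram.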

  5. Problems with EFSM
  • State space explosion
  • Test case explosion

  6. Test Generation
  [Diagram: the buffer EFSM from slide 4]
  • Extracting a set of paths from the EFSM
  • How do you choose which paths?
  • Coverage criteria!

  7. Projected State Machine
  [Diagram: the buffer EFSM (left) next to its projection (right) — the projection collapses all states into one class, and the six concrete arcs into four: "deposit OK", "deposit fail", "withdraw OK", "withdraw fail"]
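The projection operation itself is easy to sketch: map every concrete state through a projection function and collapse transitions that become identical. A hypothetical illustration on the buffer model (the `project` helper is mine, not the paper's):

```python
from itertools import product

# Toy buffer EFSM (see slide 4): state = buffer level, capacity 2.
def step(level, stimulus, capacity=2):
    if stimulus == "deposit":
        return (level + 1, "OK") if level < capacity else (level, "fail")
    return (level - 1, "OK") if level > 0 else (level, "fail")

def project(states, stimuli, proj):
    """Collapse concrete transitions onto projected state labels."""
    arcs = set()
    for s, a in product(states, stimuli):
        t, r = step(s, a)
        arcs.add((proj(s), (a, r), proj(t)))
    return arcs

# Project the buffer level away entirely: every state falls into one
# class, and the 6 concrete arcs collapse to the 4 projected arcs shown
# on the slide (deposit OK, deposit fail, withdraw OK, withdraw fail).
projected = project(range(3), ("deposit", "withdraw"), lambda s: "any")
```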

  8. Coverage Criteria I
  [Diagram: the projected machine with arcs "deposit OK", "deposit fail", "withdraw OK", "withdraw fail"]
  • CC_State_Projection on <exprlist>
  • Generate a set of test cases – one through each equivalence class of states in the projected state machine
  • E.g. CC_State_Projection on action; result;
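A hedged sketch of what a state-projection criterion asks for: partition the concrete states into equivalence classes induced by the projected expression, then request one test case through each class. The helper and the projection expression ("is the buffer empty?") are invented for illustration:

```python
# Group concrete states into the equivalence classes induced by a
# projection expression; the criterion then demands one test per class.

def state_classes(states, expr):
    classes = {}
    for s in states:
        classes.setdefault(expr(s), []).append(s)
    return classes

# Invented projection: distinguish only empty vs. non-empty buffers.
eq = state_classes(range(3), lambda level: level == 0)
# Two classes -> two required test cases: one reaching an empty-buffer
# state, one reaching a non-empty-buffer state.
```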

  9. Coverage Criteria II
  • CC_Transition_Projection from <exprlist> to <exprlist>
  • E.g. CC_Transition_Projection from action; to action; result;
  • Equivalent to Carver and Tai's CSPE-1 coverage criterion (Constraints on Succeeding and Preceding Events), IEEE TSE 1998
  • Controllable stimuli: Start, Deposit, Withdraw
  • Observable results: Fail, OK
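To make CSPE-1-style tasks concrete, here is a toy enumeration (my own, not the authors' implementation) of every pair of a preceding stimulus and a succeeding (stimulus, result) that the buffer model can exhibit:

```python
from itertools import product

# Toy buffer EFSM from slide 4.
def step(level, stimulus, capacity=2):
    if stimulus == "deposit":
        return (level + 1, "OK") if level < capacity else (level, "fail")
    return (level - 1, "OK") if level > 0 else (level, "fail")

stimuli = ("deposit", "withdraw")

# Each coverage task pairs a preceding stimulus with a succeeding
# (stimulus, result), in the spirit of CSPE-1.
tasks = set()
for s, a1, a2 in product(range(3), stimuli, stimuli):
    mid, _ = step(s, a1)       # first event from some state
    _, r2 = step(mid, a2)      # succeeding event and its result
    tasks.add((a1, (a2, r2)))
```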

  10. Other coverage criteria for TG
  • Hartmann et al. ISSTA 2000 – transition coverage of data partitions
  • Offutt & Abdurazik UML 1999 – explicit test purposes, transition coverage, predicate coverage of transitions
  • Jéron & Morel CAV 1999 – test purposes
  • Ammann et al. FME 1998 – mutation coverage
  • Henniger & Ural SDL 2000 – define-use coverage on message flow graph

  11. Test Constraints
  • Forbidden classes of states
  • Forbidden classes of paths
  • E.g. TC_Forbidden_State buffer = 2;
  [Diagram: the buffer EFSM with state 2 marked Forbidden; remaining arcs "deposit 1 OK", "withdraw 0 fail", "withdraw 0 OK", "withdraw 1 OK"]
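A minimal sketch of how a forbidden-state constraint prunes generation: a reachability search that simply refuses to enter any state matching the constraint (here `buffer = 2;`, so no generated test ever fills the buffer). All names are illustrative:

```python
# Toy buffer EFSM from slide 4.
def step(level, stimulus, capacity=2):
    if stimulus == "deposit":
        return (level + 1, "OK") if level < capacity else (level, "fail")
    return (level - 1, "OK") if level > 0 else (level, "fail")

def reachable(start, stimuli, forbidden):
    """Explore the state space, never entering a forbidden state."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for a in stimuli:
            t, _ = step(s, a)
            if t not in seen and not forbidden(t):
                seen.add(t)
                frontier.append(t)
    return seen

# TC_Forbidden_State buffer = 2; prunes state 2 from the search,
# leaving only states 0 and 1 reachable.
states = reachable(0, ("deposit", "withdraw"), lambda level: level == 2)
```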

  12. Test Generation Algorithm
  • Traverse the whole EFSM reachable state space
  • Use BFS, DFS, or CFS
  • Record data on reachable coverage tasks – including random selection of representatives
  • Eliminate forbidden configurations
  • Extract a path to each selected task representative
  • When the state space is too large – generate on-the-fly
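The steps above can be sketched as a BFS that records, for each coverage task, the first path reaching it, while skipping forbidden configurations. A toy version on the buffer model (function names are mine, not the tool's):

```python
from collections import deque

# Toy buffer EFSM from slide 4.
def step(level, stimulus, capacity=2):
    if stimulus == "deposit":
        return (level + 1, "OK") if level < capacity else (level, "fail")
    return (level - 1, "OK") if level > 0 else (level, "fail")

def generate_tests(start, stimuli, task_of, forbidden=lambda s: False):
    """BFS the reachable state space; keep the first path to each task."""
    tests, seen = {}, {start}
    queue = deque([(start, [])])       # (state, path of (stimulus, response))
    while queue:
        state, path = queue.popleft()
        for a in stimuli:
            nxt, resp = step(state, a)
            if forbidden(nxt):
                continue               # eliminate forbidden configurations
            new_path = path + [(a, resp)]
            task = task_of(a, resp)
            if task not in tests:
                tests[task] = new_path  # extract a path to this task
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, new_path))
    return tests

# Coverage task = projected transition (stimulus, response): four tasks,
# each covered by one short test case (a stimulus/response sequence).
suite = generate_tests(0, ("deposit", "withdraw"), lambda a, r: (a, r))
```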

  13. Experiments
  • Buffer, Readers and Writers, Gas Station, Sliding Window Protocol, Elevator Control
  • Use different projections to obtain a hierarchy of test suites of varying strength
  • More projection variables created larger test suites with increased power of defect detection
  • Use test constraints to partition the state space, enabling measurable coverage of well-defined subsets of behavior

  14. Distributed File System
  • Statistics: FSM 370,000 states, 290 test cases, 729 coverage tasks
  • Original test: 12 PM (person-months), 18 defects (10 of severity 2)
  • Our test: 10 PM, 15 old defects (10 of severity 2), 2 new defects
  • Bottom line – we made a convert

  15. Call Center
  • Two FSM models
  • 37 defects
  • Responsiveness to changes in spec.
  • Reuse of function test for system test

  16. Conclusions
  • Flexible coverage criteria for a hierarchy of test suites
  • FSM constraints help with state explosion
  • Systematic and automated methodology
  • Eases reuse and maintenance of test suites
  • Successful in detecting faults and communicating error scenarios
