Simplifying and Isolating Failure-Inducing Input


Presentation Transcript


  1. Simplifying and Isolating Failure-Inducing Input Andreas Zeller and Ralf Hildebrandt IEEE Transactions on Software Engineering (TSE) 2002

  2. The Targeted Problem Which circumstances of the test case are responsible for this failure? … help the debugger! [Figure: a test case (input) is fed into the program under test, which fails.]

  3. Overall Strategy Simplify the failing test case AND isolate the difference between passing and failing tests. [Figure: the original failing test case is simplified into a minimal failing test case and diffed against a passing test case; the three runs of the program under test yield Failure, Failure, and Pass, respectively.]

  4. Basic ddmin Algorithm - Simplify INPUT: a failing test case. Repeatedly simplify the failing test and run the simplified version, until a minimal test case is reached in which deleting any one more input entity makes the failure vanish. OUTPUT: only the relevant part of the failing test case  repetitive execution of successively smaller test cases. Key idea: execution behavior is determined by a number of circumstances, so the input can be viewed as a set of circumstances.
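
Concretely (an illustrative toy, not the paper's example), "input = set of circumstances" can mean treating the failing input as a list of characters, so that every candidate test case is a subset of that list:

    # Treat a hypothetical failing input as a list of circumstances (characters).
    # Simplification repeatedly deletes subsets and re-runs the program on the rest.
    failing_input = list("a<b>c")                      # the original failing test case
    candidate = failing_input[:1] + failing_input[2:]  # delete one circumstance ('<')
    print("".join(candidate))                          # "ab>c": re-run the test on this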

  5. A test case = a set of circumstances Note: some of these slides are taken from Zeller’s ISSTA 2000 talk

  6. Input differences = set of changes

  7. Representing Tests and Outcomes
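
In the paper, running a test yields one of three outcomes: PASS (✔), FAIL (✘), or UNRESOLVED (?). A minimal Python sketch of that interface, assuming inputs are character lists and using a hypothetical stand-in for the real program under test:

    PASS, FAIL, UNRESOLVED = "PASS", "FAIL", "UNRESOLVED"

    def test(chars):
        """Classify one run of the program under test on a candidate input.

        Hypothetical stand-in: the 'program' crashes whenever the input
        contains both '<' and '>'. A real harness would return UNRESOLVED
        for runs that neither pass nor reproduce the original failure
        (e.g. a different crash, or input rejected as malformed).
        """
        candidate = "".join(chars)
        return FAIL if "<" in candidate and ">" in candidate else PASS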

  8. Simplifying to Minimal Tests

  9. Example of Minimizing Process Partition the failing input into delta 1 and delta 2. If delta 1 fails, delta 1 is a smaller failing test case; continue with it. If delta 1 does not fail but delta 2 fails, delta 2 is the smaller failing test case; continue with it. If both tests pass or are unresolved, larger subsets of the original must be tested (see the sketch below).
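
One step of this binary minimizing process, as a hedged sketch (the helper name is illustrative; the test interface is the one from slide 7):

    FAIL = "FAIL"

    def minimize_step(inp, test):
        """Try both halves of the failing input. Return a smaller failing
        input if either half fails on its own, or None if both halves
        pass or are unresolved (then larger subsets must be tested)."""
        mid = len(inp) // 2
        delta1, delta2 = inp[:mid], inp[mid:]
        if test(delta1) == FAIL:
            return delta1        # delta 1 fails: smaller test case, done
        if test(delta2) == FAIL:
            return delta2        # delta 2 fails: smaller test case, done
        return None              # both pass/unresolved: refine granularity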

  10. Observation and Strategy Larger subsets of the original input have a higher chance of failing; smaller subsets make faster progress when they do fail, but are less likely to fail. Strategy: - Partition the input into a larger number n of subsets - Test each small subset AND its large complement Outcomes: - a small subset fails  reduce to that subset and restart with n = 2 - a large complement fails  reduce to the complement and continue with n - 1 - no test fails  try again with 2n subsets - the granularity n reaches the size of the current input  it is minimal; done (see the full sketch below)
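
Putting these rules together gives the ddmin algorithm. Below is a compact Python reconstruction under the assumptions above (list inputs, the three-valued test function from slide 7); it is an illustrative sketch, not the authors' reference implementation:

    PASS, FAIL, UNRESOLVED = "PASS", "FAIL", "UNRESOLVED"

    def ddmin(inp, test):
        """Reduce the failing input `inp` (a list) to a 1-minimal failing
        input. `test(candidate)` returns PASS, FAIL, or UNRESOLVED."""
        assert test(inp) == FAIL, "ddmin must start from a failing input"
        n = 2  # granularity: number of subsets to split into
        while len(inp) >= 2:
            subsets = chunks(inp, n)
            reduced = False
            for i in range(n):
                complement = [x for j, s in enumerate(subsets) if j != i for x in s]
                if test(subsets[i]) == FAIL:
                    inp, n, reduced = subsets[i], 2, True  # reduce to subset
                    break
                if test(complement) == FAIL:
                    inp, n, reduced = complement, max(n - 1, 2), True  # reduce to complement
                    break
            if not reduced:
                if n >= len(inp):
                    break                     # cannot split finer: 1-minimal
                n = min(2 * n, len(inp))      # increase granularity
        return inp

    def chunks(seq, n):
        """Split `seq` into n contiguous, roughly equal-sized chunks."""
        size, rem = divmod(len(seq), n)
        out, start = [], 0
        for i in range(n):
            end = start + size + (1 if i < rem else 0)
            out.append(seq[start:end])
            start = end
        return out

With the toy test function from slide 7, "".join(ddmin(list("a<b>c"), test)) reduces the failing input to "<>": exactly the two characters that are jointly needed for the failure.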

  11. Summary of the Second Minimizing Process

  12. Example: GCC Dumps Core Original test case (the input is a program for GCC to compile)

  13. Sample of Tests

  14. Another example: Minimizing Fuzz

  15. A Third Example: Mozilla print issue

  16. Simplify = minimize user actions

  17. Simplify = minimize HTML

  18. Isolating Failure-Inducing Differences • The motivating issue: • The larger the simplified input, the higher the number of required tests. Ex. A simplified FLEX input of 2,121 characters required 11,589 to 17,960 tests, and up to 37,454 tests at high precision. If individual tests are fast, this is no problem; if tests are complicated, it is unacceptable!

  19. The basic idea of Isolation • Start from an initially empty passing test case. • Whenever a test case fails: • the smaller test case becomes the new failing case • Whenever a test case passes: • the larger test case becomes the new passing case  this minimizes the diff between the passing and failing tests
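
As a simplified illustration of this narrowing idea (a plain binary search over prefixes, which assumes the failure is monotone in prefix length; the general dd algorithm on the next slides needs no such assumption):

    PASS, FAIL = "PASS", "FAIL"

    def isolate_prefix(inp, test):
        """Toy isolation: inp[:lo] passes and inp[:hi] fails throughout;
        shrink the gap by binary search until it is a single element."""
        lo, hi = 0, len(inp)
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if test(inp[:mid]) == FAIL:
                hi = mid          # found a smaller failing test case
            else:
                lo = mid          # found a larger passing test case
        return inp[lo:hi]         # the isolated failure-inducing difference

With the toy test from slide 7 and list("a<b>c"), this isolates the single character '>': appending it to the passing prefix "a<b" triggers the failure.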

  20. Simplification vs Isolation • Simplification • Make every part of the simplified test case relevant (ie, removing any part makes the failure go away) • Isolation • Find one relevant part of the test case (ie, removing this particular part makes the failure go away) • Isolating the diff pinpoints the failure cause much faster than minimizing the test case

  21. dd: extending ddmin • Compute subsets as subsets of the difference (failing test - passing test) • Change the rule “reduce to subset” so that (passing test U subset) is tested • Add 2 rules: • Increase to complement: if (failing test - subset) passes, it becomes the new passing test, reducing the diff to the subset. • Increase to subset: if (passing test U subset) passes, make (passing test U subset) the new passing test.
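
A compact Python reconstruction of dd in this spirit, hedged: configurations are sets of circumstances, PASS/FAIL are as in the earlier sketches, and the rule scheduling is simplified relative to the paper (each round runs all 2n tests up front):

    PASS, FAIL, UNRESOLVED = "PASS", "FAIL", "UNRESOLVED"

    def dd(c_pass, c_fail, test):
        """Narrow the difference between a passing configuration c_pass and
        a failing configuration c_fail (both sets) to a 1-minimal diff."""
        n = 2
        while True:
            delta = sorted(c_fail - c_pass)
            if len(delta) <= 1:
                return c_pass, c_fail          # difference is 1-minimal
            deltas = [set(delta[i::n]) for i in range(n)]
            unions = [test(c_pass | d) for d in deltas]
            minuses = [test(c_fail - d) for d in deltas]
            if FAIL in unions:                 # reduce to subset
                c_fail, n = c_pass | deltas[unions.index(FAIL)], 2
            elif PASS in minuses:              # increase to complement
                c_pass, n = c_fail - deltas[minuses.index(PASS)], 2
            elif PASS in unions:               # increase to subset
                c_pass, n = c_pass | deltas[unions.index(PASS)], max(n - 1, 2)
            elif FAIL in minuses:              # reduce to complement
                c_fail, n = c_fail - deltas[minuses.index(FAIL)], max(n - 1, 2)
            elif n < len(delta):
                n = min(2 * n, len(delta))     # increase granularity
            else:
                return c_pass, c_fail          # cannot split finer

    # Hypothetical usage: circumstances are character positions in "a<b>c";
    # the program fails iff both '<' (pos 1) and '>' (pos 3) are present.
    failure = lambda cfg: FAIL if {1, 3} <= cfg else PASS
    p, f = dd(set(), {0, 1, 2, 3, 4}, failure)
    print(sorted(f - p))  # the isolated failure-inducing difference: [1]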

  22. Comparing ddmin vs dd • Isolating the GCC input: • ddmin: 731 tests to minimize the program • dd: 59 tests; pinpoints a relevant diff of 2 chars • Isolating fuzz input: • dd’s number of tests is significantly lower • The minimal failure-inducing diff is always 1 char

  23. Summary and Future Work • Delta debugging has become well known, including an Eclipse plugin • Isolating failure-inducing inputs is the key contribution • Since this paper: • Domain-specific simplification methods • Optimization • Undoing changes • Combination with other techniques • Other applications, e.g., security flaws
