
White-Box Testing Techniques IV


Presentation Transcript


  1. White-Box Testing Techniques IV
     Software Testing and Verification, Lecture 10
     Prepared by Stephen M. Thebaut, Ph.D., University of Florida

  2. White-Box Testing Topics
     • Logic coverage (lecture I)
     • Dataflow coverage (lecture II)
     • Path conditions and symbolic evaluation (lecture III)
     • Other white-box testing strategies (e.g., “fault-based testing”) (lecture IV)

  3. Other white-box testing strategies
     • Program instrumentation
     • Boundary value analysis (revisited)
     • Fault-based testing
       – Mutation analysis
       – Error seeding

  4. Program Instrumentation
     • Allows for the measurement of white-box coverage during program execution.
     • Code is inserted into a program to record the cumulative execution of statements, branches, du-paths, etc. (a sketch follows below).
     • Execution takes longer and program timing may be altered.
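As a minimal sketch of the idea (in Python, and not taken from the lecture), the coverage dictionary and probe() calls below stand in for the code that an instrumentation tool would insert into a hypothetical program under test to record cumulative branch executions:

# Hypothetical instrumented program; 'coverage' and 'probe' are the inserted code.
coverage = {"B1": 0, "B2": 0}   # cumulative branch execution counts

def probe(branch_id):
    coverage[branch_id] += 1    # inserted probe: record that this branch ran

def max_of(x, y):
    if x < y:
        probe("B1")             # probe inserted at the start of the 'then' part
        result = y
    else:
        probe("B2")             # probe inserted at the start of the 'else' part
        result = x
    return result

max_of(1, 2)
max_of(2, 1)
print(coverage)                 # {'B1': 1, 'B2': 1} -- both branches covered

After the test run, the counters show which branches executed and how often; the extra bookkeeping is what slows execution and perturbs timing, as the slide notes.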

  5. Boundary Value Analysis
     (1) if (X<Y) then A
     (2) else B
         end_if_else
     • Applies to both control and data structures.
     • Strategies are analogous to black-box boundary value analysis (see the boundary-case sketch below).
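For illustration, here is a small sketch (the function and the concrete values are invented for this example) of boundary-value test cases for the predicate X < Y, exercising values on and adjacent to the X == Y boundary:

# Illustrative boundary-value test cases for the slide's if (X<Y) example.
def branch_taken(x, y):
    return "A" if x < y else "B"

# (X, Y, expected branch)
boundary_cases = [
    (4, 5, "A"),   # just below the boundary: X = Y - 1
    (5, 5, "B"),   # on the boundary: X == Y (catches '<' vs '<=' faults)
    (6, 5, "B"),   # just above the boundary: X = Y + 1
]

for x, y, expected in boundary_cases:
    assert branch_taken(x, y) == expected
print("all boundary cases pass")

The X == Y case is the one that distinguishes a correct "<" from a faulty "<=", which is why tests sit on and immediately around the boundary.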

  6. Fault-Based Testing
     • Suppose a test case set reveals NO program errors – should you celebrate or mourn the event?
     • Answer: it depends on whether you’re the developer or the tester... :-)
     • Serious answer: it depends on the error-revealing capability of your test set.
     • Mutation Analysis attempts to measure test case set sufficiency.

  7. Mutation Analysis Procedure
     • Generate a large number of “mutant” programs by replicating the original program except for one small change (e.g., change the “+” in line 17 to a “-”; change the “<” in line 132 to a “<=”; etc.).
     • Compile and run each mutant program against the test set. (cont’d)

  8. Mutation Analysis Procedure (cont’d)
     • Compare the ratio of mutants “killed” (i.e., revealed) by the test set to the number of “survivors.”
     • The higher the “kill ratio,” the better the test set. (A toy sketch of the procedure follows below.)
     • What are some of the potential drawbacks of this approach?
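Below is a toy Python sketch of the procedure; the original function, the two mutants, and the test set are all invented for illustration. A mutant counts as "killed" when some test case makes its output differ from the original's:

# Toy mutation analysis: original program computes max(x, y).
def original(x, y):
    return y if x < y else x

# Each mutant replicates the original except for one small change.
def mutant_1(x, y):
    return y if x <= y else x   # '<' changed to '<='

def mutant_2(x, y):
    return y if x > y else x    # '<' changed to '>'

test_set = [(1, 2), (3, 3), (5, 4)]

def killed(mutant):
    # A mutant is "killed" if any test case distinguishes it from the original.
    return any(mutant(x, y) != original(x, y) for x, y in test_set)

mutants = [mutant_1, mutant_2]
kills = sum(killed(m) for m in mutants)
print(f"kill ratio: {kills}/{len(mutants)}")   # prints: kill ratio: 1/2

Note that mutant_1 here happens to behave identically to the original on every input (when x == y, both return the same value), so no test can ever kill it. Such "equivalent mutants" depress the kill ratio through no fault of the test set, which points at one well-known drawback of the approach.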

  9. Error Seeding
     • A similar approach, Error Seeding, has been used to estimate the “number of errors” remaining in a program.
     • Count the number of errors in the following Quick Sort program:
       [Quick Sort program shown on the original slide; not reproduced in the transcript.]

  10. Error Seeding Procedure
     • Before testing, “seed” the program with a number of “typical errors,” keeping careful track of the changes made.
     • After a period of testing, compare the number of seeded and non-seeded errors detected. (cont’d)

  11. Error Seeding Procedure (cont’d)
     • If N is the total number of errors seeded, n is the number of seeded errors detected, and x is the number of non-seeded errors detected, then the number of remaining (non-seeded) errors in the program is about x(N/n – 1). (A numeric illustration follows below.)
     • What assumptions underlie this formula? Consider its derivation…
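The formula translates directly into code; the numbers below are hypothetical, chosen only to illustrate the computation:

# Estimate of remaining non-seeded errors, per the slide's formula:
# N seeded errors, n seeded errors detected, x non-seeded errors detected.
def remaining_errors(N, n, x):
    return x * (N / n - 1)

# Hypothetical figures for illustration:
print(remaining_errors(N=100, n=25, x=10))   # 10 * (100/25 - 1) = 30.0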

  12. Derivation of Error Seeding Formula
     Let X be the total number of NON-SEEDED errors in the program.
     Assuming seeded and non-seeded errors are equally easy/hard to detect, after some period of testing, x : n ≈ X : N.
     So X ≈ xN/n, and therefore X – x ≈ xN/n – x = x(N/n – 1), as claimed.

  13. Exercise
     After just 2 days of testing a new product release, Janice happily announced to her manager that her team had already found 20 errors. The manager, who was surprised to hear that so many errors had been discovered so quickly, asked Janice why she was feeling happy in light of this depressing news. She then said, “Well, 16 of those 20 ‘errors’ weren’t real ‘bugs’ – they were, in fact, among 40 ‘errors’ that I created and then ‘seeded’ into the system before testing started. So, I figure that there should only be just a few REAL bugs left to find now.” Assuming Janice used the error seeding technique discussed in class, how many real bugs does she estimate are still left to find?
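One possible worked reading of the exercise, taking the numbers as given in the story (N = 40 seeded errors, n = 16 seeded errors detected, x = 20 – 16 = 4 real errors detected):

# Applying the slide's formula to Janice's figures:
N = 40   # errors Janice seeded
n = 16   # seeded errors detected so far
x = 4    # real (non-seeded) errors detected so far

print(x * (N / n - 1))   # 4 * (40/16 - 1) = 4 * 1.5 = 6.0

So under the slide's assumptions, Janice would estimate about 6 real bugs still left to find.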

  14. White-Box Testing Techniques IV
     Software Testing and Verification, Lecture 10
     Prepared by Stephen M. Thebaut, Ph.D., University of Florida
