
Software Testing


Presentation Transcript


  1. Software Testing

  2. Testing 1. We test to find errors. 2. A good test case has a high probability of finding an as yet undiscovered error. 3. A successful test is one that uncovers an as yet undiscovered error. 4. Testing cannot prove the absence of defects; it can only show that defects are present.

  3. Testing Methodologies • Inspection/Review/Observation • Demonstration • Simulation/Prototyping • Live Scenarios (normal and abnormal operation) • Stress

  4. Types of Testing • Functional • Establishes confidence • Structural • Seeks faults • Integration • Tests modules together • System • Tests in the environment • User acceptance • Idiot proofing • Regression • After repairing

  5. Functional Testing • f(inputs) = outputs (diagram of inputs and outputs omitted)

  6. Structural Testing (diagram of inputs and outputs omitted)

  7. Boundary Value Testing • Boundary Value Analysis • Robustness Testing • Worst-Case Testing • Special Value Testing • Random Testing

  8. Boundary Value Testing • Refers to testing along the boundaries of the input domains of a function. • The purpose is to stress the limits of the function and determine if it responds correctly to proper and improper inputs.

  9. Boundary Value Analysis • Boundary Value Analysis refers to analysis of functions with inputs from a range of values that have boundaries: a ≤ x ≤ b • If there are two input variables x1 and x2, then the boundaries might be: a ≤ x1 ≤ b and c ≤ x2 ≤ d

  10. Boundary Value Analysis • The intervals [a,b] and [c,d] are referred to as the ranges of x1 and x2. • One of the reasons for strong typing in some programming languages is to prevent cases where boundary value analysis will not help in testing. • Boundary value analysis is limited in cases where the inputs are not independent, or where the ranges of the inputs are not bounded.

  11. Input Domain of a function of two variables (Figure 5.1 from the textbook).

  12. Boundary value analysis test cases for a function of two variables (Figure 5.2 from the textbook).

  13. Generalizing boundary value analysis • Can be generalized in two ways: • Number of variables • Hold all but one variable at nominal values and run test cases with the remaining variable at each of its boundary values. • Repeat for all input variables • For n inputs, 4n + 1 test cases (see the sketch below) • Kinds of ranges • What are the types of the inputs? • Are there minimums and maximums?
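The 4n + 1 count can be made concrete with a small generator. The following Python sketch is illustrative only (the `boundary_values` and `bva_cases` helpers and the integer ranges are assumptions, not from the slides): it holds every variable at its nominal value except one, which takes each of its five boundary values in turn.

```python
# Minimal sketch of boundary value analysis test-case generation,
# assuming each input variable is given as a (min, max) range of integers.

def boundary_values(lo, hi):
    """Return the five BVA values for one variable: min, min+, nom, max-, max."""
    return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]

def bva_cases(ranges):
    """Hold every variable at its nominal value except one, which takes each
    of its five boundary values in turn.  Yields 4n + 1 distinct test cases."""
    nominal = [(lo + hi) // 2 for lo, hi in ranges]
    cases = {tuple(nominal)}                 # the all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for v in boundary_values(lo, hi):
            case = list(nominal)
            case[i] = v
            cases.add(tuple(case))
    return sorted(cases)

# Two variables a <= x1 <= b and c <= x2 <= d, e.g. [1, 100] and [1, 12]:
print(bva_cases([(1, 100), (1, 12)]))        # 9 test cases (4*2 + 1)
```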

  14. Robustness Testing • Extension of Boundary value testing • Add boundary values outside of the range of inputs allowed. • The philosophy here is to make the function fail gracefully.
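As a rough sketch of this idea (again assuming integer ranges and illustrative helper names), robustness test cases can be generated the same way as boundary value analysis, with two extra values per variable just outside the range, giving 6n + 1 test cases:

```python
# Sketch of robustness testing: each variable also takes a value just below
# its minimum (min-) and just above its maximum (max+).  The expectation is
# that out-of-range cases make the function fail gracefully, not crash.

def robust_values(lo, hi):
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

def robustness_cases(ranges):
    nominal = [(lo + hi) // 2 for lo, hi in ranges]
    cases = {tuple(nominal)}
    for i, (lo, hi) in enumerate(ranges):
        for v in robust_values(lo, hi):
            case = list(nominal)
            case[i] = v
            cases.add(tuple(case))
    return sorted(cases)

print(len(robustness_cases([(1, 100), (1, 12)])))   # 13  (6*2 + 1)
```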

  15. Robustness Testing (Figure 5.3 from the textbook: test cases just outside the range of valid inputs).

  16. Worst-Case Testing • Boundary Value Analysis makes the Single Fault Assumption of Probability Theory • What happens when more than one variable has an extreme value? • Worst Case Analysis

  17. Worst-Case Testing • For each variable, start with the five-element set {min, min+, nom, max-, max} • Take the Cartesian product of these sets and generate a test case from each element (5^n cases for n variables).
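A minimal sketch of worst-case generation, assuming the same illustrative integer ranges as before; `itertools.product` supplies the Cartesian product, so n variables yield 5^n test cases:

```python
# Sketch of worst-case test-case generation: the Cartesian product of each
# variable's five-element set {min, min+, nom, max-, max}.

from itertools import product

def five_values(lo, hi):
    return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]

def worst_case_cases(ranges):
    return list(product(*(five_values(lo, hi) for lo, hi in ranges)))

cases = worst_case_cases([(1, 100), (1, 12)])
print(len(cases))      # 25 test cases for two variables
```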

  18. Worst-case test cases for a function of two variables (Figure 5.4 from the textbook).

  19. Worst-Case Testing, continued (Figure 5.4 from the textbook).

  20. Special Value Testing • Most popular form of functional testing • Based on domain knowledge of the tester • Pick values that are likely to cause a problem • February 29 for a date, June 31…
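A short sketch of the idea, using a hypothetical `is_valid_date` function as the code under test (both the function and the chosen cases are illustrative, not from the slides); the cases come from domain knowledge rather than from the input ranges:

```python
# Sketch of special value testing: hand-picked cases around leap days and
# 30- vs 31-day months.

import calendar

def is_valid_date(year, month, day):
    """Example implementation under test (illustrative only)."""
    if not 1 <= month <= 12:
        return False
    return 1 <= day <= calendar.monthrange(year, month)[1]

special_cases = [
    ((2000, 2, 29), True),    # leap year divisible by 400
    ((1900, 2, 29), False),   # century year, not a leap year
    ((2023, 2, 29), False),   # ordinary non-leap year
    ((2023, 6, 31), False),   # June has only 30 days
    ((2023, 12, 31), True),   # last day of the year
]

for args, expected in special_cases:
    assert is_valid_date(*args) == expected, args
print("all special-value cases passed")
```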

  21. Random Testing • Use a random number generator to generate test cases. • Statistically the most valid method. • How many test cases are needed to adequately test the function?
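A minimal sketch, assuming uniformly distributed integer inputs and a fixed seed so that failing cases can be reproduced; how many cases to draw is exactly the open question the slide raises:

```python
# Sketch of random testing: draw test inputs uniformly from each variable's
# range with a seeded random number generator for reproducibility.

import random

def random_cases(ranges, n_cases=100, seed=42):
    rng = random.Random(seed)
    return [tuple(rng.randint(lo, hi) for lo, hi in ranges)
            for _ in range(n_cases)]

for x1, x2 in random_cases([(1, 100), (1, 12)], n_cases=5):
    print(x1, x2)
```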

  22. Guidelines for Boundary Value Testing • Test methods based on the input domain of the function are rudimentary. • They share the common assumption that the input variables are independent; this is a dangerous assumption. • Two other distinctions: • Normal vs. Robust • Single vs. Multiple Fault Assumption

  23. Equivalence Classes • Equivalence Classes form a partition of a set • The partition is a collection of mutually disjoint subsets whose union is the entire set. • Completeness • Non-redundancy • Elements of a subset (Equivalence Class) have something in common. • Equivalence Class Testing selects test cases by choosing one element from each equivalence class.
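The two partition properties can be checked mechanically. The following sketch uses an illustrative partition of the months 1..12 (the set and its classes are assumptions for the example, not from the slides):

```python
# Sketch of the two partition properties: the classes are pairwise disjoint
# (non-redundancy) and their union is the whole input set (completeness).

inputs = set(range(1, 13))                       # months 1..12
classes = [{1, 3, 5, 7, 8, 10, 12},              # 31-day months
           {4, 6, 9, 11},                        # 30-day months
           {2}]                                  # February

union = set().union(*classes)
pairwise_disjoint = sum(len(c) for c in classes) == len(union)

assert union == inputs                           # completeness
assert pairwise_disjoint                         # non-redundancy

# Equivalence class testing then picks one representative per class:
representatives = [next(iter(c)) for c in classes]
print(representatives)
```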

  24. Equivalence Classes • Key to equivalence class testing is the choice of equivalence relation that determines the classes. • For example purposes, consider an expanded function F • F(x1, x2) for a ≤ x1 ≤ d, with intervals [a,b), [b,c), [c,d], and e ≤ x2 ≤ g, with intervals [e,f), [f,g] • Weak Normal Equivalence Class Testing • Strong Normal Equivalence Class Testing

  25. Weak Normal Equivalence Class Testing • Follows the Single Fault Assumption • Use one value from each equivalence class (interval). (Figure 6.1 from the textbook.)

  26. Strong Normal Equivalence Class Testing • Based on the Multiple Fault Assumption • Test cases are taken from each element of the Cartesian product of the intervals. (Figure 6.2 from the textbook.)
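The contrast between the two normal forms can be sketched for the F(x1, x2) example above; the concrete values chosen for a..d and e..g, and the class representatives, are assumptions for illustration:

```python
# Sketch of weak vs. strong normal equivalence class testing for F(x1, x2).

from itertools import product

a, b, c, d = 0, 10, 20, 30        # x1 intervals: [a,b), [b,c), [c,d]
e, f, g = 0, 50, 100              # x2 intervals: [e,f), [f,g]

x1_classes = [(a + b) // 2, (b + c) // 2, (c + d) // 2]   # one value per class
x2_classes = [(e + f) // 2, (f + g) // 2]

# Weak normal: single fault assumption -- cover every class of every variable,
# cycling the shorter list; max(3, 2) = 3 test cases here.
n_cases = max(len(x1_classes), len(x2_classes))
weak_normal = [(x1_classes[i % len(x1_classes)], x2_classes[i % len(x2_classes)])
               for i in range(n_cases)]

# Strong normal: multiple fault assumption -- the Cartesian product of the
# classes (3 * 2 = 6 test cases here).
strong_normal = list(product(x1_classes, x2_classes))

print(weak_normal)     # 3 cases
print(strong_normal)   # 6 cases
```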

  27. Weak Robust Equivalence Class Testing • For valid inputs, use one value from each valid class • For invalid inputs, use one invalid value, and the rest valid. (Figure 6.3 from the textbook.)

  28. Strong Robust Equivalence Class Testing • The name is redundant • Test cases are taken from all combinations of valid and invalid values. (Figure 6.4 from the textbook.)
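A companion sketch of the two robust forms, reusing the illustrative values from the previous sketch and adding one invalid class on each side of each variable's range (all concrete numbers are assumptions for the example):

```python
# Sketch of weak robust and strong robust equivalence class testing.

from itertools import product

a, d = 0, 30                          # valid x1 range [a, d]
e, g = 0, 100                         # valid x2 range [e, g]
x1_valid, x1_invalid = [5, 15, 25], [a - 1, d + 1]
x2_valid, x2_invalid = [25, 75],    [e - 1, g + 1]
nominal_x1, nominal_x2 = 15, 75       # representative valid values

# Weak robust: valid cases covering every valid class (as in weak normal),
# plus one case per invalid class with the other variable held at a valid value.
n_valid = max(len(x1_valid), len(x2_valid))
weak_robust = [(x1_valid[i % len(x1_valid)], x2_valid[i % len(x2_valid)])
               for i in range(n_valid)]
weak_robust += [(bad, nominal_x2) for bad in x1_invalid]
weak_robust += [(nominal_x1, bad) for bad in x2_invalid]

# Strong robust: the Cartesian product of all classes, valid and invalid
# ((3 + 2) * (2 + 2) = 20 test cases here).
strong_robust = list(product(x1_valid + x1_invalid, x2_valid + x2_invalid))

print(len(weak_robust))    # 7
print(len(strong_robust))  # 20
```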

  29. Guidelines and Observations • The weak forms of equivalence class testing are not as comprehensive as the corresponding strong forms. • If the implementation language is strongly typed, then the robust forms make no sense. • If error conditions are a high priority, the robust forms are appropriate. • Equivalence class testing is appropriate when input data is defined in terms of intervals and sets of discrete values, especially when system malfunctions can result from improper inputs.

  30. Guidelines and Observations (cont.) • Equivalence class testing is strengthened by a hybrid approach with boundary value testing. • Equivalence class testing is appropriate when the function is complex. • Strong equivalence class testing makes a presumption that the variables are independent, and the corresponding multiplication of test cases causes redundancy. • Selecting the right equivalence class relation may not be that easy. It may take several tries. • The difference between weak and strong forms is helpful in the distinction between progression and regression testing.

  31. Questions?
