
Bottom-Up Integration Testing


Presentation Transcript


  1. Bottom-Up Integration Testing After unit testing of individual components, the components are combined into a system. Bottom-Up Integration: each component at the lowest level of the hierarchy is tested individually; then the components that rely on these are tested. Component Driver: a routine that simulates a test call from a parent component to a child component.
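
As a minimal sketch of the driver idea (not part of the original slides), here is a Python unit test acting as a driver for a hypothetical low-level component discount(): the test plays the role of the parent component that has not been integrated yet.

    import unittest

    # Hypothetical low-level component under test
    def discount(price, percent):
        """Return the price reduced by the given percentage."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (1 - percent / 100)

    class DiscountDriver(unittest.TestCase):
        # The driver supplies the call a parent component would normally make
        def test_call_as_parent_would(self):
            self.assertAlmostEqual(discount(200.0, 25), 150.0)

    if __name__ == "__main__":
        unittest.main()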

  2. Bottom-Up Testing Example (Hierarchy: A at the top calls B and C) • Test B and C individually (using drivers) • Test A such that it calls B. If an error occurs, we know that the problem is in A or in the interface between A and B • Test A such that it calls C. If an error occurs, we know that the problem is in A or in the interface between A and C • (-) Top-level components are the most important, yet they are tested last.
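
A sketch of this sequence in Python, with A, B, and C as hypothetical placeholder functions: B and C are exercised first through driver tests, then A is tested calling the already-verified B and C.

    import unittest

    def b(x):            # component B (leaf)
        return x * 2

    def c(x):            # component C (leaf)
        return x + 10

    def a(x):            # component A, which relies on B and C
        return b(x) + c(x)

    class BottomUpExample(unittest.TestCase):
        # Step 1: drivers exercise B and C individually
        def test_b(self):
            self.assertEqual(b(3), 6)

        def test_c(self):
            self.assertEqual(c(3), 13)

        # Step 2: A calls the already-tested B and C; a failure here
        # points at A or at the A-B / A-C interfaces
        def test_a_with_real_b_and_c(self):
            self.assertEqual(a(3), 19)

    if __name__ == "__main__":
        unittest.main()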

  3. Top-Down Testing Example (Hierarchy: A at the top calls B and C) • Test A individually (use stubs for B and C) • Test A such that it calls B (stub for C). If an error occurs, we know that the problem is in B or in the interface between A and B • Test A such that it calls C (stub for B). If an error occurs, we know that the problem is in C or in the interface between A and C • * Stubs are used to simulate the activity of components that are not currently under test; (-) may require many stubs
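
A sketch of the top-down order in Python, again with hypothetical components: A takes its collaborators as parameters so that B and C can be replaced by stubs (here unittest.mock objects) before they are integrated.

    import unittest
    from unittest import mock

    def a(x, b, c):      # component A, calling the collaborators it is given
        return b(x) + c(x)

    class TopDownExample(unittest.TestCase):
        def test_a_with_stubs_for_b_and_c(self):
            # Stubs simulate B and C, which are not under test yet
            stub_b = mock.Mock(return_value=6)
            stub_c = mock.Mock(return_value=13)
            self.assertEqual(a(3, stub_b, stub_c), 19)
            stub_b.assert_called_once_with(3)
            stub_c.assert_called_once_with(3)

        def test_a_with_real_b_and_stub_c(self):
            # Next step: the real B is integrated, C is still stubbed;
            # a failure here points at B or at the A-B interface
            def real_b(x):
                return x * 2
            stub_c = mock.Mock(return_value=13)
            self.assertEqual(a(3, real_b, stub_c), 19)

    if __name__ == "__main__":
        unittest.main()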

  4. Big-Bang Integration After all components have been unit tested, we may test the entire system with all its components in action. (-) It may be impossible to figure out where faults occur unless the faults are accompanied by component-specific error messages.
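
For contrast, a sketch of the big-bang situation in Python with made-up components: everything is real and assembled at once, so a failing end-to-end assertion does not by itself say which component or interface is at fault.

    import unittest

    def parse(text):                 # component 1
        return [int(tok) for tok in text.split(",")]

    def total(values):               # component 2
        return sum(values)

    def report(text):                # component 3, assembling the others
        return f"total={total(parse(text))}"

    class BigBangExample(unittest.TestCase):
        def test_whole_system_at_once(self):
            # If this fails, the fault could be in parse, total, report,
            # or in any interface between them
            self.assertEqual(report("1,2,3"), "total=6")

    if __name__ == "__main__":
        unittest.main()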

  5. Sandwich Integration A multi-level component hierarchy is divided into three levels, with the test target in the middle: a top-down approach is used in the top layer; a bottom-up approach is used in the lower layer.
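
A sketch of the idea in Python for a hypothetical three-layer stack (UI -> service -> storage), with the middle service layer as the test target: the test case acts as a driver standing in for the top layer, while a stub replaces the bottom layer.

    import unittest
    from unittest import mock

    # Hypothetical middle-layer component under test
    def service_lookup(key, storage_get):
        value = storage_get(key)
        return value.upper() if value is not None else "MISSING"

    class SandwichMiddleLayer(unittest.TestCase):
        def test_service_between_driver_and_stub(self):
            # Driver role: this test calls the service as the UI layer would.
            # Stub role: the mock replaces the storage layer below it.
            stub_get = mock.Mock(return_value="hello")
            self.assertEqual(service_lookup("greeting", stub_get), "HELLO")
            stub_get.assert_called_once_with("greeting")

    if __name__ == "__main__":
        unittest.main()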

  6. Sync & Stabilize Approach

  7. Integration Strategies Comparison

  8. Testing Object Oriented Systems Objects tend to be small and simple, while the complexity is pushed out into the interfaces. Hence unit testing tends to be easy and simple, but integration testing is complex and tedious. (-) Overridden virtual methods require thorough testing, just as the base class methods do.
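
A small Python illustration of that last point, with hypothetical Rectangle/Square classes: the overriding method is tested in its own right, not only the base-class version.

    import unittest

    class Rectangle:
        def __init__(self, w, h):
            self.w, self.h = w, h
        def area(self):
            return self.w * self.h

    class Square(Rectangle):
        def __init__(self, side):
            super().__init__(side, side)
        def area(self):              # overridden method: needs its own tests
            return self.w ** 2

    class AreaTests(unittest.TestCase):
        def test_base_class_method(self):
            self.assertEqual(Rectangle(3, 4).area(), 12)

        def test_overridden_method(self):
            # The override must honour the same contract as the base method
            self.assertEqual(Square(3).area(), 9)

    if __name__ == "__main__":
        unittest.main()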

  9. Testing Object Oriented Systems

  10. Test-Related Activities • Define test objectives (i.e. what qualities of the system you want to verify) • Devise test cases • Program (and verify!) the tests • Execute tests • Analyze the results

  11. Test Plan A document that describes the criteria and the activities necessary for showing that the software works correctly. A test plan is well defined when, upon completion of testing, every interested party can recognize that the test objectives have been met.

  12. Test Plan Contents • What automated test tools are used (if any) • Methods for each stage of testing (e.g. code walk-through for unit testing, top-down integration, etc.) • Detailed list of test cases for each test stage • How test data is selected or generated • How output data & state information are to be captured and analyzed

  13. Static & Dynamic Analysis Static Analysis: uncovers minor coding violations by analyzing code at compile time (e.g. compiler warnings, FxCop) Dynamic Analysis: runtime monitoring of code (e.g. profiling, variable monitoring, asserts, tracing, exceptions, runtime type information, etc)
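
A small Python illustration of the dynamic side (the function and values are made up): asserts and trace logging observe the code while it runs, whereas a static checker would flag issues without executing anything.

    import logging

    logging.basicConfig(level=logging.DEBUG)

    def average(values):
        # Assert: a runtime check that the caller kept the contract
        assert len(values) > 0, "average() called with an empty list"
        result = sum(values) / len(values)
        # Tracing: record intermediate state while the program runs
        logging.debug("average(%r) -> %s", values, result)
        return result

    if __name__ == "__main__":
        print(average([2, 4, 6]))    # prints 4.0 and logs the call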

  14. Automated Testing • Stub & driver generation • Unit test case generation • Keyboard input simulation / terminal user impersonation • Web request simulation / web user impersonation • Code coverage calculation / code path tracing • Comparing actual output to the expected output
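
As one concrete instance of the last item, a sketch of comparing actual output to expected ("golden") output for a hypothetical formatting function:

    import unittest

    def format_invoice(name, amount):
        # Hypothetical function whose textual output we want to pin down
        return f"Invoice for {name}: ${amount:.2f}"

    EXPECTED = "Invoice for Ada: $99.50"     # the desired output

    class GoldenOutputTest(unittest.TestCase):
        def test_output_matches_expected(self):
            self.assertEqual(format_invoice("Ada", 99.5), EXPECTED)

    if __name__ == "__main__":
        unittest.main()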

  15. When to Stop Testing? Run out of time or money? Curiously, the more faults we find at the beginning of testing, the greater the probability that we will find even more faults if we keep testing further! Finding and testing all possible path combinations through the code?

  16. Fault Seeding Faults are intentionally inserted into the code, and then the test process is used to locate them. The idea is that: detected seeded faults / total seeded faults = detected nonseeded faults / total nonseeded faults. Thus we can obtain a measure of the effectiveness of the test process and use it to adjust test time and thoroughness.
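
A worked example of that proportion with made-up numbers: if 8 of 10 seeded faults were detected along with 20 nonseeded faults, the estimated total of nonseeded faults is 20 / (8/10) = 25, i.e. roughly 5 real faults presumably remain.

    # Made-up figures for illustration
    seeded_total = 10
    seeded_found = 8
    nonseeded_found = 20

    detection_rate = seeded_found / seeded_total              # 0.8
    estimated_nonseeded_total = nonseeded_found / detection_rate
    estimated_remaining = estimated_nonseeded_total - nonseeded_found

    print(estimated_nonseeded_total)   # 25.0
    print(estimated_remaining)         # 5.0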

  17. Confidence in the Software The confidence that the software contains no more than N indigenous faults (assuming all seeded faults have been detected): C = S / (S + N + 1), if n <= N; C = 1, if n > N, where S is the number of seeded faults, N is the claimed number of indigenous faults, and n is the number of indigenous faults found.
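
A small Python helper for that formula (the function name is just for illustration), with the classic numbers: seeding 9 faults, finding all of them and no indigenous ones, gives 90% confidence in the claim that there are no indigenous faults.

    def confidence(S, N, n):
        """Confidence in the claim of at most N indigenous faults,
        given S seeded faults (all detected) and n indigenous faults found."""
        return 1.0 if n > N else S / (S + N + 1)

    print(confidence(S=9, N=0, n=0))   # 9 / (9 + 0 + 1) = 0.9
    print(confidence(S=9, N=3, n=2))   # 9 / (9 + 3 + 1) ≈ 0.69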

  18. Read Chapter 8 from the white book.
