Software Testing

  1. Software Testing • Testing types • Testing strategy • Testing principles

  2. Testing Objectives • Testing is a process of executing a program with the intent of finding an error • A good test case is one that has a high probability of finding an as-yet undiscovered error • A successful test is one that uncovers an as-yet undiscovered error

  3. Reminders • Testing is a process for finding semantic (logical) errors in implemented programs, not syntactic errors, which the compiler catches. • Testing can reveal the presence of errors, NOT their absence. • Can you tell the differences between testing, debugging, and compilation?

  4. Testing Techniques • White-box testing (WBT) • Applied when the internal workings of a product are known • Some techniques • Basis path testing • Condition testing • Data flow testing

  5. Testing Techniques • Black-box testing (BBT) • Applied when only the specified function that a product is designed to perform is known • Some techniques • Equivalence partitioning (input domain testing) • Boundary value analysis • Comparison testing (back-to-back testing)

  6. Basis Path Testing (WBT) • Steps • Construct a flow graph from the program • Compute the cyclomatic complexity of the flow graph, which equals the number of independent paths of the program • Determine the basis paths • Design test cases that exercise these basis paths

  7. Cyclomatic Complexity • A measure of the complexity of a program • Three equivalent ways to calculate it • The number of regions of the flow graph • (total edges) – (total nodes) + 2 • (number of predicate nodes) + 1

  8. Cyclomatic Complexity • 4 basis paths for the example flow graph (figure not reproduced here) • 1-2-3-4-5-7-10 • 1-2-3-4-5-8-10 • 1-2-3-4-6-9-10 • 1-2-10 (there may be more than one valid set of basis paths)
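
To make this concrete, here is a minimal sketch assuming a small invented function (classify and its thresholds are illustrative, not from the slides). It has three predicate nodes, so V(G) = 3 + 1 = 4, and four test cases suffice to cover one set of basis paths:

    # Hypothetical example: three predicate nodes, so V(G) = 3 + 1 = 4.
    def classify(score):
        if score < 0:           # predicate 1
            return "invalid"
        if score >= 60:         # predicate 2
            if score >= 90:     # predicate 3
                return "excellent"
            return "pass"
        return "fail"

    # One test case per basis path (4 in total):
    assert classify(-5) == "invalid"    # predicate 1 true
    assert classify(95) == "excellent"  # predicates: 1 false, 2 true, 3 true
    assert classify(70) == "pass"       # predicates: 1 false, 2 true, 3 false
    assert classify(30) == "fail"       # predicates: 1 false, 2 false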

  9. Condition Testing (WBT) • Focus on testing each condition in the program • Branch testing is perhaps the simplest method of condition testing • Steps • Define the combinations of truth values for the variables in predicate statements • Design test cases to cover all these combinations (see the sketch below)
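
As a rough sketch (the compound predicate below is invented for illustration), condition testing exercises every truth-value combination of the individual conditions, not just the two outcomes of the branch as a whole:

    # Hypothetical sketch: (a > 0) and (b > 0) contains two conditions,
    # giving four truth-value combinations: TT, TF, FT, FF.
    def both_positive(a, b):
        return a > 0 and b > 0

    cases = [((1, 1), True),     # T, T
             ((1, -1), False),   # T, F
             ((-1, 1), False),   # F, T
             ((-1, -1), False)]  # F, F
    for (a, b), expected in cases:
        assert both_positive(a, b) == expected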

  10. Data Flow Testing (WBT) • Select test paths of a program according to the locations of definitions and uses of variables, then test these define-use chains. • Steps • Determine the define-use chains for each variable • Design test cases that pass through these define-use chains (different testing criteria exist, ex. all-defines, all-uses, …)
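
A minimal sketch of define-use chains, assuming an invented function (scale is not from the slides). The variable x is defined at two points and used at one, giving two chains, each covered by one test:

    # Hypothetical sketch: x has definitions d1 and d2 and use u1,
    # so the define-use chains are (d1, u1) and (d2, u1).
    def scale(flag, value):
        x = value * 2         # d1: definition of x
        if flag:
            x = value + 10    # d2: redefinition of x
        return x              # u1: use of x

    assert scale(False, 3) == 6    # covers chain (d1, u1)
    assert scale(True, 3) == 13    # covers chain (d2, u1)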

  11. Equivalence Testing (BBT) • Divide the input domain of a program into classes of data from which test cases can be derived • Ex. If an input condition specifies a range or a specific value, one valid and two invalid equivalence classes are defined. • Ex. For a Boolean input, one valid and one invalid class can be defined.
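
For example, a minimal sketch assuming an input specified as the range 1..100 (the range and the in_range function are invented): partitioning yields one valid class and two invalid classes, and one representative per class is enough:

    # Hypothetical sketch: input condition "n must be in 1..100".
    def in_range(n):
        return 1 <= n <= 100

    assert in_range(50) is True     # valid class: inside the range
    assert in_range(0) is False     # invalid class: below the range
    assert in_range(101) is False   # invalid class: above the range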

  12. Boundary Value Analysis (BBT) • Leads to a selection of test cases that exercise the boundary values of the input domain • Ex. For a range bounded by v1 and v2, design test cases with the values v1 and v2 themselves, plus values just above and just below each.
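
Continuing the invented 1..100 example from above (so v1 = 1 and v2 = 100), a boundary value analysis sketch tests the bounds themselves plus the values just outside and just inside them:

    # Hypothetical sketch: boundary values for the range 1..100.
    def in_range(n):
        return 1 <= n <= 100

    for n, expected in [(0, False), (1, True), (2, True),       # around v1
                        (99, True), (100, True), (101, False)]:  # around v2
        assert in_range(n) == expected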

  13. Comparison Testing (BBT) • Appropriate when multiple implementations of the same specification have been produced • Feed the same inputs to each implementation and check that they generate the same outputs
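
A back-to-back sketch, assuming two invented implementations of one specification (sorting a list of integers): both are fed identical random inputs and must agree on every output:

    import random

    def sort_a(xs):
        return sorted(xs)            # implementation A: library sort

    def sort_b(xs):                  # implementation B: insertion sort
        out = []
        for x in xs:
            i = 0
            while i < len(out) and out[i] < x:
                i += 1
            out.insert(i, x)
        return out

    # Same inputs to both implementations; outputs must match.
    for _ in range(100):
        data = [random.randint(0, 99) for _ in range(20)]
        assert sort_a(data) == sort_b(data)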

  14. Verification & Validation (V&V) • Verification: "Are we building the product right?" • The software should conform to its specification • Validation: "Are we building the right product?" • The software should do what the user really requires

  15. The V & V Process • Is a whole life-cycle process - V & V must be applied at each stage in the software process. • Has two principal objectives • The discovery of defects in a system • The assessment of whether or not the system is usable in an operational situation.

  16. Static and Dynamic Verification • Software inspections: concerned with analysis of the static system representation to discover problems (static verification) • May be supplemented by tool-based document and code analysis • Software testing: concerned with exercising and observing product behaviour (dynamic verification) • The system is executed with test data and its operational behaviour is observed

  17. Inspections and Testing • Inspections and testing are complementary and not opposing verification techniques • Both should be used during the V & V process • Inspections can check conformance with a specification but not conformance with the customer’s real requirements • Inspections cannot check non-functional characteristics such as performance, usability, etc.

  18. Software Inspections • Involve people examining the source representation with the aim of discovering anomalies and defects • Do not require execution of a system so may be used before implementation • May be applied to any representation of the system (requirements, design, test data, etc.) • Very effective technique for discovering errors

  19. Program Inspections • Formalised approach to document reviews • Intended explicitly for defect DETECTION (not correction) • Defects may be logical errors, anomalies in the code that might indicate an erroneous condition (e.g. an uninitialised variable) or non-compliance with standards

  20. Inspection Pre-conditions • A precise specification must be available • Team members must be familiar with the organisation standards • Syntactically correct code must be available • An error checklist should be prepared • Management must accept that inspection will increase costs early in the software process • Management must not use inspections for staff appraisal

  21. Inspection Procedure • System overview presented to inspection team • Code and associated documents are distributed to inspection team in advance • Inspection takes place and discovered errors are noted • Modifications are made to repair discovered errors • Re-inspection may or may not be required

  22. Testing Principles • All tests should be traceable to customer requirements • Tests should be planned long before testing begins • The Pareto principle applies to software testing • Testing should begin “in the small” and progress toward testing “in the large” • Exhaustive testing is impossible • To be most effective, testing should be conducted by an independent third party

  23. Software Testing Strategy • Unit test -> Integration test -> Validation test -> System test

  24. Unit Test • Use white-box testing techniques • Test each module, covering its interface, local data structures, boundary conditions, independent paths, error-handling paths, and so on.
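
A minimal unit-test sketch, assuming an invented module function divide (not from the slides), exercising its interface, a boundary condition, and an error-handling path:

    import unittest

    def divide(a, b):
        # module under test; raises on the error path
        if b == 0:
            raise ValueError("division by zero")
        return a / b

    class DivideTest(unittest.TestCase):
        def test_interface(self):
            self.assertEqual(divide(10, 2), 5)      # normal interface use

        def test_boundary(self):
            self.assertEqual(divide(0, 5), 0)       # boundary condition

        def test_error_path(self):
            with self.assertRaises(ValueError):     # error-handling path
                divide(1, 0)

    if __name__ == "__main__":
        unittest.main()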

  25. Integration Test • Test the correctness of the integration of modules. • Usually uses an incremental integration strategy to integrate and test modules. • Integration strategies • Top-down integration • Bottom-up integration • A combination of both methods

  26. Top-down testing • Integrate and test from the top-level module downward, replacing lower-level modules that are not yet integrated with stubs (diagram not reproduced here)

  27. Bottom-up testing • Integrate and test from the lowest-level modules upward, using drivers to invoke the modules under test (diagram not reproduced here)
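
Since the diagrams are not reproduced, here is a minimal sketch of the top-down idea, assuming an invented convert module (names and values are illustrative only): the top-level module is tested first, with a not-yet-integrated lower-level module replaced by a stub that returns canned data:

    def fetch_rate_stub(currency):
        # stub standing in for the real, not-yet-integrated rate module
        return {"USD": 1, "EUR": 2}[currency]

    def convert(amount, currency, fetch_rate=fetch_rate_stub):
        # top-level module under test
        return amount * fetch_rate(currency)

    assert convert(100, "EUR") == 200   # top-level logic verified via the stub

In bottom-up integration the roles reverse: the low-level module is real, and a small driver plays the part of the missing top-level caller.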

  28. Validation Testing • Test whether the software functions in a manner that can reasonably be expected by the customer. • The reasonable expectations are defined in the software requirements. • Methods • Alpha testing: the developer observes the user operating the system and records all errors encountered • Beta testing: the customer operates the system in a real environment without the developer present, and reports all errors to the developer afterwards

  29. System Testing • Focuses not only on the software, but on the integration of software, hardware, and environment • Kinds of system testing • Recovery testing • Security testing • Stress testing • Performance testing

  30. Debugging Approaches • Brute force: insert “write” statements to show the program’s state at various points, and look for the cause of the error before the first “write” statement whose output is wrong. • Backtracking: begin at the site where a symptom has been uncovered, and trace backward through the source until the cause has been located. • Cause elimination: list all hypotheses about the cause of an error, then check and eliminate the hypotheses one by one until the real cause has been found.
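
A small sketch of the brute-force approach, assuming an invented buggy function (running_max is illustrative only): inserted “write” (print) statements expose the first point where an intermediate value goes wrong:

    def running_max(xs):
        best = 0                                 # bug: fails for all-negative input
        for x in xs:
            print("x =", x, "best =", best)      # inserted "write" statement
            if x > best:
                best = x
        return best

    print(running_max([-3, -1, -7]))   # trace shows best stuck at 0: cause located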

  31. PS. • Real-time systems/applications are harder to test than other systems because of timing/temporal considerations (hard real-time vs. soft real-time systems) • For testing object-oriented programs, please refer to Dr. Kung’s web page.
