CSCE 431: Testing

Presentation Transcript

  1. CSCE 431: Testing
  Some material from Bruegge, Dutoit, Meyer, et al.

  2. Outline
  • Introduction
  • How to deal with faults, erroneous states, and failures
  • Different kinds of testing
  • Testing strategies
  • Test automation
  • Unit testing
  • Mocks etc.
  • Estimating quality: Coverage
  • Using dataflow in testing
  • Mutation testing
  • Test management
  • Collaborative construction
  • Effectiveness of different quality assurance techniques
  • References

  3. Testing Truism
  • Untested systems will not work
  • Why?
    • Requirements not correct
    • Misunderstood requirements
    • Coding errors
    • Miscommunication

  4. Edsger W. Dijkstra, in 1970
  "Program testing can be used to show the presence of bugs, but never to show their absence!"
  • It is impractical or impossible to exhaustively test all possible executions of a program
  • It is important to choose tests wisely

  5. Increasing System Reliability
  • Fault avoidance
    • Detect faults statically, without relying on executing any system models
    • Includes development methodologies, configuration management, verification
  • Fault detection
    • Debugging, testing
    • Controlled (and uncontrolled) experiments during the development process to identify erroneous states and their underlying faults before system release
  • Fault tolerance
    • Assume that the system can be released with faults and that failures can be dealt with
    • E.g., redundant subsystems, majority wins
    • For a more extreme approach, see Martin Rinard: Acceptability-Oriented Computing, Failure-Oblivious Computing
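
The "redundant subsystems, majority wins" idea above can be sketched in a few lines. This is a minimal illustration of triple modular redundancy, not production fault-tolerance code; the function name and the sensor readings are hypothetical.

```python
from collections import Counter

def majority_vote(readings):
    """Return the value reported by a majority of redundant components.

    With three redundant components, a single faulty one is outvoted
    by the two healthy ones, masking the fault.
    """
    value, count = Counter(readings).most_common(1)[0]
    if count <= len(readings) // 2:
        raise ValueError("no majority: the fault cannot be masked")
    return value

# One of three redundant sensors returns a faulty reading;
# the majority result masks the fault.
print(majority_vote([42, 42, 17]))  # -> 42
```

Note that if two of the three components fail in the same way, the vote masks nothing, which is why modular redundancy assumes failures are independent.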

  6. Fault Avoidance and Detection
  • Static Analysis
    • Hand execution: reading the source code
    • Walk-through (informal presentation to others)
    • Code inspection (formal presentation to others)
    • Automated tools checking for
      • Syntactic and semantic errors
      • Departure from coding standards
  • Dynamic Analysis
    • Black-box testing (test the input/output behavior)
    • White-box testing (test the internal logic of the subsystem or class)
    • Data-structure based testing (data types determine test cases)

  7. Terminology
  • test component: part of the system isolated for testing
  • test case: a set of inputs and expected results that exercises a test component (with the purpose of causing failures or detecting faults)
  • test stub: a partial implementation of a component on which a test component depends
  • test driver: a partial implementation of a component that depends on a test component
  • fault: a design or coding mistake that may cause abnormal behavior
  • erroneous state: a manifestation of a fault during execution; caused by one or more faults and can lead to a failure
  • failure: a deviation between the observed and the specified behavior
  • When the exact meaning is not important, faults, erroneous states, and failures are commonly called errors, defects, or bugs
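
The stub and driver definitions can be made concrete with a small sketch. All names here (the shipping-cost component and the rate service) are hypothetical, invented only to show the roles: the stub stands in for a dependency of the test component, and the driver calls into the test component.

```python
# Test component: computes a shipping cost using a rate service it
# depends on.
def shipping_cost(weight_kg, rate_service):
    return weight_kg * rate_service.rate_per_kg()

# Test stub: a partial implementation of the dependency, returning a
# canned rate instead of, say, making a network call.
class RateServiceStub:
    def rate_per_kg(self):
        return 2.0

# Test driver: a partial implementation of a caller, exercising the
# test component and checking the result.
def run_driver():
    result = shipping_cost(3.0, RateServiceStub())
    assert result == 6.0
    return result
```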

  8. Outline (next section: How to deal with faults, erroneous states, and failures)

  9. What is This?
  • A failure?
  • An error?
  • A fault?
  • We need to describe the specified behavior first!
  • Specification: "A track shall support a moving train"

  10. Erroneous State ("Error")

  11. Fault
  • Possible algorithmic fault: compass shows wrong reading
  • Or: wrong usage of the compass
  • Or: communication problems between teams

  12. Mechanical Fault

  13. Modular Redundancy

  14. Declaring the Bug as a Feature

  15. Patching

  16. Testing

  17. Outline (next section: Different kinds of testing)

  18. Typical Test Categorization
  • Unit testing
  • Integration testing
  • System testing
  • Reliability testing
  • Stress testing

  19. Unit Testing
  • Test each module individually
  • Choose data based on knowing the source code
    • "White-box" testing
    • Desirable to try to cover all branches of a program
  • Heuristics: choose input data
    • Well within the acceptable input range
    • Well outside the acceptable input range
    • At or near the boundary
  • Usually performed by the programmer implementing the module
  • Purchased components should be unit tested too
  • Goal: the component or subsystem is correctly implemented and carries out the intended functionality
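
The three input-selection heuristics above can be read directly off a small pyUnit (unittest) test class. The unit under test, `clamp_percent`, is a hypothetical example invented for this sketch.

```python
import unittest

def clamp_percent(value):
    """Hypothetical unit under test: accept percentages in 0..100."""
    if not 0 <= value <= 100:
        raise ValueError("out of range")
    return value

class ClampPercentTest(unittest.TestCase):
    def test_well_within_range(self):
        self.assertEqual(clamp_percent(50), 50)

    def test_well_outside_range(self):
        with self.assertRaises(ValueError):
            clamp_percent(1000)

    def test_at_the_boundary(self):
        self.assertEqual(clamp_percent(0), 0)
        self.assertEqual(clamp_percent(100), 100)

    def test_just_outside_the_boundary(self):
        with self.assertRaises(ValueError):
            clamp_percent(101)

# Run with: python -m unittest <this module>
```

Boundary cases like 100 vs. 101 are where off-by-one faults hide, which is why "at or near the boundary" is singled out as its own heuristic.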

  20. Integration Testing
  • Testing collections of subsystems together
    • Eventually testing the entire system
  • Usually carried out by developers
  • Goal: test the interfaces between subsystems
  • Integration testing can start early
    • Stubs for modules that have not yet been implemented
    • Agile development ethos

  21. System Testing
  • The entire system is tested
    • Software and hardware together
  • Black-box methodology
  • Robust testing
    • The science of selecting test cases to maximize coverage
  • Carried out by developers, or more likely by a separate testing group
  • Goal: determine whether the system meets its requirements (functional and nonfunctional)

  22. Reliability Testing
  • Run with the same data repeatedly
    • Finding timing problems
    • Finding undesired consequences of changes
  • Regression testing
    • Fully automated test suites to run regression tests repeatedly

  23. Stress Testing
  • How the system performs under stress
    • More than maximum anticipated loads
    • No load at all
    • Load fluctuating from very high to very low
  • How the system performs under exceptional situations
    • Longer than anticipated run times
    • Loss of a device, such as a disk or sensor
    • Exceeding (physical) resource limits (memory, files)
    • Backup/restore

  24. Acceptance Testing
  • Evaluates the system delivered by the developers
  • Carried out by/with the client
    • May involve executing typical transactions on site on a trial basis
  • Goal: enable the customer to decide whether to accept the product

  25. Verification vs. Validation
  • Validation: "Are you building the right thing?"
  • Verification: "Are you building it right?"
  • Acceptance testing is about validation; the other kinds of testing are about verification

  26. Another Test Categorization – By Intent
  • Regression testing
    • Retest a previously tested element after changes
    • Goal is to assess whether changes have (re)introduced faults
  • Mutation testing
    • Introduce faults to assess test quality
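
Mutation testing can be shown end to end in a few lines. The sketch below hand-writes one mutant (a real mutation tool would generate many mechanically); the `is_adult` example and the two "suites" are hypothetical. A mutant that all tests pass "survives", signalling a weak suite; a mutant that some test fails is "killed".

```python
def is_adult(age):          # original program
    return age >= 18

def is_adult_mutant(age):   # mutant: >= replaced by >
    return age > 18

def weak_suite(fn):
    """Returns True iff every test in the suite passes for fn.
    This suite misses the boundary, so the mutant survives."""
    return fn(30) is True and fn(5) is False

def strong_suite(fn):
    """Adds the boundary value 18, which kills the mutant."""
    return weak_suite(fn) and fn(18) is True
```

Usage: `weak_suite` passes for both the original and the mutant (mutant survives, so the suite should be improved); `strong_suite` passes for the original but fails for the mutant (mutant killed).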

  27. Categorization by Process Phase (V-Model)
  • Unit testing: implementation
  • Integration testing: subsystem integration
  • System testing: system integration
  • Acceptance testing: deployment
  • Regression testing: maintenance

  28. Outline (next section: Testing strategies)

  29. Goal – Partition Testing
  • Cannot test for all possible input data
  • Idea: for each test, partition the input data into equivalence classes, such that:
    • The test fails for all elements in the equivalence class; or
    • The test succeeds for all elements in the equivalence class
  • If this succeeds:
    • One input from each equivalence class suffices
    • No way to know if the partition is correct (likely not)
  • Heuristics: could partition data like this:
    • Clearly good values
    • Clearly bad values
    • Values just inside the boundary
    • Values just outside the boundary
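
Applying the four heuristics above to a hypothetical unit (an order-quantity check invented for this sketch) yields one representative value per class:

```python
def accept_quantity(q):
    """Hypothetical unit under test: order quantity must be 1..99."""
    return 1 <= q <= 99

# One representative from each assumed equivalence class, plus values
# just inside and just outside each boundary.
cases = {
    50:  True,   # clearly good value
    -10: False,  # clearly bad value
    1:   True,   # just inside the lower boundary
    99:  True,   # just inside the upper boundary
    0:   False,  # just outside the lower boundary
    100: False,  # just outside the upper boundary
}
for value, expected in cases.items():
    assert accept_quantity(value) is expected
```

Six test values stand in for the entire integer input space, which is exactly the payoff partition testing promises, provided the partition really does match the implementation's behavior.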

  30. Choosing Values From Equivalence Classes
  • Each Choice (EC):
    • For every equivalence class c, at least one test case must use a value from c
  • All Combinations (AC):
    • For every combination ec of equivalence classes, at least one test case must use a set of values from ec
    • Obviously more extensive, but may be unrealistic
    • Think, e.g., of testing a compiler (all combinations of all features)
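
The difference between Each Choice and All Combinations is easiest to see by counting test cases over two small input dimensions. The class names below borrow from the date example on the next slide but are otherwise a made-up sketch.

```python
from itertools import product

# Two hypothetical input dimensions, each split into equivalence classes.
month_classes = ["28-day", "29-day", "30-day", "31-day"]
year_classes = ["leap", "non-leap"]

# Each Choice (EC): enough test cases that every class of every
# dimension appears at least once. Pairing the 4 month classes with the
# year classes cycled twice covers everything in 4 cases.
each_choice = list(zip(month_classes, year_classes * 2))

# All Combinations (AC): one test case per combination of classes,
# i.e., the full cross product: 4 * 2 = 8 cases.
all_combinations = list(product(month_classes, year_classes))

print(len(each_choice), len(all_combinations))  # 4 8
```

With more dimensions, AC grows multiplicatively (the compiler example above), while EC grows only with the size of the largest dimension, which is why EC is often the realistic choice.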

  31. Example Partitioning
  • Date-related program
  • Month: 28, 29, 30, 31 days
  • Year:
    • Leap
    • Standard non-leap
    • Special non-leap (divisible by 100)
    • Special leap (divisible by 400)
  • Month-to-month transition
  • Year-to-year transition
  • Time zone/date line locations
  • All combinations: some do not make sense
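
The four year classes listed above map directly onto the Gregorian leap-year rule, with one representative value per class:

```python
def is_leap(year):
    """Standard Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# One representative per equivalence class from the slide:
assert is_leap(2024)       # leap
assert not is_leap(2023)   # standard non-leap
assert not is_leap(1900)   # special non-leap (divisible by 100)
assert is_leap(2000)       # special leap (divisible by 400)
```

A test suite that only uses years like 2023 and 2024 would pass against an implementation that omits the century rules entirely, which is why the two "special" classes earn their own representatives.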

  32. About Partition Testing
  • Applicable to all levels of testing
    • Unit, class, integration, system
  • Black box
    • Based only on the input space, not the implementation
  • A natural and attractive idea, applied by many (most) testers
  • No rigorous basis for assessing effectiveness, as there is generally no way of being certain that the partition corresponds to reality

  33. Outline (next section: Test automation)

  34. Test Automation
  • Testing is time consuming
    • Should be automated as much as possible
    • At a minimum, regression tests should be run repeatedly and automatically
  • Many tools exist to help
    • E.g., automating test execution with "xUnit" tools
    • http://en.wikipedia.org/wiki/XUnit
  • It is possible to automate more than just test execution

  35. Test Automation
  • Generation
    • Test inputs
    • Selection of test data
    • Test driver code
  • Execution
    • Running the test code
    • Recovering from failures
  • Evaluation
    • Oracle: classify pass/no pass
    • Other info about results
  • Test quality estimation
    • Coverage measures
    • Other test quality measures
    • Feedback to test data generator
  • Management
    • Save tests for regression testing

  36. Automated Widely
  • Execution
    • Running the test code
  • Test quality estimation
    • Coverage measures
  • Management
    • Save tests for regression testing

  37. Difficult to Automate
  • Generation
    • Test inputs
    • Selection of test data
  • Evaluation
    • Oracle: classify pass/no pass
  • Test quality estimation
    • Feedback to test data generator

  38. Outline (next section: Unit testing)

  39. What to Unit Test?
  • Mission critical: test or die
  • Complex: test or suffer
  • Everything non-trivial: test, or waste time
  • Everything trivial: test == waste of time

  40. Code Coverage Metrics
  • Take a critical view
  • E.g.: Java getters and setters are usually trivial
    • Not testing them results in a low code coverage metric (<50%)
  • But coverage metrics can indicate poorly covered parts of the code
    • Example: error handling

  41. xUnit
  • cppunit — C++
  • JUnit — Java
  • NUnit — .NET
  • SUnit — Smalltalk
    • This was the first unit testing library
  • pyUnit — Python
  • vbUnit — Visual Basic
  • . . .
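
All of these libraries share the same xUnit pattern: test cases are methods on a test-case class, and a fixture is built fresh in `setUp` before every test method. A minimal pyUnit (unittest) sketch, with a hypothetical stack fixture:

```python
import unittest

class StackTest(unittest.TestCase):
    """Minimal pyUnit example of the xUnit pattern: a fixture created
    in setUp, assertions made with assert* methods in test methods."""

    def setUp(self):
        # Fixture: runs before every test method, so tests stay
        # independent of each other.
        self.stack = []

    def test_new_stack_is_empty(self):
        self.assertEqual(len(self.stack), 0)

    def test_push_then_pop(self):
        self.stack.append(3)
        self.assertEqual(self.stack.pop(), 3)

# Run with: python -m unittest <this module>
```

Because `setUp` rebuilds the fixture each time, the order in which the framework runs the test methods does not matter.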

  42. Outline (next section: Mocks etc.)

  43. Mock Objects
  • Often (practically) impossible to include real objects, those used in the full application, in test cases
  • To test code that depends on such objects, one often uses mock objects instead
  • A mock object simulates some part of the behavior of another object or objects
  • Useful in situations where the real objects
    • Could provide non-deterministic data
    • Have states that are hard to reproduce (e.g., states resulting from interactive use of the software, erroneous cases)
    • Have functionality that has not yet been implemented
    • Are slow to produce results
    • Require lots of work and/or time for set-up/tear-down
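
Two of the situations above, non-deterministic data and hard-to-reproduce states, are shown in this sketch using Python's `unittest.mock`. The clock and payment gateway, and the function under test, are hypothetical examples; the point is that mocks make the "time" deterministic and let the test verify the interaction.

```python
from unittest.mock import Mock

# Hypothetical code under test: depends on a clock (non-deterministic)
# and a payment gateway (slow, real money).
def charge_if_business_hours(clock, gateway, amount):
    if 9 <= clock.current_hour() < 17:
        return gateway.charge(amount)
    return "deferred"

clock = Mock()
clock.current_hour.return_value = 10        # deterministic "time"
gateway = Mock()
gateway.charge.return_value = "charged"

assert charge_if_business_hours(clock, gateway, 50) == "charged"
gateway.charge.assert_called_once_with(50)  # verify the interaction

clock.current_hour.return_value = 3         # reproduce the "closed" state
assert charge_if_business_hours(clock, gateway, 50) == "deferred"
```

Without the mocks, the second case could only be tested by actually running the suite outside business hours.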

  44. Knowing What is Being Tested
  • Assume a failed test involves two classes/data types
  • Which one is to blame?
    • One class's defect can cause the other class to fail
  • Essentially, this is not unit testing, but rather integration testing
  • "Mocking" one class makes it clear which class to blame for failures

  45. Mocks and Assigning Blame in Integration Testing
  • Direct integration testing
    • Code + database
    • My code + your code
  • Integration testing with mock objects
    • Code + mock database
    • My code + mock of your code
    • Mock of my code + your code
  • Mocks help to make it clear what the system under test (SUT) is

  46. Mocks and Assigning Blame in Integration Testing
  • Direct integration testing: blame assignment unclear
    • Code + database
    • My code + your code
  • Integration testing with mock objects: blame assignment clear
    • Code + mock database
    • My code + mock of your code
    • Mock of my code + your code
  • Mocks help to make it clear what the system under test (SUT) is
  • As with scientific experiments: change only the variable being measured, and control the others

  47. Terminology
  • Dummy object: passed around but never used; used for filling parameter lists, etc.
  • Fake object: a working implementation, but somehow simplified, e.g., uses an in-memory database instead of a real one
  • Stub: provides canned answers to calls made during a test, but cannot respond to anything outside what it is programmed for
  • Mock object: mimics some of the behavior of the real object, for example, dealing with sequences of calls
  • The definitions overlap somewhat and are ambiguous, but the terms are in common use, and it is good to know their meaning, even if imprecise
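
The four terms can be contrasted in one sketch. The user-database example is hypothetical; note that only the mock records and verifies how it was called, which is the usual practical distinction from a stub.

```python
from unittest.mock import Mock

# Dummy: passed around but never used (fills a parameter list).
dummy_logger = object()

# Fake: a working but simplified implementation, here an in-memory
# "database" instead of a real one.
class FakeUserDB:
    def __init__(self):
        self._rows = {}
    def save(self, uid, name):
        self._rows[uid] = name
    def load(self, uid):
        return self._rows[uid]

# Stub: canned answers only, regardless of the input.
class UserDBStub:
    def load(self, uid):
        return "alice"

# Mock: also records the calls made to it, so the test can verify the
# interaction afterwards.
mock_db = Mock()
mock_db.load.return_value = "alice"
mock_db.load(7)
mock_db.load.assert_called_once_with(7)

fake = FakeUserDB()
fake.save(7, "alice")
assert fake.load(7) == UserDBStub().load(7) == "alice"
```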

  48. Outline (next section: Estimating quality: Coverage)

  49. White Box Testing
  • White box: you know what is inside, i.e., the code
  • Idea
    • To assess the effectiveness of a test suite, measure how much of the program it exercises
  • Concretely
    • Choose a kind of program element, e.g., instructions (instruction coverage) or paths (path coverage)
    • Count how many are executed at least once
    • Report as a percentage
  • A test suite that achieves 100% coverage achieves the chosen criterion. Example:
    • "This test suite achieves instruction coverage for routine r"
    • Means that for every instruction i in r, at least one test executes i

  50. Coverage Criteria
  • Instruction (or statement) coverage
    • Measure the instructions executed
    • Disadvantage: insensitive to some control structures
  • Branch coverage
    • Measure the conditionals whose paths are both/all executed
  • Condition coverage
    • How many atomic Boolean expressions evaluate to both true and false
  • Path coverage
    • How many of the possible paths are taken
    • path == a sequence of branches from routine entry to exit
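
The "insensitive to some control structures" disadvantage of statement coverage shows up already in a two-line function. This is a textbook-style sketch; the function is hypothetical.

```python
def absolute(x):
    if x < 0:
        x = -x
    return x

# A single test input x = -5 executes every statement (100% statement
# coverage), but only the "true" side of the conditional: the implicit
# fall-through branch for x >= 0 is never exercised, so branch coverage
# is only 50%.
statement_covering_suite = [-5]

# Branch coverage additionally requires an input for which the
# condition is false.
branch_covering_suite = [-5, 5]

for x in branch_covering_suite:
    assert absolute(x) == abs(x)
```

An `if` with no `else` is exactly the kind of control structure statement coverage is blind to: a fault that manifests only on the untaken branch can slip through a 100%-statement-covered suite.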