
Testing Strategies



Presentation Transcript


  1. Testing Strategies Sources: Code Complete, 2nd Ed., Steve McConnell; Software Engineering, 5th Ed., Roger Pressman; Testing Computer Software, 2nd Ed., Cem Kaner et al.

  2. Overall Testing Strategy • System Engineering • The software you're building may be only one part of a much larger system containing many hardware and software components • System Engineering deals with engineering the entire system, not just the software components, and goes beyond the scope of Software Engineering • [Diagram: a V-model pairing each development phase with its testing phase: System Engineering ↔ System Testing (Final); Requirements ↔ System Testing (Software); Design ↔ Integration Testing; Construction ↔ Unit Testing]

  3. Unit Testing • Tests the basic units that make up a software system (in object-oriented programming this means testing at the class level) • Performed by the programmer (not a separate testing group) • How the programmer says “I did what I was supposed to do, and it works” • If it isn’t tested, it doesn’t work! • Part of being a professional • Saves debugging time • May be manual or automated

  4. Automated Unit Testing • Code written to exercise the functionality in the unit under test • Alerts the programmer to errors • Easy to run at any time (before and after changes, etc.) • Makes your life as a programmer easier • Eliminates the fear of making changes • A collection (suite) of automated unit tests becomes a regression test • Usually written using a testing framework

  5. xUnit Testing Frameworks • Family of tools and framework code supporting automated unit testing for various programming languages • JUnit • VBUnit • NUnit • CPPUnit • RubyUnit • SUnit • PyUnit • Based on a design for testing by Kent Beck • Most tools provide both command-line and GUI implementations • Integrated into popular IDEs

  6. xUnit Testing Design • Write a separate test method for each test • Use assertions that are expected to be true • setUp() method executed before each test method (for common setup code) • tearDown() method executed after each test • Failures reported in various ways, depending on language and tool (command-line, GUI, IDE integrated)

  7. JUnit Versions >= 4.0 • Create a separate test class • Write test methods (marked with @Test) • Include assertions in test methods • Mark methods with @Before and @After for test fixture setup and teardown (@BeforeEach and @AfterEach, plus several other annotations, in JUnit 5)
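For concreteness, a minimal JUnit 5 test class might look like the following sketch; the stack fixture is just a stand-in for whatever unit is under test.

```java
import static org.junit.jupiter.api.Assertions.*;

import java.util.ArrayDeque;
import java.util.Deque;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class StackTest {
    private Deque<Integer> stack;  // the "unit under test" for this sketch

    @BeforeEach
    void setUp() {
        // Runs before each test method: build a fresh fixture.
        stack = new ArrayDeque<>();
    }

    @AfterEach
    void tearDown() {
        // Runs after each test method: release the fixture.
        stack = null;
    }

    @Test
    void pushThenPopReturnsSameValue() {
        stack.push(42);
        assertEquals(42, stack.pop());
    }

    @Test
    void newStackIsEmpty() {
        assertTrue(stack.isEmpty());
    }
}
```

Because a fresh fixture is built before every test method, the tests stay independent of one another and can run in any order.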

  8. Problems with Unit Testing • Testing comes at the end of the development cycle when budgets and schedules get tight • Programmers tend not to write enough (if any) tests • Solution: Test-Driven Development • Move testing to the front of the process • Let tests drive the development effort

  9. Test-Driven Development

  10. Test-Driven Development Principles • Write tests before functionality • Functionality is not complete until current and any previous tests pass • Only one test should be failing at any given time • Let tests drive the code • No functionality is written unless required by a test • Start with the simplest way to make the test pass • Add complexity only when required by a test • Use refactoring to clean up the code and eliminate duplication (in the code and the tests)
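As an illustrative sketch of one red-green cycle (the PriceCalculator class and its behavior are hypothetical): write the failing test first, then the simplest code that passes it.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Step 1 (red): write the test before the functionality exists.
class PriceCalculatorTest {
    @Test
    void appliesTenPercentDiscount() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.discountedPrice(100.0), 0.001);
    }
}

// Step 2 (green): the simplest code that makes the test pass.
class PriceCalculator {
    double discountedPrice(double price) {
        return price * 0.9;  // add complexity only when a new test demands it
    }
}
```

Step 3 (refactor) would then clean up duplication in both the code and the tests before the next failing test is written.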

  11. Test-Driven Development Demo

  12. Test Doubles Support isolation of the unit under test (UUT) • Dummy • The simplest form of a test double. Used for simple parameter or return values. • Stub • May have simple logic to provide different outputs • Spy • Records some information for later verification (such as how many times a method was called) • In Mockito, a mocking decorator for a real object • Mock • Normally generated by mocking frameworks to provide specific return values or throw exceptions in response to method calls. • Records information that can be used at the end of the test to verify method call order and times called • Expectations are set at generation; expectation failures result in test failure
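To make the stub/spy distinction concrete, here is a small hand-rolled sketch (all names are hypothetical): the stub only provides canned outputs, while the spy also records how it was used.

```java
// Hypothetical dependency of the unit under test.
interface CurrencyRates {
    double rateFor(String currency);
}

// Stub: simple logic providing different hard-coded outputs.
class StubCurrencyRates implements CurrencyRates {
    public double rateFor(String currency) {
        return "EUR".equals(currency) ? 1.1 : 1.0;
    }
}

// Spy: also records information for later verification.
class SpyCurrencyRates extends StubCurrencyRates {
    int calls = 0;

    @Override
    public double rateFor(String currency) {
        calls++;  // the test can assert how many times this was called
        return super.rateFor(currency);
    }
}
```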

  13. Unit Testing: Using mocks and drivers to isolate the class under unit test • [Diagram: a driver (supported by frameworks like JUnit) feeds test cases to the class under test and collects the test results; the class's collaborators are replaced by mocks (generated by frameworks like Mockito or EasyMock)]

  14. Unit Testing: What do mocks do? • Return a hard-coded answer regardless of the input • Validate method call sequence (including parameter values) • Validate the number of times a method was called • Generate errors (e.g., throw exceptions)
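A hedged Mockito sketch of those behaviors (the CurrencyRates dependency is declared inline so the example is self-contained):

```java
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

class MockBehaviorTest {
    // Hypothetical dependency, declared inline for self-containment.
    interface CurrencyRates {
        double rateFor(String currency);
    }

    @Test
    void demonstratesCommonMockBehaviors() {
        CurrencyRates rates = mock(CurrencyRates.class);

        // Return a hard-coded answer regardless of the input.
        when(rates.rateFor(anyString())).thenReturn(1.1);

        // Generate errors: throw an exception for one specific input.
        when(rates.rateFor("XXX")).thenThrow(new IllegalArgumentException("unknown"));

        rates.rateFor("EUR");
        rates.rateFor("EUR");

        // Validate the number of times a method was called, with parameter values.
        verify(rates, times(2)).rateFor("EUR");
        assertThrows(IllegalArgumentException.class, () -> rates.rateFor("XXX"));

        // Mockito's inOrder(...) can also validate method call sequence across mocks.
    }
}
```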

  15. Unit Testing: What do drivers do? • Invoke the class with fixed inputs • If an oracle is available, inputs can be generated rather than fixed • Compare actual outputs with expected outputs • Record a failure if expected and actual outputs don’t match • Normally continue to execute even if a test case fails • Generate a report detailing what worked and what didn’t
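A minimal hand-written driver might look like this sketch (the Squarer class and its test cases are hypothetical); note that it keeps running after a failure and reports at the end.

```java
// Hypothetical unit under test.
class Squarer {
    static int square(int x) {
        return x * x;
    }
}

public class SquarerDriver {
    public static void main(String[] args) {
        // Fixed {input, expected} pairs; an oracle could generate these instead.
        int[][] cases = { {0, 0}, {3, 9}, {-4, 16} };
        int failures = 0;

        for (int[] c : cases) {
            int actual = Squarer.square(c[0]);
            if (actual != c[1]) {
                failures++;  // record the failure but keep executing remaining cases
                System.out.printf("FAIL: square(%d) = %d, expected %d%n", c[0], actual, c[1]);
            } else {
                System.out.printf("PASS: square(%d) = %d%n", c[0], actual);
            }
        }
        // Report detailing what worked and what didn't.
        System.out.printf("%d of %d cases failed%n", failures, cases.length);
    }
}
```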

  16. Integration Testing • Integration involves combining the individual software units into larger functional units • If the units work individually, why wouldn't they work when you put them together? • Ask the 2010 Miami Heat • Example: Interactions between units may be flawed • The Mars Climate Orbiter was lost because one module used English (imperial) units while another assumed metric units

  17. Integration Testing • Big-Bang integration • Entire system is integrated at once • The system doesn't work, and you don't know why • Incremental integration • Add a piece, retest the system; add a piece, retest the system; … • If it breaks, you know what caused the problem (i.e., the last piece you added) • Integration Testing is testing that's done during integration to ensure that the system continues to work each time a new piece is added

  18. Top-Down Integration Testing Top module is tested with test doubles Test doubles are replaced one at a time, either breadth-first or depth-first Tests are run each time new modules are integrated

  19. Bottom-Up Integration Testing Modules are integrated in a bottom-up fashion until the entire system has been assembled Tests are run each time new modules are integrated Drivers must be developed, but test doubles are not needed

  20. Sandwich Integration • Combination of top-down and bottom-up integration • Integrate from the top and from the bottom as it makes sense, and meet somewhere in the middle

  21. Continuous Integration • Some projects wait several weeks between integrations • Between integrations, engineers work on their pieces in relative isolation • Each integration produces a new "build" • The system may not build successfully between integrations • A better approach for many projects is "continuous integration" • A minimal working system is checked into source control • Engineers check in new code every time they finish a meaningful unit of work • Engineers must never break the build, and are responsible for adding new test cases to exercise the new code • The system is built and tests are run every night to ensure that all is well • The system is always integrated and buildable

  22. System Testing • Black box testing to ensure that the product meets all of the specified requirements • Performed by independent testing group • The testing organization should be separate from the development organization to avoid conflict of interest • The testing organization creates a “test plan” that details how the product will be tested • Development delivers periodic builds of the complete system • Acceptance test is run on new builds to verify that they're stable enough to test • If a build fails the acceptance test, it's rejected and no testing resources are expended on it • If a build passes the acceptance test, tests are run and bugs are entered into the bug tracking system

  23. System Testing • Before shipping a product it's important to get feedback from real customers • Alpha Testing • Product not yet feature-complete (some things are missing) • Give the product to trusted, enthusiastic customers for evaluation • Could be done at your place or theirs • Lots of hand-holding by the developers • Incentives to alpha testers could include influence on product features or early access to needed features • Beta Testing • Product much closer to shipment than with alpha testing • Product is feature-complete (debugging and tuning still in progress) • Broader distribution to a less selective group of customers • Less hand-holding • Incentives could include cash, free software or customer support, early access to needed features

  24. System Testing • When are we done testing? • "You're never done testing, the burden simply shifts from you to your customer." • "You're done testing when you run out of time or you run out of money." • You're done testing when: • All "show stopper" bugs have been fixed • The rate at which new bugs are being found falls below some acceptable threshold • If your testing is thorough, the rate at which you're finding new bugs is correlated to how many bugs still remain

  25. System Testing • Can we predict when we'll be done testing in advance? • An estimation and scheduling problem • How about this? • Record the rate at which you're finding bugs each week • Once you've got several data points, fit a curve to the data that can be used to predict the date at which the bug rate will be low enough to ship • This might be better than guessing
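One hedged sketch of that idea in code: fit an exponential decay to weekly bug counts with a log-linear least-squares fit, then solve for the week at which the predicted rate drops below a shipping threshold. The weekly counts and threshold below are made up for illustration.

```java
public class BugRateForecast {
    public static void main(String[] args) {
        // Hypothetical bugs found per week (made-up data points).
        double[] bugs = {42, 35, 30, 24, 20, 17};
        double threshold = 5.0;  // acceptable weekly bug rate for shipping

        // Fit bugs ≈ a * exp(b * week) by linear least squares on ln(bugs).
        int n = bugs.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int week = 0; week < n; week++) {
            double y = Math.log(bugs[week]);
            sx += week;
            sy += y;
            sxx += (double) week * week;
            sxy += week * y;
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);  // decay rate (negative)
        double a = Math.exp((sy - b * sx) / n);                // initial weekly rate

        // Solve a * exp(b * week) = threshold for the predicted ship week.
        double shipWeek = Math.log(threshold / a) / b;
        System.out.printf("Model: %.1f * exp(%.3f * week)%n", a, b);
        System.out.printf("Rate predicted to fall below %.0f bugs/week around week %.1f%n",
                threshold, shipWeek);
    }
}
```

Real bug-arrival data is noisier than this, so a projection like this is a sanity check on the schedule, not a guarantee; as the slide says, it might be better than guessing.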

  26. Regression Testing (I) • Any change to a software product, even a slight one, has the potential to cause bugs anywhere in the system • Basically, you have to assume that anything can break at any time • Regression testing is done on every build to ensure that new code (including bug fixes) did not break features that used to work • It's not feasible to re-run all prior tests on every build (especially manual tests) • The regression test suite is a subset of tests that covers all product areas and can feasibly be run on every build • The regression test suite will evolve (i.e., grow larger) over time

  27. Regression Testing (II) • In addition to having regression tests that cover all functional areas of the product, you should also have a regression test for every bug that has been fixed • Whenever a bug is discovered, if you don't already have one, design a test case that can be used later to verify that the bug has been fixed • After the bug is fixed, re-run the test case on each new build to ensure that the bug stays fixed • Experience has shown that fixed bugs often get “unfixed” later • Why? • Source control mistakes (i.e., somebody unintentionally overwrites the bug fix in the code repository) • The bug fix was “fragile” (i.e., it barely worked, and some later change pushes it over the edge) • The feature in which the bug was found is later redesigned and re-implemented. The original mistake is repeated, thus reintroducing the bug
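For instance, a regression test pinned to a specific fixed bug might look like this hedged sketch (the bug number is hypothetical, and PriceCalculator reuses the class from the TDD sketch above); it stays in the suite so the bug stays fixed.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class Bug1234RegressionTest {
    // Hypothetical bug #1234: discountedPrice(0.0) once returned NaN.
    // Re-run on every build to verify the bug stays fixed.
    @Test
    void zeroPriceYieldsZeroDiscountedPrice() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(0.0, calc.discountedPrice(0.0), 0.001);
    }
}
```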

  28. Customer Acceptance Testing • After system testing is complete, the customer might perform a "customer acceptance test" before signing off on the product • The customer acceptance test is a suite of tests that will be run by the customer (or someone they hire) to ensure that the product meets requirements
