
Testing (1)


  1. Testing (1) Let’s look at: • Principles of testing • The testing process • Methods used in testing & debugging

  2. Testing (2) [Lifecycle diagram shown: testing in the context of the Analysis, Design, Code and Test phases]

  3. Testing (3) Verification: run a program component with the purpose of finding errors prior to delivery of the product (the narrow, unit-level view). A good test is one that has a high probability of finding a new error; a successful test is one that discovers a new error. Validation (the broad view): the process of ensuring that the software conforms to its specification and meets user requirements.

  4. Testing (4) Testing Principles • All tests should be traceable to user requirements • Tests should be planned long before testing begins – test planning should really be done with program design • Pareto Principle – 80% of errors occur in 20% of classes (just as 20% of the land grows 80% of the crops and 20% of the citizens own 80% of the wealth) • Testing should begin “in the small” and proceed towards testing “in the large” • Exhaustive (complete) testing is usually not possible • To be effective, testing should be conducted by an independent 3rd party

  5. Testing (5) Who tests the software? Developer? Understands the system but will test it “gently”, motivated by the need to deliver the product. Independent tester? Needs to learn about the system but will attempt to break or crash it, driven by quality.

  6. Testing (6) Features of Testable Software Operability – the better it works, the more easily it can be tested (bugs are easier to find in software which at least executes) Observability – the results of each test should be easy to observe Controllability – if we can control the execution of separate parts of the software, then it is easier to set up specific test cases and perhaps to automate testing Simplicity – simple system architectures are easier to test than complex ones Stability – changes disrupt test planning and test cases (test cases are defined on the next slide)

  7. Testing (7) A test case is a controlled experiment that tests all or part of the system with defined test data. The test process: Objective – to uncover errors; Criteria – in a complete manner; Constraints – with a minimum of time & effort. However, this is very often not the case: test design is often done badly, in an ad hoc, “make it up as we go along” manner. Why? Remember that “bugs lurk in corners and congregate at boundaries!”

  8. Testing (8) Exhaustive Testing (not feasible) Consider two nested loops containing four if..then..else statements, where each loop can execute up to 20 times. There are about 10^14 possible paths if we count each single iteration. If we execute one test per millisecond, it would take 3,170 years to test this program (10^14 ms ≈ 10^11 s ≈ 3,170 years)!
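  A minimal sketch of the kind of structure the slide describes (the loop bounds come from the slide; the branch conditions and bodies are hypothetical, chosen only to make the path explosion concrete):

      #include <cstdio>

      // Two nested loops, each up to 20 passes, whose body contains four
      // if..then..else statements. Every iteration picks one of 2^4 = 16
      // branch combinations, so distinct paths multiply combinatorially.
      int main() {
          int x = 0;
          for (int i = 0; i < 20; i++) {        // outer loop: up to 20 passes
              for (int j = 0; j < 20; j++) {    // inner loop: up to 20 passes
                  if (i % 2 == 0) x += 1; else x -= 1;  // decision 1
                  if (j % 3 == 0) x += 2; else x -= 2;  // decision 2
                  if (x > 0)      x += 3; else x -= 3;  // decision 3
                  if (x % 5 == 0) x += 4; else x -= 4;  // decision 4
              }
          }
          std::printf("%d\n", x);
          return 0;
      }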

  9. Testing (9) Selective Testing (feasible) Test a carefully selected set of execution paths. Note that selective testing cannot be comprehensive.

  10. Testing (10) Testing Methods Black Box testing – examines the fundamental interface without looking at internal processing. In short, is the program’s output correct for a given set of inputs? White (Glass) Box testing – examines in detail the internal processing done by software components. Debugging – fixes the errors identified during testing.

  11. Testing (11) The objective in white box testing is to ensure that all statements and conditions have been executed at least once. Derive test cases that: 1. Exercise all independent execution paths 2. Exercise all logical decisions on both the true and the false sides 3. Execute all loops at their boundaries and within operational bounds 4. Exercise all internal data structures to ensure they are valid and that read/write accesses are as they should be
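  As a sketch of criterion 2 (exercising both sides of every logical decision), consider this hypothetical function and a pair of test cases that drive its single decision down both the true and the false branch:

      #include <cassert>

      // Hypothetical function under test: classifies a mark as pass or fail.
      bool passes(int mark) {
          if (mark >= 40) {   // the decision to exercise on both sides
              return true;
          }
          return false;
      }

      int main() {
          assert(passes(40) == true);   // drives the decision down its true side
          assert(passes(39) == false);  // drives the decision down its false side
          return 0;
      }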

  12. Testing (12) Why cover all paths? • Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed • We are often inclined to believe that a logical path is not likely to be executed when, in fact, it may be frequently executed • Typographical errors occur at random, so it is likely that untested paths will contain some

  13. Testing (13) Basis Path Testing – provides a measure of the logical complexity of a method or code component and a guide for defining a basis set of execution paths. It uses flow graph notation to represent the flow of control, where nodes represent processing and arrows represent control flow. [Flow graph primitives shown: sequence, while, if]

  14. Testing (14) Flow Graphs – Compound Conditions Separate nodes are created for each arm of a compound condition (e.g. a and b are separate nodes in the condition if (a && b)). Example: if (a || b) { x(); } else { y(); } z(); [Flow graph shown: separate predicate nodes for a and b, with nodes for x, y and z]
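  A hedged sketch of test cases that exercise each predicate node of the compound condition separately (the functions x, y and z are hypothetical stand-ins):

      #include <cstdio>

      // Hypothetical stand-ins for the slide's x(), y() and z().
      void x() { std::puts("x"); }
      void y() { std::puts("y"); }
      void z() { std::puts("z"); }

      // The compound condition from the slide: with short-circuit evaluation,
      // a and b behave as separate predicate nodes in the flow graph.
      void run(bool a, bool b) {
          if (a || b) { x(); } else { y(); }
          z();
      }

      int main() {
          run(true,  false);  // reaches x() through predicate node a alone
          run(false, true);   // b is only evaluated when a is false
          run(false, false);  // both predicates false: reaches y()
          return 0;
      }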

  15. Testing (15) Cyclomatic Complexity is a software metric that gives a quantitative measure of the logical complexity of a program. The cyclomatic complexity V(G) of a flow graph G can be found as: - the number of simple predicates (decisions) + 1, or - V(G) = E – N + 2 (where E is the number of edges and N is the number of nodes), or - the number of enclosed areas + 1. [For the flow graph pictured, V(G) = 4]
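  A hedged worked example (the function below is hypothetical, chosen so that all three formulas agree):

      #include <cassert>

      // Hypothetical function used to work through the three V(G) formulas.
      int classify(int x) {
          int result = 0;
          if (x > 0) {         // decision 1
              result = 1;
          }
          if (x % 2 == 0) {    // decision 2
              result += 2;
          }
          return result;
      }
      // Simple predicates: 2, so V(G) = 2 + 1 = 3.
      // One way to draw the flow graph gives N = 5 nodes and E = 6 edges,
      // so V(G) = 6 - 5 + 2 = 3; it also encloses 2 areas, so 2 + 1 = 3.
      // Three linearly independent paths therefore need testing:

      int main() {
          assert(classify(3) == 1);   // x > 0 true, x even false
          assert(classify(-2) == 2);  // x > 0 false, x even true
          assert(classify(4) == 3);   // both decisions true
          return 0;
      }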

  16. Testing (16) Analysis has shown that the number of errors and the maintenance effort increase significantly for modules with V(G) > 10. Another use of cyclomatic complexity is that V(G) identifies the number of independent paths through a program that need to be tested.

  17. Testing (17) Basis Path Testing V(G) is the number of linearly independent paths through the program (each has at least one edge not covered by any other path) • Path 1: 1-2-3-8 • Path 2: 1-2-3-8-1-2-3-8 • Path 3: 1-2-4-5-7-8 • Path 4: 1-2-4-6-7-8 Test design must prepare test cases that will force the execution of each path in the basis set. [Flow graph shown with nodes numbered 1–8]

  18. Testing (18) Basis Path Testing Example: draw the flow graph, calculate the cyclomatic complexity, and list the basis paths for the following piece of C++ code:

      while (value[i] != -999.0 && totinputs < 100) {
          totinputs++;
          if (value[i] >= min && value[i] <= max) {
              totvalid++;
              sum = sum + value[i];
          }
          i++;
      }

  [Flow graph shown with nodes numbered 1–7]

  19. Testing (19) Basis Path Testing V(G) = number of enclosed areas + 1 = 5; V(G) = number of simple predicates + 1 = 5; V(G) = E – N + 2 = 10 – 7 + 2 = 5. Basis paths to be tested: 1. 1-7 (value[i] = -999.0) 2. 1-2-7 (value[i] = 0, totinputs = 100) 3. 1-2-3-6-1-7 4. 1-2-3-4-6-1-7 5. 1-2-3-4-5-6-1-7
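  A hedged sketch of test data that forces each basis path, assuming the slide’s loop is wrapped in a function (the wrapper name, the min/max values and the array contents are hypothetical):

      #include <cassert>

      // Hypothetical wrapper around the slide's loop, so each basis path can
      // be driven by an input array terminated by the -999.0 sentinel.
      int countValid(const double value[], int totinputs, double min, double max) {
          int totvalid = 0, i = 0;
          double sum = 0.0;   // computed as on the slide, though not returned here
          while (value[i] != -999.0 && totinputs < 100) {
              totinputs++;
              if (value[i] >= min && value[i] <= max) {
                  totvalid++;
                  sum = sum + value[i];
              }
              i++;
          }
          (void)sum;  // silence unused-variable warnings
          return totvalid;
      }

      int main() {
          const double a1[] = {-999.0};       // path 1: sentinel first (1-7)
          const double a2[] = {5.0, -999.0};  // path 2: totinputs already 100 (1-2-7)
          const double a3[] = {0.5, -999.0};  // path 3: value below min (1-2-3-6-1-7)
          const double a4[] = {99.0, -999.0}; // path 4: >= min but > max (1-2-3-4-6-1-7)
          const double a5[] = {5.0, -999.0};  // path 5: within [min, max] (1-2-3-4-5-6-1-7)
          assert(countValid(a1, 0, 1.0, 10.0) == 0);
          assert(countValid(a2, 100, 1.0, 10.0) == 0);
          assert(countValid(a3, 0, 1.0, 10.0) == 0);
          assert(countValid(a4, 0, 1.0, 10.0) == 0);
          assert(countValid(a5, 0, 1.0, 10.0) == 1);
          return 0;
      }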

  20. Testing (20) Other White Box Methods Condition testing: exercises the logical (boolean) conditions in a program Data Flow testing: selects test paths according to the location of the definition and use of variables in a program Loop testing: focuses on the correctness of loop structures

  21. Testing (21) Loop Testing [Diagrams shown of the four loop structures: simple loops, nested loops, concatenated loops and unstructured loops]

  22. Testing (22) Loop Testing Test Cases for Simple Loops 1. Skip the loop entirely 2. Only one pass through the loop 3. Two passes through the loop 4. m passes through the loop (m < n) 5. (n-1), n and (n+1) passes through the loop, where n is the maximum number of allowable passes
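  A hedged sketch of those cases for a hypothetical loop that allows at most n = 100 passes:

      #include <cassert>

      // Hypothetical function under test: sums the first `count` readings,
      // but its loop never makes more than n = 100 passes.
      int sumReadings(const int readings[], int count) {
          int sum = 0;
          for (int i = 0; i < count && i < 100; i++) {  // at most n = 100 passes
              sum += readings[i];
          }
          return sum;
      }

      int main() {
          int r[200];
          for (int i = 0; i < 200; i++) r[i] = 1;  // every reading is 1
          assert(sumReadings(r, 0) == 0);      // case 1: skip the loop entirely
          assert(sumReadings(r, 1) == 1);      // case 2: only one pass
          assert(sumReadings(r, 2) == 2);      // case 3: two passes
          assert(sumReadings(r, 50) == 50);    // case 4: m passes, m < n
          assert(sumReadings(r, 99) == 99);    // case 5: n-1 passes
          assert(sumReadings(r, 100) == 100);  // case 5: n passes
          assert(sumReadings(r, 101) == 100);  // case 5: n+1 attempted, capped at n
          return 0;
      }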

  23. Testing (23) Loop Testing Testing Nested Loops 1. Start at the innermost loop; set all the outer loops to their minimum iteration parameter values (i.e. loop control variables) 2. Test the min, min+1, typical, max-1 and max values for the inner loop 3. Move out one loop and set it up as in step 2, holding all other loops at typical values; continue until the outermost loop has been tested Testing Concatenated Loops If the loops are independent of one another, treat each as a simple loop; otherwise treat them as nested loops Testing Unstructured Loops Don’t bother – re-design!

  24. Testing (24) Black Box Testing is complementary to white box testing. Decide on the external conditions (i.e. inputs, requirements, events) that fully exercise all functional requirements (i.e. test all functions encoded in the software). [Diagram shown: requirements, inputs and events entering the black box, with outputs emerging]

  25. Testing (25) Black Box Strengths Black box testing attempts to find errors in the following categories: • Incorrect or missing functions • Interface errors • Errors in data structures or external database access • Behaviour or performance errors • Initialisation or termination errors Black box testing is performed during the later stages of testing

  26. Testing (26) Black Box Methods Equivalence Partitioning – divide the input domain into classes of data; each test case then uncovers whole classes of errors. Examples: valid data (user-supplied commands, file names, graphical data such as mouse selections); invalid data (data outside bounds, physically impossible data, e.g. a negative value where only a positive one is possible); valid data supplied in an invalid situation (e.g. an order_quantity of 5,000 might be valid for an item named chalk but would most likely be invalid for an item called projector)
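  A hedged sketch of the order_quantity example, with one representative test per equivalence class (the validator, the per-item limits and the behaviour for each item are hypothetical):

      #include <cassert>
      #include <string>

      // Hypothetical validator: bulk items like chalk allow large orders,
      // big-ticket items like projectors do not; quantities must be positive.
      bool validOrder(const std::string& item, int quantity) {
          if (quantity <= 0) return false;              // physically impossible data
          int limit = (item == "chalk") ? 10000 : 100;  // hypothetical per-item limit
          return quantity <= limit;
      }

      int main() {
          // One representative value per equivalence class:
          assert(validOrder("chalk", 5000) == true);       // valid data
          assert(validOrder("chalk", -3) == false);        // invalid data (impossible value)
          assert(validOrder("projector", 5000) == false);  // valid value in an invalid situation
          return 0;
      }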

  27. Testing (27) Black Box Methods Boundary Value Analysis – more errors tend to occur at the boundaries of the input domain, so select test cases that exercise bounding values. Example: an input condition specifies a valid range of input values bounded by values a and b. Test cases should be designed with values a and b, and with values just above and just below a and b
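  A hedged sketch for a hypothetical valid range of a = 1 to b = 100:

      #include <cassert>

      // Hypothetical function under test: accepts values in the range [1, 100].
      bool inRange(int v) {
          return v >= 1 && v <= 100;
      }

      int main() {
          // Boundary value analysis for a = 1, b = 100:
          assert(inRange(0)   == false);  // just below a
          assert(inRange(1)   == true);   // a itself
          assert(inRange(2)   == true);   // just above a
          assert(inRange(99)  == true);   // just below b
          assert(inRange(100) == true);   // b itself
          assert(inRange(101) == false);  // just above b
          return 0;
      }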

  28. Testing (28) Debugging [Diagram shown: test cases → execution of cases → suspected causes → debugging → identified causes → corrections → regression tests and new test cases] Regression tests check for unintended knock-on effects. Testing is a structured process that identifies an error’s “symptoms”; debugging is a diagnostic process that identifies an error’s “cause”.

  29. Testing (29) Debugging Effort Debugging effort divides into the time required to diagnose the symptom and determine the cause, and the time required to correct the error and conduct regression tests. Regression testing means re-execution of a subset of test cases to ensure that changes made to correct errors do not have unintended side effects

  30. Testing (30) Bugs – Symptoms & Causes • Symptom and cause may be geographically separated • Symptom may disappear when another problem is fixed • Cause may be due to a combination of non-errors • Cause may be due to a system or compiler error • Cause may be due to an assumption that everyone believes • Symptom may be intermittent

  31. Testing (31) Not All Bugs are Equal! [Chart shown: damage per bug type, ranging from mild, annoying, disturbing and serious up to extreme, catastrophic and infectious] Bug Categories: function-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc

  32. Testing (32) Debugging Techniques Brute Force – use when all else fails: try memory dumps and run-time traces, then search through the mass of information, which may lead to the source of the error. Backtracking – can work in small programs where there are few backward paths: trace the source code backwards from the error to its source. Cause Elimination – create a set of cause hypotheses for each error, then use error data (program output) or further tests to prove or disprove these hypotheses. Some people seem to have an intuitive skill at debugging and can find the source of errors quickly

  33. Testing (33) Debugging Tips • Don’t immediately dive into the code; think about the symptom you are seeing • Use tools (e.g. dynamic debuggers) to gain further insight into the error • If you can’t solve the problem and locate the source of the error, get help from someone Ask these questions before attempting to “fix” the bug: 1. Is the cause of the bug reproduced in another part of the program, i.e. are there duplicates of the error in the code? 2. Could another bug be introduced by the fix? 3. What could have been done to fix the bug at the design or coding-plan level in the first place? Be absolutely sure to conduct regression tests when you do “fix” the bug
