
Dynamic Testing Techniques


Presentation Transcript


  1. Chapter 4 Software Testing ISTQB / ISEB Foundation Exam Practice Dynamic Testing Techniques 1 Principles 2 Lifecycle 3 Static testing 4 Dynamic test techniques 5 Management 6 Tools

  2. ISTQB / ISEB Foundation Exam Practice: Dynamic Testing Techniques. Contents: What is a testing technique? • Black and White box testing • Black box test techniques • White box test techniques • Experience-based techniques

  3. 4.1 The Test Development Process • Test Documentation Standard [IEEE 829] • Test Conditions • Test Design Specification document • e.g., if we have a requirements specification, its table of contents can be our initial list of test conditions • Test Cases • Test Case Specification document: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item • Test Procedures (or scripts) • Test Procedure Specification (a.k.a. test script) document

  4. 4.1.2 Test analysis: identifying test conditions • Traceability: Test conditions should be traceable back to their sources in the test basis • Horizontal Traceability: The tracing of requirements for a test level through the layers of test documentation (i.e., test plan, test design specification, test case specification, test procedure specification) • Vertical Traceability: The tracing of requirements through the layers of development documentation to components (i.e., from requirements to components) • Test Design Specification template in IEEE 829 Standard • Test design specification identifier • Features to be tested • Approach refinements • Test identification • Feature pass/fail criteria

  5. 4.1.3 Test Design: Specifying Test Cases • Test conditions can be vague • Test cases must be very specific • Test Oracle or Oracle: In order to know what the system should do, we need to have a source of information about the correct behaviour of the system • Test Case Specification template (IEEE Standard) • Test case specification identifier • Test items • Input specifications • Output specifications • Environmental needs • Special procedural requirements • Intercase dependencies

  6. 4.1.4 Test Implementation: Specifying Test Scripts • Group the test cases in a sensible way for execution • A set of simple tests may form a regression suite • An automation script is written in a programming language that the test tool can interpret • Test scripts are formed into a test execution schedule that specifies which procedures are to be run first • Test Schedule • When a given script should be run and by whom • For example, a regression script may always be the first to be run when a new release of the software arrives, as a smoke test or sanity check • Test Procedure Specification template • Test procedure specification identifier, purpose, special requirements, procedure steps

  7. Why dynamic test techniques? • Exhaustive testing (use of all possible inputs and conditions) is impractical • must use a subset of all possible test cases • must have high probability of detecting faults • Need thought processes that help us select test cases more intelligently • test case design techniques are such thought processes

  8. What is a testing technique? • a procedure for selecting or designing tests • based on a structural or functional model of the software • successful at finding faults • 'best' practice • a way of deriving good test cases • a way of objectively measuring a test effort Testing should be rigorous, thorough and systematic

  9. Advantages of techniques • Different people: similar probability of finding faults • gain some independence of thought • Effective testing: find more faults • focus attention on specific types of fault • know you're testing the right thing • Efficient testing: find faults with less effort • avoid duplication • systematic techniques are measurable Using techniques makes testing much more effective

  10. ISTQB / ISEB Foundation Exam Practice: Dynamic Testing Techniques. Contents: What is a testing technique? • Black and White box testing • Black box test techniques • White box test techniques • Experience-based techniques

  11. Three types of systematic technique Static (non-execution) • examination of documentation, etc. Specification-based (Black Box) • based on the behaviour/functionality of the software Structure-based (White Box) • based on the structure of the software Experience-based (non-systematic)

  12. Some Test Techniques
  Static: Reviews (Informal Reviews, Walkthroughs, Technical Reviews, Inspection) and Static Analysis (Control Flow, Data Flow)
  Dynamic:
  • Specification-based: Equivalence Partitioning, Boundary Value Analysis, Decision Tables, State Transition, Use case testing
  • Structure-based: Statement, Decision, Condition, Multiple Condition
  • Experience-based: Error Guessing, Exploratory Testing

  13. Black box versus white box? (test levels: Component, Integration, System, Acceptance) Black box is appropriate at all levels but dominates the higher levels of testing; white box is used predominantly at lower levels to complement black box

  14. ISTQB / ISEB Foundation Exam Practice: Dynamic Testing Techniques. Contents: What is a testing technique? • Black and White box testing • Black box test techniques • White box test techniques • Experience-based techniques

  15. Black Box test design techniques • Techniques defined in BS 7925-2 • Equivalence partitioning • Boundary value analysis • Decision tables • State transition • Use case testing

  16. 1. Equivalence partitioning (EP) * EP: A black box test design technique in which test cases are designed to execute representatives from equivalence partitions * Equivalence Partition (or equivalence class): A portion of an input or output domain for which the behaviour of a component is assumed to be the same In principle test cases are designed to cover each partition at least once

  17. 1. Equivalence partitioning (EP) • divide (partition) the inputs, outputs, etc. into areas which are the same (equivalent) • assumption: if one value works, all will work • one from each partition better than all from one

  18. Example • A savings account in a bank earns a different rate of interest depending on the balance in the account • e.g., a balance in the range $0 - $100 earns 3% interest, a balance over $100 and up to $1000 earns 5% interest, and balances of $1000 and over earn 7% interest
  Partitions: Invalid (below $0.00) | Valid for 3% interest ($0.00 - $100.00) | Valid for 5% ($100.01 - $999.99) | Valid for 7% ($1000.00 and over)
  We might choose the following balances as test cases: -$10.00, $50.00, $260.00, $1348.00
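As a minimal sketch of how these partitions might drive test selection, the fragment below assumes a hypothetical interest_rate function implementing the rule above, then exercises one representative value from each equivalence partition (the same balances chosen on the slide):

```python
def interest_rate(balance):
    """Hypothetical implementation of the interest rule above."""
    if balance < 0:
        raise ValueError("invalid balance")   # invalid partition
    if balance <= 100.00:
        return 0.03                           # $0.00 - $100.00
    if balance < 1000.00:
        return 0.05                           # $100.01 - $999.99
    return 0.07                               # $1000.00 and over

# One representative value per partition
assert interest_rate(50.00) == 0.03       # valid partition, 3%
assert interest_rate(260.00) == 0.05      # valid partition, 5%
assert interest_rate(1348.00) == 0.07     # valid partition, 7%
try:
    interest_rate(-10.00)                 # invalid partition
except ValueError:
    pass                                  # rejected as expected
```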

  19. 2. Boundary value analysis (BVA) • BVA is based on testing at the boundaries between partitions • It's an extension of equivalence partitioning (EP) • Boundary Value: An input value or output value which is on the edge of an equivalence partition • Example: Consider a printer that has an input option for the number of copies to be made, from 1 to 99
  Partitions: Invalid (0 and below) | Valid (1 - 99) | Invalid (100 and above)
  The boundary value tests are 1 and 99 (valid) and 0 and 100 (invalid)
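A sketch of those four boundary tests, assuming a hypothetical validate_copies input check for the 1-99 range:

```python
def validate_copies(n):
    """Hypothetical check for the 'number of copies' field (valid range 1-99)."""
    return 1 <= n <= 99

# Test on both sides of each boundary
assert validate_copies(1)          # lower boundary, valid
assert not validate_copies(0)      # just below the lower boundary, invalid
assert validate_copies(99)         # upper boundary, valid
assert not validate_copies(100)    # just above the upper boundary, invalid
```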

  20. Example: Partitions: Invalid (below $0.00) | Valid for 3% interest ($0.00 - $100.00) | Valid for 5% ($100.01 - $999.99) | Valid for 7% ($1000.00 and over)
  Boundary Values: -$0.01, $0.00, $100.00, $100.01, $999.99, $1000.00

  21. 2. Boundary value analysis (BVA) • faults tend to lurk near boundaries • good place to look for faults • test values on both sides of boundaries

  22. Designing Test Cases • One test case can cover one or more test conditions • Using the bank balance example, our first test could be of a new customer with a balance of $500 • This would cover a balance in the partition from $100.01 to $999.99 and the output partition of a 5% interest rate • We would also be covering other partitions that we haven't discussed yet, for example a valid customer, a new customer, a customer with only one account, etc. • When we come to test invalid partitions, the safest option is probably to try to cover only one invalid test condition per test case

  23. Why do both EP and BVA? • If you do boundaries only, you have covered all the partitions as well • technically correct and may be OK if everything works correctly! • But if the test fails, is the whole partition wrong, or is a boundary in the wrong place? • testing only extremes may not give confidence for typical use scenarios (especially for users) • boundaries may be harder (more costly) to set up • We recommend that you test the partitions separately from boundaries - this means choosing partition values that are NOT boundary values!

  24. 3. Decision tables • A black box test design technique in which test cases are designed to execute the combinations of inputs and/or causes shown in a decision table • explore combinations of inputs • sometimes also referred to as a cause-effect table • it is very easy to overlook specific combinations of input • start by expressing the input conditions of interest so that they are either TRUE or FALSE

  25. Example: Credit Card • If you are a new customer opening a credit card account, you'll get a 15% discount on all your purchases today • If you're an existing customer and you hold a loyalty card, you get a 10% discount • If you have a coupon, you can get 20% off today, but it can't be used with the new customer's discount • Discount amounts are added, if applicable • Rules 1 and 2: a customer cannot be both a new customer and an existing customer
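A sketch of a decision table for these rules and one test per feasible column, assuming a hypothetical discount function; the reading that a coupon replaces (rather than stacks with) the new-customer discount follows the wording above and is an assumption:

```python
def discount(new_customer, loyalty_card, coupon):
    """Hypothetical discount rule derived from the decision table.

    Conditions (booleans): new_customer, loyalty_card, coupon.
    A new customer cannot also hold a loyalty card (infeasible column).
    """
    if new_customer and loyalty_card:
        raise ValueError("infeasible combination of conditions")
    percent = 0
    if new_customer and not coupon:
        percent += 15          # new-customer discount
    if loyalty_card:
        percent += 10          # loyalty-card discount (existing customers)
    if coupon:
        percent += 20          # coupon; not combined with the 15%
    return percent

# One test case per feasible decision-table column
assert discount(True,  False, False) == 15
assert discount(True,  False, True)  == 20   # coupon instead of the 15%
assert discount(False, True,  False) == 10
assert discount(False, True,  True)  == 30   # discounts added
assert discount(False, False, True)  == 20
assert discount(False, False, False) == 0
```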

  26. 4. State Transition Testing • A black box test design technique in which test cases are designed to execute valid and invalid state transitions • State Diagram: A diagram that depicts the states that a system can assume and shows the events that cause and/or result from a change from one state to another • A state transition model has 4 basic parts: • The states that the software may occupy (open/closed) • The transitions from one state to another • The events that cause a transition (closing a file) • The actions that result from a transition (an error message)

  27. 4. State Transition Testing • State Diagram for PIN entry • 7 states: start, wait for PIN, 1st try, 2nd try, 3rd try, eat card, access to account • 4 events: Card_Inserted, Enter_PIN, PIN_OK, PIN_NOT_OK
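As a sketch, the listed states and events can be written as a transition table keyed by (state, event); the exact wiring of the three "try" states is an assumption based on the usual PIN-entry example:

```python
# (current state, event) -> next state; a sketch of the PIN-entry diagram.
TRANSITIONS = {
    ("start",        "Card_Inserted"): "wait_for_PIN",
    ("wait_for_PIN", "Enter_PIN"):     "1st_try",
    ("1st_try",      "PIN_OK"):        "access_account",
    ("1st_try",      "PIN_NOT_OK"):    "2nd_try",
    ("2nd_try",      "PIN_OK"):        "access_account",
    ("2nd_try",      "PIN_NOT_OK"):    "3rd_try",
    ("3rd_try",      "PIN_OK"):        "access_account",
    ("3rd_try",      "PIN_NOT_OK"):    "eat_card",
}

def run(events, state="start"):
    """Drive the model; an unknown (state, event) pair raises KeyError."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

# Correct PIN first time (1st test case on the next slide)
assert run(["Card_Inserted", "Enter_PIN", "PIN_OK"]) == "access_account"
# Wrong PIN three times, card is eaten (2nd test case)
assert run(["Card_Inserted", "Enter_PIN",
            "PIN_NOT_OK", "PIN_NOT_OK", "PIN_NOT_OK"]) == "eat_card"
```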

  28. Deriving test cases • We may start with a typical scenario • 1st test case: The correct PIN is entered the first time. We may want to cover every transition • 2nd test case: Enter an incorrect PIN each time, so that the system eats the card • 3rd test case: PIN was incorrect the first time but OK the second time • 4th test case: PIN was correct on the third try The last two tests are less important than the first two

  29. Testing for invalid transitions • Deriving tests only from a state graph is very good for seeing the valid transitions, but we may not easily see the negative tests, where we try to generate invalid transitions • A state table is useful here
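A sketch of that idea, reusing the TRANSITIONS dict from the sketch above: laying the model out as a full state table (every state against every event) makes the cells with no valid transition explicit, and each such cell is a candidate negative test.

```python
STATES = ["start", "wait_for_PIN", "1st_try", "2nd_try", "3rd_try",
          "access_account", "eat_card"]
EVENTS = ["Card_Inserted", "Enter_PIN", "PIN_OK", "PIN_NOT_OK"]

# Any (state, event) cell not present in TRANSITIONS has no valid transition.
invalid_cells = [(s, e) for s in STATES for e in EVENTS
                 if (s, e) not in TRANSITIONS]
print(len(invalid_cells))   # 7 states x 4 events = 28 cells, 8 valid, 20 invalid
```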

  30. 5. Use Case Testing • A black box test design technique in which test cases are designed to execute scenarios of use cases • Each use case describes the interactions the actor has with the system in order to achieve a specific task • We would have a test of the success scenario and one test for each extension

  31. ISTQB / ISEB Foundation Exam Practice: Dynamic Testing Techniques. Contents: What is a testing technique? • Black and White box testing • Black box test techniques • White box test techniques • Experience-based techniques

  32. Structure-Based or White Box Techniques • White-box techniques • focus on the structure of a software component, such as statements, decisions, branches • serve 2 purposes • Test coverage measurement • Structural test case design • Test coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite

  33. What is test coverage? • Test coverage measures in some specific way the amount of testing performed by a set of tests • Coverage = (number of coverage items exercised / total number of coverage items) x 100% • 100% coverage does NOT mean 100% tested • Two different test cases may achieve exactly the same coverage but the input data of one may find an error that the input data of the other doesn't • One drawback of code coverage measurement is that it measures coverage of what has been written, i.e. the code itself; it cannot say anything about the software that has not been written
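A trivial sketch of the formula, using the statement-coverage figures that appear a few slides later (87 of 100 statements exercised):

```python
def coverage_percent(items_exercised, total_items):
    """Coverage = (coverage items exercised / total coverage items) * 100%."""
    return 100.0 * items_exercised / total_items

print(coverage_percent(87, 100))   # 87.0 -> 87% coverage
```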

  34. Types of Coverage • Coverage can be measured at Component testing level, Integration testing, System testing or Acceptance testing level • e.g., at acceptance test level, the coverage items may be requirements, menu options, screens, or typical business transactions • We can measure coverage for each of the specification-based techniques as well • EP: percentage of equivalence partitions exercised • BVA: percentage of boundaries exercised • Decision tables: percentage of decision table columns tested • State transition testing • Percentage of states visited • Percentage of valid transitions exercised (a.k.a. Chow's 0-switch coverage) • Percentage of pairs of valid transitions exercised (transition pairs or Chow's 1-switch coverage) • Percentage of invalid transitions exercised (from the state table)

  35. Types of Coverage • When coverage is discussed by business analysts, system testers or users • it refers to the percentage of requirements that have been tested by a set of tests • This may be measured by a requirements management tool or test management tool • When coverage is discussed by programmers • it refers to the coverage of code such as statement coverage or decision coverage • Statement coverage is significantly weaker than decision coverage • DO-178B (the avionics software standard) requires structural coverage • Structural coverage techniques should always be used in addition to the specification-based and experience-based techniques rather than as an alternative to them

  36. Structure-based Test Case Design • Suppose we aim for a given level of coverage (say 95%) • e.g., we have reached only 87% so far • Additional test cases can be designed with the aim of exercising some of the structural elements not yet reached • This is structure-based test design • These new tests are then run through the instrumented code and a new coverage measure is calculated • This is repeated until the required coverage measure is achieved

  37. Statement coverage • Statement coverage is normally measured by a software tool • percentage of executable statements exercised by a test suite • Statement coverage = (number of statements exercised / total number of statements) x 100% • example: the program has 100 statements, the tests exercise 87 statements, so statement coverage = 87% • Typical ad hoc testing achieves 30% - this leaves 70% of the statements untested

  38. Example of statement coverage
    1  read(a)
    2  IF a > 6 THEN
    3    b = a
    4  ENDIF
    5  print b
  Test case 1: Input 7, Expected output 7
  As all 5 statements are 'covered' by this test case, we have achieved 100% statement coverage
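The same fragment as a runnable Python sketch; with input 7 every statement executes, which is the 100% statement coverage claim above (read(a) becomes the function parameter):

```python
def example(a):
    if a > 6:        # statement 2: IF a > 6 THEN
        b = a        # statement 3:   b = a
    print(b)         # statement 5: print b

example(7)   # prints 7; every statement executed -> 100% statement coverage
# Note: example(3) would fail because b is never assigned on that path,
# yet that path is not needed to reach 100% statement coverage.
```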

  39. Decision Coverage and decision testing • A decision is an IF statement, a loop control statement (e.g. DO-WHILE) or a CASE statement, where there are 2 or more possible outcomes from the statement • Functional testing may achieve only 40% to 60% decision coverage • Decision coverage is stronger than statement coverage • 100% decision coverage guarantees 100% statement coverage, but not the other way around !
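Continuing the sketch above: the single test example(7) achieves 100% statement coverage but only 50% decision coverage, because the False outcome of the IF is never taken; exercising it needs a second test, which here also exposes the unassigned b.

```python
# Decision outcomes of "IF a > 6" in the fragment above: True and False.
example(7)          # True outcome only: 100% statement, 50% decision coverage
try:
    example(3)      # False outcome: needed for 100% decision coverage...
except UnboundLocalError:
    print("fault found: b was never assigned")   # ...and it finds the defect
```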

  40. Decision coverage (Branch coverage) • Decision coverage is normally measured by a software tool • percentage of decision outcomes exercised by a test suite • Decision coverage = (number of decision outcomes exercised / total number of decision outcomes) x 100% • example: the program has 120 decision outcomes, the tests exercise 60 decision outcomes, so decision coverage = 50% • Typical ad hoc testing achieves 20%

  41. Paths through code (diagram of small control-flow graphs)

  42. Example
    Read A
    IF A > 0 THEN
      IF A = 21 THEN
        Print "Key"
      ENDIF
    ENDIF
  • Cyclomatic complexity: 3
  • Minimum tests to achieve statement coverage: 1
  • Minimum tests to achieve branch coverage: 3
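A sketch of those minimum test sets in Python (the function name is an assumption): one input reaches every statement, while branch coverage needs all three ways through the two decisions.

```python
def check(a):
    if a > 0:
        if a == 21:
            print("Key")

# Statement coverage: a single test reaches every statement
check(21)                  # outer IF True, inner IF True, "Key" printed

# Branch (decision) coverage: three tests cover every decision outcome
for a in (21, 5, -1):      # True/True, True/False, False/(inner not reached)
    check(a)
```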

  43. Other Structure-based Techniques • Branch Coverage: The percentage of branches that have been exercised by a test suite • 100% branch coverage implies both 100% decision coverage and 100% statement coverage • Branch coverage is related to decision coverage • Decision coverage measures the coverage of conditional branches • Branch coverage measures the coverage of both conditional and unconditional branches • Other control-flow code coverage measures include • Linear code sequence and jump coverage (LCSAJ) • Condition coverage • Multiple condition coverage • Condition determination coverage (a.k.a. modified condition/decision coverage, MC/DC)
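As a sketch of why condition-level measures are stronger than decision coverage for compound decisions such as IF A AND B (the fragment and values are illustrative assumptions):

```python
def decision(a, b):
    return a and b          # compound decision: IF A AND B THEN ...

# Decision coverage only needs one True and one False outcome:
decision(True, True)        # decision evaluates to True
decision(False, True)       # decision evaluates to False (b not even evaluated)
# -> 100% decision coverage, yet condition b never takes the value False,
#    so condition coverage (and MC/DC) is not achieved.

# Multiple condition coverage: all combinations of the individual conditions
for a in (True, False):
    for b in (True, False):
        decision(a, b)
```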

  44. ISTQB / ISEB Foundation Exam Practice: Dynamic Testing Techniques. Contents: What is a testing technique? • Black and White box testing • Black box test techniques • White box test techniques • Experience-based techniques

  45. Non-systematic test techniques A testing approach that is only rigorous, thorough and systematic is incomplete Some defects are hard to find using more systematic approaches, so a good hunter can be very creative at finding those elusive defects

  46. 1. Error-Guessing • always worth including • after systematic techniques have been used • can find some faults that systematic techniques can miss • supplements systematic techniques Not a good approach to start testing with

  47. 1. Error Guessing: deriving test cases • Consider: • past failures • intuition • experience • brainstorming • "What is the craziest thing we can do?" • There are no rules for error guessing • Typical conditions to include are division by zero, blank or no input, empty files, wrong kind of data • Fault attack: Directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur
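A sketch of how those typical conditions might be turned into quick error-guessing probes; parse_amount is a hypothetical stand-in for the input routine under test, and both the function and the guesses are illustrative assumptions:

```python
def parse_amount(text):
    """Stand-in for the real input routine under test (assumed)."""
    return float(text)

error_guesses = [
    "",               # blank / no input
    "0",              # zero (later used as a divisor?)
    "-1",             # negative amount
    "abc",            # wrong kind of data
    "999999999999",   # suspiciously large value
]

for guess in error_guesses:
    try:
        print(f"{guess!r} -> {parse_amount(guess)}")
    except Exception as exc:       # how does the system fail?
        print(f"{guess!r} -> raised {type(exc).__name__}")
```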

  48. 2. Exploratory Testing • Exploratory testing is about exploring, finding out about the software: what it does, what it doesn't do, what works, and what doesn't work • The tester is constantly making decisions about what to test next and where to spend the limited time • This is an approach that is most useful when there are no or poor specifications and when time is severely limited • It can also serve to complement other, more formal testing, helping to establish greater confidence in the software • The test design and test execution activities are performed in parallel, typically without formally documenting test conditions, test cases or test scripts

  49. ISTQB / ISEB Foundation Exam Practice: Dynamic Testing Techniques. Summary: Key Points • Test techniques are 'best practice': they help to find faults • Black Box techniques are based on behaviour • White Box techniques are based on structure • Error Guessing supplements systematic techniques
