Testing 29/Apr/2009 Petr Panuška petr.panuska@hp.com QA Manager & Offshore Team Manager, SOA Center HP Software R&D
Agenda • Why is SW testing necessary • Testing principles • Testing design techniques
Why is SW testing necessary • SW does not always work as we wish • Defects in SW can cause harm • Not all defects in SW result in a failure • Testing reduces the probability of undiscovered defects remaining in the software • Testing gives us confidence in the quality of the tested SW
Testing Principles • ISTQB defines 7 Testing Principles • Testing shows presence of defects • Exhaustive testing is impossible • Early testing • Defect clustering • Pesticide paradox • Testing is context dependent • Absence-of-errors fallacy
Testing shows presence of defects • Tests show the presence, not the absence, of defects • Testing can’t prove that SW is defect free • Testing reduces the probability of undiscovered defects remaining in the software • However, the fact that no defects were found is not proof of correctness
Exhaustive testing is impossible • It is not cost-effective • Defects are not of equal risk • We need to prioritize our tests • Use risk analysis and priorities to focus testing efforts • Use appropriate testing techniques to meet the risk, giving us confidence in what testing has been performed • Exhaustive testing is equivalent to the Halting problem • There is no algorithm that can, for an arbitrary program and its input, decide whether the program works correctly on that input (see the sketch below)
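To make the scale of "exhaustive" concrete, here is a minimal back-of-the-envelope sketch. The throughput figure of one million tests per second is an assumption chosen purely for illustration; the point is that even a function taking just two `int` parameters has far more input combinations than could ever be executed.

```java
// Back-of-the-envelope sketch: the input space of two int parameters.
public class ExhaustiveCost {
    public static void main(String[] args) {
        double oneInt = Math.pow(2, 32);            // ~4.3e9 possible values
        double twoInts = oneInt * oneInt;           // ~1.8e19 combinations
        double seconds = twoInts / 1_000_000;       // assuming 1M tests/second
        double years = seconds / (365.0 * 24 * 3600);
        System.out.printf("%.2e combinations, ~%.0f years at 1M tests/s%n",
                twoInts, years);
    }
}
```

Running this prints roughly 585,000 years, which is why risk-based prioritization, not exhaustiveness, drives test selection.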
Early testing • In SW development, what can be tested • Requirement specification • Use-case Analysis document • Technical Analysis document • Design document • Functional Implementation • Performance • Usability • What else can be tested • Test Description • Documentation • The later a defect is found, the higher the price of fixing it
Example – Early testing • Defect in H310: JBoss 4.3.0 not supported on Win 2008 (although the Hermes 310 PRD requires this combination to be supported) • 7 other duplicates reported • Involved 4 people from QA and 9 people from DEV
Defect clustering • Defects get clustered for different reasons • A component might be more complex than others • A component might be developed by a less experienced developer • A component might be developed by a less careful developer • A component might have a poorer specification • A component might need more refactoring (introducing more defects) • Another explanation: http://parlezuml.com/blog/?postid=242
Pesticide paradox • Old tests will eventually stop finding new defects • Defects develop “immunity” to these tests • To find new defects, new tests need to be introduced • Or old tests refactored • Conclusion: regression tests do not find the majority of new defects
Testing is context dependent • Different kinds of tests are run in different phases • Defect testing • Aims to discover faults or defects where the software’s behavior is incorrect or not in conformance with its specification • A successful test makes the system perform incorrectly and so exposes a defect • Validation testing • Aims to demonstrate to the developer and the system customer that the software meets its requirements • A successful test shows that the system operates as intended • We start with defect testing and perform validation testing later in the SW development process
Testing is context dependent II. • We can also test differently because of • The type of industry (safety-critical, business, nuclear) • The number of customers (the impact the SW has) • A patch for one customer • A new version of SW for potentially many customers
Absence-of-errors fallacy • We test a system to see if it meets the documented requirements • We find and fix defects to demonstrate that the system meets these specifications • Finding and fixing defects does not help if the system is unusable and does not fulfill the user’s needs and expectations
Testing design techniques • Black-box techniques – no details about the product implementation are known • Equivalence partitioning • Boundary value analysis • Decision tables • State transitions • White-box techniques – the tester knows how the tested requirement is implemented • Statement testing and coverage • Decision testing and coverage • Other structure-based coverage
Equivalence partitioning • Example – a tax calculator based on annual income • Input – an integral number • The input splits into equivalence partitions (e.g., income bands taxed at the same rate)
Equivalence partitioning II • Split the input into partitions • Valid partitions • Invalid partitions • Have one test case for each partition • Even the invalid ones • One more example: enter the number of a month (see the sketch below)
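A minimal sketch of the month example, assuming the valid partition is 1–12 and two invalid partitions (below 1, above 12); `isValidMonth` is a hypothetical function under test, not anything from the slides:

```java
// Sketch: equivalence partitions for "enter the number of a month".
// Valid partition: 1..12; invalid partitions: below 1 and above 12.
public class MonthPartitionTest {
    static boolean isValidMonth(int m) {   // hypothetical function under test
        return m >= 1 && m <= 12;
    }

    static void check(boolean ok, String testCase) {
        if (!ok) throw new AssertionError("Failed: " + testCase);
    }

    public static void main(String[] args) {
        // One representative value per partition, including the invalid ones.
        check(isValidMonth(6),   "valid partition [1..12]");
        check(!isValidMonth(0),  "invalid partition (< 1)");
        check(!isValidMonth(13), "invalid partition (> 12)");
        System.out.println("All partition tests passed");
    }
}
```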
Boundary value analysis • Similar to equivalence partitioning, but • Test the boundary values • N-BVA (plain BVA = 0-BVA) • Test N values around each boundary • BV, BV+1, BV-1, … BV+N, BV-N (see the sketch below)
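Continuing the hypothetical month example, a 1-BVA sketch: the boundaries are 1 and 12, so the tests cover BV-1, BV, and BV+1 around each boundary.

```java
// Sketch: 1-BVA for the month example; boundaries are 1 and 12.
public class MonthBoundaryTest {
    static boolean isValidMonth(int m) {   // hypothetical function under test
        return m >= 1 && m <= 12;
    }

    public static void main(String[] args) {
        int[] values =    {     0,    1,    2,   11,   12,    13 };
        boolean[] valid = { false, true, true, true, true, false };
        for (int i = 0; i < values.length; i++) {
            if (isValidMonth(values[i]) != valid[i]) {
                throw new AssertionError("Failed at boundary value " + values[i]);
            }
        }
        System.out.println("All boundary value tests passed");
    }
}
```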
Decision tables - Example • Example – HP SOA Systinet Licensing Framework • HP SOA Systinet consists of • Core, Reporting, Lifecycle, etc. • Policy Manager (optional) • Contract Manager (optional) • An HP SOA Systinet license can limit • The number of users • The number of days • Default license included in the installer • Policy & Contract Manager included • Limited to 60 days, unlimited number of users
Decision tables • Example – Testing license application • License types covered by the table: Default license, Limited Standard Edition, Unlimited Standard Edition, Visibility Edition • (The full decision table was shown as a slide graphic)
Decision tables II • Catches all possible combinations • Helps to analyze the situation and to decide • Which use-cases must be tested • Which use-cases do not make sense • Requires knowledge of the business environment • Helps to prioritize the use-cases (a sketch follows below)
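A minimal sketch of a decision table as an executable checklist. The two conditions (user-limited?, day-limited?) give 2^2 = 4 rules; the technique's value is that it forces every combination to be enumerated and judged. The outcome descriptions below are hypothetical illustrations, not the actual Systinet licensing rules.

```java
// Sketch: a decision table enumerated as test-case rules.
public class LicenseDecisionTable {
    public static void main(String[] args) {
        Object[][] rules = {
            // userLimited, dayLimited, test case derived from the rule
            { false, false, "unlimited license: verify nothing is restricted" },
            { false, true,  "time-limited license (e.g. the 60-day default license)" },
            { true,  false, "user-limited license: verify the user-count check" },
            { true,  true,  "both limits: verify behaviour when either limit is hit" },
        };
        for (Object[] rule : rules) {
            System.out.printf("userLimited=%s, dayLimited=%s -> %s%n",
                    rule[0], rule[1], rule[2]);
        }
    }
}
```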
State transitions • Example – Testing ‘Contract Request Lifecycle’ • Transitions in the diagram: Create Request, Accept Request, Reject Request, Revoke Request, Delete Request
State transitions II • A transition diagram shows the valid transitions • It does not show the invalid ones (which should also be tested) • Good for testing use-cases that can be described as transitions between states • The scenarios may contain test-cases for each • State • Transition • Event that triggers a state change (transition) • Action that may result from those transitions • A sketch of such a model follows below
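A sketch of a state-transition test model for the contract request lifecycle. The event names come from the slide; the state names and the transition map are assumptions made for illustration, not the actual product behaviour.

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Sketch: a transition map that a state-transition test suite can walk,
// covering every valid transition and probing invalid ones.
public class ContractRequestLifecycle {
    enum State { PENDING, ACCEPTED, REJECTED, REVOKED, DELETED }
    enum Event { ACCEPT, REJECT, REVOKE, DELETE }  // CREATE produces PENDING

    // Valid transitions: event -> the set of states it may be fired from.
    static final Map<Event, Set<State>> VALID_FROM = new EnumMap<>(Event.class);
    static {
        VALID_FROM.put(Event.ACCEPT, EnumSet.of(State.PENDING));
        VALID_FROM.put(Event.REJECT, EnumSet.of(State.PENDING));
        VALID_FROM.put(Event.REVOKE, EnumSet.of(State.ACCEPTED));
        VALID_FROM.put(Event.DELETE,
                EnumSet.of(State.PENDING, State.REJECTED, State.REVOKED));
    }

    public static void main(String[] args) {
        // One valid and one invalid probe; a full suite covers them all.
        System.out.println("ACCEPT from PENDING valid: "
                + VALID_FROM.get(Event.ACCEPT).contains(State.PENDING)); // true
        System.out.println("REVOKE from PENDING valid: "
                + VALID_FROM.get(Event.REVOKE).contains(State.PENDING)); // false
    }
}
```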
Other structure-based techniques • Condition Coverage • Every atomic boolean condition in a statement must evaluate to both true and false • Full condition coverage does not imply full decision coverage! (see the sketch below) • Condition/Decision Coverage • A hybrid metric composed of the union of condition coverage and decision coverage
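A minimal sketch of the condition-vs-decision pitfall. The non-short-circuit operator `|` is used so both conditions are always evaluated. The two tests make each of `a` and `b` take both truth values (100% condition coverage), yet the decision is true in both tests, so the else-branch never runs.

```java
// Sketch: full condition coverage without full decision coverage.
public class ConditionVsDecision {
    static String branch(boolean a, boolean b) {
        return (a | b) ? "then-branch" : "else-branch";
    }

    public static void main(String[] args) {
        System.out.println(branch(true, false));  // a=T, b=F -> decision true
        System.out.println(branch(false, true));  // a=F, b=T -> decision true
        // Decision coverage is only 50%: the else-branch is never executed.
    }
}
```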
Other structure-based techniques II • Multiple Condition Coverage • All possible combinations of all boolean conditions in a statement must be evaluated (see the sketch below) • Path Coverage • Whether each of the possible paths in each function has been followed; a path is a unique sequence of branches from the function entry to the exit • Function, Call, LCSAJ, Loop, Race, etc. coverages • For more information, see http://www.bullseye.com/coverage.html
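Continuing the same illustrative decision, multiple condition coverage requires all 2^2 = 4 combinations, which (unlike plain condition coverage above) also exercises both branches.

```java
// Sketch: multiple condition coverage enumerates every combination.
public class MultipleConditionCoverage {
    public static void main(String[] args) {
        boolean[] truthValues = { false, true };
        for (boolean a : truthValues) {
            for (boolean b : truthValues) {
                System.out.printf("a=%b, b=%b -> decision=%b%n", a, b, a | b);
            }
        }
    }
}
```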
Example • RTCA published DO-178B, which requires a minimal structural coverage for aeronautics SW systems based on their criticality level • Level A (catastrophic) – MC/DC, decision, and statement coverage • Level B (hazardous) – decision and statement coverage • Level C (major) – statement coverage • Levels D and E – no structural coverage required
Summary • Testing Principles • Testing shows presence of defects • Exhaustive testing is impossible • Early testing • Defect clustering • Pesticide paradox • Testing is context dependent • Absence-of-errors fallacy • Testing Techniques • Black-box techniques • White-box techniques