
Software Engineering Testing

Based on presentations by Mira Balaban, Department of Computer Science, Ben-Gurion University, and Ian Sommerville: http://www.comp.lancs.ac.uk/computing/resources/IanS/SE7/Presentations/index.html



  1. Software Engineering Testing. Based on presentations by Mira Balaban, Department of Computer Science, Ben-Gurion University; Ian Sommerville: http://www.comp.lancs.ac.uk/computing/resources/IanS/SE7/Presentations/index.html; Dan Berry: http://se.uwaterloo.ca/~dberry/COURSES/software.engr/lectures.pdf/; Glenford J. Myers: The Art of Software Testing, Second Edition. Software Engineering 2012, Department of Computer Science, Ben-Gurion University

  2. Software verification and validation • Verification and validation is intended to show that a system conforms to its specification and meets the requirements of the system customer • Involves checking and review processes and system testing. • System testing involves executing the system with test cases that are derived from the specification of the real data to be processed by the system.

  3. Verification vs. Validation • Verification: "Are we building the product right?" • Tests against a simulation. • The software should conform to its specification. • Validation: "Are we building the right product?" • Tests against a real-world situation. • The software should do what the user really requires.

  4. Verification & Validation Goals • Establish confidence that the software is fit for purpose. • Two principal objectives • The discovery of defects in a system • The assessment of whether or not the system is useful and useable in an operational situation. • This does not mean: • completely free of defects • correct • Rather, it must be good enough for its intended use and the type of use will determine the degree of confidence that is needed.

  5. V & V confidence • Depends on system’s purpose, user expectations and marketing environment • Software function • The level of confidence depends on how critical the software is to an organization. • User expectations • Users may have low expectations of certain kinds of software. • Marketing environment • Getting a product to market early may be more important than finding defects in the program.

  6. Static and dynamic verification • Static verification: concerned with analysis of the static system representation to discover problems. • Software inspections (walkthroughs, peer rating) • May be supplemented by tool-based document and code analysis • Static analysis • Formal verification • Dynamic V & V: concerned with exercising and observing product behaviour, using software testing. • The system is executed with test data and its operational behaviour is observed.

  7. Static verification: Code Inspections • A formalized approach to document reviews. • Intended explicitly for defect detection (not correction). • Defects may be: logical errors, anomalies in the code that might indicate an erroneous condition (e.g. an uninitialized variable), or non-compliance with standards. • Inspections can check conformance with a specification, but not conformance with the customer's real requirements. • Inspections cannot check non-functional characteristics such as performance, usability, etc. • Usually done by a group of programmers. • The comments are directed at the program, not the programmer!

  8. Static verification: Code Inspections • An error checklist for inspections • The checklist is largely language independent. • Checklists sometimes (unfortunately) concentrate more on issues of style than on errors (for example, "Are comments accurate and meaningful?" and "Are if-else, code blocks, and do-while groups aligned?"), and the error checks are too nebulous to be useful (such as "Does the code meet the design requirements?"). • Beneficial side effects: • The programmer usually receives feedback concerning programming style, choice of algorithms, and programming techniques. • The other participants gain in a similar way by being exposed to another programmer's errors and programming style.

  9. Static verification: Code Inspections

  10. Static verification: Code Inspections – example
Data flow errors:
• Will the program, module, or subroutine eventually terminate?
• For example, what happens if NOTFOUND never becomes false?
Comparison errors:
• Does the way in which the compiler evaluates Boolean expressions affect the program?
• The statement below may be acceptable for compilers that end the test as soon as one side of an and is false, but may cause a division-by-zero error with compilers that evaluate both sides:
    if ((x != 0) && ((y / x) > z))
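As a minimal, hypothetical illustration of the data-flow item above (the function and variable names are invented for this sketch, not taken from the slides), this is the kind of loop an inspector would question: if the key is absent, notFound never becomes false and the loop never terminates.

    // Hypothetical inspection target: termination depends entirely on the
    // notFound flag, which is exactly the defect an inspection should flag.
    #include <cstddef>

    int findIndex(const int* data, std::size_t size, int key) {
        bool notFound = true;
        std::size_t i = 0;
        while (notFound) {                      // does this loop always terminate?
            if (i < size && data[i] == key) {
                notFound = false;               // the only exit path
            } else {
                ++i;                            // if key is absent, i grows without bound
            }
        }
        return static_cast<int>(i);
    }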

  11. Static verification: Automated Static Analysis • Static analyzers are software tools for source text processing. • A static analyzer is: • A collection of algorithms and techniques used to analyze source code in order to automatically find bugs. • It parses the program text, tries to discover potentially erroneous conditions, and brings these to the attention of the V & V team. • Very effective as an aid to inspections – a supplement to, but not a replacement for, inspections.

  12. Examples: Static Analyzer Output
Examples from the PVS-Studio static code analyzer for C/C++. Diagnostic messages and the code that triggered them:
• V523 The 'then' statement is equivalent to the 'else' statement:
    if (Condition) {
        BlockStatementA
    } else {
        BlockStatementA
    }
• V501 There are identical sub-expressions 'p_transactionId' to the left and to the right of the '!=' operator:
    p_transactionId != p_transactionId
• V532 Consider inspecting the statement of '*pointer++' pattern. Probably meant: '(*pointer)++' (mpeg2_dec, umc_mpeg2_dec.cpp, line 59):
    static IppTableInitAlloc(Ipp32s *tbl, ...)
    {
      ...
      for (i = 0; i < num_tbl; i++)
      {
        *tbl++;
      }
    }
• Source: http://www.viva64.com/en/a/0069/#ID0EY6BI

  13. Static verification: Formal Verification Methods • Formal methods can be used when a mathematical specification of the system is available. • They form the ultimate static verification technique. • Involve detailed mathematical analysis of the specification. • May develop formal arguments that a program conforms to its mathematical specification. • Model checking. • Predicate abstraction. • Termination analysis. • (Diagram: a model of the system design and a formal specification are given to a model-checking tool, whose output answers whether the model satisfies the specification.)
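As a rough, hypothetical sketch of what checking code against a specification can look like in practice (the function and the property are invented for illustration; naming CBMC is only an example of such a tool), an assertion can state part of the specification, and a code-level model checker such as CBMC can try to prove it for every possible input rather than for a handful of test values:

    // The assertion expresses the (assumed) specification "the result is never
    // negative". A model checker explores all inputs symbolically and reports
    // the single counterexample (x == INT_MIN, where -x overflows); ordinary
    // testing would have to hit that value by luck.
    #include <cassert>
    #include <climits>

    int abs_value(int x) {
        return (x < 0) ? -x : x;
    }

    void check_property(int x) {
        assert(abs_value(x) >= 0);
    }

    int main() {
        check_property(5);      // ordinary tests pass ...
        check_property(-7);     // ... but do not establish the property for all inputs
        return 0;
    }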

  14. Static verification: Arguments for/against formal methods
For:
• Producing a mathematical specification requires a detailed analysis of the requirements, and this is likely to uncover errors.
• Can detect implementation errors before testing, when the program is analyzed alongside the specification.
Against:
• Require specialized notations that cannot be understood by domain experts.
• Very expensive to develop a specification, and even more expensive to show that a program meets that specification.
• It may be possible to reach the same level of confidence in a program more cheaply using other V & V techniques.

  15. Dynamic V & V Software Testing • According to Boehm (1975), • Programmers in large software projects typically spend their time as follows: • 45-50% program checkout • 33% program design • 20% coding

  16. Software testing • Yet, in spite of this checkout expense, delivered "verified" and "validated" code is still unreliable. What is software testing? • It is the only validation technique for non-functional requirements, as the software has to be executed to see how it behaves. • It should be used in conjunction with static verification to provide full V & V coverage.

  17. Two types of Testing • Defect testing • Tests designed to discover system defects. • A successful defect test is one which reveals the presence of defects in a system. • Validation testing • Intended to show that the software meets its requirements. • A successful test is one that shows that a requirement has been properly implemented.

  18. Psychology of Testing (1) • A program is its programmer's baby! • Trying to find errors in one's own program is like trying to find defects in one's own baby. • It is best to have someone other than the programmer do the testing. • The tester must be a highly skilled, experienced professional.

  19. Psychology of Testing (2) • Testing achievements depend a lot on what the goals are. • Myers (1979) says: • If your goal is to show the absence of errors, you will not discover many. • If you are trying to show the program correct, your subconscious will manufacture safe test cases. • If your goal is to show the presence of errors, you will discover a large percentage of them. Testing is the process of executing a program with the intention of finding errors (G. Myers)

  20. Testing • When? Testing along the software development process. • What? What to test? • How? How to conduct the test? • Discussion of integration testing.

  21. When? Testing along software development

  22. The Testing Process

  23. Black-box and White-box Testing • Black-box testing • Also named: closed-box, data-driven, input/output-driven, behavioral testing. • Provide both valid and invalid inputs. • The output is matched with the expected result (from the specifications). • The tester has no knowledge of the internal structure of the program. • It is impossible to find all errors using this approach. • Test all possible types of inputs (including invalid ones such as characters, floats, negative integers, etc.).
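As a minimal, hypothetical sketch of black-box testing (the leap-year function and its specification are invented for illustration), the test cases are derived only from the stated specification and compared against expected outputs; the tester never looks inside the implementation:

    // Specification (assumed): a year is a leap year if it is divisible by 4,
    // except that centuries are leap years only if divisible by 400.
    #include <cassert>

    // Implementation under test; treated as a closed box by the tester.
    bool isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }

    int main() {
        assert(isLeapYear(2004));    // ordinary leap year
        assert(!isLeapYear(1900));   // century not divisible by 400
        assert(isLeapYear(2000));    // century divisible by 400
        assert(!isLeapYear(2023));   // ordinary non-leap year
        assert(!isLeapYear(-1));     // invalid input (assuming the spec treats it as non-leap)
        return 0;
    }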

  24. Black-box and White-box Testing • White-box testing • Also named: open-box, clear-box, glass-box, logic-driven. • Examine the internal structure of the program. • Test all possible paths of control flow. • Capture: • errors of omission – neglected specification; • errors of commission – behaviour not defined by the specification.

  25. Testing Stages • Unit testing – individual components are tested. • Module testing – related collections of dependent components are tested. • Sub-system testing – modules are integrated into sub-systems and tested; the focus here should be on interface testing. • System testing – testing of the system as a whole; testing of emergent properties. • Acceptance testing – testing with customer data to check that it is acceptable. • (The slide's diagram groups these stages into component testing, integration testing, system testing, and user testing.)

  26. Testing Stages • Component testing – unit and module • Testing of individual program components, in isolation from specifications; • Usually the responsibility of the component developer (except sometimes for critical systems); • Tests are derived from the developer’s experience. • Based on program structure – structural tests. • white-box testing

  27. Testing Stages: (cont) • Integration testing: • Testing of groups of components integrated to create a system or sub-system; • The test team has access to the system source code. • The system is tested as components are integrated. • System Testing (Release Testing): • Tests are based on a system specification. • The test team tests the complete system to be delivered. • Acceptance testing: • Tests involving the customer. • Based on customer requirements. • Usually black-box testing.

  28. Component testing (1) • Component or unit testing is the process of testing individual components in isolation. • It is a defect testing process. • Components may be: • Individual functions or methods within an object; • Object classes with several attributes and methods; • Composite components with defined interfaces used to access their functionality.

  29. Component testing (2) • Ideal object class testing: • Complete test coverage of a class involves • Testing all operations associated with an object; • Setting and interrogating all object attributes; • Exercising the object in all possible states. • Inheritance makes it more difficult to design object class tests as the information to be tested is not localized.
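As a minimal, hypothetical sketch of component (unit) testing of an object class (the Counter class is invented for illustration), the test exercises each operation, sets and interrogates the attribute, and involves no other part of the system:

    #include <cassert>

    class Counter {
    public:
        void increment() { ++value_; }
        void reset()     { value_ = 0; }
        int  value() const { return value_; }
    private:
        int value_ = 0;
    };

    int main() {
        Counter c;
        assert(c.value() == 0);   // interrogate the initial attribute value
        c.increment();
        c.increment();
        assert(c.value() == 2);   // exercise an operation and check its effect
        c.reset();
        assert(c.value() == 0);   // exercise the remaining operation
        return 0;
    }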

  30. Integration testing • Involves building a system from its components and testing it for problems that arise from component interactions. • Top-down integration • Develop the skeleton of the system and populate it with components. • Bottom-up integration • Integrate infrastructure components then add functional components. • To simplify error localisation, systems should be incrementally integrated
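As a minimal, hypothetical sketch of top-down integration (the names DataSource, StubDataSource, and sumOfValues are invented for illustration), the higher-level component is tested first while a lower-level component that is not yet integrated is replaced by a stub returning fixed data:

    #include <cassert>
    #include <vector>

    struct DataSource {                       // interface of the lower-level component
        virtual std::vector<int> loadValues() = 0;
        virtual ~DataSource() = default;
    };

    struct StubDataSource : DataSource {      // stub standing in for the real component
        std::vector<int> loadValues() override { return {1, 2, 3}; }
    };

    int sumOfValues(DataSource& source) {     // higher-level component under test
        int sum = 0;
        for (int v : source.loadValues()) sum += v;
        return sum;
    }

    int main() {
        StubDataSource stub;
        assert(sumOfValues(stub) == 6);       // interaction tested before the real source exists
        return 0;
    }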

  31. System/Release testing (1) • Testing a release of a system that will be distributed to customers. • Primary goal: increase the supplier's confidence that the system meets its requirements. • Release testing is usually black-box or functional testing • Based on the system specification only; • Testers do not have knowledge of the system implementation. • Part of release testing may involve testing the emergent properties of a system, such as performance and reliability.

  32. System Testing (1) • Usability Testing • Has each user interface been tailored to the intelligence, educational background, and environmental pressures of the end user? • Are the outputs of the program meaningful, nonabusive, and devoid of computer gibberish? • Security Testing • attempting to devise test cases that subvert the program’s security checks. • For example, get around an operating system’s memory protection mechanism. • You can try to subvert a database management system’s data security mechanisms. • Volume Testing • Heavy volumes of data • If a program is supposed to handle files spanning multiple volumes, enough data is created to cause the program to switch from one volume to another

  33. System Testing (2) • Stress Testing • The tester attempts to stress or load an aspect of the system to the point of failure. • The tester identifies peak load conditions at which the program will fail to handle required processing loads within required time spans. • Stress testing has a different meaning in different industries where it is used. • Stressing the system often causes defects to come to light. • Stressing the system also tests its failure behavior. • Systems should not fail catastrophically. • Particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded.

  34. System Testing (3) • Performance Testing • Determine whether the system meets the stated performance criteria: response times and throughput rates under certain workload and configuration conditions. • E.g., a login request shall be responded to in 1 second or less under a typical daily load of 1000 requests per minute. • Recovery Testing • Programs such as operating systems and database management systems have recovery objectives that state how the system is to recover from programming errors, hardware failures, and data errors. • Programming errors can be purposely injected into a system to determine whether it can recover from them. • Hardware failures such as memory parity errors or I/O device errors can be simulated. • Data errors such as noise on a communications line or an invalid pointer in a database can be created purposely or simulated to analyze the system's reaction.

  35. What? Test case design • Involves designing the test cases (inputs and outputs) used to test the system. • The goal of test case design is to create a set of tests that are effective in validation and defect testing. • Design approaches: • Requirements-based testing (black-box) • Mainly in release testing; • Partition testing • Based on structure of input or output domain; • In component and release testing. • Structural testing (based on code, white-box). • In component testing.

  36. Finding What to Test (1) • Only exhaustive testing can show a program is free from defects. • Exhaustive testing is impossible – leads to combinatorial explosion. • We need to group cases into equivalence classes. • Two test cases are in the same equivalence class if they find the same error. • Try to find a set of test cases with exactly one element of each equivalence class.
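As a minimal, hypothetical sketch of equivalence classes (the classify function and its 0–100 grading rule with a pass mark of 56 are invented for illustration), the input domain splits into classes the program should treat uniformly, and one representative is tested from each class:

    #include <cassert>
    #include <string>

    std::string classify(int grade) {
        if (grade < 0 || grade > 100) return "invalid";
        if (grade >= 56)              return "pass";
        return "fail";
    }

    int main() {
        assert(classify(-5)  == "invalid");  // class: below the valid range
        assert(classify(30)  == "fail");     // class: valid, failing grade
        assert(classify(80)  == "pass");     // class: valid, passing grade
        assert(classify(140) == "invalid");  // class: above the valid range
        return 0;
    }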

  37. Finding What to Test (2) • Each new test case added to the suite should expose at least one (and as many as possible) previously undetectable error. • Each tested thing must appear in at least one, and preferably at most one, test case. • A test case may, and should, test more than one thing. • The hope is to get enough test cases to find all errors – high test coverage. • A certain amount of skill and deviousness is needed to generate test cases.

  38. Finding What to Test (3) • Content of a test case: • describe what is tested; • give the input data; • give the expected output data; • further constraints, if any. http://www.wintestgear.com/products/TCMLite/TCMLite.html

  39. Requirements based testing • A general principle of requirements engineering: requirements should be testable. • A validation testing technique where you consider each requirement and derive a set of tests for that requirement. • Use cases can be a basis for deriving the tests for a system. • They help identify operations between an actor and the system, or internal system operations. • For each operation there is a contract that specifies the operation's input, preconditions, and postconditions. • The contracts specify the test cases.
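As a minimal, hypothetical sketch of deriving a test from an operation contract (the Account class and its withdraw contract are invented for illustration), the contract might state the precondition 0 < amount <= balance and the postcondition that the balance decreases by exactly amount; the test checks both the normal case and a precondition violation:

    #include <cassert>

    class Account {
    public:
        explicit Account(int opening) : balance_(opening) {}
        bool withdraw(int amount) {
            if (amount <= 0 || amount > balance_) return false;  // precondition violated
            balance_ -= amount;
            return true;
        }
        int balance() const { return balance_; }
    private:
        int balance_;
    };

    int main() {
        Account a(100);
        bool ok = a.withdraw(30);
        assert(ok && a.balance() == 70);        // postcondition holds
        bool rejected = !a.withdraw(200);
        assert(rejected && a.balance() == 70);  // precondition violation leaves state unchanged
        return 0;
    }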

  40. Partition based testing (1) • Input data and output results often fall into different classes where all members of a class are related. • Each of these classes is an equivalence partition or domain where the program behaves in an equivalent way for each class member. • Test cases should be chosen from each partition. • Thoroughly test all input options described in the specifications, in all combinations • including that of same input as standard input and as file input, if that is an option. • Test boundaries of input • Test boundaries of output • Test each invalid input
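As a minimal, hypothetical sketch of testing the boundaries of an input partition (the 0–100 validity range is invented for illustration), the chosen values sit exactly on and just outside each partition boundary, where off-by-one defects tend to cluster:

    #include <cassert>

    bool isValidGrade(int grade) { return grade >= 0 && grade <= 100; }

    int main() {
        assert(!isValidGrade(-1));   // just below the lower boundary
        assert(isValidGrade(0));     // on the lower boundary
        assert(isValidGrade(100));   // on the upper boundary
        assert(!isValidGrade(101));  // just above the upper boundary
        return 0;
    }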

  41. Partition based testing (3)

  42. Partition based testing (4) Example: Testing guidelines for sequences: • Test software with sequences which have only a single value. • Use sequences of different sizes in different tests. • Derive tests so that the first, middle and last elements of the sequence are accessed. • Test with sequences of zero length. • Test with sequences of maximal length or nesting – if one exists.
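A minimal, hypothetical application of these sequence guidelines (the maxOf function and its defined result of 0 for an empty sequence are invented for illustration):

    #include <cassert>
    #include <vector>

    int maxOf(const std::vector<int>& s) {
        if (s.empty()) return 0;          // behaviour for the empty sequence (assumed spec)
        int m = s[0];
        for (int v : s) if (v > m) m = v;
        return m;
    }

    int main() {
        assert(maxOf({7}) == 7);          // sequence with a single value
        assert(maxOf({9, 1, 2}) == 9);    // maximum as the first element
        assert(maxOf({1, 9, 2}) == 9);    // maximum as a middle element
        assert(maxOf({1, 2, 9}) == 9);    // maximum as the last element
        assert(maxOf({}) == 0);           // zero-length sequence
        return 0;
    }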

  43. Structural testing – code-based (1) • Sometimes called white-box testing. • Used mainly in component testing. • To distinguish it from functional (requirements-based, "black-box") testing • Testing product functionality against its specification. • Derivation of test cases according to program structure. • Knowledge of the program is used to identify additional test cases. • The objective is to exercise all program statements (not all path combinations).

  44. Structural testing – code-based (2) • Test each conditional branch in each direction. • Test each path at least once, if reasonable. • For loops, test repetition: 0 times; 1 time; a representative number of times; the maximum number of times (if one exists). • For loops – termination verification: try to find invariants that guarantee termination, e.g., show for some loop variable that its value decreases with every round and that its value domain is well founded – no infinite descending chains. • Example: x' = x – i, where x' is the value in the next loop round, i > 0, and x ranges over the natural numbers.
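As a minimal, hypothetical sketch of this loop guidance (the sumFirstN function is invented for illustration), the quantity n - i plays the role of the decreasing loop variable, and the tests cover zero, one, and a representative number of repetitions:

    #include <cassert>

    int sumFirstN(int n) {
        int sum = 0;
        for (int i = 0; i < n; ++i) {   // variant n - i: strictly decreasing, bounded below
            sum += i + 1;
        }
        return sum;
    }

    int main() {
        assert(sumFirstN(0) == 0);      // loop body executed 0 times
        assert(sumFirstN(1) == 1);      // executed exactly once
        assert(sumFirstN(5) == 15);     // representative number of repetitions
        return 0;
    }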

  45. Structural testing – code-based (3) • Test sensitivities to particular values, e.g., • division by 0, • overflow, • underflow, • subscript boundaries, • null pointers. • when you have a large range, use boundary, typical, and sensitive values.
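As a minimal, hypothetical sketch of testing sensitive values (the safeDivide and average helpers are invented for illustration), the tests probe division by zero, a null pointer, and an empty subscript range alongside a typical value:

    #include <cassert>
    #include <cstddef>

    bool safeDivide(int a, int b, int* result) {
        if (b == 0) return false;                     // guard the division-by-zero case
        *result = a / b;
        return true;
    }

    bool average(const int* data, std::size_t n, int* result) {
        if (data == nullptr || n == 0) return false;  // null pointer / empty range
        long sum = 0;
        for (std::size_t i = 0; i < n; ++i) sum += data[i];
        *result = static_cast<int>(sum / static_cast<long>(n));
        return true;
    }

    int main() {
        int r = 0;
        assert(!safeDivide(7, 0, &r));            // division by zero rejected
        bool ok = safeDivide(7, 2, &r);
        assert(ok && r == 3);                     // typical value
        assert(!average(nullptr, 3, &r));         // null pointer rejected
        int v[] = {2, 4, 6};
        bool avgOk = average(v, 3, &r);
        assert(avgOk && r == 4);                  // small, boundary-adjacent case
        return 0;
    }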

  46. Structural testing – code-based (4) • Thoroughly test • all input options described in the specifications, in all combinations, • including that of same input as standard input and as file input, if that is an option. • Test “all command line options” in “all combinations”. • In the above, “in all combinations” must be taken with a grain of salt, as it could lead to combinatorial explosion.

  47. Structural testing – code-based (5) • If an analysis of the code shows that some options are truly independent, then the number of combinations can be reduced. • The risk: a mistake could be made in determining independence: The bug could be the non-independence! • Solution: verify the independence assumption with some test cases and then assume it.

  48. Error testing • Test all specified error conditions. • Test all algorithm implied errors. • Try to crash the program.

  49. Regression Testing • When you are making an enhancement: • you should run all old tests, • even ones that are invalid for their original purpose; • just provide new expected results. • An expensive process that requires automation. • Risk analysis can be applied to reduce the number of tests.

  50. Testing guidelines – (Whittaker 02) • Testing guidelines - hints for the testing team to help them choose tests that will reveal defects in the system • Choose inputs that force the system to generate all error messages; • Design inputs that cause buffers to overflow; • Repeat the same input or input series several times; • Force invalid outputs to be generated; • Force computation results to be too large or too small.
