Introduction to Software Testing • CSC 450 • Slides adapted from various sources including Software Engineering: A Practitioner’s Approach, 6/e by R.S. Pressman and Associates Inc.
What is software testing? “Testing is the process of exercising a program with the intent of finding errors.” “Testing is the process of exercising a program with the intent of proving its correctness.”
What is software testing? “Observing the execution of a software system to validate whether it behaves as intended and identify potential malfunctions.” Antonia Bertolino
What does testing show? • errors • requirements conformance • performance • an indication of quality
What does testing NOT show? Bugs remaining in the code! …
[Control-flow graph with nodes 1–5 and edges a–e, containing a loop that executes up to 20 times] Why is testing so difficult? • How many paths are there through this loop? • How long would it take to execute all paths @ 1 test per millisecond (10^-3 s)?
A Testing Scenario • There are about 10^14 possible paths • How long would it take to execute all paths?
A Testing Scenario • How long would it take to execute 10^14 possible paths @ 1 path per 10^-3 seconds? • 1 year = 31,536,000 seconds ≈ 32 × 10^9 milliseconds • Total time = 10^14 / (32 × 10^9) ≈ 10^5 / 32 years • About 3,170 years @ one test/millisecond • About 3 years @ one test/microsecond • About a day @ one test/nanosecond
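A rough sketch of this arithmetic follows (an illustrative Java snippet, not from the slides; it assumes, as one reading of the flow graph, 5 distinct paths per loop iteration and up to 20 iterations):
// Back-of-the-envelope estimate of exhaustive path testing for the loop example.
// Assumption: each iteration offers 5 distinct paths and the loop runs 1..20 times,
// so the number of paths is 5^1 + 5^2 + ... + 5^20, roughly 10^14.
public class ExhaustivePathEstimate {
    public static void main(String[] args) {
        double paths = 0;
        for (int i = 1; i <= 20; i++) {
            paths += Math.pow(5, i);          // total ~ 1.2e14
        }
        double msPerYear = 365.0 * 24 * 60 * 60 * 1000;   // ~ 3.15e10 milliseconds

        System.out.printf("paths ~ %.2e%n", paths);
        System.out.printf("years @ 1 test/ms: %.0f%n", paths / msPerYear);           // ~ 3,800
        System.out.printf("years @ 1 test/us: %.2f%n", paths / (msPerYear * 1e3));   // ~ 3.8
        System.out.printf("days  @ 1 test/ns: %.2f%n", 365 * paths / (msPerYear * 1e6)); // ~ 1.4
        // The slide rounds the path count down to 10^14, which gives about 3,170 years.
    }
}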
Why is testing so difficult? • Because exhaustive testing is impossible • Exhaustive testing means: • Testing a program with all combinations of input values and preconditions for each element of the software under test, and/or • Testing all paths through a program.
Why is testing so difficult? [Flow graph of a loop, for (i = 0; i < n; i++), whose body contains a conditional statement] • How long would it take to execute all paths @ 1 test per millisecond (10^-3 s)?
Terms! Terms! Terms! Anomaly BUG Failure Error Defect Fault Exception
Testing Terminology • Fault/defect • An incorrect step, process, condition, or data definition in a computer program. • Software problem found after release (Pressman) • Error • An incorrect internal state resulting from a fault. • A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition (ISO). • Software problem found before release (Pressman) • Failure • The inability of a system or component to perform its required functions within specified performance requirements (IEEE). • Failures are caused by faults, but not all faults cause failures.
Testing Terminology • Anomaly • Anything observed in the documentation or operation of software that deviates from expectations based on previously verified software products or reference documents (IEEE). • Bug • Something the software does that it is not supposed to do • Something the software doesn't do that it is supposed to do • Exception • An event that causes suspension of normal program execution; types include addressing exception, data exception, operation exception, overflow exception, protection exception, underflow exception (IEEE).
Testing Terminology: Testing vs. Debugging • Testing • Evaluating software to determine conformance to some objective. • Debugging • Finding a fault, given a failure.
Testing Terminology: The RIP Fault/Failure Model Three conditions are necessary for failures to be observable. • Reachability: The location of the fault must be reached. • Infection: After executing the location, the state of the program must be incorrect. • Propagation: The infected state must propagate to cause incorrect program output.
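A hedged illustration of the three RIP conditions (a variant of a common textbook example; the method and inputs are chosen for illustration, not taken from the slides):
// Intended behaviour: count the zeroes in the array.
// Fault: the loop should start at i = 0, not i = 1.
public class NumZero {
    static int numZero(int[] arr) {
        int count = 0;
        for (int i = 1; i < arr.length; i++) {   // faulty location
            if (arr[i] == 0) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Reachability: both calls below execute the faulty loop header.
        // Infection: i starts at 1 instead of 0, so the internal state differs
        //            from that of the correct program in both calls.
        // Propagation: the infected state reaches the output only when arr[0] == 0,
        //            so only the second call produces a visible failure.
        System.out.println(numZero(new int[] {2, 7, 0}));  // prints 1: correct output
        System.out.println(numZero(new int[] {0, 7, 2}));  // prints 0: expected 1, a failure
    }
}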
Testing Terminology • Validation • Does software meet its goals? Are we building the right product? User-centric! • Verification • Are we building the product right? E.g., does the code reflect the design? Developer-centric! • Name for the artifact being tested • Implementation under test (IUT) • Method under test (MUT), object under test (OUT) • Class/component under test (CUT), etc.
Testing Terminology: Test Plan • A test plan specifies how we will demonstrate that the software is free of faults and behaves according to the requirements specification • A test plan breaks the testing process into specific tests, addressing specific data items and values • Each test has a test specification that documents the purpose of the test • If a test is to be accomplished by a series of smaller tests, the test specification describes the relationship between the smaller and the larger tests • The test specification must describe the conditions that indicate when the test is complete and a means for evaluating the results
Testing Terminology: Test Oracle • A test oracle is the set of predicted results for a set of tests, and is used to determine the success of testing • Test oracles are extremely difficult to create and are ideally created from the requirements specification
Testing Terminology: Test Case • A test case is a specification of • The pretest state of the IUT • Prefix values: values needed to put software into state required before test execution. • A set of inputs to the system • Postfix values: values that need to be sent to the software after test execution. • The expected results
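A minimal sketch of how these parts might be written down for one test case (the IUT here is a hypothetical stack-like class; all values are illustrative only):
// One test case for a hypothetical stack-like IUT, recorded as data.
public class PopTestCase {
    // Prefix values: calls that put the IUT into the required pretest state.
    static final String[] PREFIX   = { "push(5)", "push(7)" };
    // Test input: the call being exercised.
    static final String   INPUT    = "pop()";
    // Expected result of the tested call.
    static final String   EXPECTED = "7";
    // Postfix values: calls sent to the IUT after the test, e.g. to observe state or clean up.
    static final String[] POSTFIX  = { "size()", "clear()" };
}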
Testing Terminology • Test Point • A specific value for test case input and state variables. • Domain • The set of values that the input or state variables of the IUT may take. • Heuristics for test point selection • Equivalence classes – one value in the set represents all values in the set. • Partition testing techniques • Boundary value analysis • Special values testing • Test suite • A collection of test cases.
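A small sketch of these heuristics for a hypothetical method String grade(int score) that maps 0..100 to a letter grade (the method, its domain, and its boundaries are assumptions made for illustration):
// Candidate test points for the hypothetical grade(int score) method.
public class GradeTestPoints {
    // Equivalence classes: one value stands in for every value in its class.
    //   -5: invalid (below domain), 30: failing (0..59), 75: passing (60..100), 140: invalid (above domain)
    static final int[] EQUIVALENCE_CLASS_POINTS = { -5, 30, 75, 140 };

    // Boundary value analysis: values at and on either side of each partition boundary.
    static final int[] BOUNDARY_POINTS = { -1, 0, 1, 59, 60, 61, 99, 100, 101 };

    // Special values testing: "interesting" extremes of the input type.
    static final int[] SPECIAL_VALUES = { Integer.MIN_VALUE, Integer.MAX_VALUE };
}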
Testing Terminology • Fault model • A description of where faults are likely to occur in a program. • Test strategy • An algorithm or heuristic for creating test cases from a representation, an implementation, or a test model. • Test model • A description of relationships between elements of a representation or implementation. • Test design • The process of creating a test suite using a test strategy. • Test effectiveness: the relative ability of a testing strategy to find bugs. • Test efficiency: the relative cost of finding bugs.
Testing Terminology: Testing Strategies • Responsibility-based test design • Behavioural, functional, black-box, etc. testing • Implementation-based test design • Relies on source code, e.g., white-box testing • Fault-based testing
A Classification of Testing: The V Model • Requirements Analysis ↔ Acceptance Test • Architectural Design ↔ System Test • Subsystem Design ↔ Integration Test • Detailed Design ↔ Module Test • Implementation ↔ Unit Test
A Classification of Testing • Graphs • Logical expressions • Input domain characterizations • Syntactic descriptions
Test-Case Exercise • UML class model for Figure Hierarchy
The Polygon Class
abstract class Polygon {
   abstract void draw(int r, int g, int b); /* color closed area */
   abstract void erase();                   /* set to background rgb */
   abstract float area();                   /* return area */
   abstract float perimeter();              /* return sum of sides */
   abstract Point center();                 /* return centroid pixel */
}
The Triangle Class
class Triangle extends Polygon {
   public Triangle(LineSegment a, LineSegment b, LineSegment c) {…}
   public void setA(LineSegment a) {…}
   public void setB(LineSegment b) {…}
   public void setC(LineSegment c) {…}
   public LineSegment getA() {…}
   public LineSegment getB() {…}
   public LineSegment getC() {…}
   public boolean is_isoceles() {…}
   public boolean is_scalene() {…}
   public boolean is_equilateral() {…}
   public void draw(int r, int g, int b) {…}
   public void erase() {…}
   public float area() {…}
   public float perimeter() {…}
   public Point center() {…}
}
The LineSegment Class
class LineSegment extends Figure {
   public LineSegment(Point x1, Point y1, Point x2, Point y2) {…}
   public void setx1(Point x1) {…}
   public void sety1(Point y1) {…}
   public void setx2(Point x2) {…}
   public void sety2(Point y2) {…}
   public Point getx1() {…}
   public Point gety1() {…}
   public Point getx2() {…}
   public Point gety2() {…}
}
Test-Case Exercise • Specify as many test cases as you can for the Triangle class defined above. • The class is used to determine if a triangle is: isosceles, equilateral or scalene. • A triangle is isosceles if two of its sides are equal. • A triangle is equilateral if all its sides are equal. • A triangle is scalene if its sides are of different sizes.
Test-Case Exercise • A valid triangle must meet two conditions: • No side may have a length of zero. • Each side must be smaller than the sum of all sides divided by two. • i.e., given s = (a + b + c)/2, then a < s, b < s, c < s must hold. • Stated differently, a < s && b < s && c < s is equivalent to a < b + c, b < a + c and c < a + b.
Test-Case Exercise • Specify as many test cases as you can for the Triangle class defined above.
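One possible starting point is sketched below (driving the class by three side lengths is an assumption made for illustration; the class as given is constructed from LineSegments, and the expected behaviour for invalid triangles is left open by the exercise):
// Candidate test cases recorded as (a, b, c, expected outcome).
public class TriangleTestCases {
    static final String[][] CASES = {
        { "3", "3", "3",  "equilateral" },
        { "3", "3", "4",  "isosceles" },
        { "3", "4", "3",  "isosceles (equal pair in a different position)" },
        { "3", "4", "5",  "scalene" },
        { "0", "3", "3",  "invalid: zero-length side" },
        { "1", "2", "3",  "invalid boundary case: c is not less than a + b" },
        { "1", "2", "10", "invalid: violates a, b, c < s" },
    };
}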
Test Case Design Exercise 2: The C++ IntSet Class • Design test cases to test the methods of the IntSet class (ignore the constructor and destructor). • The class performs operations on a single set – duplicates are not allowed. • If add(val) is called and val is already in the set, then the Duplicate exception is thrown.
class IntSet {
public: // operations on a set of integers
   IntSet();                /* constructor */
   ~IntSet();               /* destructor */
   IntSet& add (int);       /* add a member */
   IntSet& remove (int);    /* remove a member */
   IntSet& clear ();        /* remove all members */
   int isMember (int arg);  /* Is arg a member? */
   int extent ();           /* Number of elements */
   int isEmpty ();          /* Empty or not */
};
Who Tests the Software? • The developer: understands the system, but will test "gently" and is driven by "delivery". • The independent tester: must learn about the system, but will attempt to break it, and is driven by quality.
Testing Strategy • We begin by ‘testing-in-the-small’ and move toward ‘testing-in-the-large’ • For conventional software • The module (component) is our initial focus • Integration of modules follows • For OO software • “Testing in the small” focuses on the OO class • A class has attributes and operations, and implies communication and collaboration
Unit Testing [Diagram: the software engineer derives test cases for the module to be tested, exercising its interface, local data structures, boundary conditions, independent paths, and error-handling paths, and evaluating the results] • The units comprising a system are individually tested • The code is examined for faults in algorithms, data and syntax • A set of test cases is formulated, input, and the results are evaluated • The module being tested should be reviewed in the context of the requirements specification
Unit Test Environment [Diagram: a driver supplies test cases to the Module under test (interface, local data structures, boundary conditions, independent paths, error-handling paths); stubs stand in for the modules it calls, and the RESULTS are collected]
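A minimal sketch of a driver and a stub for a hypothetical module (every name below is an assumption made for illustration, not part of the slides):
// Hypothetical module under test: computes a shipping cost using a rate service.
interface RateService {                               // dependency the module calls
    double ratePerKg(String destination);
}

class ShippingCalculator {                            // the module under test
    private final RateService rates;
    ShippingCalculator(RateService rates) { this.rates = rates; }
    double cost(String destination, double kg) {
        return rates.ratePerKg(destination) * kg;
    }
}

// Stub: stands in for the real (perhaps unfinished) RateService with a canned answer.
class RateServiceStub implements RateService {
    public double ratePerKg(String destination) { return 2.5; }
}

// Driver: feeds a test case to the module and reports the result.
public class ShippingCalculatorDriver {
    public static void main(String[] args) {
        ShippingCalculator calc = new ShippingCalculator(new RateServiceStub());
        double result = calc.cost("anywhere", 4.0);
        System.out.println(result == 10.0 ? "PASS" : "FAIL: got " + result);
    }
}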
What is JUnit? • JUnit is an open source unit testing framework for Java created by Kent Beck and Erich Gamma. • JUnit features include: • Assertions for testing expected results • Test fixtures for sharing common test data • Test suites for easily organizing and running tests • Graphical and textual test runners
JUnit Terminology • Fixture: a set of objects against which tests are run • A test fixture is ideal when two or more tests are to be executed for a common set of objects; in these scenarios, a test fixture avoids duplicating the initialization tasks before each test and the cleanup activities after each test • setUp: a method which sets up the fixture, called before each test is executed • tearDown: a method which tears down the fixture, called after each test is executed • Test Case: a class which defines the fixture to run multiple tests • create a subclass of TestCase • add an instance variable for each part of the fixture • override setUp() to initialize the variables and allocate resources • override tearDown() to release any permanent storage, etc. • Test Suite: a collection of test cases.
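A small JUnit 3-style sketch tying these terms together (the class under test is just java.util.ArrayList; the fixture, the setUp/tearDown overrides, and the suite comment follow the JUnit 3 API described above):
import java.util.ArrayList;
import java.util.List;
import junit.framework.TestCase;

// A test case: subclass TestCase, hold the fixture in instance variables,
// and let setUp()/tearDown() run before and after each test method.
public class NameListTest extends TestCase {
    private List<String> names;                   // the fixture

    protected void setUp() {                      // called before each test
        names = new ArrayList<String>();
        names.add("ada");
        names.add("grace");
    }

    protected void tearDown() {                   // called after each test
        names = null;                             // release the fixture
    }

    public void testSizeAfterSetUp() {
        assertEquals(2, names.size());            // assertion on the expected result
    }

    public void testRemove() {
        names.remove("ada");
        assertFalse(names.contains("ada"));
    }

    // A test suite groups test cases, e.g.:
    // public static junit.framework.Test suite() {
    //     return new junit.framework.TestSuite(NameListTest.class);
    // }
}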