
Introduction to Software Testing






Presentation Transcript


  1. Introduction to Software Testing • CSC 450 • Slides adapted from various sources including Software Engineering: A Practitioner's Approach, 6/e by R.S. Pressman and Associates Inc.

  2. What is software testing? • "Testing is the process of exercising a program with the intent of finding errors." • "Testing is the process of exercising a program with the intent of proving its correctness."

  3. What is software testing? “Observing the execution of a software system to validate whether it behaves as intended and identify potential malfunctions.” Antonia Bertolino

  4. What does testing show?
  • Errors
  • Requirements conformance
  • Performance
  • An indication of quality

  5. What does testing NOT show? Bugs remaining in the code! …

  6. Why is testing so difficult?
  [Flow graph: nodes a–e with branches 1–5 inside a loop executed up to 20 times (loop <= 20)]
  • How many paths are there through this loop?
  • How long would it take to execute all paths at 1 test per millisecond (10^-3 s)?

  7. A Testing Scenario
  [Flow graph: nodes a–e with branches 1–5 inside a loop executed up to 20 times (loop <= 20)]
  • There are about 10^14 possible paths through this graph.
  • How long would it take to execute all paths?

  8. A Testing Scenario
  • How long would it take to execute 10^14 possible paths at 1 path per 10^-3 seconds?
  • 1 year = 31,536,000 seconds ≈ 32 × 10^9 milliseconds
  • Total time = 10^14 / (32 × 10^9) ≈ 10^5 / 32 years
  • About 3,170 years at one test per millisecond
  • About 3 years at one test per microsecond
  • About 1 day at one test per nanosecond
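The arithmetic on this slide can be reproduced with a short calculation. The sketch below assumes the classic version of the flow graph (5 branch choices per iteration, the loop executed anywhere from 1 to 20 times, so the path count is 5^1 + 5^2 + … + 5^20); the class name and output format are illustrative only.

```java
// Sketch: count the paths through a loop with 5 branch choices per
// iteration, executed 1 to 20 times, then estimate how long exhaustive
// path testing would take at various test execution speeds.
public class PathCount {
    public static void main(String[] args) {
        double paths = 0;
        for (int iterations = 1; iterations <= 20; iterations++) {
            paths += Math.pow(5, iterations); // 5 branch choices each time around
        }
        System.out.printf("paths ~= %.3e%n", paths); // on the order of 10^14

        double secondsPerYear = 31_536_000.0;
        double yearsAtMs = paths * 1e-3 / secondsPerYear; // 1 test per millisecond
        double yearsAtUs = paths * 1e-6 / secondsPerYear; // 1 test per microsecond
        double daysAtNs  = paths * 1e-9 / 86_400.0;       // 1 test per nanosecond
        System.out.printf("~%.0f years at 1 test/ms%n", yearsAtMs);
        System.out.printf("~%.1f years at 1 test/us%n", yearsAtUs);
        System.out.printf("~%.1f days at 1 test/ns%n", daysAtNs);
    }
}
```

With these assumptions the totals land near the slide's figures: a few thousand years at a millisecond per test, a few years at a microsecond, and roughly a day at a nanosecond.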

  9. Why is testing so difficult?
  [Flow graph: nodes a–e with branches 1–5 inside a loop executed up to 20 times (loop <= 20)]
  • Because exhaustive testing is impossible.
  • Exhaustive testing means:
    • Testing a program with all combinations of input values and preconditions for each element of the software under test, and/or
    • Testing all paths through a program.

  10. Why is testing so difficult?
  [Flow graph: a loop header feeding a conditional statement]
  • Looping: for (i = 0; i < n; i++) etc.
  • How long would it take to execute all paths at 1 test per millisecond (10^-3 s)?

  11. Terms! Terms! Terms!
  Anomaly • Bug • Failure • Error • Defect • Fault • Exception

  12. Testing Terminology
  • Fault/defect:
    • An incorrect step, process, condition, or data definition in a computer program.
    • A software problem found after release (Pressman).
  • Error:
    • An incorrect internal state resulting from a fault.
    • A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition (ISO).
    • A software problem found before release (Pressman).
  • Failure:
    • The inability of a system or component to perform its required functions within specified performance requirements (IEEE).
    • Failures are caused by faults, but not all faults cause failures.

  13. Testing Terminology
  • Anomaly: anything observed in the documentation or operation of software that deviates from expectations based on previously verified software products or reference documents (IEEE).
  • Bug:
    • Something the software does that it is not supposed to do, or
    • Something the software doesn't do that it is supposed to.
  • Exception: an event that causes suspension of normal program execution; types include addressing exception, data exception, operation exception, overflow exception, protection exception, and underflow exception (IEEE).

  14. Testing Terminology: Testing vs. Debugging
  • Testing: evaluating software to determine conformance to some objective.
  • Debugging: finding a fault, given a failure.

  15. Testing Terminology: The RIP Fault/Failure Model
  Three conditions are necessary for a failure to be observable:
  • Reachability: the location of the fault must be reached.
  • Infection: after executing the location, the state of the program must be incorrect.
  • Propagation: the infected state must propagate to cause incorrect program output.
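The three RIP conditions can be made concrete with the well-known numZero example, in which a fault is seeded on purpose (the loop starts at index 1 instead of 0). This standalone sketch is not from the slides; the class name is illustrative.

```java
// Sketch of the RIP model: a seeded fault that is always reached,
// but only sometimes infects the state and propagates to the output.
public class RipDemo {
    // Intended behavior: count the zeros in x.
    // FAULT: the loop starts at i = 1, so x[0] is never examined.
    static int numZero(int[] x) {
        int count = 0;
        for (int i = 1; i < x.length; i++) { // should be i = 0
            if (x[i] == 0) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Reachability: both calls execute the faulty loop.
        // No propagation: x[0] != 0, so skipping it still yields the right count.
        System.out.println(numZero(new int[]{2, 7, 0})); // prints 1 (correct; fault hidden)
        // Propagation: x[0] == 0, so the infected state reaches the output.
        System.out.println(numZero(new int[]{0, 7, 2})); // prints 0 (should be 1; a failure)
    }
}
```

This is why "failures are caused by faults, but not all faults cause failures": the first test case executes the fault yet passes.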

  16. Testing Terminology
  • Validation: does the software meet its goals? Are we building the right product? User-centric!
  • Verification: are we building the product right? E.g., does the code reflect the design? Developer-centric!
  • Names for the artifact being tested:
    • Implementation under test (IUT)
    • Method under test (MUT), object under test (OUT)
    • Class/component under test (CUT), etc.

  17. Testing Terminology: Test Plan
  • A test plan specifies how we will demonstrate that the software is free of faults and behaves according to the requirements specification.
  • A test plan breaks the testing process into specific tests, addressing specific data items and values.
  • Each test has a test specification that documents the purpose of the test.
  • If a test is to be accomplished by a series of smaller tests, the test specification describes the relationship between the smaller and the larger tests.
  • The test specification must describe the conditions that indicate when the test is complete and a means for evaluating the results.

  18. Testing Terminology: Test Oracle
  • A test oracle is the set of predicted results for a set of tests, and is used to determine the success of testing.
  • Test oracles are extremely difficult to create and are ideally derived from the requirements specification.

  19. Testing Terminology: Test Case
  A test case is a specification of:
  • The pretest state of the IUT
  • Prefix values: values needed to put the software into the state required before test execution
  • A set of inputs to the system
  • Postfix values: values that need to be sent to the software after test execution
  • The expected results

  20. Testing Terminology
  • Test point: a specific value for test case input and state variables.
  • Domain: the set of values that the input or state variables of the IUT may take.
  • Heuristics for test point selection:
    • Equivalence classes – one value in a set represents all values in the set
    • Partition testing techniques
    • Boundary value analysis
    • Special values testing
  • Test suite: a collection of test cases.
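As a sketch of the equivalence-class and boundary-value heuristics, suppose the IUT accepts an integer month whose valid domain is 1..12 (a hypothetical input, not from the slides). One representative per equivalence class plus points at and around each boundary gives a small, focused set of test points:

```java
// Hypothetical test-point selection for an input with valid domain 1..12.
public class TestPoints {
    static boolean isValidMonth(int m) {
        return m >= 1 && m <= 12;
    }

    public static void main(String[] args) {
        // Equivalence classes: below the domain, inside it, above it.
        // One representative value stands in for every value in its class.
        int belowClass = -5;  // invalid: below the domain
        int validClass = 6;   // valid: inside the domain
        int aboveClass = 40;  // invalid: above the domain
        System.out.println(isValidMonth(belowClass)); // false
        System.out.println(isValidMonth(validClass)); // true
        System.out.println(isValidMonth(aboveClass)); // false

        // Boundary value analysis: test at and on either side of each edge,
        // where off-by-one faults in the comparison operators tend to hide.
        int[] boundaries = {0, 1, 2, 11, 12, 13};
        for (int b : boundaries) {
            System.out.println(b + " -> " + isValidMonth(b));
        }
    }
}
```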

  21. Testing Terminology
  • Fault model: a description of where faults are likely to occur in a program.
  • Test strategy: an algorithm or heuristic for creating test cases from a representation, an implementation, or a test model.
  • Test model: a description of relationships between elements of a representation or implementation.
  • Test design: the process of creating a test suite using a test strategy.
  • Test effectiveness: the relative ability of a testing strategy to find bugs.
  • Test efficiency: the relative cost of finding bugs.

  22. Testing Terminology: Testing Strategies
  • Responsibility-based test design: behavioural, functional, black-box testing, etc.
  • Implementation-based test design: relies on source code, e.g., white-box testing.
  • Fault-based testing

  23. A Classification of Testing: The V Model
  Each development phase on the left is paired with the test level that validates it on the right:
  • Requirements Analysis ↔ Acceptance Test
  • Architectural Design ↔ System Test
  • Subsystem Design ↔ Integration Test
  • Detailed Design ↔ Module Test
  • Implementation ↔ Unit Test

  24. A Classification of Testing
  • Graphs
  • Logical expressions
  • Input domain characterizations
  • Syntactic descriptions

  25. Test-Case Exercise
  [Figure: UML class model for the Figure hierarchy]

  26. The Polygon Class
  abstract class Polygon {
      abstract void draw(int r, int g, int b); /* color closed area */
      abstract void erase();                   /* set to background rgb */
      abstract float area();                   /* return area */
      abstract float perimeter();              /* return sum of sides */
      abstract Point center();                 /* return centroid pixel */
  }

  27. The Triangle Class
  class Triangle extends Polygon {
      public Triangle(LineSegment a, LineSegment b, LineSegment c) {…}
      public void setA(LineSegment a) {…}
      public void setB(LineSegment b) {…}
      public void setC(LineSegment c) {…}
      public LineSegment getA() {…}
      public LineSegment getB() {…}
      public LineSegment getC() {…}
      public boolean is_isoceles() {…}
      public boolean is_scalene() {…}
      public boolean is_equilateral() {…}
      public void draw(int r, int g, int b) {…}
      public void erase() {…}
      public float area() {…}
      public float perimeter() {…}
      public Point center() {…}
  }

  28. The LineSegment Class
  class LineSegment extends Figure {
      public LineSegment(Point x1, Point y1, Point x2, Point y2) {…}
      public void setx1(Point x1) {…}
      public void sety1(Point y1) {…}
      public void setx2(Point x2) {…}
      public void sety2(Point y2) {…}
      public Point getx1() {…}
      public Point gety1() {…}
      public Point getx2() {…}
      public Point gety2() {…}
  }

  29. Test-Case Exercise
  • Specify as many test cases as you can for the Triangle class defined above.
  • The class is used to determine if a triangle is isosceles, equilateral, or scalene.
    • A triangle is isosceles if two of its sides are equal.
    • A triangle is equilateral if all its sides are equal.
    • A triangle is scalene if its sides are all of different sizes.

  30. Test-Case Exercise
  A valid triangle must meet two conditions:
  • No side may have a length of zero.
  • Each side must be smaller than the sum of all sides divided by two.
    • i.e., given s = (a + b + c)/2, then a < s, b < s, c < s must hold.
    • Stated differently, a < s && b < s && c < s is equivalent to a < b + c, b < a + c, and c < a + b.
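The validity rules and the classification from the exercise can be sketched as a standalone class. Unlike the Triangle class on the slides, this hypothetical version takes side lengths directly rather than LineSegment objects, which makes the rules easy to exercise with concrete test cases:

```java
// Standalone sketch of the exercise's triangle rules:
// validity (no zero side; each side < half the perimeter) and
// classification as equilateral, isosceles, or scalene.
public class TriangleCheck {
    static boolean isValid(float a, float b, float c) {
        float s = (a + b + c) / 2; // each side < s is the same as: side < sum of the other two
        return a > 0 && b > 0 && c > 0 && a < s && b < s && c < s;
    }

    static String classify(float a, float b, float c) {
        if (!isValid(a, b, c)) return "not a triangle";
        if (a == b && b == c) return "equilateral";
        if (a == b || b == c || a == c) return "isosceles";
        return "scalene";
    }

    public static void main(String[] args) {
        System.out.println(classify(3, 3, 3)); // equilateral
        System.out.println(classify(3, 3, 5)); // isosceles
        System.out.println(classify(3, 4, 5)); // scalene
        System.out.println(classify(1, 2, 3)); // not a triangle (degenerate: 3 = 1 + 2)
        System.out.println(classify(0, 4, 4)); // not a triangle (zero-length side)
    }
}
```

The five calls in main double as a starter test suite: one per classification, plus one boundary case (a degenerate "flat" triangle) and one zero-side case.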

  31. Test-Case Exercise • Specify as many test cases as you can for the Triangle class defined above.

  32–35. Solution To Test Case Exercise
  [Solution tables shown on slides 32–35 are not included in the transcript.]

  36. Test Case Design Exercise 2: The C++ IntSet Class
  • Design test cases to test the methods of the IntSet class (ignore the constructor and destructor).
  • The class performs operations on a single set – duplicates are not allowed.
  • If add(val) is called and val is already in the set, then the Duplicate exception is thrown.
  class IntSet {
  public:
      // operations on a set of integers
      IntSet();               /* constructor */
      ~IntSet();              /* destructor */
      IntSet& add(int);       /* add a member */
      IntSet& remove(int);    /* remove a member */
      IntSet& clear();        /* remove all members */
      int isMember(int arg);  /* is arg a member? */
      int extent();           /* number of elements */
      int isEmpty();          /* empty or not */
  };
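The exercise is stated in C++, but for consistency with the rest of the deck here is a minimal Java stand-in for the same interface, together with candidate test cases. The slide names a Duplicate exception without defining it, so a RuntimeException subclass is assumed here:

```java
import java.util.HashSet;

// Minimal Java stand-in for the exercise's C++ IntSet, plus candidate
// test cases covering membership, extent, emptiness, and duplicates.
public class IntSetDemo {
    static class Duplicate extends RuntimeException {}

    static class IntSet {
        private final HashSet<Integer> members = new HashSet<>();
        IntSet add(int v) {
            if (!members.add(v)) throw new Duplicate(); // reject duplicates
            return this;
        }
        IntSet remove(int v) { members.remove(v); return this; }
        IntSet clear() { members.clear(); return this; }
        boolean isMember(int v) { return members.contains(v); }
        int extent() { return members.size(); }
        boolean isEmpty() { return members.isEmpty(); }
    }

    public static void main(String[] args) {
        IntSet s = new IntSet();
        // Test: a new set is empty.
        System.out.println(s.isEmpty()); // true
        // Test: add establishes membership and grows the extent.
        s.add(3).add(7);
        System.out.println(s.isMember(3) + " " + s.extent()); // true 2
        // Test: adding an existing member throws Duplicate.
        try {
            s.add(3);
            System.out.println("no exception (fail)");
        } catch (Duplicate d) {
            System.out.println("Duplicate thrown");
        }
        // Test: remove and clear restore emptiness.
        s.remove(7);
        s.clear();
        System.out.println(s.isEmpty()); // true
    }
}
```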

  37. Who Tests the Software?
  • Developer: understands the system, but will test "gently" and is driven by "delivery".
  • Independent tester: must learn about the system, but will attempt to break it and is driven by quality.

  38. Testing Strategy
  • We begin by 'testing-in-the-small' and move toward 'testing-in-the-large'.
  • For conventional software:
    • The module (component) is our initial focus.
    • Integration of modules follows.
  • For OO software:
    • "Testing in the small" means testing the OO class.
    • A class has attributes and operations, and implies communication and collaboration.

  39. JUnit Overview

  40. Unit Testing
  [Figure: a module to be tested – its interface, local data structures, boundary conditions, independent paths, and error handling paths – exercised by a software engineer using test cases]
  • The units comprising a system are individually tested.
  • The code is examined for faults in algorithms, data, and syntax.
  • A set of test cases is formulated and input, and the results are evaluated.
  • The module being tested should be reviewed in the context of the requirements specification.

  41. Unit Test Environment
  [Figure: a driver feeds test cases to the module under test; stubs stand in for the modules it calls; results are collected. The module's interface, local data structures, boundary conditions, independent paths, and error handling paths are examined.]

  42. What is JUnit?
  • JUnit is an open source unit testing framework for Java created by Kent Beck and Erich Gamma.
  • JUnit features include:
    • Assertions for testing expected results
    • Test fixtures for sharing common test data
    • Test suites for easily organizing and running tests
    • Graphical and textual test runners

  43. JUnit Terminology
  • Fixture: a set of objects against which tests are run.
    • A test fixture is ideal when two or more tests run against a common set of objects; it avoids duplicating the initialization before each test and the cleanup after each test.
    • setUp(): a method which sets up the fixture, called before each test is executed.
    • tearDown(): a method which tears down the fixture, called after each test is executed.
  • Test case: a class which defines the fixture to run multiple tests. To write one:
    • Create a subclass of TestCase
    • Add an instance variable for each part of the fixture
    • Override setUp() to initialize the variables and allocate resources
    • Override tearDown() to release permanent storage, etc.
  • Test suite: a collection of test cases.
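The fixture lifecycle can be sketched without the JUnit jar on the classpath. The sketch below hand-rolls what a JUnit runner does for each test method: call setUp(), run one test, call tearDown(), then repeat for the next test. The class and method names other than setUp/tearDown are illustrative, not JUnit's real API:

```java
import java.util.ArrayList;
import java.util.List;

// Hand-rolled sketch of the setUp / test / tearDown cycle that a JUnit
// runner performs around EACH test method.
public class FixtureLifecycleDemo {
    static List<Integer> fixture;                   // the shared fixture
    static List<String> log = new ArrayList<>();    // records the lifecycle order

    static void setUp() {                           // fresh fixture before each test
        fixture = new ArrayList<>(List.of(1, 2, 3));
        log.add("setUp");
    }
    static void tearDown() {                        // cleanup after each test
        fixture = null;
        log.add("tearDown");
    }
    static void testAdd() {
        fixture.add(4);
        if (fixture.size() != 4) throw new AssertionError();
        log.add("testAdd");
    }
    static void testRemove() {
        fixture.remove(Integer.valueOf(2));
        // testAdd's mutation is invisible here: setUp rebuilt the fixture.
        if (fixture.size() != 2) throw new AssertionError();
        log.add("testRemove");
    }

    public static void main(String[] args) {
        setUp(); testAdd();    tearDown();
        setUp(); testRemove(); tearDown();
        System.out.println(log); // [setUp, testAdd, tearDown, setUp, testRemove, tearDown]
    }
}
```

The point of the fixture pattern is visible in testRemove: because setUp() runs again before it, the element added by testAdd is gone, so tests stay independent of each other's order.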

  44. The JUnit Framework
