
Lecture 9 Testing



  1. Lecture 9 Testing CSCE 492 Software Engineering • Topics • Testing • Readings: Chapter 8 - Testing • Spring, 2008

  2. Overview • Last Time • Achieving Quality Attributes (Nonfunctional) requirements • Today’s Lecture • Testing = Achieving Functional requirements • References: • Chapter 8 - Testing • Next Time: • Requirements meetings with individual groups • Start at 10:15 • Sample test -

  3. Testing • Why Test? • The earlier an error is found, the cheaper it is to fix. • Error/bug terminology • A fault is a condition that causes the software to fail. • A failure is the inability of a piece of software to perform according to its specifications.

  4. Testing Approaches • Development-time techniques • Automated tools: compilers, lint, etc. • Offline techniques • Walkthroughs • Inspections • Online techniques • Black box testing (not looking at the code) • White box testing

  5. Testing Levels • Unit level testing • Integration testing • System testing • Test cases/test suites • Regression tests

  6. Simple Test for a Simple Function • Test cases for the function convertToFahrenheit • Formula • Fahrenheit = Celsius * scale + 32; // scale = 1.8 • Test cases for f = convertToFahrenheit(input); • convertToFahrenheit(0); // result should be 32 • convertToFahrenheit(100); // result should be 212 • convertToFahrenheit(-10); // result should be ???
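A minimal sketch of these cases as an executable check in Java (the convertToFahrenheit implementation below is a hypothetical stand-in following the slide's formula; for the -10 case the formula gives 1.8 * -10 + 32 = 14):

    public class TemperatureTest {
        // Hypothetical implementation under test, following the slide's formula.
        static double convertToFahrenheit(double celsius) {
            final double scale = 1.8;
            return celsius * scale + 32;
        }

        public static void main(String[] args) {
            // Each case pairs an input with its predicted result (the test oracle).
            assert convertToFahrenheit(0) == 32;    // freezing point of water
            assert convertToFahrenheit(100) == 212; // boiling point of water
            assert convertToFahrenheit(-10) == 14;  // negative input: 1.8 * -10 + 32
            System.out.println("All convertToFahrenheit tests passed.");
        }
    }

Run with java -ea TemperatureTest so that the assert statements are enabled.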

  7. Principles of Object-Oriented Testing • Object-oriented systems are built out of two or more interrelated objects • Determining the correctness of O-O systems requires testing the methods that change or communicate the state of an object • Testing methods in an object-oriented system is similar to testing subprograms in process-oriented systems

  8. Testing Terminology • Error - refers to any discrepancy between an actual, measured value and a theoretical, predicted value. Error also refers to some human action that results in some sort of failure or fault in the software • Fault - is a condition that causes the software to malfunction or fail • Failure - is the inability of a piece of software to perform according to its specifications. Failures are caused by faults, but not all faults cause failures. A piece of software has failed if its actual behaviour differs in any way from its expected behaviour
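A small hypothetical example makes the fault/failure distinction concrete: the method below contains a fault (an off-by-one loop bound), but the fault only produces a failure for inputs that should include the last element.

    public class FaultVsFailure {
        // Fault: the loop stops one element early, so the last value is
        // never added. The fault is present in every execution.
        static int sum(int[] values) {
            int total = 0;
            for (int i = 0; i < values.length - 1; i++) {
                total += values[i];
            }
            return total;
        }

        public static void main(String[] args) {
            System.out.println(sum(new int[] {}));     // prints 0, as expected: no failure
            System.out.println(sum(new int[] {2, 3})); // prints 2 instead of 5: a failure
        }
    }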

  9. Code Inspections • A formal procedure, where a team of programmers reads through code, explaining what it does. • Inspectors play “devil's advocate”, trying to find bugs. • A time-consuming process! • Can be divisive/lead to interpersonal problems. • Often used only for safety- or time-critical systems.

  10. Walkthroughs • Similar to inspections, except that inspectors “mentally execute” the code using simple test data. • Expensive in terms of human resources. • Impossible for many systems. • Usually used as a discussion aid.

  11. Test Plan • A test plan specifies how we will demonstrate that the software is free of faults and behaves according to the requirements specification • A test plan breaks the testing process into specific tests, addressing specific data items and values • Each test has a test specification that documents the purpose of the test

  12. Test Plan • If a test is to be accomplished by a series of smaller tests, the test specification describes the relationship between the smaller and the larger tests • The test specification must describe the conditions that indicate when the test is complete and a means for evaluating the results

  13. Example Test Plan Deliverable 8.1 p267 • Test #15 Specification: addPatron() while checking out a resource • Requirement #3 • Purpose: Create a new Patron object when a new patron is attempting to check out a resource • Test Description: • Enter check out screen • Press new patron button • … (next slide) • Test Messages • Evaluation: print the patron list to ensure uniqueness and that the data was entered correctly

  14. Example Test Description of Test Plan • 3. Test Description: • Enter check out screen • Press new patron button • Enter Jill Smith in the Patron Name field • Enter New Boston Rd. in the Address field • Enter … • Choose Student from the status choice box • A new Patron ID Number is generated if the name is new.

  15. Test Oracle • A test oracle is the set of predicted results for a set of tests, and is used to determine the success of testing • Test oracles are extremely difficult to create and are ideally created from the requirements specification

  16. Test Cases • A test case is a set of inputs to the system • Successfully testing a system hinges on selecting representative test cases • Poorly chosen test cases may fail to illuminate the faults in a system • In most systems exhaustive testing is impossible, so a white box or black box testing strategy is typically selected

  17. Black Box Testing • The tester knows nothing about the internal structure of the code • Test cases are formulated based on the expected output of methods • The tester generates test cases to represent all possible situations, to ensure that the observed and expected behaviour are the same

  18. Black Box Testing • In black box testing, we ignore the internals of the system, and focus on relationship between inputs and outputs. • Exhaustive testing would mean examining output of system for every conceivable input. • Clearly not practical for any real system! • Instead, we use equivalence partitioning and boundary analysis to identify characteristic inputs.

  19. Equivalence Partitioning • Suppose the system asks for “a number between 100 and 999 inclusive”. • This gives three equivalence classes of input: • – less than 100 • – 100 to 999 • – greater than 999 • We thus test the system against characteristic values from each equivalence class. • Example: 50 (invalid), 500 (valid), 1500 (invalid).
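As an executable sketch (assuming a hypothetical isValid method implementing the 100 to 999 rule), one characteristic value per equivalence class is enough:

    public class EquivalencePartitionTest {
        // Hypothetical validator for "a number between 100 and 999 inclusive".
        static boolean isValid(int n) {
            return n >= 100 && n <= 999;
        }

        public static void main(String[] args) {
            assert !isValid(50);   // class: less than 100 (invalid)
            assert isValid(500);   // class: 100 to 999 (valid)
            assert !isValid(1500); // class: greater than 999 (invalid)
            System.out.println("Equivalence class tests passed.");
        }
    }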

  20. Boundary Values • Arises from the fact that most programs fail at input boundaries. • Suppose the system asks for “a number between 100 and 999 inclusive”. • The boundaries are 100 and 999. • We therefore test the values: • 99, 100, 101 around the lower boundary • 998, 999, 1000 around the upper boundary
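The same hypothetical isValid validator can be exercised at both boundaries; each boundary contributes the value itself plus its immediate neighbours:

    public class BoundaryValueTest {
        static boolean isValid(int n) {
            return n >= 100 && n <= 999;
        }

        public static void main(String[] args) {
            // Lower boundary: 99 (invalid), 100 (valid), 101 (valid)
            assert !isValid(99) && isValid(100) && isValid(101);
            // Upper boundary: 998 (valid), 999 (valid), 1000 (invalid)
            assert isValid(998) && isValid(999) && !isValid(1000);
            System.out.println("Boundary value tests passed.");
        }
    }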

  21. White Box Testing • The tester uses knowledge of the programming constructs to determine the test cases to use • If one or more loops exist in a method, the tester would wish to test the execution of each loop for 0, 1, max, and max + 1 iterations, where max represents the maximum possible number of iterations • Similarly, conditions would be tested for both true and false
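A sketch of the 0 / 1 / max / max + 1 heuristic, assuming a hypothetical method that copies at most MAX items into a new array:

    public class LoopBoundTest {
        static final int MAX = 4; // hypothetical capacity limit

        // Copies at most MAX values; any extra input is ignored by design.
        static int[] takeUpToMax(int[] values) {
            int n = Math.min(values.length, MAX);
            int[] out = new int[n];
            for (int i = 0; i < n; i++) {
                out[i] = values[i];
            }
            return out;
        }

        public static void main(String[] args) {
            assert takeUpToMax(new int[] {}).length == 0;                // loop runs 0 times
            assert takeUpToMax(new int[] {7}).length == 1;               // loop runs once
            assert takeUpToMax(new int[] {1, 2, 3, 4}).length == MAX;    // exactly max times
            assert takeUpToMax(new int[] {1, 2, 3, 4, 5}).length == MAX; // max + 1 input
            System.out.println("Loop bound tests passed.");
        }
    }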

  22. White Box Testing • In white box testing, we use knowledge of the internal structure of systems to guide development of tests. • The ideal: examine every possible run of a system. • Not possible in practice! • Instead: aim to test every statement at least once! • EXAMPLE: • if (x > 5) { • System.out.println("hello"); • } else { • System.out.println("bye"); • } • There are two possible paths through this code, corresponding to x > 5 and x <= 5.
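One test value per branch is enough to execute every statement in that example. A minimal sketch, with the slide's if/else wrapped in a hypothetical branchOf method so it can be checked:

    public class BranchCoverageTest {
        // Hypothetical wrapper around the slide's if/else.
        static String branchOf(int x) {
            if (x > 5) {
                return "hello";
            } else {
                return "bye";
            }
        }

        public static void main(String[] args) {
            assert branchOf(6).equals("hello"); // exercises the x > 5 path
            assert branchOf(5).equals("bye");   // exercises the x <= 5 path
            System.out.println("Both paths executed.");
        }
    }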

  23. Unit Testing • The units comprising a system are individually tested • The code is examined for faults in algorithms, data, and syntax • A set of test cases is formulated, the tests are run, and the results are evaluated • The module being tested should be reviewed in the context of the requirements specification
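Unit tests are normally automated with a framework. A minimal sketch using JUnit 4 (an assumption; the lecture does not name a framework), reusing the hypothetical convertToFahrenheit from the slide 6 sketch:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ConvertToFahrenheitTest {
        @Test
        public void freezingPointConverts() {
            assertEquals(32.0, TemperatureTest.convertToFahrenheit(0), 1e-9);
        }

        @Test
        public void boilingPointConverts() {
            assertEquals(212.0, TemperatureTest.convertToFahrenheit(100), 1e-9);
        }
    }

Each @Test method checks one case against the oracle, so a failing case is reported individually.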

  24. Integration Testing • The goal is to ensure that groups of components work together as specified in the requirements document • Four kinds of integration tests exist • Structure tests • Functional tests • Stress tests • Performance tests

  25. System Testing • The goal is to ensure that the system actually does what the customer expects it to do • Testing is carried out by customers mimicking real world activities • Customers should also intentionally enter erroneous values to determine the system behaviour in those instances

  26. Testing Steps • Determine what the test is supposed to measure • Decide how to carry out the tests • Develop the test cases • Determine the expected results of each test (test oracle) • Execute the tests • Compare results to the test oracle

  27. Analysis of Test Results • The test analysis report documents testing and provides information that allows a failure to be duplicated, found, and fixed • The test analysis report mentions the sections of the requirements specification, the implementation plan, the test plan, and connects these to each test

  28. Special Issues for Testing Object-Oriented Systems • Because object interaction is essential to O-O systems, integration testing must be more extensive • Inheritance makes testing more difficult by requiring more contexts (all subclasses) for testing an inherited module

  29. Configuration Management • Software systems often have multiple versions or releases • Configuration management is the process of controlling development that produces multiple software systems • An evolutionary development approach often results in multiple versions of the system • Regression testing is the process of retesting elements of the system that were tested in a previous version or release

  30. Alpha/beta testing • In-house testing is usually called alpha testing. • For software products, there is usually an additional stage of testing, called beta testing. • Involves distributing tested code to “beta test sites” (usually prospective customers) for evaluation and use. • Typically involves a formal procedure for reporting bugs. • Delivering buggy beta test code is embarrassing!
