
Chapter 9 Testing the System



  1. Chapter 9 Testing the System • Function testing • Performance testing • Acceptance testing • Software reliability, availability, and maintainability • Installation testing • Debugging

  2. 9.1 Principles of System Testing • Sources of software faults

  3. The system testing process (from integrated modules to a system in use): • Function test: does the integrated system perform as promised by the requirements specification? (driven by the system functional requirements; produces a functioning system) • Performance test: are the nonfunctional requirements met? (driven by the other software requirements; produces verified, validated software) • Acceptance test: is the system what the customer expects? (driven by the customer requirements specification; produces an accepted system) • Installation test: does the system run at the customer site(s)? (driven by the user environment; the result is the system in use)

  4. Process Objectives • Function test: check that the integrated system performs its functions as specified in the requirements. • Performance test: compare the integrated components with the nonfunctional system requirements. • Acceptance test: the customers make sure that the system meets their understanding of the requirements. • Installation test: allow users to exercise system functions and document additional problems that result from being at the actual site.

  5. Incremental Testing: Build Plan or Integration Plan • A build plan (or integration plan) defines the subsystems (spins) to be tested and describes how, where, when, and by whom the tests will be conducted. • Example: a telecommunications system that routes calls can be divided into the following subsystems: routing a call within a single exchange; routing a call within an area code; routing a call within a state, province, or district; routing a call within a country; routing international calls.

  6. Build Plan for the Telecommunications System

  Spin | Functions                | Test Start | Test End
  -----+--------------------------+------------+---------
   0   | Exchange                 | 1/9        | 15/9
   1   | Area code                | 30/9       | 15/10
   2   | State/province/district  | 25/10      | 5/11
   3   | Country                  | 10/11      | 20/11
   4   | International            | 1/12       | 15/12

  The number of spins and their definition depend primarily on our resources and those of our customers. The spin definitions also depend on the system components' ability to operate in stand-alone mode.
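  One way to make such a plan machine-checkable is to record it as plain data. The sketch below assumes a simple Python dataclass; the Spin type, its field names, and the year used in the dates are illustrative rather than part of the plan itself.

```python
# A rough sketch of the build plan above as data. Assumptions: the Spin
# dataclass, its field names, and the year 2024 (the table gives only day/month).
from dataclasses import dataclass
from datetime import date

@dataclass
class Spin:
    number: int       # spin number
    functions: str    # subsystem exercised in this spin
    test_start: date
    test_end: date

build_plan = [
    Spin(0, "Exchange",                date(2024, 9, 1),   date(2024, 9, 15)),
    Spin(1, "Area code",               date(2024, 9, 30),  date(2024, 10, 15)),
    Spin(2, "State/province/district", date(2024, 10, 25), date(2024, 11, 5)),
    Spin(3, "Country",                 date(2024, 11, 10), date(2024, 11, 20)),
    Spin(4, "International",           date(2024, 12, 1),  date(2024, 12, 15)),
]

# Sanity check: spins are tested in order and their test windows do not overlap.
for earlier, later in zip(build_plan, build_plan[1:]):
    assert earlier.test_end < later.test_start
```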

  7. Example spins for a system with a central computer and outlying computers: • Spin 0: test the general functions of the central computer • Spin 1: test the central computer's message-translation function • Spin 2: test the central computer's message-receiving function • Spin 3: test each outlying computer in stand-alone mode • Spin 4: test the outlying computers' message-sending function • Spin 5: test the outlying computers' message-receiving function

  8. Configuration Management: Versions and Releases • A system configuration is a collection of system components delivered to a particular customer. • Configuration management is the control of system differences to minimize risk and error. • A version is a configuration for a particular platform or situation in which the software will be used, denoted version n. • A release of the software is an improved system intended to replace the old one, denoted release m. • Version n.m reflects the system's position as it grows and matures.

  9. Production system vs. development system • The production system is a version that has been tested and performs according to only a subset of the customers' requirements. • The development system is the next version: it adds the functionality of the next phase and corrects the problems found in the previous version. • Regression testing re-executes some subset of the tests to ensure that changes have not propagated unintended side effects, as sketched below.
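  To illustrate re-executing only a subset of the tests, here is a rough sketch of regression-test selection based on which modules a change touches; the coverage map, test names, and module names are hypothetical.

```python
# Hypothetical regression-test selection: re-run only the tests whose covered
# modules intersect the modules changed in the new development version.
TEST_COVERAGE = {
    "test_route_exchange":  {"exchange"},
    "test_route_area_code": {"exchange", "area_code"},
    "test_route_country":   {"country"},
}

changed_modules = {"area_code"}   # modules modified since the last release

regression_suite = [test for test, covered in TEST_COVERAGE.items()
                    if covered & changed_modules]
print(regression_suite)           # -> ['test_route_area_code']
```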

  10. Deltas, Separate Files, and Conditional Compilation • Separate files: keep a separate copy of the source for each version. • Deltas: keep one main version plus the differences (deltas) that produce the others. • Conditional compilation: one source file, with compiler directives selecting the code for each version. • Change control: governs how and when changes to the configuration are applied.

  11. The Test Team • Professional testers: organize and run the tests • Analysts: who created the requirements • System designers: who understand the proposed solution • Configuration management specialists: to help control fixes • Users: to evaluate issues that arise

  12. 9.2 Function Testing: Purpose and Roles • Function testing and thread testing. • Thread: the set of actions associated with a function. • Example: a water-monitoring system. Requirement: detect large changes in four characteristics: dissolved oxygen, temperature, acidity, and radioactivity. • Tests: determine a change in dissolved oxygen; determine a change in temperature; determine a change in acidity; determine a change in radioactivity.

  13. A test should: • have a high probability of detecting a fault • use a test team independent of the designers and programmers • know the expected actions and output • never modify the system just to make testing easier • have stopping criteria

  14. Cause-and-Effect Graphs The inputs are called causes, and the outputs and transformations are effects. The result is a Boolean graph reflecting these relationships, called a cause-and-effect graph.

  15. Creating a cause-and-effect graph: • Step 1: the requirements are separated so that each requirement describes a single function. • Step 2: all causes and effects are described. Example: a water-level monitoring system. Requirement: the system sends a message to the dam operator about the safety of the lake level. INPUT: The syntax of the function is LEVEL(A,B), where A is the height in meters of the water behind the dam and B is the number of centimeters of rain in the last 24-hour period.

  16. PROCESSING: The function calculates whether the water level is within a safe range, is too high, or is too low. • OUTPUT: The screen shows one of the following messages: • “LEVEL = SAFE”, when the result is safe or low. • “LEVEL = HIGH”, when the result is high. • “INVALID SYNTAX”, when the command or its parameters are not syntactically valid. Causes: 1. The first five characters of the command are “LEVEL”. 2. The command contains exactly two parameters, separated by a comma and enclosed in parentheses. 3. The parameters A and B are real numbers such that the water level is calculated to be LOW.

  17. 4. The parameters A and B are real numbers such that the water level is calculated to be SAFE. 5. The parameters A and B are real numbers such that the water level is calculated to be HIGH. Effects: • The message “LEVEL = SAFE” is displayed on the screen. • The message “LEVEL = HIGH” is displayed on the screen. • The message “INVALID SYNTAX” is displayed on the screen. Intermediate nodes: 1. The command is syntactically valid. 2. The operands are syntactically valid.
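  As a concrete but hypothetical reading of this specification, the sketch below parses a LEVEL(A,B) command and produces one of the three messages; the numeric threshold and the way A and B are combined are assumptions made only so the example runs.

```python
import re

# Hypothetical threshold; the requirement only says the calculation classifies
# the level as LOW, SAFE, or HIGH.
HIGH_LIMIT = 30.0

def level(command: str) -> str:
    # Causes 1-2: the command starts with LEVEL and has exactly two
    # parenthesized, comma-separated parameters.
    match = re.fullmatch(r"LEVEL\(([^,()]+),([^,()]+)\)", command.strip())
    if match is None:
        return "INVALID SYNTAX"                    # effect 3
    try:
        a = float(match.group(1))                  # height of water behind the dam (m)
        b = float(match.group(2))                  # rain in the last 24 hours (cm)
    except ValueError:
        return "INVALID SYNTAX"                    # effect 3
    predicted = a + b / 100.0                      # assumed combination of A and B
    if predicted > HIGH_LIMIT:
        return "LEVEL = HIGH"                      # cause 5 -> effect 2
    return "LEVEL = SAFE"                          # causes 3-4 -> effect 1
```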

  18. Table 9.2. Decision table for the cause-and-effect graph.

           | Test 1 | Test 2 | Test 3 | Test 4 | Test 5
  ---------+--------+--------+--------+--------+-------
  Cause 1  |   I    |   I    |   I    |   S    |   I
  Cause 2  |   I    |   I    |   I    |   X    |   S
  Cause 3  |   I    |   S    |   S    |   X    |   X
  Cause 4  |   S    |   I    |   S    |   X    |   X
  Cause 5  |   S    |   S    |   I    |   X    |   X
  Effect 1 |   P    |   P    |   A    |   A    |   A
  Effect 2 |   A    |   A    |   P    |   A    |   A
  Effect 3 |   A    |   A    |   A    |   P    |   P
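  The decision table maps directly onto test cases. Assuming the hypothetical level() sketch above is saved as water_monitor.py, the pytest cases below cover the five columns of the table; the concrete parameter values are illustrative.

```python
import pytest
from water_monitor import level   # the hypothetical sketch shown earlier

@pytest.mark.parametrize("command, expected", [
    ("LEVEL(5,1)",    "LEVEL = SAFE"),    # Tests 1-2: level LOW or SAFE -> effect 1
    ("LEVEL(40,20)",  "LEVEL = HIGH"),    # Test 3: level HIGH -> effect 2
    ("HEIGHT(40,20)", "INVALID SYNTAX"),  # Test 4: command is not LEVEL -> effect 3
    ("LEVEL 40,20",   "INVALID SYNTAX"),  # Test 5: malformed parameter list -> effect 3
])
def test_level_command(command, expected):
    assert level(command) == expected
```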

  19. 9.3 Performance Tests: Purpose and Roles System performance is measured against the performance objectives set by the customer, as expressed in the nonfunctional requirements. Types of performance testing: • Stress tests • Volume tests • Configuration tests • Compatibility tests • Regression tests • Security tests • Timing tests • Environmental tests • Quality tests • Recovery tests • Maintenance tests • Documentation tests • Human factors (usability) tests
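  For example, a timing test from the list above could compare observed response times against a stated limit; the 200 ms requirement and the handle_request() function here are hypothetical stand-ins.

```python
import time

def handle_request(payload):
    # Stand-in for the operation whose response time is being measured.
    return {"echo": payload}

def timing_test(max_seconds=0.2, repetitions=100):
    worst = 0.0
    for _ in range(repetitions):
        start = time.perf_counter()
        handle_request("sample input")
        worst = max(worst, time.perf_counter() - start)
    assert worst <= max_seconds, f"worst case {worst:.4f}s exceeds {max_seconds}s"

timing_test()
```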

  20. 9.4 Reliability, Availability, and Maintainability: Definitions • Software reliability is the probability that a system will operate without failure under given conditions for a given time interval. It ranges from 0 (unreliable system) to 1 (completely reliable system). • Software availability is the probability that a system is operating successfully according to specification at a given point in time. It ranges from 0 (unusable system) to 1 (a system that is completely up and running).

  21. • Software maintainability is the probability that, for a given condition of use, a maintenance activity can be carried out within a stated time interval using stated procedures and resources. It ranges from 0 (unmaintainable system) to 1 (highly maintainable system). • Four different levels of failure severity: • Catastrophic • Critical • Marginal • Minor

  22. Table 9.3. Inter-failure times (read left to right, in rows) 3 30 113 81 115 9 2 91 112 15 138 50 77 24 108 88 670 120 26 114 325 55 242 68 422 180 10 1146 600 15 36 4 0 8 227 65 176 58 457 300 97 263 452 255 197 193 6 79 816 1351 148 21 233 134 357 193 236 31 369 748 0 232 330 365 1222 543 10 16 529 379 44 129 810 290 300 529 281 160 828 1011 445 296 1755 1064 1783 860 983 707 33 868 724 2323 2930 1461 843 12 261 1800 865 1435 30 143 108 0 3110 1247 943 700 875 245 729 1897 447 386 446 122 990 948 1082 22 75 482 5509 100 10 1071 371 790 6150 3321 1045 648 5485 1160 1864 4116 Failure Data

  23. Type-1 uncertainty: Reflecting uncertainty about how the system will be used Type-2 uncertainty: Reflecting our lack of knowledge about the effect of fault removal

  24. Measuring Reliability, Availability, and Maintainability • Mean time to failure (MTTF): the average of the interfailure times, or times to failure, t1, t2, ..., tn; Ti denotes the yet-to-be-observed next time to failure. • Mean time to repair (MTTR): the average time it takes to fix a faulty software component. • Mean time between failures (MTBF): MTBF = MTTF + MTTR • Reliability: R = MTTF / (1 + MTTF) • Availability: A = MTBF / (1 + MTBF) • Maintainability: M = 1 / (1 + MTTR)
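  These formulas can be applied directly to interfailure data such as Table 9.3. The sketch below uses the first ten entries of that table; the MTTR value is an assumed repair time, supplied only to make the example complete.

```python
# Interfailure times: the first ten entries of Table 9.3.
interfailure_times = [3, 30, 113, 81, 115, 9, 2, 91, 112, 15]

MTTF = sum(interfailure_times) / len(interfailure_times)   # mean time to failure
MTTR = 20                                                  # assumed mean time to repair
MTBF = MTTF + MTTR                                         # mean time between failures

reliability     = MTTF / (1 + MTTF)
availability    = MTBF / (1 + MTBF)
maintainability = 1 / (1 + MTTR)

print(f"MTTF={MTTF:.1f}  MTBF={MTBF:.1f}  "
      f"R={reliability:.3f}  A={availability:.3f}  M={maintainability:.3f}")
```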

  25. Reliability Stability and Growth Reliability Stability: if the system’s interfailure times stay the same Reliability Growth: if the system’s interfailure times increase Reliability Prediction

  26. Importance of the Operational Environment • Operational profile: a description of likely user input over time. Example profile: CREATE 0.5, DELETE 0.25, MODIFY 0.25. • Two benefits of statistical testing: • Testing concentrates on the parts of the system most likely to be used and hence should result in a system that the user finds more reliable. • Reliability predictions based on the test results should give an accurate prediction of the reliability as seen by the user.
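  A statistical test driver can draw its inputs from the operational profile so that test effort mirrors expected use; in this sketch the run_operation() handler is a placeholder for the real CREATE, DELETE, and MODIFY functions.

```python
import random

OPERATIONAL_PROFILE = {"CREATE": 0.5, "DELETE": 0.25, "MODIFY": 0.25}

def run_operation(name):
    # Placeholder: invoke the real CREATE/DELETE/MODIFY function here and
    # record whether it failed, feeding the reliability estimate.
    return f"{name} executed"

def statistical_test(n_cases=1000, seed=42):
    rng = random.Random(seed)
    operations = list(OPERATIONAL_PROFILE)
    weights = list(OPERATIONAL_PROFILE.values())
    for _ in range(n_cases):
        run_operation(rng.choices(operations, weights=weights, k=1)[0])

statistical_test()
```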

  27. 9.5 Acceptance Testing: Purpose and Roles • Enables the customers and users to determine whether the system we built really meets their needs and expectations. Types of acceptance tests: • Benchmark test: the customer prepares a set of test cases that represent typical conditions under which the system will operate when actually installed, and uses them to evaluate the system's performance. • Pilot test: the system is installed on an experimental basis. • Alpha test: an in-house pilot test.

  28. • Beta test: a pilot test at the customer's site. • Parallel testing: the new system operates in parallel with the old system. Results of acceptance testing: • The system is acceptable because its functions and performance accord with the requirements specification. • The system cannot be accepted because its functions and performance differ from the requirements specification.

  29. 9.6 Installation Testing • The tests focus on two things: • Completeness of the installed system • Verification of any functional or nonfunctional characteristics that may be affected by site conditions

  30. 9.7 Debugging: an art of removing errors The process of debugging: test cases are executed and the results are compared with the expected results; when they differ, suspected causes are identified; additional tests narrow these down to identified causes; corrections are made; and regression tests confirm that the corrections have not introduced new problems.

  31. Debugging Effort • time required to diagnose the symptom and determine the cause • time required to correct the error and conduct regression tests

  32. Symptoms and Causes • the symptom and the cause may be geographically separated • the symptom may disappear when another problem is fixed • the cause may be due to a combination of non-errors • the cause may be due to a system or compiler error • the cause may be due to assumptions that everyone believes • the symptom may be intermittent

  33. Debugging Techniques • Brute force / testing (e.g., memory dumps, spy points) • Backtracking • Induction • Deduction
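  As an example of the spy-point idea, the sketch below logs intermediate values at a suspected trouble spot instead of dumping memory wholesale; the function and the values logged are illustrative.

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("spy")

def compute_level(height_m, rain_cm):
    predicted = height_m + rain_cm / 100.0
    # Spy point: record the inputs and the intermediate result at the spot
    # where a wrong classification is suspected.
    log.debug("spy: height=%s rain=%s predicted=%s", height_m, rain_cm, predicted)
    return "HIGH" if predicted > 30.0 else "SAFE"

compute_level(29.9, 15)
```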

  34. Debugging: Final Thoughts 1. Don't run off half-cocked; think about the symptom you're seeing. 2. Use tools (e.g., a dynamic debugger) to gain more insight. 3. If at an impasse, get help from someone else. 4. Be absolutely sure to conduct regression tests when you do "fix" the bug.
