
Best Practices in System Testing


Presentation Transcript


  1. Best Practices in System Testing. Mahesh Kuruba, Dr. Gargi Keeni. International Symposium on Future Software Technology, Xi’an, October 2002

  2. Objective: To emphasize the significance of System Testing and its Best Practices to improve the System Test Processes

  3. Agenda

  4. What does a defect mean to the customer?
• American Airlines - $20,000 per minute of outage
• Sabre reservation system - $50,000,000
• Denver airport - $1,100,000 per day
Source: Ed Bryce, “Failure is not an Option”, http://www.stickyminds.com/docs_index/XDD3316filelistfilename1.pdf

  5. What does downtime mean to the customer?
• $1,000 to $27,000 per minute of downtime
• ESPN - $7,800,000 for 3 days
• Amazon.com - $400,000 for 6 hours
• Egghead.com - $280,000 for 2 days
• American Airlines Sabre reservation system - $20,000 per minute; down 12 hrs in 1989, 5 hrs in 1994
Source: Ed Bryce, “Failure is not an Option”, http://www.stickyminds.com/docs_index/XDD3316filelistfilename1.pdf

  6. What does a defect mean to the Software Development Organization?
Market / customer perspective:
• Reputation
• Market share
Organization perspective:
• Productivity loss
• Schedule slippage
• Loss in profit margin

  7. Cost of Uncovering a Defect
Phase - Cost
• Requirements - $63.75
• Integration/System Test - $750 to $3,000
• Production - $10K to $140K

  8. Cost of Fixing a Defect
Phase in which defect is found - Cost ratio
• Requirements - 1
• Design - 3 to 6
• Coding - 10
• Unit/Integration Testing - 15 to 40
• System/Acceptance Testing - 30 to 70
• Production - 40 to 1,000
(A worked arithmetic sketch follows.)
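To make the ratios concrete, here is a quick arithmetic sketch; the $100 baseline cost for fixing a defect found at the requirements phase is a hypothetical figure, not one from the presentation.

    # Hypothetical baseline: $100 to fix a defect found at the requirements phase.
    # The ratios come from the table above; the dollar figures are only illustrative.
    baseline = 100
    ratios = {
        "Requirements": (1, 1),
        "Design": (3, 6),
        "Coding": (10, 10),
        "Unit/Integration Testing": (15, 40),
        "System/Acceptance Testing": (30, 70),
        "Production": (40, 1000),
    }
    for phase, (low, high) in ratios.items():
        print(f"{phase}: ${baseline * low:,} - ${baseline * high:,}")
        # e.g. Production: $4,000 - $100,000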

  9. ROI Analysis
Source: Rex Black, “The Cost of Software Quality”, Investing in Software Testing Series, Part 1, www.stickyminds.com

  10. System Testing

  11. Verification and Validation
[Diagram: review, inspection, unit testing, integration testing, and system testing mapped onto the Requirements, Design, and Coding phases as verification and validation activities.]

  12. Why System Testing?
Scope of system testing:
• e-Security testing
• Compatibility testing
• Performance testing
• Usability testing
• Disaster Recovery testing

  13. V-Model
• Requirements ↔ System Testing
• Design ↔ Integration Testing
• Coding ↔ Unit Testing

  14. System Test Strategy
Objective:
• What type of project?
• What type of software?
• What is the technical environment?
• What is the project scope?
• What are the success factors?
• How critical is the system to the organization?
• What is the required quality level?

  15. System Testing Process
• Testability Assessment
• Test Automation Assessment
• Performance Assessment
• Requirements Traceability Matrix
• Test Strategy
• Test Plan
• Test Cases
• Testing
• Test Results and Analysis

  16. System Test Activities
[Diagram: the Requirements, Design, and Coding/Implementation phases feed the requirements and design documents through review into the system test activities - testability assessment, test automation assessment, test strategy, test plan, test cases, testing, and test results.]

  17. Best Practices

  18. Best Practices…
• Requirements traceability
• Design-to-Test
• Design of test cases
• Reviews
• Production data
• Traceability of defects
• Train the testers

  19. Best Practices…
• Reusability
• Software Configuration Management
• Manage test library
• Automated test data generator
• Motivation
• Beta / Internal Beta
• Operational scenario

  20. Best Practices… System Tests
• Regression testing
• e-Security testing
• Usability testing
• Disaster Recovery testing
• Compatibility testing (multi-platform)
• Performance testing

  21. Best Practices… Requirements Traceability
Tracing business requirements:
• To business objectives
• To design
• To code
• To test cases
(A minimal traceability-matrix sketch follows.)
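As one way to picture the practice, a minimal sketch of a requirements traceability matrix in Python; the requirement, design, code, and test-case identifiers are hypothetical, not taken from the presentation.

    # Minimal sketch of a requirements traceability matrix.
    # All identifiers below are hypothetical, used only for illustration.
    trace = {
        "BR-01": {"objective": "OBJ-1", "design": ["D-01"], "code": ["login.c"], "tests": ["TC-01", "TC-02"]},
        "BR-02": {"objective": "OBJ-1", "design": ["D-02"], "code": ["report.c"], "tests": []},
    }

    # A requirement with no linked test cases will not be exercised by system testing.
    untested = [req for req, links in trace.items() if not links["tests"]]
    print("Requirements without test cases:", untested)  # ['BR-02']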

  22. Best Practices… Design-to-Test
• Simple logic
• Proper flow
Design of test cases:
• Maximum code coverage
• Defect-prone test cases

  23. Best Practices… Reviews
• Requirements review
• Design review
• Code review
• Test case review
Benefits:
• Identifying the quality levels
• Identifying non-testable requirements
• Planning the system testing

  24. Best Practices… Production Data
• Environment issues
• Data size issues
• Performance

  25. Best Practices… Traceability of Defects
• Trace defects back to the requirements and to the code
• Supports the decision to release
• Re-inspect and re-construct the affected feature code

  26. Best Practices… Train the Testers
Train testers in the knowledge required for:
• Functional aspects
• Technical aspects
• Testing methodology

  27. Best Practices… Reusability
• Software components
• Test artifacts and test cases
• Generic test case components
• Automated test scripts
Benefits:
• Minimal testing
• Good quality
• Improved productivity
(A reusable test-case sketch follows.)
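To illustrate a generic, reusable test case component, a small sketch assuming pytest; the discount_price function and its data sets are hypothetical, not part of the presentation.

    # Sketch of a generic test case component reused across data sets (assumes pytest).
    # The function under test and the data are hypothetical.
    import pytest

    def discount_price(price, rate):
        """Hypothetical function under test."""
        return round(price * (1 - rate), 2)

    # One generic test body replaces many copy-pasted test cases.
    @pytest.mark.parametrize("price, rate, expected", [
        (100.0, 0.10, 90.0),
        (50.0, 0.00, 50.0),
        (19.99, 0.25, 14.99),
    ])
    def test_discount_price(price, rate, expected):
        assert discount_price(price, rate) == pytest.approx(expected)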

  28. Best Practices… Software Configuration Management
Requirement changes affect:
• Test plan
• Test strategy
• Requirements traceability matrix
• Test automation scripts
• Test cases
• Test effort

  29. Best Practices… Manage Test Library
Develop and manage:
• Test cases
• Test scripts
• Best practices
• Test result reports
• Test analysis reports

  30. Best Practices… Automated Test Data Generator
System testing requires more data for execution. The probability of finding a fault with a black-box testing approach is
Y = 1 - (1 - x)^N
where x is the probability that a random input will cause the failure and N is the number of random black-box probes.
(A short numeric sketch follows.)
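To give the formula a feel, a small numeric sketch; the value of x and the 95% target are hypothetical choices for illustration only.

    # Numeric sketch of Y = 1 - (1 - x)**N.
    # The value of x and the 95% target below are hypothetical.
    import math

    x = 0.001  # assumed chance that a single random input triggers the failure
    for n in (100, 1_000, 10_000):
        y = 1 - (1 - x) ** n
        print(f"N = {n:>6}: probability of finding the fault = {y:.3f}")

    # Number of probes needed for a 95% chance of hitting the fault:
    target = 0.95
    n_needed = math.ceil(math.log(1 - target) / math.log(1 - x))
    print("Probes needed for 95% confidence:", n_needed)  # 2995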

  31. Best Practices… Motivation
• Awards
• Rewards
• Team-building activities

  32. Best Practices… Beta / Internal Beta
• Beta for a small group of clients
• Internal Beta for a small group within the organization
Cycle: release the product to the group, collect feedback, uncover defects, fix the defects, and repeat until the product is stable.

  33. Best Practices… Operational Scenario
Test case preparation based upon:
• Legacy system failures
• Previous iteration failures
• User-focused tests
• Operational failures
• Input from the production support team

  34. Challenges

  35. Challenges…
• When to stop testing - have I achieved the quality goals?
• Selection of testers
• Estimation of test effort / optimal test effort
• Support from the management
• Test tool selection

  36. Quiz

  37. Test Management Metrics

  38. Test Manager’s Job
Activities:
• Planning
• Monitoring
• Controlling
Concerns:
• Cost
• Quality
• Schedule

  39. Metrics
• Defect Density (SCRs/FP)
• Requirements Stability - shape (within a release) and trend (across releases)
• Test Group Efficiency
• Planned vs. Executed
Note: ‘Test’ means ‘System testing’ across the graphs. (An illustrative metrics calculation follows.)
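A rough sketch of how such metrics might be computed; the formulas follow common definitions (e.g., test group efficiency as the share of all found defects that the test group catches) and the sample numbers are hypothetical, not data from the presentation.

    # Illustrative metrics calculation; formulas follow common definitions and
    # the sample numbers are hypothetical.
    scrs_in_system_test = 48   # software change requests (defects) raised in system testing
    function_points = 320      # size of the release
    print(f"Defect density: {scrs_in_system_test / function_points:.2f} SCRs/FP")  # 0.15

    defects_in_production = 6
    efficiency = scrs_in_system_test / (scrs_in_system_test + defects_in_production)
    print(f"Test group efficiency: {efficiency:.0%}")  # 89%

    planned, executed = 200, 174
    print(f"Planned vs. executed: {executed / planned:.0%}")  # 87%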

  40. Defect Density

  41. Requirements Stability

  42. Type of Defect

  43. Defects by Severity

  44. Test Group Efficiency

  45. Test Planning & Monitoring
Planning:
• No. of BRs to test
• Testers available
• No. of defects acceptable
• Time
(A small planning sketch follows.)
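One way to picture how these planning inputs combine, as a rough sketch; the productivity figures and counts below are hypothetical assumptions, not values from the presentation.

    # Rough planning sketch; all figures below are hypothetical assumptions.
    brs_to_test = 120              # business requirements in scope
    test_cases_per_br = 4          # assumed average
    cases_per_tester_per_day = 10  # assumed execution rate
    testers_available = 3
    calendar_days = 20

    total_cases = brs_to_test * test_cases_per_br
    capacity = testers_available * cases_per_tester_per_day * calendar_days

    print(f"Test cases to execute: {total_cases}")  # 480
    print(f"Execution capacity:    {capacity}")     # 600
    print("Plan is feasible" if capacity >= total_cases else "Plan needs more testers or time")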

  46. Controlling
1. Are we good, or did the development team not do a good job?
2. Were we bad before?
3. Why didn’t we find the defects in earlier phases? (Apply systems engineering.)

  47. Is it the Right Time to Release?

  48. What if the Software is Released Early?

  49. Exercise

  50. Thank you
