
Advancing Requirements-Based Testing Models to Reduce Software Defects


Presentation Transcript


  1. Advancing Requirements-Based Testing Models to Reduce Software Defects Craig Hale, Process Improvement Manager and Presenter Mara Brunner, B&M Lead Mike Rowe, Principal Engineer Esterline Control Systems - AVISTA

  2. Software Requirements-Based Testing Defect Model • Focus: requirements-based test (RBT) reviews • Quality is imperative, but reviews carry cost impacts • Large amount of historical data • Model: defects per review based on number of requirements • Suspected review size was a factor • Used for every review • Looked at controllable factors to improve review effectiveness • Stakeholders: • Customers • Project leads and engineers • Baselines and models team
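A model of this kind maps each review's requirement count to an expected defect count that the review can be checked against. Below is a minimal sketch of that idea; the power-function form, the coefficient values, and the expected_defects name are illustrative assumptions, not the presenters' actual fitted model.

```python
# Sketch of a per-review defect model: expected defects as a function of
# requirement count. The power form and coefficients are illustrative only.

def expected_defects(num_requirements: int, a: float = 1.8, b: float = 0.45) -> float:
    """Hypothetical power-function model: E[defects] = a * Rq**b."""
    return a * num_requirements ** b

# Check one review against the model's expectation.
review = {"requirements": 20, "defects_found": 4}
expected = expected_defects(review["requirements"])
print(f"Expected ~{expected:.1f} defects; found {review['defects_found']}")
```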

  3. Model Goals • Improve overall quality of safety-critical systems • Focus on improving the review process • Maximize defect detection rate • Minimize defect escapes • Reduce defect injection rate • Reduce cost of poor quality • Defect process performance baselines split by: • Application type – avionics, medical, etc. • Embedded vs. non-embedded • Complexity level

  4. Factors • 2011 Metrics • 738 reviews over three years • 19,201 requirements • Customers: 10, projects: 21, jobs: 36 • 2012 Metrics • 337 reviews over one year • 2,940 requirements • Customers: 5, projects: 7, jobs: 11 • Y Variables • Number of defects per review (D/R) - discrete: ratio data type • Defects per requirement (D/Rq) - continuous: ratio data type
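Both Y variables are straightforward to derive from per-review records. A short sketch, with made-up review data standing in for the 2011/2012 metrics:

```python
import statistics

# Hypothetical per-review records: requirement count and defects found.
reviews = [
    {"requirements": 12, "defects": 5},
    {"requirements": 40, "defects": 9},
    {"requirements": 7,  "defects": 4},
]

# D/R: defects per review (discrete, ratio data type).
defects_per_review = [r["defects"] for r in reviews]
# D/Rq: defects per requirement (continuous, ratio data type).
defects_per_rqmt = [r["defects"] / r["requirements"] for r in reviews]

print("Mean D/R :", statistics.fmean(defects_per_review))
print("Mean D/Rq:", round(statistics.fmean(defects_per_rqmt), 3))
```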

  5. Predicted Outcomes • Expected defects in a review for a given number of requirements • Important to understand whether a review exceeds the expected defect count • Valuable to understand whether all defects were detected • Inverse relationship between defects/requirement detected and review size
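The inverse relationship can be checked with a rank correlation between review size and defects found per requirement. The sketch below uses synthetic data shaped to mimic that relationship; the exponent and noise level are assumptions:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic reviews shaped so that larger reviews find fewer defects per
# requirement, mimicking the inverse relationship on the slide.
sizes = rng.integers(2, 120, size=300)                     # requirements per review
d_per_rq = 0.6 * sizes ** -0.4 + rng.normal(0, 0.03, 300)  # defects per requirement

rho, p = spearmanr(sizes, d_per_rq)
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")  # strongly negative -> inverse relation
```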

  6. Modeling Techniques • Non-linear regression vs. linear regression vs. power function • Standard error of the estimate varied considerably • Partitioned into nine intervals • Monte Carlo simulation • Standard error of the estimate did not change by more than 0.000001 over ten iterations • Determined the standard error of the estimate for each partition
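One way to reproduce this comparison: fit a linear model and a power function to the same review data, compare the standard error of the estimate (SEE) for each, then recompute SEE per size partition. The data below is synthetic, and the nine partitions use size quantiles; the presenters' actual interval boundaries are not given in the transcript.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

rng = np.random.default_rng(1)
rq = rng.integers(2, 150, size=738).astype(float)         # requirements per review
defects = 1.5 * rq ** 0.5 + rng.normal(0, 1.5, rq.size)   # synthetic defect counts

def see(y, y_hat, n_params):
    """Standard error of the estimate: sqrt(SSE / (n - p))."""
    return np.sqrt(np.sum((y - y_hat) ** 2) / (y.size - n_params))

# Linear regression.
lin = linregress(rq, defects)
see_lin = see(defects, lin.intercept + lin.slope * rq, 2)

# Power function: D = a * Rq**b.
def power(x, a, b):
    return a * x ** b

(a, b), _ = curve_fit(power, rq, defects, p0=(1.0, 0.5))
see_pow = see(defects, power(rq, a, b), 2)
print(f"SEE linear: {see_lin:.3f}   SEE power: {see_pow:.3f}")

# Partition reviews into nine size intervals (quantile edges assumed here)
# and compute a per-partition SEE, as described on the slide.
edges = np.quantile(rq, np.linspace(0, 1, 10))
for lo, hi in zip(edges[:-1], edges[1:]):
    part = (rq >= lo) & (rq <= hi)
    print(f"Rq in [{lo:4.0f}, {hi:4.0f}]: SEE = "
          f"{see(defects[part], power(rq[part], a, b), 2):.3f}")
```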

  7. Factors and Correlation Tables D = Defects, PT = Preparation Time, R = Review, Rq = Requirement
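The correlation tables themselves are not reproduced in this transcript, but a table over these factors is easy to rebuild from per-review data. A sketch with hypothetical numbers:

```python
import pandas as pd

# Hypothetical per-review data using the slide's abbreviations:
# D = defects, PT = preparation time (hours), Rq = requirements reviewed.
df = pd.DataFrame({
    "Rq": [12, 40, 7, 55, 23, 90, 15],
    "PT": [1.5, 3.0, 0.8, 4.5, 2.0, 6.0, 1.2],
    "D":  [5, 9, 4, 10, 7, 12, 5],
})
df["D/Rq"] = df["D"] / df["Rq"]     # defects per requirement
df["PT/Rq"] = df["PT"] / df["Rq"]   # prep time per requirement

print(df.corr(method="pearson").round(2))
```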

  8. Data Collection: Requirements Count 2011

  9. Data Collection: Partitioning of Reviews 2011

  10. Output from Model (2011) – 4 Requirements vs. 20 Requirements

  11. Pilot Results 2011 • Decided to automate the model • Needed a statistical formula for variance • Needed more guidance on what to do when a review is out of range
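For the automation step, one plausible variance-based rule is an acceptance band of expected defects plus or minus k standard errors. The band width k = 2 and the example numbers below are assumptions, since the transcript does not give the formula that was adopted:

```python
def acceptance_band(expected: float, see: float, k: float = 2.0):
    """Acceptable defect range: expected +/- k standard errors (k = 2 assumed)."""
    return max(0.0, expected - k * see), expected + k * see

# Illustrative numbers for one size partition, not the presenters' values.
expected, partition_see = 6.7, 1.4
lo, hi = acceptance_band(expected, partition_see)

observed = 11
if not (lo <= observed <= hi):
    print(f"{observed} defects is outside [{lo:.1f}, {hi:.1f}]: "
          "investigate the review or the upstream process")
```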

  12. Results, Benefits and Challenges • Points to decreasing variation in defects • Provides an early indicator to fix processes and reduce the defect injection rate • Indicates benefits from small reviews and grouping • Challenges: gaining buy-in, training, and keeping it simple

  13. Hypothesis Test for Defects/Rqmt and Review Size
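The transcript does not state which test was used, but the hypothesis that defects per requirement differs between small and large reviews can be tested nonparametrically, for example with a Mann-Whitney U test on reviews split at the median size:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
sizes = rng.integers(2, 120, size=300)
d_per_rq = 0.6 * sizes ** -0.4 + rng.normal(0, 0.03, 300)

# Split at the median review size.
small = d_per_rq[sizes <= np.median(sizes)]
large = d_per_rq[sizes > np.median(sizes)]

# H0: defects/requirement is the same for small and large reviews.
stat, p = mannwhitneyu(small, large, alternative="greater")
print(f"U = {stat:.0f}, p = {p:.1e}")  # small p -> small reviews find more per rqmt
```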

  14. Potential New Model Element – Years of Experience • Purpose: Investigate the relationship between a reviewer’s years of experience and the quality of the reviews they perform • Expected Result: Engineers with more experience would be better reviewers • Factors: Data studied from 1-Jun-2011 through 25-May-2012 • 337 internal reviews • 11 jobs • 7 projects • 5 different customers

  15. Data Collection: Requirements Count

  16. Data Collection: Defects per Review

  17. Data Collection: Review Prep Time per Review

  18. Data Collection: Review Prep Time per Rqmt per Defect

  19. Potential New Model Element – Years of Experience • Findings: • Analyzed the trend between the independent variable and total years of experience • The review process showed stability, with no significant impact from years of experience
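A finding like this typically comes from regressing the review metric on years of experience and checking the slope's significance. A sketch on synthetic data in which experience has no real effect, so the slope comes out not significant, matching the reported result:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(3)
years = rng.uniform(1, 25, size=337)         # reviewer years of experience
d_per_rq = rng.normal(0.35, 0.08, size=337)  # review metric, independent of years

res = linregress(years, d_per_rq)
print(f"slope = {res.slope:+.4f}, p = {res.pvalue:.2f}")
# A large p-value: no significant trend with experience.
```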

  20. Summary • What worked well: • Utilizing historical data to predict outcomes • Encouraging smaller data item reviews • Improving the defect detection rate of data item reviews • Future plans: Continue to enhance the model • Requirement complexity • Expand to additional lifecycle phases • Expand to additional activities • Safety criticality
