
Comparison of Reading Techniques Checklist vs. scenario-based reading








  1. Comparison of Reading Techniques Checklist vs. scenario-based reading Stefan Biffl Institut für Softwaretechnik Technische Universität Wien

  2. Inputs and Goals for Inspection Method Selection
  Inputs
  • Artifact: requirements and high-level design (UML); 35 pages, 9000 words, 6 diagrams; natural language and UML; Ticketing Information System
  • Inspectors: limited inspection experience, no reading technique experience, some development experience
  Goals
  • Compare reading techniques: 4 individual techniques, 2 team techniques
  • Effectiveness & efficiency; effort; individual and team performance
  • Basis for determination of product quality
  • Experience for future inspection planning
  • Defect classes: severity levels, document part
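The effectiveness and efficiency goals above correspond to the standard inspection metrics (fraction of known defects found, and defects found per hour). A minimal sketch with illustrative numbers, not data from the experiment:

```python
# Standard inspection metrics (all values below are illustrative,
# not taken from the experiment described in these slides).

def effectiveness(found: int, total_defects: int) -> float:
    """Fraction of the known defects in the artifact that were detected."""
    return found / total_defects

def efficiency(found: int, effort_hours: float) -> float:
    """Defects detected per hour of inspection effort."""
    return found / effort_hours

# Example: an inspector finds 12 of 60 known defects in 3 hours.
print(effectiveness(12, 60))   # 0.2
print(efficiency(12, 3.0))     # 4.0
```

The same two metrics can be computed per defect class (e.g. major vs. minor) to support the severity-level comparison the slides mention.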

  3. Inspection Method
  Reading techniques
  • Checklist
  • 3 scenario-based reading techniques with perspective-based and traceability-based parts
  Auxiliary material
  • Defect classification
  • Forms for defect collection: defect classification plus time data
  • Questionnaires for inspector data and feedback on the inspection
  • Tools for data collection and analysis
  Inspector qualification
  • Inspection experience
  • Development experience
  • Domain experience
  • Development and problem-solving ability (graded by supervisors)

  4. Process of Defect Detection, Estimation, and Matching • 169 inspectors in 31 teams: 16 CBR, 15 Scenario-based reading • Two inspection cycles, defect classes, defect detection time logging
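Defect matching, as named in the slide above, means mapping each inspector's reports onto a reference defect list and merging duplicates across a team. A small sketch of that merging step (defect IDs and inspector names are illustrative):

```python
# Sketch of merging matched defect reports across a team.
# Each inspector's report has already been matched against the
# reference defect list, yielding a set of reference defect IDs.
# All IDs and names below are illustrative, not experiment data.

def team_defect_union(reports: dict[str, set[int]]) -> set[int]:
    """Distinct reference defects found by at least one team member."""
    matched: set[int] = set()
    for defect_ids in reports.values():
        matched |= defect_ids   # duplicates across inspectors collapse
    return matched

reports = {"inspector_a": {1, 2, 5}, "inspector_b": {2, 5, 7, 9}}
print(sorted(team_defect_union(reports)))   # [1, 2, 5, 7, 9]
```

Logging detection times per defect, as the slide describes, would simply attach a timestamp to each ID before merging.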

  5. Reading Techniques used in the Experiment • Checklist-based reading (CBR): 7 quality sections, very general notation • SBR techniques: combination of perspective-based (PBR) and traceability-based (TBR) reading • Perspectives: User (SBR-A), Designer (SBR-D), and Tester (SBR-T) • Vertical and horizontal consistency checks • Partitioning of the artifact by scenarios (not physical)

  6. Structure of the Scenario-based Reading Techniques Used
  Three roles: User, Designer, Tester. For each role:
  Perspective-based reading
  • Several steps of model construction
  • Questions to be answered after each step
  Traceability-based reading
  • Two checks for each role
  • Instructions for marking and checking
  • Questions to be answered after checking

  7. Individual Defect Detection Rate by Reading Technique and Role (all vs. major defects). Source: Biffl S., Grechenig T., and Halling M., "Using Reading Techniques to Support Inspection Quality", 2000.

  8. Detection Rate for the Object Model by Reading Technique • The different focus of the perspectives can be seen. • The Designer and Tester perspectives show performance similar to CBR. • Inspectors with higher qualification find more defects.

  9. Group Effectiveness for all Defects, One and Both Cycles (panels: One Cycle, Both Cycles)

  10. Group Effectiveness for Major Defects, One and Both Cycles (panels: One Cycle, Both Cycles)

  11. Synthetic Group Effectiveness for One Inspection Cycle (panels: all defects, major defects)

  12. Inspection Outcome
  Reading techniques
  • CBR finds more defects overall.
  • SBR finds more major defects.
  • The SBR techniques focus on different sets of defects.
  • The team meeting yields no net effectiveness gain.
  Inspector capability
  • Statistical model for the effectiveness of a nominal team
  • Teams with a reading-technique mix perform better than the best team that uses only one reading technique
  • Optimal net gain for a given duration
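The nominal-team idea behind these results can be sketched directly: since the meeting adds no net gain, a nominal team's defect set is just the union of its members' individual defect sets, and a technique mix helps because different techniques overlap less. The defect sets and technique assignments below are illustrative, not the slides' statistical model:

```python
# Sketch of nominal-team effectiveness: the team finds the union of
# what its members find individually (no meeting gain or loss).
# Defect IDs and technique assignments are illustrative only.

def nominal_effectiveness(members: list[set[int]], total_defects: int) -> float:
    """Effectiveness of a nominal team formed from individual defect sets."""
    found = set().union(*members)
    return len(found) / total_defects

TOTAL = 10
cbr   = [{1, 2, 3}, {1, 2, 4}]   # checklist readers: heavy overlap
sbr_a = [{5, 6}]                 # user perspective: different focus
sbr_t = [{7, 8}]                 # tester perspective: different focus

same_rt = nominal_effectiveness(cbr, TOTAL)                            # 0.4
mixed   = nominal_effectiveness([cbr[0], sbr_a[0], sbr_t[0]], TOTAL)   # 0.7
print(same_rt, mixed)
```

The mixed team outperforms the same-technique team here purely because its members' defect sets overlap less, which is the mechanism the slide attributes to reading-technique mixes.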

  13. Synthetic Nominal Teams: effectiveness in finding major defects for CBR teams and optimal RT mixes, for teams of 1 to 10 inspectors

  14. Nominal Teams Optimized for Product Quality vs. Net Gain (budgets from 20 to 200, with resulting product quality)

  15. Further Work
  • Reading techniques: focus on defect detection tasks.
  • Investigate the effectiveness and duration of these tasks: which tasks help find which defects (via time stamps).
  • Influence of inspector task expertise (from a pretest) on individual effectiveness and efficiency.
  • Tool support for distributed inspection teams: data collection, defect analysis, and task execution.
