Experiences and Results from Initiating Field Defect Prediction and Product Test Prioritization Efforts at ABB Inc. Paul Luo Li (Carnegie Mellon University) James Herbsleb (Carnegie Mellon University) Mary Shaw (Carnegie Mellon University) Brian Robinson (ABB Research)
Field defects matter: Molson (Montreal, Canada); Heineken Spain (Madrid, Spain); Citizen Thermal Energy (Indianapolis, Indiana, USA); Shajiao Power Station (Guangdong, China)
ABB’s risk mitigation activities include… • Remove potential field defects by focusing systems/integration testing • Higher quality product • Earlier defect detection (when it’s cheaper to fix) • Plan for maintenance by predicting the number of field defects within the first year • Faster response for customers • Less stress (e.g. for developers) • More accurate budgets • Plan for future process improvement efforts by identifying important characteristics (i.e. categories of metrics) • More effective improvement efforts
Practical issues • How to conduct analysis with incomplete information? • How to select an appropriate modeling method for systems/integration testing prioritization and process improvement planning? • How to evaluate the accuracy of predictions across multiple releases in time? We share our experience of how we arrived at the results.
Talk outline • ABB systems overview • Field defect modeling overview • Outputs • Inputs • Insight 1: Use available information • Modeling methods • Insight 2: Select a modeling method based on explicability and quantifiability • Insight 3: Evaluate accuracy through time using forward prediction evaluation • Empirical results • Conclusions
We examined two systems at ABB. Product A: real-time monitoring system; growing code base of ~300 KLOC; 13 major/minor/fix-pack releases; ~127 thousand changes committed by ~40 different people; dates back to 2000 (~5 years). Product B: tool suite for managing real-time modules; stable code base of ~780 KLOC; 15 major/minor/fix-pack releases; ~50 people have worked on the project; dates back to 1996 (~9 years).
We collected data from… • Request tracking system (Serena Tracker) • Version control system (Microsoft Visual Source Safe) • Experts (e.g. team leads and area leads)
Talk outline • ABB systems overview • Field defect modeling overview • Outputs • Inputs • Insight 1: Use available information • Modeling methods • Insight 2: Select a modeling method based on explicability and quantifiability • Insight 3: Evaluate accuracy through time using forward prediction evaluation • Empirical results • Conclusions
Breakdown of field defect modeling: Inputs. Predictors (metrics available before release): Product (Khoshgoftaar and Munson, 1989), Development (Ostrand et al., 2004), Deployment and usage (Mockus et al., 2005), Software and hardware configurations (Li et al., 2006)
Breakdown of field defect modeling: Inputs → Modeling method (metrics-based methods) → Outputs (field defects). Modeling process: take historical Inputs and Outputs to construct the model, which is then used to predict field defects for a new release (a minimal sketch follows).
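To make the process concrete, here is a minimal sketch, not from the paper, of what the modeling step looks like when each historical release is one row of predictor values plus its observed field-defect count. The metric names and numbers are invented for illustration.

```python
# Minimal sketch of the modeling process: one row per historical release,
# predictor columns (Inputs) plus the observed field-defect count (Output).
# Metric names and numbers are invented for illustration, not ABB data.
import pandas as pd
from sklearn.linear_model import LinearRegression

history = pd.DataFrame({
    "kloc":          [310, 322, 335, 348],    # product metric
    "deltas":        [900, 1200, 800, 1500],  # development metric
    "days_deployed": [120, 200, 90, 180],     # deployment/usage proxy
    "field_defects": [14, 22, 9, 25],         # output, observed after release
})

# Construct the model from historical Inputs and Outputs.
model = LinearRegression()
model.fit(history.drop(columns="field_defects"), history["field_defects"])

# Predictors for an upcoming release are available before release,
# so the fitted model can estimate its field defects ahead of time.
upcoming = pd.DataFrame({"kloc": [360], "deltas": [1100], "days_deployed": [150]})
print(model.predict(upcoming))
```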
Talk outline • ABB systems overview • Field defect modeling overview • Outputs • Inputs • Insight 1: Use available information • Modeling methods • Insight 2: Select a modeling method based on explicability and quantifiability • Insight 3: Evaluate accuracy through time using forward prediction evaluation • Empirical results • Conclusions
Outputs • Field defects: valid customer-reported problems attributable to a release in Serena Tracker • Relationships (which predictors are related to field defects?) → used for improvement plans and targeted systems testing • Quantities (what is the number of field defects?) → used for maintenance resource planning • Remember these objectives
Talk outline • ABB systems overview • Field defect modeling overview • Outputs • Inputs • Insight 1: Use available information • Modeling methods • Insight 2: Select a modeling method based on explicability and quantifiability • Insight 3: Evaluate accuracy through time using forward prediction evaluation • Empirical results • Conclusions
Inputs (Predictors) • Product metrics: lines of code, fan-in, Halstead’s difficulty, … • Development metrics: open issues, deltas, authors, … • Deployment and usage (DU) metrics: … we’ll talk more about this • Software and hardware configuration (SH) metrics: sub-system, Windows configuration, …
Insight 1: use available information • ABB did not officially collect DU information (e.g. the number of installations) • Do the analysis without this information? • Instead, we collected data from available sources that provided information on possible deployment and usage: type of release, elapsed time between releases (a sketch of these proxies follows) • Benefits: improved validity, more accurate models, justification for better data collection
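A small sketch of what deriving such proxies can look like. The release names, types, and dates are hypothetical; only the idea of using release type and elapsed time between releases as deployment-and-usage stand-ins comes from the talk.

```python
# Sketch of deriving deployment-and-usage (DU) proxies from release metadata
# when installation counts are not collected. All values are hypothetical.
import pandas as pd

releases = pd.DataFrame({
    "release": ["4.0", "4.1", "4.1.1", "5.0"],
    "type":    ["major", "minor", "fix-pack", "major"],
    "date":    pd.to_datetime(["2003-02-01", "2003-09-15", "2003-11-20", "2004-06-01"]),
})

# Elapsed time between releases: a proxy for how long a release is exposed
# to deployment and usage before the next one ships.
releases["days_since_prev"] = releases["date"].diff().dt.days

# Release type as indicator columns: a coarse signal of how widely a release
# is likely to be deployed and used.
du_proxies = pd.get_dummies(releases["type"], prefix="type")
du_proxies["days_since_prev"] = releases["days_since_prev"]
print(du_proxies)
```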
Talk outline • ABB systems overview • Field defect modeling overview • Outputs • Inputs • Insight 1: Use available information • Modeling methods • Insight 2: Select a modeling method based on explicability and quantifiability • Insight 3: Evaluate accuracy through time using forward prediction evaluation • Empirical results • Conclusions
Methods to establish relationships • Rank Correlation (for improvement planning) • Single predictor • Defect modeling (for improvement planning and for systems/integration test prioritization) • Multiple predictors that complement each other
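For the single-predictor case, a rank correlation can be computed directly; here is a minimal sketch with made-up per-release numbers (the metric "deltas" is used only as an example predictor).

```python
# Sketch of the single-predictor check: Spearman rank correlation between one
# candidate metric and field defects across releases. Numbers are made up.
from scipy.stats import spearmanr

deltas_per_release        = [900, 1200, 800, 1500, 1100, 700]
field_defects_per_release = [14, 22, 9, 25, 18, 8]

rho, p_value = spearmanr(deltas_per_release, field_defects_per_release)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```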
Insight 2: select a modeling method based on explicability and quantifiability • Previous work selects methods based on accuracy, however… • To prioritize product testing: identify faulty configurations (explicability) and quantify the relative fault-proneness of configurations (quantifiability) • For process improvement: identify characteristics related to field defects (explicability) and quantify the relative importance of characteristics (quantifiability) • Not all models have these qualities, e.g. neural networks and models built with Principal Component Analysis
The modeling method we used: linear modeling with model selection • 39% less accurate than neural networks (Khoshgoftaar et al.) • Example only, not a real model: Function(Field defects) = B1 × Input1 + B2 × Input2 + B3 × Input4 • Explicability: distinguish the effects of each predictor • Quantifiability: compare the effects of predictors
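One common way to realize "linear modeling with model selection" is ordinary least squares with backward elimination; the sketch below illustrates that general idea under the assumption of an AIC-based stopping rule, and is not ABB's exact procedure.

```python
# Sketch of linear modeling with model selection: ordinary least squares with
# backward elimination, dropping a predictor whenever doing so lowers the AIC.
# This illustrates the general idea only; it is not ABB's exact procedure.
import pandas as pd
import statsmodels.api as sm

def backward_select(X: pd.DataFrame, y: pd.Series):
    """Fit OLS of y on X and greedily drop predictors while AIC improves."""
    cols = list(X.columns)
    best = sm.OLS(y, sm.add_constant(X[cols])).fit()
    improved = True
    while improved and len(cols) > 1:
        improved = False
        for col in cols:
            trial_cols = [c for c in cols if c != col]
            trial = sm.OLS(y, sm.add_constant(X[trial_cols])).fit()
            if trial.aic < best.aic:   # lower AIC: simpler model, similar fit
                best, cols, improved = trial, trial_cols, True
                break
    return best

# The surviving coefficients give explicability (which predictors matter) and
# quantifiability (how much each one contributes), e.g.:
#   fitted = backward_select(predictors_df, field_defect_counts)
#   print(fitted.params)
```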
Talk outline • ABB systems overview • Field defect modeling overview • Outputs • Inputs • Insight 1: Use available information • Modeling methods • Insight 2: Select a modeling method based on explicability and quantifiability • Insight 3: Evaluate accuracy through time using forward prediction evaluation • Empirical results • Conclusions Skipping ahead… Read the paper
Systems/Integration test prioritization • Log(Field defects) = B1 × Input1 + B2 × Input2 + … • Select a modeling method based on explicability and quantifiability • Use available information: improved validity, more accurate model, justification for better data collection
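A sketch of how a fitted log-linear model can be turned into a test-prioritization list. The coefficients (B values), configuration names, and inputs are invented for illustration; they are not the model reported in the paper.

```python
# Sketch of applying a fitted log-linear model to rank configurations for
# systems/integration testing. Coefficients and inputs are hypothetical.
import numpy as np
import pandas as pd

coeffs = {"intercept": 0.5, "deltas": 0.002, "open_issues": 0.01}  # hypothetical B's

configs = pd.DataFrame({
    "configuration": ["Sub-system A", "Sub-system B", "Sub-system C"],
    "deltas":        [1500, 400, 900],
    "open_issues":   [60, 10, 35],
})

log_pred = (coeffs["intercept"]
            + coeffs["deltas"] * configs["deltas"]
            + coeffs["open_issues"] * configs["open_issues"])
configs["predicted_defects"] = np.exp(log_pred)  # back from log scale to counts

# Spend the most systems/integration test effort on the top rows.
print(configs.sort_values("predicted_defects", ascending=False))
```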
Systems/Integration test prioritization • Experts validated results • Quantitative justification for action • ABB found additional defects
Talk outline • ABB systems overview • Field defect modeling overview • Outputs • Inputs • Insight 1: Use available information • Modeling methods • Insight 2: Select a modeling method based on explicability and quantifiability • Insight 3: Evaluate accuracy through time using forward prediction evaluation • Empirical results • Conclusions
Risk mitigation activities enabled • Focusing systems/integration testing • Found additional defects • Plan for maintenance by predicting the number of field defects within the first year • Do not yet know if results are accurate enough for planning purposes • Plan for future process improvement efforts • May combine with prediction method to enable process adjustments
Experiences recapped • Use available information when direct/preferred information is unavailable • Consider the explicability and quantifiability of a modeling method when the objectives are improvement planning and test prioritization • Use the forward prediction evaluation procedure to assess the accuracy of predictions across multiple releases in time. Details on insights and results are in our paper.
Thanks to: Ann Poorman, Janet Kaufman, Rob Davenport, Pat Weckerly. Experiences and Results from Initiating Field Defect Prediction and Product Test Prioritization Efforts at ABB Inc. Paul Luo Li (Carnegie Mellon University), James Herbsleb (Carnegie Mellon University), Mary Shaw (Carnegie Mellon University), Brian Robinson (ABB Research)
Insight 3: use forward prediction evaluation • Accuracy is the correct criterion when predicting the number of field defects for maintenance resource planning • Current accuracy evaluation methods, cross-validation and random data with-holding, are not well suited to multi-release systems [diagram: Releases 1 through 4 shown in time order] • Not realistic! Only a non-random subset of releases is available when predicting, and predicting for a past release is not the same as predicting for a future release
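A minimal sketch of what a forward prediction evaluation loop can look like: fit only on releases that shipped before the one being predicted, move forward one release at a time, and record the error. The data-frame layout (one row per release) and column names are hypothetical.

```python
# Sketch of forward prediction evaluation: for each release k, fit only on
# releases that shipped before k, predict release k, and record the error.
# Releases stay in time order, so no future data leaks into training.
import pandas as pd
from sklearn.linear_model import LinearRegression

def forward_evaluate(releases: pd.DataFrame, predictors: list,
                     target: str = "field_defects", min_train: int = 3) -> pd.DataFrame:
    releases = releases.sort_values("release_date").reset_index(drop=True)
    results = []
    for k in range(min_train, len(releases)):
        train, test = releases.iloc[:k], releases.iloc[[k]]
        model = LinearRegression().fit(train[predictors], train[target])
        predicted = float(model.predict(test[predictors])[0])
        actual = float(test[target].iloc[0])
        results.append({"release": test["release"].iloc[0],
                        "actual": actual,
                        "predicted": predicted,
                        "abs_error": abs(predicted - actual)})
    return pd.DataFrame(results)

# Contrast with cross-validation or random with-holding, which can end up
# training on releases that come after the one being predicted.
```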