Introducing Bayesian Nets in AgenaRisk An example based on Software Defect Prediction

Presentation Transcript


  1. Introducing Bayesian Nets in AgenaRisk An example based on Software Defect Prediction

  2. Typical Applications • Predicting reliability of critical systems • Software defect prediction • Air traffic accident risk • Warranty return rates of electronic parts • Operational risk in financial institutions • Hazards in petrochemical industry

  8. A Bayesian Net for predicting air traffic incidents

  9. A Detailed Example • What follows is a demo of a simplified version of a Bayesian net model to provide more accurate predictions of software defects • Many organisations worldwide have now used models based around this one

  10. Predicting software defects [diagram nodes: Operational defects] The number of operational defects (i.e. those found by customers) is what we are really interested in predicting.

  11. Predicting software defects [diagram nodes: Residual defects, Operational defects] We know this is clearly dependent on the number of residual defects.

  12. Predicting software defects [diagram nodes: Residual defects, Operational usage, Operational defects] But it is also critically dependent on the amount of operational usage. If you do not use the system you will find no defects irrespective of the number there.

  13. Predicting software defects [diagram nodes: Defects introduced, Residual defects, Operational usage, Operational defects] The number of residual defects is determined by the number you introduce during development…

  14. Predicting software defects [diagram nodes: Defects introduced, Defects found and fixed, Residual defects, Operational usage, Operational defects] …minus the number you successfully find and fix.

  15. Predicting software defects [diagram nodes: as in slide 14] Obviously defects found and fixed is dependent on the number introduced.

  16. Predicting software defects [diagram nodes: Defects introduced, Problem complexity, Defects found and fixed, Residual defects, Operational usage, Operational defects] The number introduced is influenced by problem complexity…

  17. Predicting software defects [diagram nodes: Design process quality, Defects introduced, Problem complexity, Defects found and fixed, Residual defects, Operational usage, Operational defects] …and design process quality.

  18. Predicting software defects [diagram nodes: Design process quality, Defects introduced, Problem complexity, Testing effort, Defects found and fixed, Residual defects, Operational usage, Operational defects] Finally, how many defects you find is influenced not just by the number there to find but also by the amount of testing effort.
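
Taken together, slides 10-18 define a directed acyclic graph over eight variables. As a rough sketch of that structure in code (an illustration only, using the open-source pgmpy Python library rather than AgenaRisk itself, and assuming a recent pgmpy version where the model class is called BayesianNetwork):

    from pgmpy.models import BayesianNetwork

    # Arcs run from cause to effect, mirroring the diagram built up in slides 10-18.
    defect_net = BayesianNetwork([
        ("problem_complexity", "defects_introduced"),
        ("design_process_quality", "defects_introduced"),
        ("defects_introduced", "defects_found_fixed"),
        ("testing_effort", "defects_found_fixed"),
        ("defects_introduced", "residual_defects"),
        ("defects_found_fixed", "residual_defects"),
        ("residual_defects", "operational_defects"),
        ("operational_usage", "operational_defects"),
    ])

    print(sorted(defect_net.nodes()))  # the eight variables
    print(list(defect_net.edges()))    # the cause-effect arcs

At this stage the graph only says which variables directly influence which; the node probability tables that turn it into a full Bayesian net are what the next slides show being queried.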

  19. A Model in action Here is that very simple model with the probability distributions shown

  20. A Model in action We are looking at an individual software component in a system

  21. A Model in action The prior probability distributions represent our uncertainty before we enter any specific information about this component.

  22. A Model in action So the component is just as likely to have very high complexity as very low

  23. A Model in action and the number of defects found and fixed in testing lies in a wide range, with a median value of about 20.

  24. A Model in action As we enter observations about the component, the probability distributions update.

  25. Here we have entered the observation that this component had 0 defects found and fixed in testing

  26. Note how the other distributions changed.

  27. The model is doing forward inference to predict defects in operation…

  28. …and backwards inference to make deductions about design process quality.

  29. You might expect zero defects found to point to a high-quality design process, but actually the most likely explanation is very low testing quality…

  30. …and lower than average complexity.

  31. But if we find out that the complexity is actually high…

  32. …then the expected number of operational defects increases…

  33. …and we become even more convinced of the inadequate testing.

  34. So far we have made no observation about operational usage.

  35. If, in fact, the operational usage is high…

  36. Then we have an example of a component with no defects in test…

  37. …but probably many defects in operation.

  38. But suppose we find out that the test quality was very high.

  39. Then we completely revise our beliefs.

  40. We are now pretty convinced that the module will be fault free in operation

  41. …and the ‘explanation’ is that the design process is likely to be of very high quality.

  42. A Model in action We reset the model and this time use it to argue backwards.

  43. A Model in action Suppose we know that this is a critical component that has a requirement for 0 defects in operation…

  44. The model looks for explanations for such a state of affairs.

  45. The most obvious way to achieve such a result is to not use the component much.

  46. But if we know it will be subject to high usage…

  47. Then the model adjusts the beliefs about the other uncertain variables.

  48. A combination of lower than average complexity…

  49. …higher than average design quality…

  50. …and much higher than average testing quality…
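
To make the whole walk-through concrete, here is a minimal end-to-end sketch using the open-source pgmpy Python library. This is an illustration only, not AgenaRisk and not the presentation's actual model: every variable is reduced to two states (0 = low/few, 1 = high/many) and all the probability numbers are invented, so only the structure and the pattern of evidence follow the slides.

    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination
    from pgmpy.models import BayesianNetwork

    # Same structure as the diagram in slides 10-18. Every variable is binary here
    # (state 0 = low/few, state 1 = high/many) purely to keep the tables small;
    # all numbers below are invented for illustration.
    model = BayesianNetwork([
        ("problem_complexity", "defects_introduced"),
        ("design_process_quality", "defects_introduced"),
        ("defects_introduced", "defects_found_fixed"),
        ("testing_effort", "defects_found_fixed"),
        ("defects_introduced", "residual_defects"),
        ("defects_found_fixed", "residual_defects"),
        ("residual_defects", "operational_defects"),
        ("operational_usage", "operational_defects"),
    ])

    # Flat priors on the root causes (the "just as likely very high as very low" idea).
    for root in ("problem_complexity", "design_process_quality",
                 "testing_effort", "operational_usage"):
        model.add_cpds(TabularCPD(root, 2, [[0.5], [0.5]]))

    def table(child, parents, rows):
        # Columns run over all parent-state combinations (last parent varies fastest).
        return TabularCPD(child, 2, rows, evidence=parents, evidence_card=[2, 2])

    model.add_cpds(
        table("defects_introduced", ["design_process_quality", "problem_complexity"],
              [[0.70, 0.20, 0.95, 0.60],    # few introduced
               [0.30, 0.80, 0.05, 0.40]]),  # many introduced
        table("defects_found_fixed", ["defects_introduced", "testing_effort"],
              [[0.98, 0.90, 0.80, 0.10],    # few found and fixed
               [0.02, 0.10, 0.20, 0.90]]),  # many found and fixed
        table("residual_defects", ["defects_introduced", "defects_found_fixed"],
              [[0.97, 0.99, 0.30, 0.85],    # few residual
               [0.03, 0.01, 0.70, 0.15]]),  # many residual
        table("operational_defects", ["residual_defects", "operational_usage"],
              [[0.99, 0.95, 0.85, 0.15],    # few operational defects
               [0.01, 0.05, 0.15, 0.85]]),  # many operational defects
    )
    assert model.check_model()
    infer = VariableElimination(model)

    # Slides 19-24: prior distributions, before any observation is entered.
    print(infer.query(["operational_defects"]))

    # Slides 25-30: the 'few/none found' state stands in for the slides'
    # observation of exactly 0 defects found and fixed in testing. Forward
    # inference predicts operational defects; backward inference revises beliefs
    # about testing effort, design process quality and complexity.
    ev = {"defects_found_fixed": 0}
    for v in ("operational_defects", "testing_effort",
              "design_process_quality", "problem_complexity"):
        print(infer.query([v], evidence=ev))

    # Slides 31-37: the component turns out to be complex and heavily used, so the
    # expected operational defects rise and the testing looks even weaker.
    ev.update({"problem_complexity": 1, "operational_usage": 1})
    print(infer.query(["operational_defects"], evidence=ev))
    print(infer.query(["testing_effort"], evidence=ev))

    # Slides 38-41: but testing effort was actually very high, so beliefs are
    # revised again: few defects expected in operation, with high design process
    # quality as the 'explanation' of finding nothing in test.
    ev["testing_effort"] = 1
    print(infer.query(["operational_defects"], evidence=ev))
    print(infer.query(["design_process_quality"], evidence=ev))

    # Slides 42-50: reason backwards from the requirement of zero operational
    # defects under high usage, and ask what that implies about complexity,
    # design process quality and testing effort.
    goal = {"operational_defects": 0, "operational_usage": 1}
    for v in ("problem_complexity", "design_process_quality", "testing_effort"):
        print(infer.query([v], evidence=goal))

Even with these made-up numbers, the queries mirror the demo's pattern of forward prediction and backward explanation; the actual AgenaRisk model works with numeric defect counts and distributions rather than two-state variables, so its answers will of course differ.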
