
Automated Adaptive Ranking and Filtering of Static Analysis Alerts

Presentation Transcript


  1. Automated Adaptive Ranking and Filtering of Static Analysis Alerts. Sarah Heckman and Laurie Williams. ISSRE 2006, November 10, 2006.

  2. Contents
  • Motivation
  • Research Objective
  • AWARE Ranking and Filtering
  • Alert Ranking Factors
  • Experiment
  • Progress & Future Work
  • Conclusions

  3. Motivation
  • Programmers tend to make the same mistakes
  • Static analysis tools are useful for finding these recurring mistakes
  • However, static analysis tools have a high rate of false positives

  4. Research Objective To improve the correctness and security of a system by continuously, automatically, and efficiently providing adaptively ranked and filtered static analysis alerts to software engineers during development.

  5. AWARE
  • Automated Warning Application for Reliability Engineering
  • Ranks static analysis alerts by the probability an alert is a true fault
  • Ranking is adjusted by:
    • Filtering alerts
    • Fixing alerts
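To make the feedback loop on this slide concrete, here is a minimal Java sketch of ranking that adapts as developers filter or fix alerts. The class and method names (AdaptiveRanker, onFixed, onFiltered) and the frequency-based update rule are illustrative assumptions, not the published AWARE algorithm: each fix or filter updates the observed accuracy of that alert's type, which moves every remaining alert of that type up or down the ranking.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of feedback-driven alert ranking (hypothetical names,
// not the published AWARE algorithm). Each alert type carries an observed
// accuracy: the fraction of its alerts developers confirmed as true faults.
class AdaptiveRanker {
    // Per alert type: {confirmed true positives, total judgments}, seeded
    // with a neutral prior of 1/2 so unseen types rank in the middle.
    private final Map<String, double[]> typeStats = new HashMap<>();

    // Estimated probability that an alert of this type is a true fault.
    double rank(String alertType) {
        double[] s = typeStats.getOrDefault(alertType, new double[] {1, 2});
        return s[0] / s[1];
    }

    // Developer fixed the alert: count it as a true positive for its type.
    void onFixed(String alertType) { update(alertType, 1); }

    // Developer filtered the alert: count it as a false positive.
    void onFiltered(String alertType) { update(alertType, 0); }

    private void update(String alertType, int truePositive) {
        double[] s = typeStats.computeIfAbsent(alertType, k -> new double[] {1, 2});
        s[0] += truePositive;
        s[1] += 1;
    }
}
```

Filtering one null-dereference alert, for example, lowers the score of every other alert of that type, which is how the ranking adapts continuously as the developer works.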

  6. Alert Ranking Factors
  • Type Accuracy: categorization of alerts based on observed accuracy of alert type
  • Code Locality: alerts reported by static analysis tools cluster by locality
  • Generated Test Failure: failing test cases derived from static analysis alerts provide a concrete fault condition
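One plausible way to fold the three factors into a single ranking score is a weighted combination. The linear form and the weights in this sketch are assumptions for illustration only; the slide names the factors but does not give a formula.

```java
// Hedged sketch: one way to combine the three ranking factors into a score.
// The linear form and the weights are assumptions, not AWARE's formula.
class AlertScore {
    static double score(double typeAccuracy,  // observed TP rate for the alert type, in [0,1]
                        double codeLocality,  // e.g., fraction of nearby alerts confirmed as faults
                        boolean testFailed) { // a test generated from the alert actually fails
        final double wType = 0.4, wLocal = 0.3, wTest = 0.3; // illustrative weights
        return wType * typeAccuracy + wLocal * codeLocality + (testFailed ? wTest : 0.0);
    }
}
```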

  7. Experiment (1)
  • Questions to investigate:
    • Does AWARE’s initial ranking perform better than a random ordering of alerts for various initial TA values?
      • Metric: number of initial false positives
      • Metric: average number of false positives between true positives
    • How many false positives must be filtered before all of the true positives reach the top of the ranking?
      • Metric: number of alerts filtered before all true positives reach the top of the list
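The two metrics for the first question can be computed directly from a ranked list of alerts labeled TP or FP. A sketch follows; the method names and the List<Boolean> encoding (true = true positive) are hypothetical.

```java
import java.util.List;

// Sketch of the two evaluation metrics for the first experiment question.
class RankingMetrics {
    // False positives appearing before the first true positive in the ranking.
    static int initialFalsePositives(List<Boolean> ranked) {
        int count = 0;
        for (boolean isTp : ranked) {
            if (isTp) break;
            count++;
        }
        return count;
    }

    // Average number of false positives between consecutive true positives.
    static double avgFalsePositivesBetweenTps(List<Boolean> ranked) {
        int gaps = 0, fpsInGaps = 0, run = 0;
        boolean seenTp = false;
        for (boolean isTp : ranked) {
            if (isTp) {
                if (seenTp) { gaps++; fpsInGaps += run; }
                seenTp = true;
                run = 0;
            } else if (seenTp) {
                run++;
            }
        }
        return gaps == 0 ? 0.0 : (double) fpsInGaps / gaps;
    }
}
```

For the ranking [FP, FP, TP, FP, FP, FP, TP], these methods report 2 initial false positives and an average of 3 false positives between true positives.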

  8. Experiment (2)
  • RealEstate example
    • 775 uncommented, non-blank LOC
    • Analyzed without annotations
  • Check ‘n’ Crash results
    • 28 alerts, 27 analyzed
    • 2 alerts were true positives

  9. Experimental Results and Limitations
  • AWARE ranks true positive (TP) alerts at the top of the list and has a lower average occurrence of false positives (FPs) between TPs.
  • Between 11% and 25% of alerts required filtering before all TPs reached the top of the ranking.
  • Limitations:
    • Small sample size
    • The initial ranking values for TA were unrealistic

  10. Progress & Future Work
  • Current work:
    • Development of the AWARE tool for the Eclipse IDE and Java
    • Use of AWARE in a graduate-level class
  • Future work:
    • Industrial case study
    • Extend AWARE to gather alerts from C/C++ static analyzers
  • AWARE research site: http://agile.csc.ncsu.edu/aware

  11. Questions? Sarah Heckman: sarah_heckman@ncsu.edu
