Metrics for Measuring the Effectiveness of Software-Testing Tools
Presentation Transcript

  1. Metrics for Measuring the Effectiveness of Software-Testing Tools Comp 587 Parker Li Bobby Kolski

  2. Introduction • Automated testing tools help software engineers gauge the quality of software by automating the mechanical aspects of software testing. • Automated testing tools vary in their: • Underlying approach • Quality • Ease of use • How does a project manager choose the most suitable testing tool?

  3. Software Quality Metrics • The history of software metrics began with counting the number of lines of code (LOC). • It was assumed that more lines of code implied more complex programs, which in turn were more likely to have errors. • However, software metrics have evolved well beyond the simple measures introduced in the 1960s.

  4. Procedural (Traditional) Software Metrics • Since the first introduction of LOC, metrics for traditional or procedural source code have grown in number and complexity. • Cyclomatic complexity: V(G) = e − n + 2p, where e is the number of edges, n is the number of nodes, and p is the number of unconnected parts of the control-flow graph G. • Offers an estimate of the reliability, testability, and maintainability of a program, based on measuring the number of linearly independent paths through the program.
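The formula on this slide can be sketched directly; the graph sizes in the example (a function with a single if/else) are illustrative, not taken from the presentation:

```python
def cyclomatic_complexity(edges, nodes, parts=1):
    """McCabe's cyclomatic complexity V(G) = e - n + 2p,
    where e = edges, n = nodes, and p = number of connected
    components of the control-flow graph G."""
    return edges - nodes + 2 * parts

# A function with one if/else: entry, condition, then-branch,
# else-branch, join, exit = 6 nodes joined by 6 edges.
print(cyclomatic_complexity(edges=6, nodes=6))  # -> 2
```

A complexity of 2 matches the intuition that an if/else yields two linearly independent paths.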

  5. Procedural (Traditional) Software Metrics (Cont.) • Function Point is a measure of the size of computer applications and the projects that build them.
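As a rough sketch of how a function-point size is computed, the snippet below sums counts of the five IFPUG function types weighted by their average-complexity weights; the example counts are hypothetical:

```python
# Average-complexity IFPUG weights for the five function types:
# external inputs (EI), external outputs (EO), external
# inquiries (EQ), internal logical files (ILF), and external
# interface files (EIF).
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_function_points(counts):
    """Sum each function-type count times its complexity weight."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical application: 10 inputs, 5 outputs, 4 inquiries,
# 3 internal logical files, 2 external interface files.
print(unadjusted_function_points(
    {"EI": 10, "EO": 5, "EQ": 4, "ILF": 3, "EIF": 2}))  # -> 125
```

A full function-point count then scales this unadjusted total by a value-adjustment factor derived from general system characteristics.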

  6. Object-Oriented Software Metrics • Weighted methods per class (WMC) – the sum of the individual complexities of the methods within a class. • Depth of inheritance tree (DIT) – the maximum length from a node to the root of the class tree. The deeper a class sits in the inheritance hierarchy, the more complex it becomes. • Number of children (NOC) – the number of immediate subclasses of a class. A large NOC implies a great amount of inheritance and reuse. • Coupling between object classes (CBO) – when a class inherits methods or instance variables, the classes are coupled; a greater number implies greater complexity.

  7. Object-Oriented Software Metrics • Number of key classes (NKC) • Number of subsystems (NSUB) • Class size (CS) • Number of operations added by a subclass (NOA) • Average method size • Average number of instance variables • Class hierarchy nesting level

  8. Which Software-Testing Tool to Use? • Does it offer critical support for planning tests and monitoring test progress? • Does it have an analysis feature that assesses the characteristics of software quality? • Does it offer guidance for dynamic testing? • How mature is the tool? Is it easy to implement for your system?

  9. Metrics for Tools that Support Testing Procedural Software • Human Interface Design (HID) – tools with well-designed human interfaces enable easy, efficient, and accurate setting of the tool configuration. • Tool Management (TM) – the tool should ensure proper management of information. • Ease of Use (EU) – the tool should be easy to use for both new and experienced users. • User Control (UC) – the tool should allow extensive and detailed control of testing coverage rather than one bulky, monolithic run.

  10. 3 Tools for Software Testing • LDRA Testbed • Parasoft Testbed • C++ Test • CodeWizard • Inspire++ • Telelogic Testbed • Logiscope

  11. LDRA Testbed • Coverage Report (Dynamic Analysis) • Branch Coverage • Statement Coverage • Metrics • Cyclomatic Complexity (Static Analysis) • Quality Report

  12. LDRA Testbed – Branch/Decision Coverage, Part 1 (part of dynamic testing)

  13. LDRA Testbed – Code/Branch/Decision Coverage, Part 2 (part of dynamic testing)

  14. Parasoft Testbed • C++ Test • White-box testing • Black-box testing • Regression testing • CodeWizard • 170 industry-accepted C/C++ coding standards • Inspire++ • C/C++ runtime error checking

  15. Parasoft Testbed (metrics)

  16. Parasoft Testbed – C++ Test Bug Detective (static analysis)

  17. Telelogic Testbed • Logiscope • TestChecker – mentioned in the article • RuleChecker – mentioned in the article • Audit * • Reviewer * • * included in the current version of the tool and shown for completeness, but not covered further in the slides.

  18. Telelogic Logiscope – Telelogic was bought by IBM; the current version is IBM Rational Logiscope.

  19. Exercising the Software-Testing Tools (Comparison of the tools themselves) • Human Interface Design (HID) Comparison • Keyboard-to-mouse switches (KMS) • Input fields per functions (IFPF) • Average length of input fields (ALIF) • Button recognition factor (BR)

  20. Exercising the Software-Testing Tools (Comparison of the tools themselves) • Test case generation (TCG) • Automated test case generation (ATG) • Test case reuse functionality (TRF) • Special note: LDRA does not automatically generate test cases, but it does provide user-friendly features such as pull-down menus for created test cases; it was therefore assigned an eight for its level of ATG.

  21. Analysis of Results (The output of the tools) • LDRA – for object-oriented tests there were issues with the tool, so no results were obtained. • High levels of knots indicate that the code is disjointed. • LDRA supports the Motor Industry Software Reliability Association (MISRA) C Standard and the Federal Aviation Administration's DO-178B standard.

  22. Importance of this Article • Software metrics have come a long way since the LOC era. • Software metrics can be used not only to measure the quality of software, but also the effectiveness of a software-testing tool. • Choosing a software-testing tool is not simple.

  23. Questions?