
Product Metrics SEII-Lecture 23



Presentation Transcript


  1. Product Metrics SEII-Lecture 23 Dr. Muzafar Khan Assistant Professor Department of Computer Science CIIT, Islamabad.

  2. Recap • Measurement and quality assessment • Framework for product metrics • Measure, measurement, and metrics • Formulation, collection, analysis, interpretation, feedback • Principles for metrics characterization and validation • Metrics for requirements model • Function-based metrics • Metrics for specification quality • Metrics for design model • Architectural design metrics • Metrics for object-oriented design

  3. Class-Oriented Metrics [1/3] • Weighted methods per class (WMC) • n methods of complexity c1, c2, …, cn for a class C • WMC = ∑ci for i = 1 to n • If complexity increases, more effort is required • Limited reuse • Counting methods seems straightforward, but a consistent counting approach is required • Depth of the inheritance tree (DIT) • Maximum length from a node to the root of the tree • If DIT grows, lower-level classes inherit many methods • Many methods may be reused, but design complexity also increases
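As a hypothetical illustration of WMC and DIT, the sketch below assumes each method has unit complexity (ci = 1), so WMC reduces to a count of methods defined in the class itself; the Account/SavingsAccount classes are made-up examples.

```python
# Sketch: WMC and DIT for Python classes, assuming ci = 1 for every method.
import inspect

class Account:
    def deposit(self, amount): ...
    def withdraw(self, amount): ...

class SavingsAccount(Account):
    def add_interest(self, rate): ...

def wmc(cls):
    # WMC = sum of ci over the methods defined directly in the class
    return sum(1 for _, m in vars(cls).items() if inspect.isfunction(m))

def dit(cls):
    # DIT = longest path from the class to the root of the inheritance tree
    return max((dit(base) + 1 for base in cls.__bases__ if base is not object),
               default=0)

print(wmc(Account))         # 2
print(dit(SavingsAccount))  # 1
```

With real complexity measures per method, `wmc` would sum those values instead of counting methods.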

  4. Class-Oriented Metrics [2/3] • Number of children (NOC) • Number of immediate subordinate classes • As NOC grows • Reuse increases • Abstraction of the parent class may be diluted • Testing effort increases • Coupling between object classes (CBO) • If coupling increases • Reusability decreases • Testing and modification become complicated • CBO should be kept as low as is reasonable

  5. Class-Oriented Metrics [3/3] • Response for a class (RFC) • Number of methods potentially executed in response to a message received by an object of the class • As RFC increases, design complexity and testing effort increase • Lack of cohesion in methods (LCOM) • Number of methods that access one or more of the same attributes • If no methods access the same attributes, LCOM is zero • If LCOM is high, the complexity of the design increases
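LCOM as defined above (the number of methods that share at least one attribute with some other method) can be sketched as follows; the method-to-attribute mapping is a made-up example, and in practice it would be extracted from the design or the code.

```python
# Sketch: LCOM = number of methods that access one or more of the same
# attributes as some other method (per the definition on this slide).
def lcom(method_attrs):
    """method_attrs maps each method name to the set of attributes it uses."""
    methods = list(method_attrs)
    return sum(
        1
        for m in methods
        if any(method_attrs[m] & method_attrs[n] for n in methods if n != m)
    )

# Toy class: deposit/withdraw share 'balance'; audit touches only 'log'.
attrs = {
    "deposit":  {"balance"},
    "withdraw": {"balance"},
    "audit":    {"log"},
}
print(lcom(attrs))  # 2 -- deposit and withdraw overlap; audit does not
```

Note that several LCOM variants exist in the literature; this follows the simple counting definition given above.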

  6. Component-Level Design Metrics [1/3] • Metrics for conventional components focus on internal characteristics of a component • Cohesion metrics • Data slice • A backward walk through a module that looks for data values • Data tokens • Variables defined for the module • Glue tokens • Data tokens that lie on a data slice • Superglue tokens • Data tokens common to every data slice • Stickiness • The relative stickiness of a glue token is directly proportional to the number of data slices that it binds

  7. Component-Level Design Metrics [2/3] • Coupling metrics • Data and control flow coupling • di = number of input data parameters • ci = number of input control parameters • do = number of output data parameters • co = number of output control parameters • Global coupling • gd = number of global variables used as data • gc = number of global variables used as control • Environmental coupling • w = number of modules called • r = number of modules calling the module • mc = k/M, where M = di + (a × ci) + do + (b × co) + gd + (c × gc) + w + r • k is a proportionality constant; a, b, and c are weights
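A worked sketch of the module coupling metric above; the values k = 1 and a = b = c = 2 are assumptions for illustration (the lecture leaves them as constants), and the module profile is invented.

```python
# Sketch: module coupling metric mc = k/M with assumed k = 1, a = b = c = 2.
def module_coupling(di, ci, do, co, gd, gc, w, r, k=1, a=2, b=2, c=2):
    M = di + (a * ci) + do + (b * co) + gd + (c * gc) + w + r
    return k / M

# A module with 3 data inputs, 1 control input, 2 data outputs,
# no control outputs or globals, calling 1 module, called by 2:
mc = module_coupling(di=3, ci=1, do=2, co=0, gd=0, gc=0, w=1, r=2)
print(round(mc, 3))  # M = 3 + 2 + 2 + 0 + 0 + 0 + 1 + 2 = 10, so mc = 0.1
```

Higher coupling makes M larger and mc smaller, so a low mc value signals a heavily coupled module.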

  8. Component-Level Design Metrics [3/3] • Complexity metrics • Cyclomatic complexity • Number of independent logical paths • Variations of cyclomatic complexity
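Cyclomatic complexity can be approximated as 1 plus the number of decision points. The sketch below counts if/for/while statements and boolean operator groups in a Python function's AST; this is a simplification, since a compound boolean expression with n operands actually contributes n − 1 predicates.

```python
# Sketch: cyclomatic complexity approximated as 1 + number of decision points,
# counted over a function's abstract syntax tree.
import ast
import textwrap

DECISIONS = (ast.If, ast.For, ast.While, ast.BoolOp)

def cyclomatic(source):
    tree = ast.parse(textwrap.dedent(source))
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

src = """
def classify(x):
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic(src))  # 3: two if statements + 1
```

The result equals the number of linearly independent paths through the function, which is also the minimum number of test cases needed for basis path testing.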

  9. Operation-Oriented Metrics • Average operation size (OSavg) • Number of lines of code or number of messages sent by the operation • If the number of messages sent increases, responsibilities are most probably not well allocated within a class • Operation complexity (OC) • Computed using complexity metrics for conventional software • OC should be kept as low as possible • Average number of parameters per operation (NPavg) • The larger the number of parameters, the more complex the collaboration • NPavg should be kept as low as possible
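As a small sketch of NPavg, the code below counts parameters per method with `inspect.signature` (excluding `self`) and averages them; the ShoppingCart class is a made-up example.

```python
# Sketch: NPavg = average number of parameters per operation, excluding self.
import inspect

class ShoppingCart:
    def add_item(self, item, quantity): ...
    def remove_item(self, item): ...
    def total(self): ...

def np_avg(cls):
    methods = [m for _, m in vars(cls).items() if inspect.isfunction(m)]
    counts = [len(inspect.signature(m).parameters) - 1 for m in methods]
    return sum(counts) / len(counts)

print(np_avg(ShoppingCart))  # (2 + 1 + 0) / 3 = 1.0
```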

  10. Design Metrics for WebApps – Interface Metrics [1/2] • Layout complexity • Number of distinct regions defined for an interface • Layout region complexity • Average number of distinct links per region • Recognition complexity • Average number of distinct items the user must look at before making a navigation or data input decision • Recognition time • Average time (in seconds) it takes a user to select the appropriate action for a given task • Typing effort • Average number of keystrokes required for a specific function

  11. Interface Metrics [2/2] • Mouse pick effort • Average number of mouse picks per function • Selection complexity • Average number of links that can be selected per page • Content acquisition time • Average number of words of text per web page • Memory load • Average number of distinct data items that the user must remember to achieve a specific objective

  12. Aesthetic Design Metrics [1/2] • Word count • Total number of words that appear on a page • Body text percentage • Percentage of words that are body versus display text (i.e. headers) • Emphasized body text % • Portion of body text that is emphasized (e.g., bold, capitalized) • Text cluster count • Text areas highlighted with color, bordered regions, rules, or lists • Link count • Total links on a page

  13. Aesthetic Design Metrics [2/2] • Page size • Total bytes for the page as well as elements, graphics, and style sheets • Graphic percentage • Percentage of page bytes that are for graphics • Graphics count • Total graphics on a page (not including graphics specified in scripts, applets, and objects) • Color count • Total colors employed • Font count • Total fonts employed (i.e. face + size + bold + italic)

  14. Content Metrics • Page wait • Average time required for a page to download at different connection speeds • Page complexity • Average number of different types of media used on page, not including text • Graphic complexity • Average number of graphics media per page • Audio complexity • Average number of audio media per page • Video complexity • Average number of video media per page • Animation complexity • Average number of animations per page • Scanned image complexity • Average number of scanned images per page

  15. Navigation Metrics • For static pages • Page-linking complexity • Number of links per page • Connectivity • Total number of internal links, not including dynamically generated links • Connectivity density • Connectivity divided by page count
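The navigation metrics above are simple ratios; a worked example with invented site numbers:

```python
# Illustrative numbers only: navigation metrics for a static WebApp,
# per the definitions above.
internal_links = 120   # connectivity: total internal links (no dynamic links)
page_count = 30

connectivity_density = internal_links / page_count
print(connectivity_density)  # 4.0 internal links per page on average
```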

  16. Metrics for Source Code • n1 = number of distinct operators that appear in a program • n2 = number of distinct operands that appear in a program • N1 = total number of operator occurrences • N2 = total number of operand occurrences • Overall program length (N) and volume (V): N = n1 log2 n1 + n2 log2 n2, V = N log2 (n1 + n2)
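The length and volume formulas above can be evaluated directly; the operator/operand counts below are assumed values for a small hypothetical program.

```python
# Halstead length and volume from the counts defined above,
# using illustrative values for n1 and n2.
from math import log2

n1, n2 = 10, 16   # distinct operators, distinct operands (assumed values)

N = n1 * log2(n1) + n2 * log2(n2)   # program length
V = N * log2(n1 + n2)               # program volume

print(round(N, 1))  # 97.2
print(round(V, 1))  # approximately 457.0
```

Volume is interpreted as the number of bits needed to encode the program, so it grows with both length and vocabulary size.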

  17. Metrics for Object-Oriented Testing [1/2] • Lack of cohesion in methods (LCOM) • If LCOM is high, more states must be tested • Percent public and protected (PAP) • Percentage of class attributes that are public or protected • A high value of PAP increases the likelihood of side effects among classes • Public access to data members (PAD) • Number of classes (or methods) that access another class’s attributes • Indicates violation of encapsulation

  18. Metrics for Object-Oriented Testing [2/2] • Number of root classes (NOR) • Number of distinct class hierarchies • Tests should be developed for each root class and its corresponding hierarchy • If NOR increases, testing effort also increases • Fan-in (FIN) • Indication of multiple inheritance • FIN > 1 should be avoided • Number of children (NOC) and depth of the inheritance tree (DIT) also indicate testing effort

  19. Metrics for Maintenance • MT = number of modules in the current release • Fc = number of modules in the current release that have been changed • Fa = number of modules in the current release that have been added • Fd = number of modules from the preceding release that were deleted in the current release • Software maturity index: SMI = [MT − (Fa + Fc + Fd)] / MT • As SMI approaches 1.0, the product begins to stabilize
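The software maturity index can be computed directly from the definitions above; the release figures below are invented for illustration.

```python
# Sketch: software maturity index SMI = [MT - (Fa + Fc + Fd)] / MT,
# with illustrative release data.
def smi(mt, fa, fc, fd):
    return (mt - (fa + fc + fd)) / mt

# Current release: 940 modules, 40 added, 90 changed, 12 deleted
print(round(smi(mt=940, fa=40, fc=90, fd=12), 2))  # 0.85
```

SMI can also be used to plan maintenance: the average time to produce a release correlates with SMI, and an index near 1.0 indicates little recent change.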

  20. Summary • Class-oriented metrics • Weighted methods per class, depth of the inheritance tree, number of children, coupling between object classes, response for a class, lack of cohesion in methods • Component-level design metrics • Cohesion, coupling, and complexity • Operation-oriented metrics • Average operation size, operation complexity, average number of parameters per operation • Design metrics for WebApps • Metrics for source code • Metrics for object-oriented testing • Metrics for maintenance
