
Design Metrics CS 406 Software Engineering I Fall 2001

Design Metrics. CS 406 Software Engineering I, Fall 2001. Aditya P. Mathur. Last update: October 23, 2001. Metrics are useful in measuring the complexity and “goodness” of a design. A large number of metrics have been proposed for OO designs.




Presentation Transcript


  1. Design Metrics. CS 406 Software Engineering I, Fall 2001. Aditya P. Mathur. Last update: October 23, 2001

  2. Metrics • Metrics are useful in measuring the complexity and “goodness” of a design. • A large number of metrics have been proposed for OO designs. • Some of these have been validated experimentally; others are mere proposals or have received little or no validation.

  3. Effort • Assumption: The effort in developing a class is determined by the number of its methods. • Hence the overall complexity of a class can be measured as a function of the complexity of its methods. • Proposal: Weighted Methods per Class (WMC)

  4. WMC • Let class C have methods M1, M2, ..., Mn. • Let ci denote the complexity of method Mi. • How to measure WMC? WMC(C) = c1 + c2 + ... + cn.
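
As an illustration (not from the original slides), consider the hypothetical Java class below. If each method is assigned unit complexity (ci = 1), WMC reduces to a simple count of methods; in practice ci is often taken to be the cyclomatic complexity of Mi.

    // Hypothetical example: with unit complexity per method, WMC is the method count.
    class BankAccount {
        private double balance;

        void deposit(double amount)  { balance += amount; }  // c1 = 1
        void withdraw(double amount) { balance -= amount; }  // c2 = 1
        double getBalance()          { return balance; }     // c3 = 1
    }
    // WMC(BankAccount) = c1 + c2 + c3 = 3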

  5. WMC: validation • Most classes tend to have a small number of methods, are simple, and provide some specific abstraction and operations. • The WMC metric has a reasonable correlation with the fault-proneness of a class.

  6. Depth of inheritance tree [1] • The depth of a class in a class hierarchy determines its potential for re-use. Deeper classes have higher potential for re-use. • Inheritance increases coupling, which makes changing classes harder. • Depth of Inheritance (DIT) of class C is the length of the shortest path from the root of the inheritance tree to C.

  7. Depth of inheritance tree [2] • In the case of multiple inheritance, DIT is the maximum length of a path from the root to C. • Recall: Depth of Inheritance (DIT) of class C is the length of the shortest path from the root of the inheritance tree to C.
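
For illustration, a minimal hypothetical Java hierarchy (class names invented for this sketch) and the resulting DIT values:

    // DIT is measured from the root of the inheritance tree.
    class Vehicle { }                 // DIT(Vehicle)   = 0 (root)
    class Car extends Vehicle { }     // DIT(Car)       = 1
    class SportsCar extends Car { }   // DIT(SportsCar) = 2
    // (In Java, counting java.lang.Object as the root would shift each value by one;
    //  the choice of root is a matter of convention.)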

  8. DIT evaluation • Basili et al. study, 1995. • Chidamber and Kemerer study, 1994. • Most classes tend to be close to the root. • The maximum DIT value found was 10. • Most classes have DIT=0. • DIT is significant in predicting the error-proneness of a class: higher DIT leads to higher error-proneness.

  10. Number of children (NOC) • NOC is the number of immediate subclasses of C. • Higher values of NOC suggest greater reuse of the definitions in the super-class by a larger number of subclasses. • Higher NOC also indicates greater influence of a class on other elements of the design; higher influence demands higher quality of that class.
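
A small hypothetical Java example (names invented) showing that only immediate subclasses count toward NOC:

    class Shape { }
    class Circle extends Shape { }            // immediate child of Shape
    class Square extends Shape { }            // immediate child of Shape
    class RoundedSquare extends Square { }    // grandchild of Shape: not counted in NOC(Shape)
    // NOC(Shape) = 2, NOC(Square) = 1, NOC(Circle) = NOC(RoundedSquare) = 0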

  11. Validation of NOC • Classes generally have a small NOC value. • The vast majority have NOC=0. • A larger NOC value is associated with a lower probability of detecting faults in that class.

  12. Coupling between classes (CBC) • Class C1 is coupled to class C2 if at least one method of C1 uses a method or an instance variable of C2. • Coupling is usually easy to identify, although the use of pointers may make it difficult. • CBC of C = total number of other classes to which C is coupled.
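
A sketch of the idea in Java, using hypothetical classes invented for this example: Invoice is coupled to Customer and TaxTable because its methods use their methods.

    class Customer { String name() { return "Jane"; } }
    class TaxTable { double rateFor(String region) { return 0.07; } }

    class Invoice {
        // uses a method of Customer
        String label(Customer c) { return "Invoice for " + c.name(); }
        // uses a method of TaxTable
        double tax(TaxTable t, double amount) { return amount * t.rateFor("US"); }
    }
    // CBC(Invoice) = 2; a class that uses no other class has CBC = 0.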

  13. Validation of CBC • Most classes are self-contained and have CBC=0. • Interface classes tend to have higher CBC values. • CBC is significant in predicting the fault-proneness of classes.

  14. Response for a class (RFC) • The response set of class C is the set of methods that can be invoked when a message is sent to an object of C. This includes all methods of C and any methods outside C that are executed as a result of this message. • RFC of class C is the cardinality of the response set of C. • Note that even when CBC=1, RFC may be high. This indicates that the “volume” of interaction is high.
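
A minimal hypothetical Java sketch of a response set; the class and method names are invented for this illustration.

    class Printer { void print(String s) { System.out.println(s); } }

    class Report {
        private Printer printer = new Printer();

        String body()  { return "quarterly summary"; }   // method of Report
        void publish() { printer.print(body()); }        // calls body() and Printer.print()
    }
    // Response set of Report = { body, publish, Printer.print }  =>  RFC(Report) = 3.
    // Report is coupled to only one other class (CBC = 1), yet RFC counts every
    // reachable method, so RFC can be large even when CBC is small.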

  15. Validation of RFC • Most classes tend to invoke a small number of methods (low RFC values). • Classes for interface objects tend to have larger RFC values. • RFC is very significant in predicting the fault-proneness of a class.

  16. Lack of cohesion in methods (LCOM) • Let I1 and I2 denote the sets of instance variables used by methods M1 and M2, respectively, in class C. • M1 and M2 are considered similar, or cohesive, if I1 and I2 are not disjoint. • Let Q be the set of all cohesive method pairs. • Let P be the set of all non-cohesive method pairs. • LCOM = |P| - |Q| if |P| > |Q|, 0 otherwise.
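
A hypothetical Java class (invented for this sketch) that mixes two unrelated abstractions, worked through the definition above:

    class Mixed {
        private int x;   // used only by the x-methods
        private int y;   // used only by the y-methods

        int getX()   { return x; }   // uses {x}
        void bumpX() { x++; }        // uses {x}
        int getY()   { return y; }   // uses {y}
        void bumpY() { y++; }        // uses {y}
    }
    // Cohesive pairs     Q = {(getX,bumpX), (getY,bumpY)}       => |Q| = 2
    // Non-cohesive pairs P = the four x-method/y-method pairs   => |P| = 4
    // LCOM = |P| - |Q| = 2, suggesting the class could be split into two cohesive classes.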

  17. LCOM • A larger number of cohesive pairs implies a smaller LCOM. • A high value of LCOM suggests that a class is trying to support multiple abstractions; perhaps the class needs to be partitioned into smaller and more cohesive classes. • LCOM is found to be not very significant in predicting fault-proneness.

  18. Summary • What are OO metrics? • Metrics for complexity, coupling, and cohesion
