
USABILITY AS A QUALITY METRIC

USABILITY AS A QUALITY METRIC. By Shashank Jagirdar, for Dr. Frank Tsui. What is usability? How integrated is the usability metric into software engineering?





Presentation Transcript


  1. USABILITY AS A QUALITY METRIC By Shashank Jagirdar For Dr. Frank Tsui

  2. What is usability?

  3. How integrated is the usability metric into software engineering? • Most software developers do not correctly apply any particular metric when measuring a system's usability, so usability is not strongly integrated into software engineering practice. • This session relates the CK metrics to usability in order to assess the quality of a software system, and examines the effects of the relationship between CK metrics and software usability.

  4. Many other object-oriented metrics have been proposed. They are tabulated below:

  5. Chidamber and Kemerer (CK) are the most frequently referenced researchers. • They defined six metrics, viz. Weighted Methods per Class (WMC), Response For a Class (RFC), Lack of Cohesion in Methods (LCOM), Coupling Between Object classes (CBO), Depth of Inheritance Tree of a class (DIT), and Number of Children of a class (NOC).

  6. The concept of object-oriented programming closely ties together the design and implementation phases of a software system, and both phases directly affect the system's usability. • CK metrics were defined to measure design complexity in relation to its impact on quality attributes such as functionality, reliability, usability, and maintainability. • Many studies have been done to validate the CK metrics, but the focus of this session is on usability as a measure of software quality.

  7. Relation between CK metrics and usability. • Weighted Methods per Class (WMC): • The larger the number of methods in a class, the greater the impact on children, since they inherit all the methods defined in the class. • A class with more member functions than its peers is considered to be more complex and therefore more error-prone. • Ub ∝ 1 / WMC
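
To make WMC concrete, here is a minimal sketch (mine, not from the slides) that approximates WMC as a plain method count via Java reflection, assuming each method is given unit weight; real CK tools usually weight each method by a complexity measure such as cyclomatic complexity.

    import java.lang.reflect.Method;

    // Minimal sketch: WMC approximated as the number of methods a
    // class declares, with each method given unit weight. CK tools
    // typically weight each method by a complexity measure instead.
    public class WmcSketch {
        static int wmc(Class<?> cls) {
            Method[] methods = cls.getDeclaredMethods();
            return methods.length; // unit weight per method
        }

        public static void main(String[] args) {
            // Under Ub ∝ 1 / WMC, the class with the larger count is
            // predicted to be less usable.
            System.out.println("WMC(java.lang.String) = " + wmc(String.class));
        }
    }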

  8. Response For a Class (RFC): • The RFC is the number of all the methods that can potentially be executed (directly or indirectly) in response to a message to an object of that class, or by some method in the class. This includes all methods accessible within the class hierarchy. • Ub ∝ 1 / RFC
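
As a rough illustration of the definition above (not part of the original slides), the sketch below computes RFC over a hand-built call model; the class and method names are hypothetical, since real RFC values come from static analysis of the code.

    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Toy sketch: RFC = |M ∪ R|, where M is the set of methods the
    // class declares and R is the set of methods they directly call.
    // All method names here are hypothetical.
    public class RfcSketch {
        static Set<String> ownMethods = Set.of("open", "read", "close");
        static Map<String, Set<String>> callsFrom = Map.of(
                "open",  Set.of("File.exists", "Log.info"),
                "read",  Set.of("Buffer.fill"),
                "close", Set.of("Log.info"));

        public static void main(String[] args) {
            Set<String> responseSet = new HashSet<>(ownMethods);
            callsFrom.values().forEach(responseSet::addAll);
            // 3 own methods + 3 distinct called methods = RFC of 6
            System.out.println("RFC = " + responseSet.size());
        }
    }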

  9. Lack of Cohesion in Methods (LCOM): • High cohesion indicates good class subdivision and fewer errors during the development process, and so increases the usability of the software system. • The LCOM is the number of disjoint sets of local methods. It was found that high LCOM values lower the productivity of the system by causing greater rework and greater design effort. • Ub ∝ 1 / LCOM
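
The "number of disjoint sets of local methods" definition can be illustrated with a toy model (mine, not the authors'): methods are grouped together whenever they touch a common instance variable, and LCOM is the number of resulting groups. The field and method names below are hypothetical.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Toy sketch: LCOM as the number of disjoint method groups,
    // where two methods share a group if they use a common field.
    public class LcomSketch {
        static Map<String, Set<String>> fieldsUsedBy = Map.of(
                "getName", Set.of("name"),
                "setName", Set.of("name"),
                "getAge",  Set.of("age"));

        public static void main(String[] args) {
            List<Set<String>> groups = new ArrayList<>();
            for (Set<String> fields : fieldsUsedBy.values()) {
                Set<String> merged = new HashSet<>(fields);
                // absorb every existing group that shares a field
                groups.removeIf(g -> {
                    if (Collections.disjoint(g, merged)) return false;
                    merged.addAll(g);
                    return true;
                });
                groups.add(merged);
            }
            // the {name} methods form one group, {age} another: LCOM = 2
            System.out.println("LCOM = " + groups.size());
        }
    }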

  10. Coupling between object classes (CBO): • Coupling is a measure of the interdependence of two objects. • The CBO for a class is measured by counting the number of other classes to which it is coupled. • Two classes are coupled if methods of one use methods and/or instance variables of the other. • Ub ∝ 1 / CBO
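
A minimal sketch of the CBO count, using a hand-built dependency table rather than real static analysis (the class names are hypothetical):

    import java.util.Map;
    import java.util.Set;

    // Toy sketch: CBO(C) = number of distinct other classes whose
    // methods or instance variables C uses. The dependency table is
    // hand-built; real tools extract it from source or bytecode.
    public class CboSketch {
        static Map<String, Set<String>> usesClasses = Map.of(
                "OrderService", Set.of("OrderRepository", "PaymentGateway", "Logger"));

        public static void main(String[] args) {
            usesClasses.forEach((cls, deps) ->
                    System.out.println("CBO(" + cls + ") = " + deps.size())); // 3
        }
    }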

  11. Depth of Inheritance Tree of a class (DIT): • Harrison used the DIT metric in an empirical study, demonstrating that a system without inheritance is easier to modify and understand. • The deeper a class is within the hierarchy, the greater the number of methods it is likely to inherit, making its behavior more complex to predict and the class therefore more fault-prone. • Ub ∝ 1 / DIT
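
DIT is straightforward to compute with Java reflection, as in this small sketch (mine, not from the slides): walk the superclass chain up to java.lang.Object and count the hops.

    // Minimal sketch: DIT as the number of superclass hops from a
    // class up to the root of the hierarchy.
    public class DitSketch {
        static int dit(Class<?> cls) {
            int depth = 0;
            for (Class<?> c = cls.getSuperclass(); c != null; c = c.getSuperclass()) {
                depth++;
            }
            return depth;
        }

        public static void main(String[] args) {
            // ArrayList -> AbstractList -> AbstractCollection -> Object
            System.out.println("DIT(ArrayList) = " + dit(java.util.ArrayList.class)); // 3
        }
    }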

  12. Number of Children of a class (NOC): • Basili observed that the larger the NOC, the lower the probability of fault detection. • They concluded that classes with a large number of children are difficult to modify and usually require more testing because of the effect of changes on all the children. • Such classes are also considered more complex and fault-prone. This indicates that the larger the NOC, the less usable the software system. • Ub ∝ 1 / NOC
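
NOC cannot be read off a single class with reflection (a JVM class does not know its subclasses), so tools scan the whole codebase. The sketch below counts direct children from a hand-built inheritance table with hypothetical class names.

    import java.util.HashMap;
    import java.util.Map;

    // Toy sketch: NOC(C) = number of classes that list C as their
    // direct superclass. The parent table is hand-built.
    public class NocSketch {
        static Map<String, String> parentOf = Map.of(
                "SavingsAccount",   "Account",
                "CheckingAccount",  "Account",
                "BrokerageAccount", "Account",
                "Account",          "Object");

        public static void main(String[] args) {
            Map<String, Integer> noc = new HashMap<>();
            parentOf.values().forEach(p -> noc.merge(p, 1, Integer::sum));
            System.out.println("NOC(Account) = " + noc.getOrDefault("Account", 0)); // 3
        }
    }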

  13. These results were verified by the authors Ajay Rana and Sanjay Kumar Dubey for two Java test packages. • The first package, package1, had 19 classes. • The second package, package2, had 14 classes. • Only four of the six CK metrics were tested: • WMC • RFC • DIT • NOC

  14. The data collected is summarized below:

  15. In the analysis, 17 of the 19 classes in package1 have RFC values between 0 and 30. • Only two classes have RFC values above 50. This result indicates that only two classes have to be modified to reduce complexity. Thirteen classes in package2 have RFC values of at most 20, and only one class has an RFC value above 20. This validates hypothesis 2 about RFC, i.e., a higher RFC indicates lower usability. • According to CK, a class with a large RFC is more complex and harder to maintain.

  16. UTUM TEST PACKAGE • Usability has three main aspects: • Effectiveness • Efficiency • Satisfaction • User experience is not taken into account as a metric for measuring quality. • The UTUM package treats user experience as more encompassing than usability itself.

  17. Tests conducted earlier either yielded trivial results or failed. • The test package was first used in 2005. • The performance metric was based on use-case completion. • Still, the test leader's observations were NOT taken into account.

  18. TEST PROCESS • The test leaders oversee the entire test scenario and note any observations that they think might affect the result. • The testers are asked to fill in various forms to collect data that may influence the result of a use case. • A total of six use cases were tested, with 48 testers. • Hardware Evaluation forms collect information about attitudes toward the look and feel of specific devices.

  19. Complexity of the testing process:

  20. Metrics, their use and presentation • Averaging the relative efficiency and the specific efficiency gives the performance efficiency metric, which is a response to the statement “This telephone is efficient for accomplishing the given task”. • The User Satisfaction Metric, calculated on the basis of the SUS, is a response to the statement “This telephone is easy to use”. • The SUS is a “quick and dirty” usability scale based on ISO 9241-11 [2], resulting in a number that expresses a measure of the overall usability of the system as a whole.
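
For reference, the standard SUS scoring rule is simple enough to show in a few lines. This sketch uses the published formula (odd-numbered items contribute rating − 1, even-numbered items 5 − rating, and the sum is scaled by 2.5); the sample ratings are made up.

    // Minimal sketch of standard SUS scoring: ten items rated 1-5.
    // Odd-numbered items contribute (rating - 1), even-numbered items
    // contribute (5 - rating); the sum times 2.5 gives a 0-100 score.
    public class SusSketch {
        static double sus(int[] ratings) { // ratings[0..9], each in 1..5
            int sum = 0;
            for (int i = 0; i < 10; i++) {
                sum += (i % 2 == 0) ? ratings[i] - 1   // items 1, 3, 5, 7, 9
                                    : 5 - ratings[i];  // items 2, 4, 6, 8, 10
            }
            return sum * 2.5;
        }

        public static void main(String[] args) {
            int[] sample = {4, 2, 5, 1, 4, 2, 5, 1, 4, 2}; // made-up responses
            System.out.println("SUS = " + sus(sample));    // 80.0
        }
    }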

  21. To illustrate the effectiveness-efficiency metric, design effectiveness (the average of the task effectiveness metric over all of the test cases for a single use case) is contrasted with task efficiency (the average of the performance efficiency metric over all of the test cases for the same use case).

  22. To illustrate the satisfaction-efficiency metric, the average of the user satisfaction metric for all test cases is contrasted with the average performance efficiency metric for all of the use cases and all test cases. Increased usability is shown as movement towards the upper right of the quadrant. The satisfaction-efficiency metric is referred to as Total UTUM, and is seen as a useful illustration of total usability.
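
As a rough sketch of how such a Total UTUM point could be computed (my illustration, assuming both metrics are normalized to the 0-1 range): average satisfaction is paired with average efficiency, and larger values of both move the point toward the upper right of the quadrant.

    // Toy sketch: the satisfaction-efficiency (Total UTUM) point as
    // the pair (mean efficiency, mean satisfaction) over all test
    // cases. The per-test-case scores below are made up.
    public class TotalUtumSketch {
        static double mean(double[] xs) {
            double sum = 0;
            for (double x : xs) sum += x;
            return sum / xs.length;
        }

        public static void main(String[] args) {
            double[] satisfaction = {0.80, 0.70, 0.90, 0.60};
            double[] efficiency   = {0.70, 0.60, 0.80, 0.50};
            System.out.printf("Total UTUM point: (%.2f, %.2f)%n",
                    mean(efficiency), mean(satisfaction)); // (0.65, 0.75)
        }
    }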

  23. The resulting charts can, for example, show Total UTUM for a complete test series, Total UTUM by gender, Total UTUM by age group, the correlation between Hardware Evaluation and satisfaction, or the correlation between effectiveness and efficiency.

  24. Summary of UTUM: • Quick and efficient • Transferable • Handles complexity • Customer-driven • Yields more quantitative data

  25. QUESTIONS?
