
Software Engineering Fall 2005



  1. Lecture 13: Product Metrics. Based on: Software Engineering, A Practitioner’s Approach, 6/e, R.S. Pressman. Software Engineering Fall 2005

  2. Overview • Software engineers use product metrics to help them assess the quality of the design and construction of the software product being built. • The process of using product metrics begins by deriving the software measures and metrics that are appropriate for the software representation under consideration. Then data are collected and metrics are computed. • The computed metrics are compared to pre-established guidelines and historical data. The results of these comparisons are used to guide modifications made to work products arising from analysis, design, coding, or testing.

  3. 1. Software Quality • Software quality is conformance to: - explicitly stated functional and performance requirements; - explicitly documented development standards; - implicit characteristics that are expected of all professionally developed software.

  4. Software Quality Principles • Conformance to software requirements is the foundation from which quality is measured. • Specified standards define a set of development criteria that guide the manner in which software is engineered. • Software quality is suspect when a software product conforms to its explicitly stated requirements but fails to conform to the customer's implicit requirements (e.g., ease of use).

  5. 1.1. McCall's Quality Factors • Focus on three software quality factors: • The software’s operational characteristics. • The ability to change the software. • The adaptability of the software to new environments. • Admittedly, some of the measures are only arrived at subjectively.

  6. McCall’s Triangle of Quality • Product Revision: maintainability, flexibility, testability. • Product Transition: portability, reusability, interoperability. • Product Operation: correctness, usability, efficiency, integrity, reliability.

  7. McCall's Quality Factors I • Correctness: The extent to which a program satisfies its specification and fulfills the customer’s mission objectives. • Reliability: The extent to which a program can be expected to perform its intended function with required precision. • Efficiency: The amount of computing resources and code required by a program to perform its function. • Integrity: The extent to which access to software or data by unauthorized persons can be controlled. • Usability: The effort required to learn, operate, prepare input for, and interpret output of a program.

  8. McCall's Quality Factors II • Maintainability: The effort required to locate and fix an error in a program. • Flexibility: The effort required to modify an operational program. • Testability: The effort required to test a program to ensure that it performs its intended function. • Portability: The effort required to transfer the program from one hardware and/or software environment to another. • Reusability: The extent to which a program (or parts of it) can be reused in other applications. • Interoperability: The effort required to couple one system to another.

  9. A Comment McCall’s quality factors were proposed in the early 1970s. They are as valid today as they were then. It’s likely that software built to conform to these factors will exhibit high quality well into the 21st century, even if there are dramatic changes in technology.

  10. 1.2. ISO 9126 Quality Factors • The International Organization for Standardization (ISO) adopted ISO 9126 as the standard for software quality (ISO, 1991). • It was developed in an attempt to identify quality attributes for computer software.

  11. ISO 9126 Quality Factors • Functionality • Reliability • Usability • Efficiency • Maintainability • Portability. ISO 9126 is the software product evaluation standard from the International Organization for Standardization. It defines six characteristics that describe, with minimal overlap, software quality.

  12. ISO 9126 Quality Attributes I • Functionality. The degree to which the software satisfies stated needs as indicated by: suitability, accuracy, interoperability, compliance and security. • Reliability. The amount of time that the software is available for use as indicated by: fault tolerance, recoverability, maturity. • Usability. The degree to which the software is easy to use as indicated by: learnability, operability, understandability.

  13. ISO 9126 Quality Attributes II • Efficiency. The degree to which the software makes optimal use of system resources as indicated by: time & resource behaviour. • Maintainability. The ease with which repairs may be made to the software as indicated by: changeability, stability, testability, analyzability. • Portability. The ease with which the software can be transposed from one environment to another as indicated by: adaptability, installability, conformance, replaceability.

  14. Quality Issues in Software Systems • The ISO 9126 standard is now widely adopted. • Other industry standards bodies, such as the IEEE, have adjusted their standards to comply with the ISO standards.

  15. 2. Framework for Product Metrics Benefits of product metrics • Assist in the evaluation of the analysis and design models • Provide an indication of procedural design complexity and source code complexity • Facilitate the design of more effective testing

  16. Terminology • Measure: Quantitative indication of the extent, amount, dimension, or size of some attribute of a product or process. • Metric: The degree to which a system, component, or process possesses a given attribute. Relates several measures (e.g. average number of errors found per person-hour). • Indicators: A combination of metrics that provides insight into the software process, project, or product. • Direct Metrics: Immediately measurable attributes (e.g. lines of code, execution speed, defects reported). • Indirect Metrics: Aspects that are not immediately quantifiable (e.g. functionality, quality, reliability). • Faults: • Errors: Faults found by the practitioners during software development. • Defects: Faults found by the customers after release.

  17. Product Metrics • Focus on the quality of deliverables • Product metrics are combined across several projects to produce process metrics • Metrics for the product: • Measures of the Analysis Model • Complexity of the Design Model • Internal algorithmic complexity • Architectural complexity • Data flow complexity • Code metrics

  18. Metrics Guidelines • Use common sense and organizational sensitivity when interpreting metrics data • Provide regular feedback to the individuals and teams who have worked to collect measures and metrics. • Don’t use metrics to appraise individuals • Work with practitioners and teams to set clear goals and metrics that will be used to achieve them • Never use metrics to threaten individuals or teams • Metrics data that indicate a problem area should not be considered “negative.” These data are merely an indicator for process improvement • Don’t obsess on a single metric to the exclusion of other important metrics

  19. Normalization for Metrics • How does an organization combine metrics that come from different individuals or projects? These depend on the size and complexity of the project. • Normalization: compensate for complexity aspects particular to a product. • Normalization approaches: • Size-oriented (lines-of-code approach) • Function-oriented (function point approach)
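The two normalization approaches above can be sketched as follows. This is an illustrative sketch, not part of the lecture; the project figures used are hypothetical.

```python
# Illustrative sketch of the two normalization approaches.
# The project figures used here are hypothetical.

def defects_per_kloc(defects: int, loc: int) -> float:
    """Size-oriented normalization: defects per thousand lines of code."""
    return defects / (loc / 1000)

def defects_per_fp(defects: int, function_points: float) -> float:
    """Function-oriented normalization: defects per function point."""
    return defects / function_points

# Two ways of normalizing the same hypothetical project
# (24 defects, 12,000 LOC, 96 FP), making it comparable
# to projects of different sizes:
print(defects_per_kloc(24, 12_000))  # size-oriented
print(defects_per_fp(24, 96.0))      # function-oriented
```

Either denominator works; the function-oriented form has the advantage, noted in slide 20, of being independent of the programming language.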

  20. Why Opt for FP Measures? • Independent of programming language. Some programming languages are more compact, e.g. C++ vs. Assembler • Use readily countable characteristics of the “information domain” of the problem • Does not “penalize” inventive implementations that require fewer LOC than others • Makes it easier to accommodate reuse and object-oriented approaches • Original FP approach good for typical Information Systems applications (interaction complexity) • Variants (Extended FP and 3D FP) more suitable for real-time and scientific software (algorithm and state transition complexity)

  21. Measurement Principles • The objectives of measurement should be established before data collection begins; • Each technical metric should be defined in an unambiguous manner; • Metrics should be derived based on a theory that is valid for the domain of application (e.g., metrics for design should draw upon basic design concepts and principles and attempt to provide an indication of the presence of an attribute that is deemed desirable); • Metrics should be tailored to best accommodate specific products and processes

  22. Measurement Process • Formulation. The derivation of software measures and metrics appropriate for the representation of the software that is being considered. • Collection. The mechanism used to accumulate data required to derive the formulated metrics. • Analysis. The computation of metrics and the application of mathematical tools. • Interpretation. The evaluation of metrics results in an effort to gain insight into the quality of the representation. • Feedback. Recommendations derived from the interpretation of product metrics transmitted to the software team.

  23. Metrics Characterization and Validation Principles • A metric should have desirable mathematical properties. • The value of a metric should increase when positive software traits occur or decrease when undesirable software traits are encountered. • Each metric should be validated empirically in several contexts before it is used to make decisions.

  24. Measurement Collection and Analysis Principles • Automate data collection and analysis whenever possible • Use valid statistical techniques to establish relationships between internal product attributes and external quality characteristics • Establish interpretive guidelines and recommendations for each metric

  25. Goal-Oriented Software Measurement • The Goal/Question/Metric (GQM) paradigm emphasizes the need to: • (1) establish an explicit measurement goal that is specific to the process activity or product characteristic that is to be assessed; • (2) define a set of questions that must be answered in order to achieve the goal; and • (3) identify well-formulated metrics that help to answer these questions.

  26. Attributes of Effective Software Metrics • Simple and computable. It should be relatively easy to learn how to derive the metric, and its computation should not demand inordinate effort or time. • Empirically and intuitively persuasive. The metric should satisfy the engineer’s intuitive notions about the product attribute under consideration. • Consistent and objective. The metric should always yield results that are unambiguous. • Consistent in its use of units and dimensions. The mathematical computation of the metric should use measures that do not lead to bizarre combinations of units. • Programming language independent. Metrics should be based on the analysis model, the design model, or the structure of the program itself. • An effective mechanism for quality feedback. That is, the metric should provide a software engineer with information that can lead to a higher-quality end product.

  27. Metrics for the Analysis Model • Function-based metrics: use the function point as a normalizing factor or as a measure of the “size” of the specification. • System size: measures of the overall size of the system defined in terms of information available as part of the analysis model. • Specification metrics: used as an indication of quality by measuring number of requirements by type.

  28. Metrics for the Design Model • Architecture metrics • Component-level metrics • Specialized OO Design Metrics

  29. Source Code Metrics • Halstead metrics • Complexity metrics • Length metrics

  30. Testing Metrics • Statement and branch coverage metrics • Defect-related metrics • Testing effectiveness • In-process metrics

  31. 3. Metrics for the Analysis Model • Function-point (FP) metrics (Albrecht) • Specification quality metrics (Davis)

  32. Function-Point Metric • A function point (FP) is a synthetic metric composed of the weighted totals of the inputs, outputs, inquiries, logical files, and interfaces belonging to an application. • The function point measure can be computed without forcing the specification to conform to a particular specification model or technique.

  33. Function-Point Metric Usage • The FP can be used to: - estimate the cost or effort required to design, code, and test the software; - predict the number of errors that will be encountered during testing; and - forecast the number of components and/or the number of projected source lines in the implemented system.

  34. Using FP • Errors per FP • Defects per FP • Cost per FP • Pages of documentation per FP • FP per person month

  35. Function Point: Information Domain Values • Number of external inputs (EIs): those items provided by the user to the program (such as file names and menu selections). Inputs should be distinguished from inquiries, which are counted separately. • Number of external outputs (EOs): those items provided to the user by the program (such as reports, screens, error messages, etc.). Individual data items within a report are not counted separately. • Number of external inquiries (EQs): interactive inputs requiring a response. Each distinct inquiry is counted. • Number of internal logical files (ILFs): logical master files (i.e. a logical grouping of data that may be part of a large database or a separate file) in the system. • Number of external interface files (EIFs): logical groupings of data that reside external to the application but provide data that may be of use to the application.

  36. Function Point • Once the information domain values are collected, the table in the next Figure is completed. • A complexity value is associated with each count; it depends on the experience and expertise of the engineer conducting the measurement. Each item identified is assigned a subjective "complexity" rating on a three-point ordinal scale: "simple," "average," or "complex." Then a weight is assigned to the item, based on an FP complexity weights table. • The function point count total is computed by multiplying each raw count by its weight and summing all values.

  37. FP Complexity Weighting Factors

  Information Domain Value                  simple  average  complex
  External Inputs (EIs)          count  x     3       4        6
  External Outputs (EOs)         count  x     4       5        7
  External Inquiries (EQs)       count  x     3       4        6
  Internal Logical Files (ILFs)  count  x     7      10       15
  External Interface Files (EIFs) count x     5       7       10

  The weighted counts are summed to give the function point count total.

  38. Value (Complexity) Adjustment Factors (VAF) Rated on a scale of 0 (not important) to 5 (very important): • Does the system require reliable backup and recovery? • Are data communications required? • Are there distributed processing functions? • Is performance critical? • Will the system run in an existing, heavily utilized operational environment? • Does the system require on-line data entry? • Does on-line data entry require the input transaction to be built over multiple screens or operations? • Are the master files updated on-line? • Are the inputs, outputs, files, or inquiries complex? • Is the internal processing complex? • Is the code designed to be reusable? • Are conversion and installation included in the design? • Is the system designed for multiple installations in different organizations? • Is the application designed to facilitate change and ease of use?

  39. Computing Function Points FP = Σ(count x weight) x C, or FP = count total x C, where C is the complexity multiplier: C = 0.65 + 0.01 x Σ(Fi)

  40. Computing Function Points (Summary) • Analyze the information domain of the application and develop counts for inputs, outputs, inquiries, files, and system interfaces. • Assign a level of complexity (simple, average, complex), and thus a weight, to each count. • Grade the significance of external factors (value adjustment factors) Fi, such as reuse, concurrency, OS, ..., that affect the application. • Compute function points: FP = Σ(count x weight) x C, where the complexity multiplier C = 0.65 + 0.01 x N and the degree of influence N = Σ(Fi).
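The steps above can be sketched in code. The weights are the standard simple/average/complex values from the weighting table; the counts and the all-"simple" ratings mirror the SafeHome worked example later in the lecture, and for simplicity the sum of the 14 adjustment factors is passed in directly.

```python
# Sketch of the FP computation: FP = sum(count x weight) x C,
# with complexity multiplier C = 0.65 + 0.01 x sum(Fi).

# Standard weighting factors (simple / average / complex).
WEIGHTS = {
    "EI":  {"simple": 3, "average": 4,  "complex": 6},
    "EO":  {"simple": 4, "average": 5,  "complex": 7},
    "EQ":  {"simple": 3, "average": 4,  "complex": 6},
    "ILF": {"simple": 7, "average": 10, "complex": 15},
    "EIF": {"simple": 5, "average": 7,  "complex": 10},
}

def function_points(counts, ratings, sum_fi):
    """counts: raw count per information domain value;
    ratings: complexity rating per information domain value;
    sum_fi: sum of the 14 value adjustment factors (each rated 0..5)."""
    count_total = sum(counts[d] * WEIGHTS[d][ratings[d]] for d in counts)
    c = 0.65 + 0.01 * sum_fi  # complexity multiplier
    return count_total * c

# SafeHome-style example: all items rated "simple", sum(Fi) = 46.
counts = {"EI": 3, "EO": 2, "EQ": 2, "ILF": 1, "EIF": 4}
ratings = {d: "simple" for d in counts}
fp = function_points(counts, ratings, 46)
print(round(fp))  # count total 50 x multiplier 1.11
```

Note that the count total (50) is itself the "unadjusted" function point count; the VAF-based multiplier scales it by at most ±35%.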

  41. Function Point Advantages and Disadvantages • The FP measure has the advantage that it is a technology-independent measure of software size that can be applied early in the requirements and design phases to provide useful information, which can be used to measure a variety of aspects related to the economic productivity of software. • However, there are also several problems with the function-point measure: subjectivity in the technology factor, double counting of internal complexity, counter-intuitive values for the "average" complexity rating, limited accuracy, inappropriateness for early life-cycle use when a full software system specification is not yet available, and sensitivity to changing requirements, technology, application domain, subjective weighting, and measurement-theory concerns. • A major disadvantage of function points is the potentially wide variation in function point calculations that may be produced by inexperienced practitioners and/or in response to unusual applications.

  42. Function Point • Many enhancements and modifications have been made to the original Function Point Metric since its first publication in October 1979. • In 1986 Software Productivity Research (SPR) developed a new, enhanced function point called the feature point, which includes a new parameter, algorithms, in addition to the five standard function point parameters. • There is an International Function Point User’s Group (IFPUG), several books, and recent publications dedicated to the study and application of Function Point related measures.

  43. Example: SafeHome Functionality [Figure: level-1 data flow diagram of the SafeHome System. The User supplies password, panic button, and (de)activate inputs and receives messages and sensor status; Sensors supply sensor status via test sensor and sensor inquiry; other flows include zone setting, zone inquiry, (de)activate, system configuration data, and an alarm alert to the Monitor and Response System (password, sensors, etc.).]

  44. Example: SafeHome FP Calculations • 3 EIs: password, panic button, activate/deactivate; • 2 EOs: messages and sensor status; • 2 EQs: zone inquiry and sensor inquiry; • 1 ILF: system configuration file; • 4 EIFs: test sensor, zone setting, activate/deactivate, and alarm alert. These data, along with the appropriate complexity ratings, are entered into the next Figure. For this example we assume that Σ(Fi) = 46 (a moderately complex product).

  45. Example: SafeHome FP Calculation

  measurement parameter        count   weight (simple)       FP
  number of user inputs          3   x       3         =      9
  number of user outputs         2   x       4         =      8
  number of user inquiries       2   x       3         =      6
  number of files                1   x       7         =      7
  number of ext. interfaces      4   x       5         =     20
  count total                                                50
  complexity multiplier                                    1.11
  function points                                            56

  46. Example I: How to use FP values • Based on the FP value, the project team can estimate the overall implemented size of the SafeHome user interaction function. • Assume that past data indicate: - that one FP translates into 60 lines of code; - that 12 FPs are produced for each person-month of effort. • The project manager can then plan the project based on the analysis model rather than preliminary estimates.
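The estimation arithmetic above, using the 56 FP computed for SafeHome and the historical baselines quoted in the slide, looks like this:

```python
# Estimation from FP, using the slide's historical baselines.
FP = 56                   # SafeHome user interaction function
LOC_PER_FP = 60           # past data: one FP translates into 60 LOC
FP_PER_PERSON_MONTH = 12  # past data: productivity

estimated_loc = FP * LOC_PER_FP              # projected code size
estimated_effort = FP / FP_PER_PERSON_MONTH  # person-months
print(estimated_loc)               # projected lines of code
print(round(estimated_effort, 1))  # projected person-months
```

With these baselines the 56 FP translate into roughly 3,360 lines of code and a little under five person-months of effort.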

  47. Example II: How to use FP values • Assume that past projects have found an average of 3 errors per function point during analysis and design reviews and 4 errors per function point during unit and integration testing. • These data can help software engineers assess the completeness of their review and testing activities.

  48. 4. Representative Design Metrics • Design metrics for software are available, but the vast majority of software engineers continue to be unaware of their existence.

  49. 4.1. Architectural Design Metrics • Focus on characteristics of the program’s architectural structure and the effectiveness of modules or components within the architecture. • They are ‘black box’ in the sense that they do not require any knowledge of the inner workings of a particular software component.

  50. Architectural Design Metrics • Card and Glass define three software design complexity measures: - Structural complexity (based on module fan-out); - Data complexity (based on module interface inputs and outputs); - System complexity (sum of structural and data complexity).
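The three Card and Glass measures can be sketched as they are commonly stated (e.g. in Pressman): for a module i with fan-out fout(i) and v(i) input/output variables, structural complexity is S(i) = fout(i)^2, data complexity is D(i) = v(i) / (fout(i) + 1), and the module's contribution to system complexity is their sum. The example module below is hypothetical.

```python
# Card and Glass design complexity measures (as commonly stated).

def structural_complexity(fan_out: int) -> float:
    """S(i) = fout(i)^2: based on the module's fan-out."""
    return fan_out ** 2

def data_complexity(io_vars: int, fan_out: int) -> float:
    """D(i) = v(i) / (fout(i) + 1): based on interface inputs/outputs."""
    return io_vars / (fan_out + 1)

def module_complexity(io_vars: int, fan_out: int) -> float:
    """S(i) + D(i): the module's contribution to system complexity."""
    return structural_complexity(fan_out) + data_complexity(io_vars, fan_out)

# Hypothetical module: calls 3 subordinate modules, 8 interface variables.
print(module_complexity(8, 3))
```

Summing the per-module values over all modules gives the system complexity; as fan-out grows, the quadratic structural term quickly dominates, which is why high fan-out is penalized.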
