

  1. Chapter 12 Evaluating Products, Processes, and Resources Shari L. Pfleeger Joann M. Atlee 4th Edition

  2. Contents 12.1 Approaches to evaluation 12.2 Selecting an evaluation technique 12.3 Assessment vs. prediction 12.4 Evaluating products 12.5 Evaluating processes 12.6 Evaluating resources 12.7 Information systems example 12.8 Real-time example 12.9 What this chapter means for you

  3. Chapter 12 Objectives • Feature analysis, case studies, surveys and experiments • Measurement and validation • Capability maturity, ISO 9000 and other process models • People maturity • Evaluating development artifacts • Return on investment

  4. 12.1 Approaches to Evaluation • Measure key aspects of products, processes and resources • Determine whether we have met goals for productivity, performance, quality and other desirable attributes

  5. 12.1 Approaches to Evaluation Categories of Evaluation • Feature analysis: Rate and rank attributes • Survey: Document relationships retrospectively • Case study: Document key factors in a particular situation • Formal experiment: Manipulate variables under controlled conditions

  6. 12.1 Approaches to Evaluation Feature Analysis Example: Buying a design tool • List five key attributes that the tool should have • Identify three possible tools and rate each against the criteria, e.g., from 1 (bad) to 5 (excellent) • Examine the ratings, computing a total score that weights each criterion by its importance • Select the tool with the highest total score (t-OO-1 in this example); see the sketch below
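The weighted scoring just described can be made concrete with a short script. This is a minimal sketch: the criteria, weights, the competing tool names (other than t-OO-1, which the slide mentions), and all ratings are invented for illustration.

```python
# Minimal sketch of a weighted feature analysis (illustrative data, not from the text).
# Each candidate tool is rated 1 (bad) to 5 (excellent) on each criterion;
# the total score weights each rating by the importance of the criterion.

criteria_weights = {            # hypothetical importance weights
    "ease of use": 5,
    "UML support": 4,
    "code generation": 3,
    "documentation": 2,
    "price": 1,
}

ratings = {                     # hypothetical ratings, 1..5
    "t-OO-1":     {"ease of use": 5, "UML support": 4, "code generation": 4, "documentation": 3, "price": 2},
    "ObjectTool": {"ease of use": 3, "UML support": 5, "code generation": 2, "documentation": 4, "price": 4},
    "EasyDesign": {"ease of use": 4, "UML support": 2, "code generation": 3, "documentation": 3, "price": 5},
}

def weighted_score(tool_ratings):
    """Total score = sum over criteria of (criterion weight * rating)."""
    return sum(criteria_weights[c] * r for c, r in tool_ratings.items())

scores = {tool: weighted_score(r) for tool, r in ratings.items()}
best = max(scores, key=scores.get)
print(scores)             # {'t-OO-1': 61, 'ObjectTool': 53, 'EasyDesign': 48}
print("Selected:", best)  # the tool with the highest weighted total
```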

  7. 12.1 Approaches to Evaluation Buying a design tool (continued) • Design tool ratings (table)

  8. 12.1 Approaches to Evaluation Feature Analysis (continued) • Very subjective • Ratings reflect the rater’s biases • Does not really evaluate behavior in terms of cause and effect

  9. 12.1 Approaches to Evaluation Surveys • Survey: Retrospective study • To document relationships and outcomes in a given situation • Software engineering survey: Record data • To determine how project participants reacted to a particular method, tool or technique • To determine trends or relationships • Capture information related to products or projects • Document the size of components, number of faults, effort expended, etc.

  10. 12.1 Approaches to Evaluation Surveys (continued) • No control over the situation at hand due to retrospective nature • To manipulate variables, we need case studies and experiments

  11. 12.1 Approaches to Evaluation Case Studies • Identify key factors that may affect an activity’s outcome and then document them • Inputs, constraints, resources, and outputs • Involve a sequence of steps: conception, hypothesis setting, design, preparation, execution, analysis, dissemination, and decision making • Setting the hypothesis is a crucial step • Compare one situation with another

  12. 12.1 Approaches to Evaluation Case Study Types • Sister project: Compare the project with a similar “sister” project; each is typical and has similar values for the independent variables • Baseline: Compare a single project to the organizational norm • Random selection: Partition a single project into parts • One part uses the new technique, the other parts do not • Better scope for randomization and replication • Greater control

  13. 12.1 Approaches to Evaluation Formal Experiments • Independent variables manipulated to observe changes in dependent variables • Methods used to reduce bias and eliminate confounding factors • Replicated instances of activity often measured • Instances are representative: Sample over the variables (whereas case study samples from the variables)

  14. 12.1 Approaches to Evaluation Evaluation Steps • Setting the hypothesis: Deciding what we wish to investigate, expressed as a hypothesis we want to test • When quantifying the hypothesis, use terms that are unambiguous, to avoid surrogate measures • Maintaining control over variables: Identifying variables that can affect the hypothesis and deciding how much control we have over them • Making the investigation meaningful: Choosing an apt evaluation type • The result of a formal experiment is more generalizable, while a case study or survey result is specific to the situation studied

  15. 12.2 Selecting an Evaluation Technique • Formal experiments: Research in the small • Case studies: Research in the typical • Surveys: Research in the large

  16. 12.2 Selecting an Evaluation Technique Key Selection Factors • Level of control over the variables • Degree of risk involved • Degree to which the task can be isolated from the rest of the development process • Degree to which we can replicate the basic situation • Cost of replication

  17. 12.2 Selecting an Evaluation Technique What to Believe • When results conflict, how do we know which study to believe? • Use a series of questions, represented by the game board • How do you know if a result or observation is valid? • Use the table of common pitfalls in evaluation • In practice, easily controlled situations do not always occur

  18. 12.2 Selecting an Evaluation Technique Investigation and Evaluation Game Board

  19. 12.2 Selecting an Evaluation Technique Common Pitfalls in Evaluation

  20. 12.3 Assessment vs. Prediction • Evaluation always involves measurement • Measure: A mapping from a set of real-world entities and attributes to a mathematical model • Software measures capture information about product, process, or resource attributes • Not all software measures are valid • Software measures are validated with respect to two kinds of systems: • Measurement/assessment systems • Prediction systems

  21. 12.3 Assessment vs. Prediction • An assessment system examines an existing entity by characterizing it numerically • A prediction system predicts a characteristic of a future entity; it involves a mathematical model with associated prediction procedures • Deterministic prediction (we always get the same output for a given input) • Stochastic prediction (output varies probabilistically) • When is a measure valid? • When is a prediction valid?

  22. 12.3 Assessment vs. Prediction Validating Prediction Systems • Comparing the model’s performance with known data in a given environment • Stating a hypothesis about the prediction, and then looking at the data to see whether the hypothesis is supported or refuted (see the sketch below)
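One common way to compare a prediction system’s performance with known data is the mean magnitude of relative error (MMRE). The sketch below assumes hypothetical effort figures and an illustrative acceptance threshold of 25%; neither comes from the text.

```python
# Sketch: validating a prediction system against known data (illustrative numbers).
# MRE = |actual - predicted| / actual; MMRE is its mean over all observations.
# Hypothesis (assumed threshold): the prediction system is acceptable if MMRE <= 0.25.

actual_effort    = [120, 340, 90, 410, 215]   # person-days, hypothetical historical projects
predicted_effort = [135, 300, 70, 450, 200]   # what the model predicted for the same projects

mre = [abs(a - p) / a for a, p in zip(actual_effort, predicted_effort)]
mmre = sum(mre) / len(mre)

print(f"MMRE = {mmre:.2f}")
print("hypothesis supported" if mmre <= 0.25 else "hypothesis refuted")
```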

  23. 12.3 Assessment vs. Prediction Sidebar 12.1 Comparing Software Reliability Predictions

  24. 12.3 Assessment vs. Prediction Validating Prediction Systems (continued) • Acceptable accuracy in validating models depends on several factors • Novice or experienced estimators • Deterministic or stochastic prediction • Models pose challenges when we design an experiment or case study • Predictions affect the outcome • Prediction systems need not be complex to be useful

  25. 12.3 Assessment vs. Prediction Validating Measures • Assuring that the measure captures the attribute properties it is supposed to capture • Demonstrating that the representation condition holds for the (numerical) measure and its corresponding attributes (see the sketch below) • NOTE: When validating a measure, view it in the context in which it is used.
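A small sketch of what the representation condition demands, using lines of code as a size measure. The two programs and the “contains” relation used as the empirical ordering are illustrative assumptions.

```python
# Sketch of checking the representation condition for a size measure (illustrative).
# Empirical relation: program A is at least as big as program B (here, A contains B's code).
# The measure is valid only if its numeric order agrees with every such empirical judgment.

def loc(source: str) -> int:
    """Count non-blank lines of code: a simple size measure."""
    return sum(1 for line in source.splitlines() if line.strip())

program_b = "x = 1\ny = 2\nprint(x + y)\n"
program_a = program_b + "z = x * y\nprint(z)\n"   # A contains B, so empirically A >= B in size

# Representation condition for this pair: the measure must preserve the empirical ordering.
assert loc(program_a) >= loc(program_b), "measure violates the representation condition"
print(loc(program_a), loc(program_b))   # 5 3
```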

  26. 12.3 Assessment vs. Prediction Sidebar 12.2 Lines of Code and Cyclomatic Number • The number of lines of code is a valid measure of program size; however, it is not a valid measure of complexity • On the other hand, many studies exhibit a significant correlation between lines of code and the cyclomatic number (see the sketch below)
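A correlation like the one the sidebar mentions can be checked directly. The module data below are invented; the sketch simply computes Pearson’s r for lines of code versus cyclomatic number.

```python
# Sketch: Pearson correlation between lines of code and cyclomatic number
# for a handful of modules (invented data, for illustration only).
from math import sqrt

loc        = [120, 45, 300, 80, 210, 150]   # lines of code per module (hypothetical)
cyclomatic = [14,   5,  35, 10,  22,  18]   # cyclomatic number per module (hypothetical)

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"r = {pearson_r(loc, cyclomatic):.2f}")  # close to 1.0 for strongly correlated data
```

A strong correlation supports using LOC as a predictor, but as the next slide notes, correlation alone does not establish cause and effect.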

  27. 12.3 Assessment vs. Prediction A Stringent Requirement for Validation • A measure (e.g., LOC) can be • An attribute measure (e.g., of program size) • An input to a prediction system (e.g., a predictor of the number of faults) • A measure should not be rejected just because it is not part of a prediction system • If a measure is valid for assessment only, it is called valid in the narrow sense • If a measure is valid for assessment and useful for prediction, it is called valid in the wide sense • Statistical correlation is not the same as cause and effect

  28. 12.4 Evaluating Products • Examining a product to determine if it has desirable attributes • Asking whether a document, file, or system has certain properties, such as completeness, consistency, reliability, or maintainability • Product quality models • Establishing baselines and targets • Software reusability

  29. 12.4 Evaluating Products Product Quality Models • Product quality model: Suggests ways to tie together different quality attributes • Gives a broader picture of quality • Some product quality models: • Boehm’s model • ISO 9126 • Dromey’s Model

  30. 12.4 Evaluating Products Boehm’s Quality Model

  31. 12.4 Evaluating Products Boehm’s Quality Model (continued) • Reflects an understanding of quality where the software • Does what the user wants it to do • Is easily ported • Uses computer resources correctly and efficiently • Is easy for the user to learn and use • Is well-designed, well-coded, and easily tested and maintained

  32. 12.4 Evaluating Products ISO 9126 Quality Model • A hierarchical model with six major attributes contributing to quality • Each right-hand characteristic is related to exactly one left-hand attribute • The standard recommends measuring the right-hand characteristics directly, but gives no details of how to measure them

  33. 12.4 Evaluating Products ISO 9126 Quality Model (continued)

  34. 12.4 Evaluating Products ISO 9126 Quality Characteristics

  35. 12.4 Evaluating Products • Both models articulate what to look for in a product • No rationale for: • Why some characteristics are included or excluded • How the hierarchical structure was formed • No guidance for: • How to compose lower-level characteristics into higher-level ones

  36. 12.4 Evaluating Products Dromey’s Quality Model • Based on two issues • Many product properties appear to influence software quality • There is little formal basis for understanding which lower-level attributes affect the higher-level ones • Product quality depends on • Choice of components • The tangible properties of components and component composition: correctness, internal, contextual, and descriptive properties

  37. 12.4 Evaluating Products Dromey’s Quality Model Attributes • High-level attributes should be those that are of high priority • Ex: the six attributes of ISO 9126 + reusability + process maturity • Attributes of reusability: machine independence, separability, configurability • Process maturity attributes: • client orientation • well-definedness • assurance • effectiveness

  38. 12.4 Evaluating Products Dromey’s Quality Model Framework • Linking product properties to quality attributes

  39. 12.4 Evaluating Products Dromey’s Quality Model Example

  40. 12.4 Evaluating Products Dromey’s Quality Model Example (continued) • The model is based on the following five steps • Identify a set of high-level quality attributes • Identify product components • Identify and classify the most significant, tangible, quality-carrying properties for each component • Propose a set of axioms for linking product properties to quality attributes • Evaluate the model, identify its weaknesses and refine or recreate it
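Step 4 above (a set of axioms linking product properties to quality attributes) can be pictured as a lookup table. The sketch below is a rough illustration of that idea; the specific property-to-attribute links are assumptions for illustration, not Dromey’s published tables.

```python
# Sketch: representing Dromey-style links from tangible component properties
# to high-level quality attributes (the links shown are illustrative).

property_to_attributes = {
    # property               : quality attributes it is taken to influence
    "computable expression"  : ["functionality", "reliability"],    # correctness property
    "variable precision"     : ["reliability", "portability"],      # internal property
    "documented module"      : ["maintainability", "reusability"],  # descriptive property
    "encapsulated data"      : ["maintainability", "portability"],  # contextual property
}

def attributes_affected(properties):
    """Union of quality attributes influenced by a component's observed properties."""
    affected = set()
    for p in properties:
        affected.update(property_to_attributes.get(p, []))
    return sorted(affected)

# A component that is documented and encapsulates its data:
print(attributes_affected(["documented module", "encapsulated data"]))
# -> ['maintainability', 'portability', 'reusability']
```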

  41. 12.4 Evaluating Products Establishing Baselines and Targets • Evaluate a product by comparing it with a baseline • A baseline describes the usual or typical result in an organization or category • Baselines are useful for managing expectations • A target is a variation of a baseline • Defines minimal acceptable behavior • Reasonable when it reflects an understanding of the baseline (see the sketch below)
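A baseline/target evaluation reduces to checking a measured value against the two reference points. In the sketch below the fault-density numbers are illustrative, not the defense-project targets shown on the next slide.

```python
# Sketch: comparing a project's measured value with a baseline and a target
# (illustrative numbers only).

baseline_fault_density = 5.0   # faults per KLOC: the organization's typical result
target_fault_density   = 4.0   # faults per KLOC: minimal acceptable behavior

def evaluate(measured: float) -> str:
    if measured <= target_fault_density:
        return "meets the target"
    if measured <= baseline_fault_density:
        return "typical for the organization, but misses the target"
    return "worse than the organizational baseline"

for project, density in [("A", 3.2), ("B", 4.6), ("C", 7.1)]:
    print(project, density, "->", evaluate(density))
```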

  42. 12.4 Evaluating Products Quantitative Targets for Managing US Defense Projects

  43. 12.4 Evaluating Products Software Reusability • Software reuse: The repeated use of any part of a software system • Documentation • Code • Design • Requirements • Test cases • Test data • Reuse is possible when commonality exists among different modules or applications

  44. 12.4 Evaluating Products Types of Reuse • Producer reuse: Creating components for someone else to use • Consumer reuse: Using components developed for some other product • Black-box reuse: Using an entire product without modification • Clear- or white-box reuse: Modifying the product (to fit specific needs) before reusing it; more common in practice

  45. 12.4 Evaluating Products Reuse Approaches • Compositional reuse: Uses components as building blocks; development is done bottom-up • Components are stored in a reuse repository • Generative reuse: Designs components specifically for an application domain; design is top-down • Promises a high potential payoff • Domain analysis: Identifies and describes areas of commonality that make an application domain ripe for reuse • Compositional reuse: Identify common lower-level functions • Generative reuse: Define a domain architecture; specify interface standards

  46. 12.4 Evaluating Products Scope of Reuse • Vertical reuse: Involves the same application area or domain • Horizontal reuse: Cuts across domains

  47. 12.4 Evaluating Products Aspects of Reuse

  48. 12.4 Evaluating Products Reuse Technology • Component classification: Collections of reusable components are organized and catalogued according to a classification scheme • Hierarchical classification • Faceted classification

  49. 12.4 Evaluating Products Example of a Hierarchical Classification Scheme • Inflexible: A new topic can be added easily only at the lowest level

  50. 12.4 Evaluating Products Faceted Classification Scheme • A facet is a kind of descriptor that helps to identify the component • Example facets for reusable code: • An application area • A function • An object • A programming language • An operating system • A small sketch of a faceted repository follows
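A faceted repository can be modelled as records keyed by the facets listed above and searched by fixing some of them. The components, facet values, and queries in this sketch are illustrative.

```python
# Sketch: faceted classification of reusable components (illustrative entries).
# Each component is described by the facets listed above; a query fixes some facets
# and matches any component that agrees on all of them.

components = [
    {"name": "sort_util", "application area": "data processing", "function": "sort",
     "object": "array",  "language": "C",     "operating system": "any"},
    {"name": "tax_calc",  "application area": "payroll",         "function": "calculate tax",
     "object": "record", "language": "COBOL", "operating system": "any"},
    {"name": "queue_mgr", "application area": "telecomm",        "function": "buffer",
     "object": "queue",  "language": "C",     "operating system": "POSIX"},
]

def find(repository, **facets):
    """Return components whose facet values match every facet given in the query
    (underscores in keyword names stand in for spaces in facet names)."""
    return [c for c in repository
            if all(c.get(f.replace("_", " ")) == v for f, v in facets.items())]

# Find components written in C, regardless of the other facets:
for c in find(components, language="C"):
    print(c["name"])        # sort_util, queue_mgr
```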
