
Early Systems Costing


Presentation Transcript


  1. Early Systems Costing Prof. Ricardo Valerdi Systems & Industrial Engineering rvalerdi@arizona.edu Dec 15, 2011 INCOSE Brazil Systems Engineering Week INPE/ITA São José dos Campos, Brazil

  2. Take-Aways • Cost ≈ f(Effort) ≈ f(Size) ≈ f(Complexity) • Requirements understanding and the “ilities” are the most influential factors on cost • Systems engineering yields high ROI when done early and well • Two case studies: an SE estimate with limited information, and the selection of a process improvement initiative

  3. Cost Commitment on Projects Blanchard, B., Fabrycky, W., Systems Engineering & Analysis, Prentice Hall, 2010.

  4. Cone of Uncertainty [Figure: the relative size range narrows from 4x/0.25x through 2x/0.5x to x across the phases Feasibility, Plans/Rqts., Design, and Develop and Test, with milestones Operational Concept, Life Cycle Objectives, Life Cycle Architecture, and Initial Operating Capability.] Boehm, B. W., Software Engineering Economics, Prentice Hall, 1981.

  5. The Delphic Sibyl, Michelangelo Buonarroti, Cappella Sistina, Il Vaticano (1508–1512)

  6. Systems Engineering Effort vs. Program Cost NASA data (Honour 2002)

  7. COSYSMO Origins [Timeline: Systems Engineering, 1950s (Warfield 1956); Software Cost Modeling, 1980s (Boehm 1981); CMMI*, 1990s (Humphrey 1989); leading to COSYSMO.] *Capability Maturity Model Integration (Software Engineering Institute, Carnegie Mellon University). Warfield, J. N., Systems Engineering, United States Department of Commerce PB111801, 1956. Boehm, B. W., Software Engineering Economics, Prentice Hall, 1981. Humphrey, W., Managing the Software Process, Addison-Wesley, 1989.

  8. How is Systems Engineering Defined? EIA/ANSI 632 defines it through five process groups: • Acquisition and Supply: Supply Process, Acquisition Process • Technical Management: Planning Process, Assessment Process, Control Process • System Design: Requirements Definition Process, Solution Definition Process • Product Realization: Implementation Process, Transition to Use Process • Technical Evaluation: Systems Analysis Process, Requirements Validation Process, System Verification Process, End Products Validation Process. EIA/ANSI 632, Processes for Engineering a System, 1999.

  9. COSYSMO Data Sources

  10. Modeling Methodology

  11. Results of Bayesian Update: Using Prior and Sampling Information

  12. COSYSMO Scope • Addresses the first four phases of the systems engineering life cycle (per ISO/IEC 15288): Conceptualize, Develop, Operational Test & Evaluation, and Transition to Operation; the later phases (Operate, Maintain, or Enhance; Replace or Dismantle) are outside the model's scope • Considers standard Systems Engineering Work Breakdown Structure tasks (per EIA/ANSI 632)

  13. COSYSMO Operational Concept [Diagram] Size Drivers (# Requirements, # Interfaces, # Scenarios, # Algorithms) and Effort Multipliers (8 application factors, 6 team factors) feed the COSYSMO model, which is calibrated against historical data and produces an Effort estimate (with schedule and risk implications); the WBS is guided by EIA/ANSI 632.

  14. Software Cost Estimating Relationship
\[
MM = a \cdot S^{E} \cdot \prod_j c_j
\]
where MM = man-months, a = calibration constant, S = size driver in KDSI (thousands of delivered source instructions), E = scale factor, and c_j = cost driver(s). Boehm, B. W., Software Engineering Economics, Prentice Hall, 1981.

  15. COSYSMO Model Form
\[
PM_{NS} = A \cdot \Big( \sum_{k} \sum_{x} w_{k,x} \, \Phi_{k,x} \Big)^{E} \cdot \prod_{j=1}^{14} EM_j
\]
Where: PM_NS = effort in Person-Months (Nominal Schedule); A = calibration constant derived from historical project data; k = {REQ, IF, ALG, SCN}; w_{k,x} = weight for an “easy”, “nominal”, or “difficult” size driver; Φ_{k,x} = quantity of the “k” size driver at difficulty “x”; E = represents diseconomies of scale; EM_j = effort multiplier for the jth cost driver. The geometric product results in an overall effort adjustment factor to the nominal effort.
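As a concrete illustration, the model form can be sketched in a few lines of Python. Every number below (weights, A, E, driver counts, multipliers) is an illustrative placeholder, not the published COSYSMO calibration:

```python
# Minimal sketch of the COSYSMO model form. All constants are placeholders.

# Size driver counts Phi_{k,x}, split by difficulty: (easy, nominal, difficult)
size_counts = {
    "REQ": (200, 200, 100),   # system requirements
    "IF":  (5, 3, 2),         # system interfaces
    "ALG": (10, 5, 0),        # system-specific algorithms
    "SCN": (2, 1, 1),         # operational scenarios
}

# Illustrative weights w_{k,x} for easy / nominal / difficult items
weights = {
    "REQ": (0.5, 1.0, 5.0),
    "IF":  (1.1, 2.8, 6.3),
    "ALG": (2.2, 4.1, 11.5),
    "SCN": (6.2, 14.4, 30.0),
}

A = 0.25   # calibration constant (placeholder)
E = 1.06   # diseconomy-of-scale exponent (placeholder)

# Effort multipliers EM_j for the 14 cost drivers; 1.0 = nominal rating
effort_multipliers = [1.0] * 14

def cosysmo_effort(counts, wts, ems, a=A, e=E):
    """Person-months (nominal schedule): PM_NS = A * (sum w*Phi)^E * prod EM."""
    size = sum(
        w * phi
        for k in counts
        for w, phi in zip(wts[k], counts[k])
    )
    adjustment = 1.0
    for em in ems:
        adjustment *= em
    return a * size ** e * adjustment

print(f"PM_NS = {cosysmo_effort(size_counts, weights, effort_multipliers):.1f}")
```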

  16. Size Drivers vs. Effort Multipliers • Size drivers are additive and incremental: the relative impact of adding one item is inversely proportional to current size (10 → 11 rqts = 10% increase; 100 → 101 rqts = 1% increase) • Effort multipliers are multiplicative and system-wide: their impact is independent of size (10 rqts + high security = 40% increase; 100 rqts + high security = 40% increase)
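The contrast is easy to verify numerically; in this sketch the 1.40 multiplier merely stands in for the slide's “high security” example:

```python
# Size drivers are additive: the relative impact of one more requirement
# shrinks as the baseline grows.
for n_reqs in (10, 100):
    print(f"{n_reqs} -> {n_reqs + 1} rqts: +{100 / n_reqs:.0f}% size")

# Effort multipliers are multiplicative and system-wide: a 1.40 multiplier
# (standing in for the slide's "high security" rating) adds 40% to effort
# regardless of how many requirements there are.
HIGH_SECURITY_EM = 1.40  # illustrative value from the slide's example
for effort in (10.0, 100.0):
    print(f"baseline {effort:.0f} PM -> {effort * HIGH_SECURITY_EM:.0f} PM (+40%)")
```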

  17. 4 Size Drivers • Number of System Requirements • Number of System Interfaces • Number of System Specific Algorithms • Number of Operational Scenarios Weighted by complexity and degree of reuse

  18. Number of System Requirements This driver represents the number of requirements for the system-of-interest at a specific level of design. The quantity of requirements includes those related to the effort involved in systems engineering the system interfaces, system-specific algorithms, and operational scenarios. Requirements may be functional, performance, feature, or service-oriented in nature depending on the methodology used for specification. They may also be defined by the customer or the contractor. Each requirement may have effort associated with it, such as V&V, functional decomposition, functional allocation, etc. System requirements can typically be quantified by counting the number of applicable shalls/wills/shoulds/mays in the system or marketing specification. Note: some work is involved in decomposing requirements so that they may be counted at the appropriate level of the system-of-interest.
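As a rough illustration of that counting rule, the imperatives in a specification can be tallied mechanically (a hypothetical sketch with invented text; real counting still requires decomposing requirements to the right level):

```python
import re

# Hypothetical sketch: tally the imperatives (shall/will/should/may) in a
# specification's text as a first-cut requirements count.
SPEC_TEXT = """
The system shall log all operator commands.
The display will refresh at 30 Hz.
Maintenance access should require only one standard tool.
The operator may override automatic mode.
"""

counts = {
    word: len(re.findall(rf"\b{word}\b", SPEC_TEXT, flags=re.IGNORECASE))
    for word in ("shall", "will", "should", "may")
}
print(counts)
print("candidate requirements:", sum(counts.values()))
```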

  19. Counting Rules Example COSYSMO goals expressed at Cockburn’s sky, kite, sea, and underwater levels: • Sky level: Build an SE cost model • Kite level: Adopt EIA 632 as the WBS and ISO 15288 as the life cycle standard • Sea level: Utilize size and cost drivers, definitions, and counting rules • Underwater level: Perform statistical analysis of data with software tools and implement the model in Excel. Source: Cockburn 2001.

  20. Size Driver Weights

  21. Cost Driver Clusters • UNDERSTANDING FACTORS: Requirements understanding, Architecture understanding, Stakeholder team cohesion, Personnel experience/continuity • COMPLEXITY FACTORS: Level of service requirements, Technology risk, # of recursive levels in the design, Documentation match to life cycle needs • OPERATIONS FACTORS: # and diversity of installations/platforms, Migration complexity • PEOPLE FACTORS: Personnel/team capability, Process capability • ENVIRONMENT FACTORS: Multisite coordination, Tool support

  22. Stakeholder team cohesion Represents a multi-attribute parameter that includes leadership, shared vision, diversity of stakeholders, approval cycles, group dynamics, IPT framework, team dynamics, trust, and amount of change in responsibilities. It also represents the heterogeneity of the stakeholder community: end users, customers, implementers, and the development team.

  23. Technology Risk The maturity, readiness, and obsolescence of the technology being implemented. Immature or obsolescent technology will require more Systems Engineering effort.

  24. Migration complexity This cost driver rates the extent to which the legacy system affects the migration complexity, if any. Legacy system components, databases, workflows, environments, etc., may affect the new system implementation due to new technology introductions, planned upgrades, increased performance, business process reengineering, etc.

  25. Cost Driver Rating Scales

  26. Cost Drivers Ordered by Effort Multiplier Ratio (EMR)

  27. Effort Profiling [Matrix: ANSI/EIA 632 process groups (Acquisition & Supply, Technical Management, System Design, Product Realization, Technical Evaluation) profiled across the ISO/IEC 15288 phases Conceptualize, Develop, Operational Test & Evaluation, and Transition to Operation.]

  28. Benefits of Local Calibration [Charts: estimation accuracy before vs. after local calibration]

  29. Prediction Accuracy [Chart of PRED(20), PRED(25), and PRED(30) values] PRED(30) = 100%; PRED(25) = 57%.
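PRED(L) is the fraction of projects whose estimate falls within L% of the actual outcome. A small self-contained sketch, with made-up sample data:

```python
# PRED(L): the fraction of projects whose estimate falls within L percent
# of the actual outcome. The sample data below are made up for illustration.
def pred(estimates, actuals, level):
    within = sum(
        abs(est - act) / act <= level / 100
        for est, act in zip(estimates, actuals)
    )
    return within / len(actuals)

actuals   = [100, 250, 80, 400, 150]   # actual effort, person-months
estimates = [130, 230, 95, 390, 190]   # model estimates, person-months

for level in (20, 25, 30):
    print(f"PRED({level}) = {pred(estimates, actuals, level):.0%}")
```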

  30. Human Systems Integration in the Air Force Joint HSI Personnel Integration Working Group. “Human System Integration (HSI) Activity in DoD Weapon Acquisition Programs: Part II Program Coverage and Return on Investment.” April 16, 2007. U.S. Air Force Scientific Advisory Board. “Report on Human Systems Integration in Air Force Weapon Systems Development and Acquisition.” July, 2004.

  31. HSI Early in F119 Development • AF Acquisition Community-led requirements definition studies • 40% fewer parts than previous engines

  32. Parametric Cost Estimation

  33. Parametric Cost Estimation [Diagram of example model inputs: size drivers (Requirements, Interfaces, Algorithms, Operational Scenarios) and cost drivers (Architecture Understanding, Technology Risk, Tool Support, Personnel/team capability, # and diversity of installations/platforms)]

  34. Illustrative Example: Nominal System • 100 Difficult Requirements • 200 Medium Requirements • 200 Easy Requirements • SE Effort = 300 Person-months

  35. Illustrative Example: Size Impact • New HSI-driven requirements (e.g., 5th/95th percentile manpower accommodation, 20-minute component replacement, tools reduction, interactive technical manuals) grow the baseline to 110 Difficult, 210 Medium, and 200 Easy Requirements • SE Effort = 327 Person-months

  36. Illustrative Example: Cost Impact • Same size as before (110 Difficult, 210 Medium, 200 Easy Requirements) • Effect of effort multipliers: High Level of service requirements, High HSI tool support • SE Effort = 368 Person-months
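Reading the last two slides together: the size growth raises nominal effort from 300 to 327 person-months, and the composite effort-multiplier product implied by the final figure is

\[
\prod_j EM_j \approx \frac{368\ \text{PM}}{327\ \text{PM}} \approx 1.13,
\]

i.e., the two high-rated drivers together add roughly 13% on top of the size-adjusted effort (a back-of-the-envelope reading of the slide's numbers, not a published calibration).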

  37. ROI Calculation (time value of money excluded) • Cost: 68 Person-Months @ $10,000/person-month = $680,000 of Human Systems Integration • Benefit, assuming a 30-year life cycle: Manpower (one less maintainer): $200,000 × 30 = $6,000,000; Human factors (40% fewer parts): $300,000 × 30 = $9,000,000; Safety (one less repair accident): $50,000; Survivability (one less engine failure): $500,000; Total benefit = $15,550,000 • Return on Investment: worked out below
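With the slide's figures, the ratio (left implicit above) works out to

\[
ROI = \frac{\$15{,}550{,}000 - \$680{,}000}{\$680{,}000} \approx 21.9,
\]

roughly a 22:1 return on the HSI investment, still excluding the time value of money.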

  38. Relative ROI Brodman, J. G., Johnson, D. L., “Return on Investment from Software Process Improvement as Measured by U.S. Industry,” Crosstalk – The Journal of Defense Software Engineering, 1996. Rico, D. F., ROI of Software Process Improvement: Metrics for Project Managers and Software Engineers, J. Ross Publishing, Boca Raton, FL, 2004. Boehm, B. W., Valerdi, R., and Honour, E., “The ROI of Systems Engineering: Some Quantitative Results for Software-Intensive Systems,” Systems Engineering, 11(3), 221-234, 2008.

  39. Case Study I: Albatross Budgetary Estimate • You, as the Albatross SE lead, have just been asked to provide your program manager a systems engineering budgetary estimate for a new, believed-to-be-critical function added to the current baseline • The new functionality would add a nearly standalone capability to your Albatross program. Your best educated guess, based on the emailed requirements provided by your customer, is that it adds about 10% to the requirements baseline of 1,000 plus two new interfaces, and that at least one new Operational Scenario is needed • The customer also stated that they really need this capability integrated into the next delivery (5 months from now) • The PM absolutely has to have your SE cost estimate within the next two hours, as the customer representative needs a not-to-exceed (NTE) total cost soon after that • Information that may prove useful in your response: Albatross is well into system I&T, with somewhat higher than expected defects; most of the baseline test procedures are nearing completion; the SE group has lost two experienced people in the past month to attrition; so far, the Albatross customer award fees have been excellent, with meeting of schedule commitments noted as a key strength

  40. Case Study I: In-Class Discussion Questions • What are some of the risks? • What additional questions could you ask of your PM? • What additional questions could the PM (or the SE Lead) ask of the Customer Representative? • What role could the Albatross PM play in this situation? • Is providing only “a number” appropriate in this situation? • What additional assumptions did you make that can be captured by COSYSMO?

  41. Case Study II: Selection of Process Improvement Initiative • You are asked by your supervisor to recommend which process improvement initiative should be pursued to help your project, a 5-year, $100,000,000 systems engineering study. The options and their implementation costs are: Lean, $1,000,000; Six Sigma, $600,000; CMMI, $500,000 • Which one would you choose?

  42. Case Study II: Selection of Process Improvement Initiative Assumptions • Implementing Lean on your project will greatly improve your team’s documentation process. Using COSYSMO, you determine that the Documentation cost driver will move from “Nominal” to “Low”, yielding a 12% cost savings (1.00 – 0.88 = 0.12). • Implementing Six Sigma will greatly improve your team’s requirements process. Using COSYSMO, you determine that the Requirements Understanding cost driver will move from “Nominal” to “High”, yielding a 23% cost savings (1.00 – 0.77 = 0.23). • Implementing CMMI will greatly improve team communication. Using COSYSMO, you determine that the Stakeholder Team Cohesion cost driver will move from “Nominal” to “High”, yielding a 19% cost savings (1.00 – 0.81 = 0.19).

  43. Cost Driver Rating Scales

  44. Case Study II: Selection of Process Improvement Initiative Assumptions • The systems engineers on your project spend 5% of their time on documentation, 30% on requirements-related work, and 20% communicating with stakeholders • What is the financial benefit of each process improvement initiative? • How does the financial benefit vary by project size?
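One way to answer the first question is a quick sketch, assuming the benefit model (time share of the affected activity) × (cost-driver savings) × (total project cost), which is consistent with the assumptions above but is itself an assumption:

```python
# Case Study II comparison. Assumed benefit model (consistent with the
# slides, but an assumption): benefit = time share of the affected
# activity * cost-driver savings * total project cost.
PROJECT_COST = 100_000_000  # the 5-year systems engineering study

initiatives = {
    # name:      (implementation cost, time share, cost-driver savings)
    "Lean":      (1_000_000, 0.05, 0.12),  # documentation process
    "Six Sigma": (  600_000, 0.30, 0.23),  # requirements-related work
    "CMMI":      (  500_000, 0.20, 0.19),  # stakeholder communication
}

for name, (cost, share, savings) in initiatives.items():
    benefit = PROJECT_COST * share * savings
    roi = (benefit - cost) / cost
    print(f"{name:10s} benefit = ${benefit:>12,.0f}   ROI = {roi:5.1f}")
```

Under these assumptions Six Sigma wins by a wide margin (ROI ≈ 10.5, vs. ≈ 6.6 for CMMI and a negative return for Lean), matching the conclusion on the next slide; rerunning with different values of PROJECT_COST answers the second question.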

  45. Return-on-Investment Calculation: ROI = (benefit – cost) / cost. Best option is Six Sigma!

  46. Return-on-Investment vs. Project Size [Chart: ROI curves for Six Sigma, CMMI, and Lean as a function of project size]

  47. Reuse is a Universal Concept

  48. Reusable Artifacts • System Requirements • System Architectures • System Description/Design Documents • Interface Specifications for Legacy Systems • Configuration Management Plan • Systems Engineering Management Plan • Well-Established Test Procedures • User Guides and Operation Manuals • Etc.
