
COCOMO II Calibration


Presentation Transcript


  1. COCOMO II Calibration. Brad Clark, Software Metrics Inc. Don Reifer, Reifer Consultants Inc. 22nd International Forum on COCOMO and Systems/Software Cost Modeling, USC Campus, Los Angeles, CA, 31 Oct to 2 Nov 2007

  2. Topics I. Globbing study results • Glob: (informal noun) a lump of a semi-liquid substance • COCOMO Globbing: lumping projects together based on common attributes II. COCOMO Model calibration status

  3. Globbing Motivation • Create pre-set cost driver ratings for different application domains • Select an application domain and then create a set of driver ratings for it based on common characteristics • Allows estimators to generate estimates quickly • Permits estimators to create a knowledge base on the basis of application domain characteristics • Create calibration groups within the data • Hypothesis: projects in the database are so diverse that “local” calibration within a group should improve model accuracy and precision

  4. Globbing Approach -1 • Globbing data into application domains is based on productivities and size • What productivity do we use: raw or adjusted? • Why would we consider adjusting actual effort? Because there are business decisions that impact the project and are independent of an application domain • Which COCOMO II drivers are business-driven? • How do we create size ranges?

  5. Globbing Approach -2 • Business-driven drivers (these are independent of the application): Analyst capability (ACAP), Programmer capability (PCAP), Personnel continuity (PCON), Application experience (APEX), Platform experience (PLEX), Language and tool experience (LTEX), Use of software tools (TOOL), Multi-site development (SITE), Required development schedule (SCED), and all Scale Drivers • Application-driven drivers: Required software reliability (RELY), Database size (DATA), Product complexity (CPLX), Developed for Reuse (RUSE, maybe not), Documentation match to life-cycle needs (DOCU), Execution time constraint (TIME), Main storage constraint (STOR), Platform volatility (PVOL)

  6. Size Buckets • When Size and Productivity are compared, there are three areas where productivity changes at different rates: • Small: 2 to 25 KSLOC • Medium: 25 to 100 KSLOC • Large: 100 to 1,000 KSLOC
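The bucket boundaries above can be expressed as a small helper. This is a sketch only: the function name is illustrative, and because the slide's ranges share their endpoints, half-open intervals are assumed.

```python
def size_bucket(ksloc: float) -> str:
    """Map a project size in KSLOC (thousands of SLOC) to a size bucket."""
    if 2 <= ksloc < 25:
        return "Small"
    if 25 <= ksloc < 100:
        return "Medium"
    if 100 <= ksloc <= 1000:
        return "Large"
    raise ValueError("size outside the 2 to 1,000 KSLOC range of the study")

print(size_bucket(56))  # the 56 KSLOC example project lands in "Medium"
```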

  7. Analysis Approach -1 • Data from the COCOMO II 2000 calibration (161 projects) were used to conduct the study • The effort was adjusted to remove the business-driven drivers: PM’ • Data points were divided into four different groups based on productivities observed in the data • Glob-1: Defense-like applications (real-time; complex) • Glob-2: Telecom-like applications (high reliability) • Glob-3: Scientific-like applications (compute-intensive) • Glob-4: Business-like applications (data-intensive) • Each Glob was segregated into three size buckets based on COCOMO II model productivity rates • Small: 2 to 25 KSLOC • Medium: 25 to 100 KSLOC • Large: 100+ KSLOC
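The adjustment to PM’ can be sketched as dividing actual effort by the product of the project's business-driven effort multipliers, leaving only application-driven variation in the data. The driver values below are hypothetical, not taken from the calibration database.

```python
import math

# Hypothetical project: actual effort in person-months and the
# effort-multiplier values for its non-Nominal business-driven drivers
# (all other business-driven drivers are Nominal, i.e. 1.0).
actual_pm = 500.0
business_ems = {"ACAP": 0.85, "PCAP": 0.88, "SCED": 1.14}

# PM' divides out the business-driven multipliers so that projects can be
# globbed by application-domain characteristics alone.
pm_prime = actual_pm / math.prod(business_ems.values())
print(round(pm_prime, 1))
```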

  8. Analysis Approach -2 Globbing around application-type and size has been successfully demonstrated in other cost models. We defined application-types as follows (the application-type definition table is not reproduced in this transcript). We use such definitions because we can compare results against the norms reported in the CrossTalk, March 2002 article “Let the Numbers Do the Talking”

  9. COCOMO II 2000 Effort Prediction Accuracy As a comparison to the Globbing results, the COCOMO II model accuracy reported in 2000 was tabulated on the slide (table not reproduced in this transcript). Note: PRED(X) = Y% means that Y% of the predicted values fall within X% of the actual values
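The PRED(X) measure defined in the note can be computed directly from paired actuals and estimates. A minimal sketch with made-up numbers, not the study's data:

```python
def pred(actuals, estimates, x=0.25):
    """PRED(X): fraction of estimates whose relative error is within X.
    PRED(.25) = 70% means 70% of estimates fall within 25% of actuals."""
    hits = sum(1 for a, e in zip(actuals, estimates) if abs(e - a) / a <= x)
    return hits / len(actuals)

actuals = [100, 200, 400, 800]    # illustrative actual person-months
estimates = [110, 140, 390, 900]  # illustrative model estimates
print(pred(actuals, estimates))   # 3 of 4 within 25%, so 0.75
```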

  10. Results -1 Projects in each Glob and Size Bucket were used to create a new COCOMO II constant, A, for each group (B set to 0.91).
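With the exponent held fixed (B = 0.91 plus the usual scale-factor term), fitting A for a group reduces to averaging residuals in log space. A sketch under that assumption; the project tuples are hypothetical, not from the database:

```python
import math

def calibrate_A(projects, b=0.91):
    """Fit the multiplicative constant A for one Glob/size-bucket group.
    Each tuple is (adjusted effort PM', size in KSLOC, product of
    application-driven effort multipliers, sum of scale factors)."""
    residuals = [math.log(pm) - (b + 0.01 * sf) * math.log(size) - math.log(em)
                 for pm, size, em, sf in projects]
    # Least squares in log space: A is the exponential of the mean residual
    return math.exp(sum(residuals) / len(residuals))

# Hypothetical group of three projects
group = [(380.0, 56.0, 1.46, 18.97),
         (120.0, 30.0, 1.40, 18.97),
         (900.0, 120.0, 1.50, 18.97)]
print(round(calibrate_A(group), 2))
```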

  11. Results -2 COCOMO II constants, A and B, were created for each group.
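Fitting both A and B for a group is an ordinary least-squares line fit in log space, log PM = log A + B · log Size, once the effort multipliers and the scale-factor part of the exponent have been divided out of the effort. A sketch with hypothetical (adjusted person-months, KSLOC) pairs:

```python
import math

def calibrate_A_B(projects):
    """Least-squares fit of log(PM_adj) = log(A) + B * log(size)."""
    xs = [math.log(size) for _, size in projects]
    ys = [math.log(pm) for pm, _ in projects]
    n = len(projects)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), slope  # (A, B)

# Hypothetical (adjusted person-months, KSLOC) pairs for one Glob
data = [(50.0, 15.0), (200.0, 55.0), (700.0, 160.0)]
A, B = calibrate_A_B(data)
print(round(A, 2), round(B, 2))
```

With only a handful of points per group the slope B is poorly determined, which is why the slides note that 10-12 projects per group are needed.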

  12. Usage Example Example Project: • Estimated 56,000 SLOC • Application-type: Telecom-like (Glob-2, constant A=3.12) Results: • Preset drivers: • RELY: High • DATA: High • TIME: High • STOR: High • All other drivers: Nominal • 382 PM, 24 Months, 16 Average Staffing Level As more is known (but not until then), COCOMO II drivers can be adjusted to reflect that information
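The slide's numbers can be reproduced with the standard COCOMO II.2000 equations. The scale-factor sum (all Nominal), the published multiplier values for the High ratings, and the schedule constants C = 3.67 and D = 0.28 are assumptions here, since the slide does not state them:

```python
import math

A, B = 3.12, 0.91                   # Glob-2 constant A; B from the study
ksloc = 56.0
sf_sum = 18.97                      # all five scale factors Nominal (assumed)
ems = {"RELY": 1.10, "DATA": 1.14,  # COCOMO II.2000 High ratings for the
       "TIME": 1.11, "STOR": 1.05}  # preset drivers; all others Nominal

E = B + 0.01 * sf_sum               # effort exponent
pm = A * ksloc ** E * math.prod(ems.values())

F = 0.28 + 0.2 * (0.01 * sf_sum)    # schedule exponent
tdev = 3.67 * pm ** F               # schedule in months
staff = pm / tdev                   # average staffing level

# Close to the slide's 382 PM, 24 months, 16 average staff
print(round(pm), round(tdev), round(staff))
```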

  13. Globbing Conclusions • Purpose was to create pre-set cost driver ratings for different application domains and a local calibration based on the domain • An application domain is not characterized by all COCOMO II drivers, i.e., only a subset of the drivers describes an application domain (the question is: which subset?) • Calibrating a new COCOMO II constant, A, for each group makes the most sense and gives reasonably accurate results • Need more data to calibrate A and B (10-12 projects for each group) • Size buckets account for possible changes in B

  14. Globbing Next Steps • Get feedback on this idea from you, the conference attendees • Please share your thoughts with Don or Brad • The key to this approach is identifying the correct Globs and the cost-driver setting for each Glob • Use a consensus approach to setting the cost drivers for the different Globs • Name and describe the Globs • Re-run analysis • Re-apply this technique to new project data in the repository

  15. Topics I. Globbing study results • Glob: (informal noun) a lump of a semi-liquid substance • COCOMO Globbing: lumping projects together based on a common attribute II. COCOMO Model calibration status

  16. COCOMO II Calibration Status Please check the CSSE website under “Past Events” for the results of this work. http://csse.usc.edu/csse/event/past.html

  17. For More Information Brad Clark Software Metrics Inc. (703) 754-0115 Brad@software-metrics.com Don Reifer Reifer Consultants Inc. (310) 530-4493 dreifer@earthlink.net
