
SE 325/425 Principles and Practices of Software Engineering Autumn 2006


Presentation Transcript


  1. SE 325/425 Principles and Practices of Software Engineering, Autumn 2006. James Nowotarski, 17 October 2006

  2. Today's Agenda
  Topic                                  Duration
  • Testing recap                        20 minutes
  • Project planning & estimating        60 minutes
  *** Break
  • Current event reports                30 minutes
  • Software metrics                     60 minutes

  3. Verification & Validation • Testing is just part of a broader topic referred to as Verification and Validation (V&V) • Pressman/Boehm: • Verification: Are we building the product right? • Validation: Are we building the right product?

  4. Stage Containment (diagram): problems are labeled by where they originate and where they are detected (error, fault, defect) across the framework activities: Planning & Managing; Communication (project initiation, requirements); Modeling (analysis, design); Construction (code, test); Deployment (delivery, support). The goal is to detect errors in the stage where they originate.

  5. V-Model (diagram): Requirements pairs with Acceptance Test, Functional Design with System Test, Technical Design with Integration Test, and Detailed Design/Code with Unit Test. The legend distinguishes the flow of work, verification, and validation; testing checks that the product implements the specification.

  6. Test Coverage Metrics
  • Statement coverage: goal is to execute each statement at least once.
  • Branch coverage: goal is to execute each branch at least once.
  • Path coverage: goal is to execute each path at least once, where a path is a feasible sequence of statements that can be taken during the execution of the program.
  For the flow-graph example on the slide (legend: tested vs. not tested), this test execution gives statement coverage 5/10 = 50%, branch coverage 2/6 ≈ 33%, and path coverage 1/4 = 25%. Where does the 4 come from?
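
  To make the three coverage notions concrete, here is a minimal sketch in C; the clamp function and its two test inputs are hypothetical, not from the slides. The first test alone reaches every statement except one and leaves two branch outcomes unexercised, which is why branch and path coverage are stricter goals than statement coverage.

    /* Hypothetical example illustrating the gap between statement,
       branch, and path coverage. */
    #include <stdio.h>

    int clamp(int x, int lo, int hi) {
        if (x < lo)      /* branch 1 */
            x = lo;
        if (x > hi)      /* branch 2 */
            x = hi;
        return x;
    }

    int main(void) {
        /* Test 1: branch 1 true, branch 2 false.
           Every statement except "x = hi" executes. */
        printf("%d\n", clamp(-5, 0, 10));

        /* Test 2: branch 1 false, branch 2 true.
           Together the two tests give full statement and branch coverage,
           yet the feasible path where x is already in range is still untested. */
        printf("%d\n", clamp(99, 0, 10));
        return 0;
    }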

  7. Example of pair programming • “Since then, [Adobe’s] Mr. Ludwig has adopted Fortify software and improved communication between his team of security experts and programmers who write software. A few years ago, each group worked more or less separately: The programmers coded, then the quality-assurance team checked for mistakes. Now, programmers and security types often sit side by side at a computer, sometimes lobbing pieces of code back and forth several times a day until they believe it is airtight. The result: ‘Issues are being found earlier,’ Mr. Ludwig says. But, he adds, ‘I'm still trying to shift that curve.’ “ Vara, V. (2006, May 4). Tech companies check software earlier for flaws. Wall Street Journal. Retrieved October 16, 2006, from http://online.wsj.com/public/article/SB114670277515443282-qA6x6jia_8OO97Lutaoou7Ddjz0_20060603.html?mod=tff_main_tff_top

  8. V-Model (diagram): Acceptance Test and System Test are black-box levels; Integration Test and Unit Test, run against the code, are white-box levels.

  9. Flow Graph Notation (diagram): constructs for sequence, if, case, while, and until, where each circle (node) represents one or more nonbranching source code statements.

  10. Continued… PDL for the "average" procedure. The numbers 1-13 are the flow graph node labels from the slide's figure; each condition of a compound predicate gets its own node.

      1:  i = 1;
          total.input = total.valid = 0;
          sum = 0;
      2:  DO WHILE value[i] <> -999
      3:      AND total.input < 100
      4:      increment total.input by 1;
      5:      IF value[i] >= minimum
      6:         AND value[i] <= maximum
      7:         THEN increment total.valid by 1;
                      sum = sum + value[i]
               ELSE skip
               ENDIF
      8:      increment i by 1;
      9:  ENDDO
      10: IF total.valid > 0
      11:     THEN average = sum / total.valid;
      12:     ELSE average = -999;
      13: ENDIF
          END average
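
  For readers who want to run the example, here is a direct C translation of the PDL above. It is only a sketch: the -999 sentinel and the 100-value cap are carried over from the pseudocode, while the function signature and the sample data in main are my own choices.

    #include <stdio.h>

    double average(const double value[], double minimum, double maximum,
                   int *total_input, int *total_valid)
    {
        int i = 0;                    /* the PDL indexes from 1; C from 0 */
        double sum = 0.0;
        *total_input = *total_valid = 0;

        while (value[i] != -999 && *total_input < 100) {
            (*total_input)++;
            if (value[i] >= minimum && value[i] <= maximum) {
                (*total_valid)++;
                sum += value[i];
            }
            i++;
        }
        if (*total_valid > 0)
            return sum / *total_valid;
        return -999;                  /* sentinel: no valid values */
    }

    int main(void) {
        double data[] = { 10, 200, 30, -999 };   /* -999 terminates the list */
        int in, valid;
        double avg = average(data, 0, 100, &in, &valid);
        printf("input=%d valid=%d average=%.1f\n", in, valid, avg);
        return 0;
    }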

  11. Steps for deriving test cases (the flow graph with nodes 1-13 from the previous slide is shown alongside)
  1. Use the design or code as a foundation and draw the corresponding flow graph.
  2. Determine the cyclomatic complexity of the resultant flow graph:
     V(G) = 17 edges - 13 nodes + 2 = 6
     V(G) = 5 predicate nodes + 1 = 6
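
  The edge-counting form of V(G) is easy to mechanize. The sketch below computes V(G) = E - N + 2 from an edge list; the particular edges are my reading of the slide's flow graph, so treat them as illustrative.

    #include <stdio.h>

    int main(void) {
        /* One ordered pair per edge of the flow graph (node -> node). */
        int edges[][2] = {
            {1,2},{2,3},{2,10},{3,4},{3,10},{4,5},{5,6},{5,8},{6,7},
            {6,8},{7,8},{8,9},{9,2},{10,11},{10,12},{11,13},{12,13}
        };
        int num_edges = (int)(sizeof edges / sizeof edges[0]);   /* 17 */
        int num_nodes = 13;
        printf("V(G) = %d - %d + 2 = %d\n",
               num_edges, num_nodes, num_edges - num_nodes + 2); /* 6 */
        return 0;
    }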

  12. Steps for deriving test cases (continued)
  3. Determine a basis set of linearly independent paths:
     Path 1: 1-2-10-11-13
     Path 2: 1-2-10-12-13
     Path 3: 1-2-3-10-11-13
     Path 4: 1-2-3-4-5-8-9-2-...
     Path 5: 1-2-3-4-5-6-8-9-2-...
     Path 6: 1-2-3-4-5-6-7-8-9-2-...
  4. Prepare test cases that will force execution of each path in the basis set.

  13. Today's Agenda
  Topic                                  Duration
  • Testing recap                        20 minutes
  • Project planning & estimating        60 minutes
  *** Break
  • Current event reports                30 minutes
  • Software metrics                     60 minutes

  14. People trump process
  "A successful software methodology (not new, others have suggested it):
  (1) Hire really smart people
  (2) Set some basic direction/goals
  (3) Get the hell out of the way
  In addition to the steps above, there's another key: RETENTION"
  http://steve-yegge.blogspot.com/2006/09/good-agile-bad-agile_27.html

  15. Our focus (diagram): project management, i.e., Planning & Managing, shown spanning the framework activities: Communication (project initiation, requirements), Modeling (analysis, design), Construction (code, test), Deployment (delivery, support).

  16. Planning & Managing Project Management Institute • Scope • Time • Cost • People • Quality • Risk • Integration (incl. change) • Communications • Procurement

  17. The planning and estimating flow (diagram; today's focus): (1) negotiate requirements with the users, producing negotiated requirements; (2) decompose them into a work breakdown structure; (3) estimate size, producing deliverable size; (4) estimate resources, applying a productivity rate to get work-months; (5) develop the schedule. Iterate as necessary.

  18. Work Breakdown Structure • Breaks project into a hierarchy. • Creates a clear project structure. • Avoids risk of missing project elements. • Enables clarity of high level planning.

  19. The planning and estimating flow again (diagram; today's focus): negotiate requirements (1), decompose into a work breakdown structure (2), estimate size (3), estimate resources using a productivity rate (4), and develop the schedule (5), iterating as necessary.

  20. Units of Size • Lines of code (LOC) • Function points (FP) • Components

  21. LOC: How many physical source lines are there in this C language program?

      #define LOWER 0    /* lower limit of table */
      #define UPPER 300  /* upper limit */
      #define STEP  20   /* step size */
      main()   /* print a Fahrenheit->Celsius conversion table */
      {
          int fahr;
          for (fahr = LOWER; fahr <= UPPER; fahr = fahr + STEP)
              printf("%4d %6.1f\n", fahr, (5.0/9.0)*(fahr-32));
      }

  22. LOC: Need standards to ensure repeatable, consistent size counts. A counting checklist specifies, for each statement type, whether it is included in or excluded from the count:
  • Executable statements
  • Nonexecutable statements: declarations, compiler directives
  • Comments, both on their own lines and on lines with source
  • . . .
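
  A counting standard only helps if it is applied mechanically. As a minimal sketch (assuming the simplest possible convention: count every non-blank physical line, ignoring the comment and directive distinctions in the checklist above), a counter might look like this:

    #include <stdio.h>
    #include <ctype.h>

    int main(int argc, char *argv[]) {
        if (argc < 2) { fprintf(stderr, "usage: %s file.c\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "r");
        if (!f) { perror("fopen"); return 1; }

        int c, blank = 1, count = 0;
        while ((c = fgetc(f)) != EOF) {
            if (c == '\n') {                 /* end of a physical line */
                if (!blank) count++;
                blank = 1;
            } else if (!isspace((unsigned char)c)) {
                blank = 0;                   /* line has visible content */
            }
        }
        if (!blank) count++;                 /* last line without a newline */
        fclose(f);
        printf("Non-blank physical lines: %d\n", count);
        return 0;
    }

  A production counter would add rules for comment-only lines, declarations, and compiler directives so that two people counting the same file get the same number.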

  23. A Case Study • Computer-aided design (CAD) for mechanical components. • The system is to execute on an engineering workstation. • It interfaces with various computer graphics peripherals, including a mouse, digitizer, high-resolution color display, and laser printer. • It accepts two- and three-dimensional geometric data from an engineer. • The engineer interacts with and controls CAD through a user interface. • All geometric data and supporting data will be maintained in a CAD database. • Required output will be displayed on a variety of graphics devices. Assume the following major software functions are identified.

  24. Estimation of LOC • CAD program to represent mechanical parts • Estimated LOC = (Optimistic + 4 x Likely + Pessimistic) / 6
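
  The formula is the familiar three-point (PERT-style) expected value. A quick sketch, using made-up optimistic/likely/pessimistic figures for one hypothetical function rather than the case study's actual estimates:

    #include <stdio.h>

    int main(void) {
        /* Illustrative three-point estimate for one function (values invented). */
        double optimistic = 4600.0, likely = 6900.0, pessimistic = 8600.0;
        double expected = (optimistic + 4.0 * likely + pessimistic) / 6.0;
        printf("Expected LOC = %.0f\n", expected);   /* 6800 */
        return 0;
    }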

  25. LOC • “Lines of code is a useless measurement in the face of code that shrinks when we learn better ways of programming” (Kent Beck)

  26. Function Points • A measure of the size of computer applications • The size is measured from a functional, or user, point of view. • It is independent of the computer language, development methodology, technology or capability of the project team used to develop the application. • Can be subjective • Can be estimated EARLY in the software development life cycle • Two flavors: • Delivered size = total application size delivered, including packages, assets, etc. • Developed size = portion built for the release

  27. Computing Function Points (unadjusted count for the CAD example)
  • Number of user inputs:          5 x 3  = 15
  • Number of user outputs:         8 x 4  = 32
  • Number of user inquiries:       10 x 4 = 40
  • Number of files:                8 x 10 = 80
  • Number of external interfaces:  2 x 5  = 10
  Count total (unadjusted function points): 177

  28. Calculate Degree of Influence (DI). Each factor is rated 0-5: 0 = No influence, 1 = Incidental, 2 = Moderate, 3 = Average, 4 = Significant, 5 = Essential. Ratings for this example:
  • Does the system require reliable backup and recovery? 3
  • Are data communications required? 4
  • Are there distributed processing functions? 1
  • Is performance critical? 3
  • Will the system run in an existing, heavily utilized operational environment? 2
  • Does the system require on-line data entry? 4
  • Does the on-line data entry require the input transaction to be built over multiple screens or operations? 3
  • Are the master files updated on-line? 3
  • Are the inputs, outputs, files, or inquiries complex? 2
  • Is the internal processing complex? 1
  • Is the code designed to be reusable? 3
  • Are conversion and installation included in the design? 5
  • Is the system designed for multiple installations in different organizations? 1
  • Is the application designed to facilitate change and ease of use by the user? 1

  29. The FP Calculation
  • Inputs include: the count total (unadjusted function points, UFP) and DI = ΣFi, the sum of the adjustment factors F1..F14.
  • Calculate function points using the following formula: FP = UFP x [0.65 + 0.01 x ΣFi], where the bracketed term is the technical complexity factor (TCF).
  • In this example:
    FP = 177 x [0.65 + 0.01 x (3+4+1+3+2+4+3+3+2+1+3+5+1+1)]
    FP = 177 x [0.65 + 0.01 x 36]
    FP = 177 x [0.65 + 0.36]
    FP = 177 x [1.01]
    FP = 178.77
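
  The same arithmetic, written out as a small C sketch; UFP = 177 and the fourteen F_i ratings are the values shown on the two preceding slides.

    #include <stdio.h>

    int main(void) {
        int ufp = 177;                              /* unadjusted count total */
        int f[14] = {3,4,1,3,2,4,3,3,2,1,3,5,1,1};  /* adjustment factor ratings */
        int di = 0;
        for (int i = 0; i < 14; i++)
            di += f[i];                             /* degree of influence = 36 */
        double fp = ufp * (0.65 + 0.01 * di);       /* 177 x 1.01 = 178.77 */
        printf("DI = %d, FP = %.2f\n", di, fp);
        return 0;
    }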

  30. Reconciling FP and LOC http://www.theadvisors.com/langcomparison.htm

  31. Components Criteria: Simple – Medium – Hard –

  32. Bottom-up estimating • Divide project into size units (LOC, FP, components) • Estimate person-hours per size unit • Most projects are estimated in this way, once details are known about size units

  33. Top-down vs. bottom-up estimating (diagram): project management / Planning & Managing over the framework activities (Communication: project initiation, requirements; Modeling: analysis, design; Construction: code, test; Deployment: delivery, support), annotated to show where top-down and bottom-up estimating apply.

  34. Using FP to estimate effort • If for a certain project: estimated FP = 372; the organization's average productivity for systems of this type is 6.5 FP/person-month; burdened labor rate of $8,000 per month • Cost per FP: $8,000 / 6.5 ≈ $1,230 • Total project cost: 372 x $1,230 ≈ $457,600

  35. Empirical Estimation Models • Empirical data supporting most empirical models is derived from a limited sample of projects. • NO estimation model is suitable for all classes of software projects. • USE the results judiciously. • General model: E = A + B x (ev)^C, where A, B, and C are empirically derived constants, E is effort in person-months, and ev is the estimation variable (either LOC or FP).
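
  As a sketch of how such a model is evaluated once its constants are known (the A, B, and C values below are invented for illustration; published models such as COCOMO supply their own calibrated values):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double A = 5.2, B = 0.91, C = 1.05;   /* hypothetical, not calibrated */
        double ev = 372.0;                    /* estimation variable, e.g. FP */
        double effort = A + B * pow(ev, C);   /* effort in person-months */
        printf("E = %.1f person-months\n", effort);
        return 0;
    }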

  36. Be sure to include contingency The earlier “completed programs” size and effort data points in Figure 2 are the actual sizes and efforts of seven software products built to an imprecisely-defined specification [Boehm et al. 1984]†. The later “USAF/ESD proposals” data points are from five proposals submitted to the U.S. Air Force Electronic Systems Division in response to a fairly thorough specification [Devenny 1976]. http://sunset.usc.edu/research/COCOMOII/index.html

  37. Some famous words from Aristotle: It is the mark of an instructed mind to rest satisfied with the degree of precision which the nature of a subject admits, and not to seek exactness when only approximation of the truth is possible.... Aristotle (384-322 B.C.)

  38. Gantt Schedule • Views the project in the context of time. • Critical for monitoring a schedule. • Granularity: 1-2 weeks.

  39. Gantt Example 1: Suppose a project comprises five activities: A,B,C,D, and E. A and B have no preceding activities, but activity C requires that activity B must be completed before it can begin. Activity D cannot start until both activities A and B are complete. Activity E requires activities A and C to be completed before it can start. If the activity times are A: 9 days; B: 3 days; C: 9 days; D: 5 days; and E: 4 days, determine the shortest time necessary to complete this project. Identify those activities which are critical in terms of completing the project in the shortest possible time. http://acru.tuke.sk/doc/PM_Text/PM_Text.doc
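
  One way to check the answer is to compute each activity's earliest finish time as its duration plus the latest earliest-finish among its predecessors. The sketch below hard-codes the durations and dependencies from the exercise; under those numbers the project needs 16 days, with B, C, and E on the critical path.

    #include <stdio.h>

    static double max2(double a, double b) { return a > b ? a : b; }

    int main(void) {
        /* Earliest finish = duration + latest earliest-finish of predecessors. */
        double A = 9.0;                 /* no predecessors */
        double B = 3.0;                 /* no predecessors */
        double C = B + 9.0;             /* needs B        -> 12 */
        double D = max2(A, B) + 5.0;    /* needs A and B  -> 14 */
        double E = max2(A, C) + 4.0;    /* needs A and C  -> 16 */
        printf("Shortest completion time: %.0f days\n", max2(D, E));
        return 0;
    }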

  40. Gantt Example 2: Construct a Gantt chart which will provide an overview of the planned project. How soon could the project be completed? Which activities need to be completed on time in order to ensure that the project is completed as soon as possible? http://acru.tuke.sk/doc/PM_Text/PM_Text.doc

  41. Estimating Schedule Time • Rule of thumb (empirical): Schedule Time (months) = 3.0 x (person-months)^(1/3)
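
  A quick sanity check of the rule of thumb (the 57 person-month figure is just an illustrative input, not from the slides):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double person_months = 57.0;                        /* illustrative effort */
        double schedule = 3.0 * cbrt(person_months);        /* rule-of-thumb months */
        printf("Schedule time = %.1f months\n", schedule);  /* about 11.5 */
        return 0;
    }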

  42. People trump process One good programmer will always outcode 100 hacks in the long run, no matter how good of a process or IDE you give them http://steve-yegge.blogspot.com/2006/09/good-agile-bad-agile_27.html

  43. Today's Agenda
  Topic                                  Duration
  • Testing recap                        20 minutes
  • Project planning & estimating        60 minutes
  *** Break
  • Current event reports                30 minutes
  • Software metrics                     60 minutes

  44. Why Measure? • "You can't control what you can't measure" (Tom DeMarco) • "Show me how you will measure me, and I will show you how I will perform" (Eli Goldratt) • "Anything that can't be measured doesn't exist" (Locke, Berkeley, Hume)

  45. Our focus (sample IT organization chart): a Director of IS/IT oversees a Manager of Systems Development & Maintenance (Manufacturing Systems, Financial Systems, Customer Fulfillment Systems) and a Manager of Computer Operations; the scope of our discussion is the systems development & maintenance side of the organization.

  46. Examples of systems development metrics

  47. Example: Speed of delivery (scatter plot): elapsed months (0-70) on the y-axis versus developed function points (0-12,000) on the x-axis. Each point is a single project release (average elapsed months = 14.8, n = 33); the industry average line is determined from Software Productivity Research.

  48. Example: Schedule reliability (scatter plot): schedule variance above commitment (0%-60%) on the y-axis versus developed function points (0-12,000) on the x-axis. Each point is a single project release (n = 33); the industry average line is determined from Software Productivity Research.

  49. Example: Software quality (scatter plot): faults reported over the first three months in operation (0-7,000) on the y-axis versus developed function points (0-12,000) on the x-axis (n = 27). The estimated industry average for faults found in the first three months of operation assumes that half the total faults are found in that period; it is one half of the industry average of total faults from C. Jones, Applied Software Measurement, 1996, p. 232.
