
Critical Topic Review




Presentation Transcript


  1. Critical Topic Review Instructor: Mike O’Dell

  2. Classic Mistakes Instructor: Mike O’Dell This presentation was derived from the textbook used for this class: McConnell, Steve, Rapid Development, Chapter 3.

  3. Why Projects Fail - Overview • Five main reasons: • Failing to communicate • Failing to create a realistic plan • Lack of buy-in • Allowing scope/feature creep • Throwing resources at a problem

  4. Categories of Classic Mistakes • People-related • Process-related • Product-related • Technology-related

  5. Classic Mistakes Enumerated 1. Undermined motivation: • The Big One - Probably the largest single factor in poor productivity • Motivation must come from within 2. Weak personnel: • The right people in the right roles 3. Uncontrolled problem employees: • Problem people (or just one person) can kill a team and doom a project • The team must take action… early • Consider the Welch Grid

  6. Classic Mistakes Enumerated 4. Heroics: • Heroics seldom work to your advantage • Honesty is better than empty “can-do” 5. Adding people to a late project: • Productivity killer • Throwing people at a problem seldom helps 6. Noisy, crowded offices: • Work environment is important to productivity • Noisy, crowded conditions lengthen schedules

  7. Classic Mistakes Enumerated 7. Friction between developers and customers: • Cooperation is the key • Encourage participation in the process 8. Unrealistic expectations: • Avoid seat-of-the-pants commitments • Realistic expectations are a TOP 5 issue 9. Lack of effective project sponsorship: • Management must buy in and provide support • Potential morale killer

  8. Classic Mistakes Enumerated 10. Lack of stakeholder buy-in: • Team members, end-users, customers, management, etc. • Buy-in engenders cooperation at all levels 11. Lack of user input: • You can’t build what you don’t understand • Early input is critical to avoid feature creep 12. Politics placed over substance: • Being well regarded by management will not make your project successful

  9. Classic Mistakes Enumerated 13. Wishful thinking: • Not the same as optimism • Don’t plan on good luck! • May be the root cause of many other mistakes 14. Overly optimistic schedules: • Wishful thinking? 15. Insufficient risk management: • Identify unique risks and develop a plan to eliminate them • Consider a “spiral” approach for larger risks

  10. Classic Mistakes Enumerated 16. Contractor failure: • Relationship/cooperation/clear SOW 17. Insufficient planning: • If you can’t plan it… you can’t do it! 18. Abandonment of planning under pressure: • Path to failure • Code-and-fix mentality takes over… and will fail

  11. Classic Mistakes Enumerated 19. Wasted time during fuzzy front end: • That would be now! • Almost always cheaper and faster to spend time upfront working/refining the plan 20. Shortchanged upstream activities: • See above… do the work up front! • Avoid the “jump to coding” mentality 21. Inadequate design: • See above… do the required work up front!

  12. Classic Mistakes Enumerated 22. Shortchanged quality assurance: • Test planning is a critical part of every plan • Shortcutting 1 day early on will likely cost you 3-10 days later • QA me now, or pay me later! 23. Insufficient management controls: • Buy-in implies participation & cooperation 24. Premature or overly frequent convergence: • It’s not done until it’s done!

  13. Classic Mistakes Enumerated 25. Omitting necessary tasks from estimates: • Can add 20-30% to your schedule • Don’t sweat the small stuff! 26. Planning to catch up later: • Schedule adjustments WILL be necessary • A month lost early on probably cannot be made up later 27. Code-like-hell programming: • The fast, loose, “entrepreneurial” approach • This is simply… Code-and-Fix. Don’t!

  14. Classic Mistakes Enumerated 28. Requirements gold-plating: • Avoid complex, difficult-to-implement features • Often, they add disproportionately to schedule 29. Feature creep: • The average project experiences 25% change • Another killer mistake! 30. Developer gold-plating: • Use proven stuff to do your job • Avoid dependence on the hottest new tools • Avoid implementing all the cool new features

  15. Classic Mistakes Enumerated 31. Push-me, pull-me negotiation: • Schedule slip = feature addition 32. Research-oriented development: • Software research schedules are theoretical, at best • Try not to push the envelope unless you allow for frequent schedule revisions • If you push the state of the art… it will push back! 33. Silver-bullet syndrome: • There is no magic in product development • Don’t plan on some new whiz-bang thing to save your bacon (i.e., your schedule)

  16. Classic Mistakes Enumerated 34. Overestimated savings from new tools or methods: • Silver bullets probably won’t improve your schedule… don’t overestimate their value 35. Switching tools in the middle of the project: • Version 3.1…version 3.2… version 4.0! • Learning curve, rework inevitable 36. Lack of automated source control: • Stuff happens… enough said!

  17. Recommendation: Develop a Disaster Avoidance Plan • Get together as a team sometime soon and make a list of “worst practices” that you should avoid in your project. • Include specific mistakes that you think could/will be made by your team • Post this list on the wall in your lab space or wherever it will be visible and prominent on a daily basis • Refer to it frequently and talk about how you will avoid these mistakes

  18. Your Plan: Estimation Instructor: Mike O’Dell This presentation was derived from the textbook used for this class, McConnell, Steve, Rapid Development, Chapter 8, further expanded on by Mr. Tom Rethard for this course.

  19. The Software-Estimation Story • Software/System development, and thus estimation, is a process of gradual refinement. • Can you build a 3-bedroom house for $100,000? (Answer: It depends!) • Some organizations want cost estimates to within ± 10% before they’ll fund work on requirements definition. (Is this possible?) • Present your estimate as a range instead of a “single point in time” estimate. • The tendency of most developers is to under-estimate and over-commit!

  20. Estimate-Convergence Graph [Figure: estimate ranges for project cost (effort and size) and project schedule narrow as the project progresses, from roughly 0.25x to 4x (cost) and 0.6x to 1.6x (schedule) at initial product definition, through approved product definition, requirements specification, product design specification, and detailed design specification, converging to 1.0x at product complete.]
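A minimal sketch (Python, with my own function and phase names) of how the convergence multipliers in the graph above can turn a single-point estimate into a phase-appropriate range; the cost multipliers follow the graph's axis values, and the detailed-design pair is an assumption on my part.

```python
# Sketch: turn a point estimate into a range using estimate-convergence
# multipliers. Function and phase names are illustrative, not McConnell's.

COST_MULTIPLIERS = {
    # phase: (low, high) multipliers on project cost (effort and size)
    "initial product definition":    (0.25, 4.00),
    "approved product definition":   (0.50, 2.00),
    "requirements specification":    (0.67, 1.50),
    "product design specification":  (0.80, 1.25),
    "detailed design specification": (0.90, 1.10),  # assumed; not readable off the slide
    "product complete":              (1.00, 1.00),
}

def cost_range(point_estimate_staff_months: float, phase: str) -> tuple:
    """Return (low, high) cost estimates appropriate for the given phase."""
    low, high = COST_MULTIPLIERS[phase]
    return point_estimate_staff_months * low, point_estimate_staff_months * high

# Example: a 40 staff-month point estimate made at requirements specification
# is better communicated as roughly 27 to 60 staff-months.
print(cost_range(40, "requirements specification"))
```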

  21. Estimation tips • Avoid off-the-cuff estimates • Allow time for the estimate (do it right!) • Use data from previous projects • Use developer-based estimates • Estimate by walk-through • Estimate by categories • Estimate at a low-level of detail • Don’t forget/omit common tasks • Use software estimation tools • Use several different techniques, and compare the results • Evolve estimation practices as the project progresses

  22. Function-Point Estimation • Based on number of • Inputs (screens, dialogs, controls, messages) • Outputs (screens, reports, graphs, messages) • Inquiries (I/O resulting in a simple, immediate output) • Logical internal files (major logical groups of end-user data, controlled by the program) • External interface files (files controlled by other programs that this program uses; includes logical data that enters/leaves the program)

  23. Function-Point Multipliers • Function points by program characteristic and complexity: • Number of inputs: low 3, medium 4, high 6 • Number of outputs: low 4, medium 5, high 7 • Inquiries: low 3, medium 4, high 6 • Logical internal files: low 7, medium 10, high 15 • External interface files: low 5, medium 7, high 10 • Sum these to get an “unadjusted function-point total” • Multiply this by an “influence multiplier” (0.65 to 1.35), based on 14 factors from data communication to ease of installation. All of this gives a total function-point count. • Use this with Jones’ First-Order Estimation Practice, or compare to previous projects for an estimate
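A minimal Python sketch (not from the slides; the example counts and the 1.0 influence multiplier are made up) of the function-point arithmetic the weights above describe.

```python
# Sketch: compute an unadjusted function-point total from the weights above,
# then apply an influence multiplier. Example counts are illustrative only.

FP_WEIGHTS = {
    # characteristic: (low, medium, high) complexity weights
    "inputs":                   (3, 4, 6),
    "outputs":                  (4, 5, 7),
    "inquiries":                (3, 4, 6),
    "logical internal files":   (7, 10, 15),
    "external interface files": (5, 7, 10),
}
COMPLEXITY_INDEX = {"low": 0, "medium": 1, "high": 2}

def unadjusted_function_points(counts: dict) -> int:
    """counts maps characteristic -> {complexity: number of items of that kind}."""
    total = 0
    for characteristic, by_complexity in counts.items():
        for complexity, n in by_complexity.items():
            total += n * FP_WEIGHTS[characteristic][COMPLEXITY_INDEX[complexity]]
    return total

counts = {
    "inputs": {"low": 6, "medium": 2},
    "outputs": {"medium": 7},
    "inquiries": {"low": 3},
    "logical internal files": {"medium": 4},
    "external interface files": {"high": 2},
}
ufp = unadjusted_function_points(counts)
influence_multiplier = 1.0  # somewhere between 0.65 and 1.35, based on the 14 factors
print(ufp, ufp * influence_multiplier)  # total function-point count
```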

  24. Estimate Presentation Styles • Plus-or-minus qualifiers: “6 months, +3 months, -2 months” • Ranges: “5-9 months” • Risk quantification: “6 months... +1 month for late subcontractor, +0.5 month for staff sickness, etc.” • Cases: best case April 1, planned case May 15, current case May 30, worst case July 15 • Coarse dates and time periods: “3rd quarter ’97” • Confidence factors: April 1 (5%), May 15 (50%), July 1 (95%)

  25. Schedule Estimation • Rule-of-thumb equation: schedule in months = 3.0 * man-months^(1/3). This equation implies an optimal team size. • Use estimation software to compute the schedule from your size and effort estimates • Use historical data from your organization • Use McConnell’s Tables 8-8 through 8-10 to look up a schedule estimate based on the size estimate • Use the schedule estimation step from one of the algorithmic approaches (e.g., COCOMO) to get a more finely tuned estimate than the rule-of-thumb equation.
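A minimal sketch of the rule-of-thumb equation above and the average team size it implies; the function names are mine, and this is only the coarse first cut the slide recommends refining with tables or algorithmic methods.

```python
# Sketch of the rule-of-thumb schedule equation:
#   schedule in months = 3.0 * man-months ** (1/3)
# The implied average team size is effort divided by schedule.

def rule_of_thumb_schedule(effort_man_months: float) -> float:
    return 3.0 * effort_man_months ** (1.0 / 3.0)

def implied_team_size(effort_man_months: float) -> float:
    return effort_man_months / rule_of_thumb_schedule(effort_man_months)

# Example: a 65 man-month effort estimate
print(rule_of_thumb_schedule(65))  # about 12 months
print(implied_team_size(65))       # about 5-6 people on average
```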

  26. Shortest Possible Schedule (Table 8-8) [Figure: probability of completing exactly on the scheduled date, showing the impossible-schedule region, the shortest possible schedule, and a high risk of late completion at the scheduled completion date.] • This table assumes: • Top 10% of talent pool, all motivated, no turnover • Entire staff starts working on Day 1 and continues until the project is released • Advanced tools available to everyone • Most time-efficient development methods used • Requirements completely known, and do not change

  27. Efficient Schedules (Table 8-9) • This table assumes: • Top 25% of talent pool • Turnover < 6% per year • No significant personnel conflicts • Using efficient development practices from Chapters 1-5 • Note that less effort is required on the efficient-schedule tables • For most projects, the efficient schedules represent “best-case”

  28. Nominal Schedules (Table 8-10) • This table assumes: • Top 50% of talent pool • Turnover 10-12% per year • Risk-management less than ideal • Office environment only adequate • Sporadic use of efficient development practices • Achieving nominal schedule may be a 50/50 bet.

  29. Estimate Refinement • An estimate can be refined only with a more refined definition of the software product • Developers often let themselves get trapped by a “single-point” estimate, and are held to it (Case Study 1-1) • When an estimate increases, it creates the impression of a schedule slip or budget overrun • When estimate ranges narrow as the project progresses, customer confidence is built up.

  30. Conclusions • Estimate accuracy is directly proportional to product definition. • Before requirements specification, product is very vaguely defined • Use ranges for estimates and gradually refine (tighten) them as the project progresses. • Measure progress and compare to your historical data • Refine… Refine… Refine…

  31. Feature-Set Control The slides in this presentation are derived from materials in the textbook used for CSE 4316/4317, Rapid Development: Taming Wild Software Schedules, by Steve McConnell. Instructor: Mike O’Dell

  32. The Problem • Products are initially stuffed with more features (requirements) than can be reasonably accommodated • Features continue to be added as the project progresses (“Feature-Creep”) • Features must be removed/reduced or significantly changed late in a project

  33. Sources of Change • End-users: driven by the “need” for additional or different functionality • Marketers: driven by the fact that markets and customer perspectives on requirements change (“latest and greatest” syndrome) • Developers: driven by emotional/intellectual desire to build the “best” widget

  34. Effects of Change • Impact in every phase: design, code, test, documentation, support, training, people, planning, tracking, etc. • Visible effects: schedule, budget, product quality • Hidden effects: morale, pay, promotion, etc. • Changes typically cost 50 to 200 times less if they are made during the requirements phase than if they are discovered during implementation

  35. Change Control • Goals: • Allow change that results in the best possible product in the time available. Disallow all other changes • Allow everyone affected by a proposed change to participate in assessing the impact • Broadly communicate proposed changes and their impact • Provide an audit trail for all decisions (i.e., document them well) • A process to accomplish the above as efficiently as possible
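A minimal Python sketch (hypothetical names and fields, not a prescribed process) of a change-request record that supports the goals above: impact assessment by everyone affected, an explicit decision, and an audit trail.

```python
# Sketch: a change-request record supporting impact assessment, broad
# communication, and an audit trail. Names and fields are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ChangeRequest:
    title: str
    requested_by: str
    rationale: str
    impact_assessments: dict = field(default_factory=dict)  # stakeholder -> impact
    audit_trail: list = field(default_factory=list)
    status: str = "proposed"  # proposed -> assessed -> approved/rejected

    def log(self, event: str) -> None:
        self.audit_trail.append(f"{datetime.now().isoformat()} {event}")

    def assess(self, stakeholder: str, impact: str) -> None:
        self.impact_assessments[stakeholder] = impact
        self.log(f"impact assessed by {stakeholder}: {impact}")

    def decide(self, approved: bool, decided_by: str) -> None:
        self.status = "approved" if approved else "rejected"
        self.log(f"{self.status} by {decided_by}")

# Illustrative usage:
cr = ChangeRequest("Add export-to-PDF", "end-user group", "needed for audits")
cr.assess("test lead", "+4 days of test effort")
cr.decide(approved=False, decided_by="change control board")
print(cr.status, cr.audit_trail)
```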

  36. Late Project Feature Cuts • Goal: Eliminate features in an effort to save the project’s schedule • Fact: Projects may (will) fall behind for many reasons other than feature-set control • Fact: Removing features too late incurs additional costs and schedule impact • Approach: analyze the cost of removal and reusability, then strip out unused code, remove documentation, eliminate test cases, etc.

  37. Risk Management Instructor: Mike O’Dell This presentation was derived from the textbook used for this class: McConnell, Steve, Rapid Development, Chapter 5.

  38. Why Do Projects Fail? • Generally, from poor risk management • Failure to identify risks • Failure to actively/aggressively plan for, attack and eliminate “project killing” risks • Risk comes in different shapes and sizes • Schedule risks (short to long) • Cost risks (small to large) • Technology risks (probable to impossible)

  39. Elements of Risk Management • Managing risk consists of identifying, addressing, and eliminating risks • When does this occur? • WORST – Crisis management/Fire fighting: addressing risks after they have become a big problem • BAD – Fix on failure: finding and addressing risks as they occur • OKAY – Risk mitigation: plan ahead and allocate resources to address risks that occur, but don’t try to eliminate them before they occur • GOOD – Prevention: part of the plan to identify and prevent risks before they become problems • BEST – Eliminate root causes: part of the plan to identify and eliminate the factors that make specific risks possible

  40. Elements of Risk Management [Diagram: Risk Management splits into Risk Assessment (identification, analysis, prioritization) and Risk Control (planning, resolution, monitoring).] • Effective Risk Management is made up of: • Risk Assessment: identify, analyze, prioritize • Risk Control: planning, resolution, monitoring

  41. Risk Monitoring • Risks and potential impact will change throughout the course of a project • Keep an evolving “TOP 10 RISKS” list • See Table 5-7 for an example • Review the list frequently • Refine… Refine… Refine… • Put someone in charge of monitoring risks • Make it a part of your process & project plan
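A minimal sketch (hypothetical names; the example risks are invented) of a “TOP 10 RISKS” register in the spirit of Table 5-7, ranked by exposure (probability times loss) so the list can be reviewed and refined frequently.

```python
# Sketch: a simple risk register ranked by exposure = probability * loss.
# Field names and example risks are illustrative only.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    probability: float   # 0.0 - 1.0
    loss_weeks: float    # schedule loss if the risk materializes
    mitigation: str

    @property
    def exposure(self) -> float:
        return self.probability * self.loss_weeks

def top_risks(risks: list, n: int = 10) -> list:
    """Return the n highest-exposure risks, largest first."""
    return sorted(risks, key=lambda r: r.exposure, reverse=True)[:n]

risks = [
    Risk("Feature creep", 0.8, 4.0, "change control board"),
    Risk("Contractor slips SOW deliverable", 0.4, 6.0, "weekly status reviews"),
    Risk("Key developer leaves", 0.2, 8.0, "pair on critical subsystems"),
]
for r in top_risks(risks):
    print(f"{r.exposure:4.1f} weeks  {r.description}  ->  {r.mitigation}")
```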

  42. Overview: Software System Architecture / Software System Test. Mike O’Dell. Based on an earlier presentation by Bill Farrior, UTA, modified by Mike O’Dell

  43. What is System Design? • A progressive definition of how a system will be constructed: • Guiding principles/rules for design (Meta-architecture) • Top-level structure, design abstraction (Architecture Design) • Details of all lowest-level design elements (Detailed Design)

  44. What is Software Architecture? • A critical bridge between what a system will do/look like, and how it will be constructed • A blueprint for a software system and how it will be built • An abstraction: a conceptual model of what must be done to construct the software system • It is NOT a specification of the details of the construction

  45. What is Software Architecture? • The top-level breakdown of how a system will be constructed: • design principles/rules • high-level structural components • high-level data elements (external/internal) • high-level data flows (external/internal) • Discussion: Architectural elements of the new ERB

  46. System Architecture Design Process • Define guiding principles/rules for design • Define top-level components of the system structure (“architectural layers”) • Define top-level data elements/flows (external and between layers) • Deconstruct layers into major functional units (“subsystems”) • Translate top-level data elements/flows to subsystems

  47. Layer Example: The Internet Protocol Stack • Layers/Services: • application: supporting network applications (ftp, smtp, http) • transport: host-to-host data transfer (tcp, udp) • network: routing of datagrams from source to destination (ip, routing protocols) • link: data transfer between neighboring network elements (e.g., Ethernet, 802.11 WLAN) • physical: bits “on the wire”
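A minimal Python sketch (illustrative class names, not real protocol code) of the layering idea: each layer uses only the narrow send() interface of the layer directly below it, which is what gives the independence and well-defined interfaces discussed in the criteria later in this deck.

```python
# Sketch: each layer wraps the layer below it behind a narrow send() interface,
# mirroring the protocol-stack idea. Class and method names are illustrative.

class PhysicalLayer:
    def send(self, bits: bytes) -> None:
        print(f"physical: putting {len(bits)} bytes 'on the wire'")

class LinkLayer:
    def __init__(self, lower: PhysicalLayer) -> None:
        self.lower = lower
    def send(self, payload: bytes) -> None:
        self.lower.send(b"FRAME:" + payload)    # add frame header

class NetworkLayer:
    def __init__(self, lower: LinkLayer) -> None:
        self.lower = lower
    def send(self, payload: bytes) -> None:
        self.lower.send(b"IPHDR:" + payload)    # add datagram header

class TransportLayer:
    def __init__(self, lower: NetworkLayer) -> None:
        self.lower = lower
    def send(self, payload: bytes) -> None:
        self.lower.send(b"TCPHDR:" + payload)   # add segment header

# An application only talks to the transport layer; lower layers stay hidden,
# so changing one layer's implementation does not disturb the others.
stack = TransportLayer(NetworkLayer(LinkLayer(PhysicalLayer())))
stack.send(b"GET / HTTP/1.1")
```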

  48. Subsystem Example: The Internet Network Layer • ICMP protocol: error reporting, router “signaling” • IP protocol: addressing conventions, datagram format, packet handling conventions • Routing protocols: path selection (RIP, OSPF, BGP), routing table [Diagram: the network layer sits between the transport layer (TCP, UDP) above and the link and physical layers below.]

  49. Criteria for a Good Architecture (The Four I’s) • Independence – the layers are independent of each other and each layer’s functions are internally specific and have little reliance on other layers. Changes in the implementation of one layer should not impact other layers. • Interfaces/Interactions – the interfaces and interactions between layers are complete and well-defined, with explicit data flows. • Integrity – the whole thing “hangs together”. It’s complete, consistent, accurate… it works. • Implementable – the approach is feasible, and the specified system can actually be designed and built using this architecture.

  50. How do you Document a Software Architecture? • Describe the “rules”: meta-architecture • guiding principles, vision, concepts • key decision criteria • Describe the layers • what they do, how they interact with other layers • what they are composed of (subsystems)
