Performance and availability of computers and networks

Performance and availability of computers and networks. Instructor: Prof. Kishor S. Trivedi, Visiting Professor of Computer Science and Engineering, IIT Kanpur; Professor, Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708-0291. Phone: 7576. E-mail: kst@ee.duke.edu

Presentation Transcript


  1. Performance and availability of computers and networks Instructor: Prof. Kishor S. Trivedi, Visiting Professor of Computer Science and Engineering, IIT Kanpur; Professor, Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708-0291. Phone: 7576. E-mail: kst@ee.duke.edu. URL: www.ee.duke.edu/~kst IIT Kanpur

  2. Outline • Introduction • Reliability, Availability, Security, Performance, Performability • Methods of Evaluation • Evaluation vs. Optimization • Model construction, parameterization, solution, validation, interpretation • Introductory Probability (chapters 1-5) • Markov Chains (chapters 7-8) • Queuing networks (chapter 9) • Statistical techniques (chapters 10-11)

  3. Textbooks • Probability & Statistics with reliability, queuing, and computer science applications, K. S. Trivedi, Prentice-Hall, 1982 (Indian edition also available). • Probability & Statistics with reliability, queuing, and computer science applications, K. S. Trivedi, second edition, John Wiley & Sons, 2001. • Performance and reliability analysis of computer systems: An Example-Based Approach Using the SHARPE Software Package, Sahner, Trivedi, Puliafito, Kluwer Academic Publishers, 1996.

  4. Performance Evaluation • Workload: traffic arrival rates, service time distributions • Resource Contention & Scheduling • Concurrency and Synchronization • Timeliness (Have to Meet Deadlines) • Measures: Throughput, response time (mean & dist.), loss probability • Low-level (Cache, memory interference: ch. 7) • System-level (CPU-I/O, multiprocessing: ch. 8,9) • Network-level (protocols, handoff in wireless: ch. 7,8)
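
A minimal sketch of these measures for an M/M/1/K queue, using the standard closed-form state probabilities; the arrival rate, service rate, and buffer size below are assumed values, not from the course:

    # Minimal sketch: loss, throughput, and mean response time of an
    # M/M/1/K queue (parameter values assumed for illustration).
    lam, mu, K = 0.8, 1.0, 10        # arrival rate, service rate, buffer size
    rho = lam / mu

    # steady-state probabilities p[n], n = 0..K (rho != 1 assumed)
    norm = (1 - rho ** (K + 1)) / (1 - rho)
    p = [rho ** n / norm for n in range(K + 1)]

    loss = p[K]                      # blocking (loss) probability
    thruput = lam * (1 - loss)       # effective throughput
    L = sum(n * p[n] for n in range(K + 1))
    R = L / thruput                  # mean response time, by Little's law

    print(f"loss={loss:.4f} throughput={thruput:.4f} resp. time={R:.4f}")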

  5. Definition of Reliability • Recommendation E.800 of the International Telecommunications Union (ITU-T) defines reliability as follows: • “The ability of an item to perform a required function under given conditions for a given time interval.” • In this definition, an item may be a circuit board, a component on a circuit board, a module consisting of several circuit boards, a base transceiver station with several modules, a fiber-optic transport system, or a mobile switching center (MSC) and all its subtending network elements. The definition includes systems with software.

  6. Definition of Availability • Availability is closely related to reliability, and is also defined in ITU-T Recommendation E.800 as follows: “The ability of an item to be in a state to perform a required function at a given instant of time or at any instant of time within a given time interval, assuming that the external resources, if required, are provided.” • An important difference between reliability and availability is that reliability refers to failure-free operation during an interval, while availability refers to failure-free operation at a given instant of time, usually the time when a device or system is first accessed to provide a required function or service.
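
For the simplest case, a two-state (up/down) component with constant failure and repair rates makes the interval/instant distinction concrete: steady-state availability is MTTF/(MTTF + MTTR), while instantaneous availability A(t) has a closed form. A minimal sketch, with assumed rates:

    # Minimal sketch: steady-state and instantaneous availability of a
    # two-state (up/down) component. The numbers are assumed values.
    import math

    mttf, mttr = 1000.0, 4.0          # hours (assumed)
    lam, mu = 1 / mttf, 1 / mttr      # failure rate, repair rate

    A_ss = mttf / (mttf + mttr)       # steady-state availability

    def A(t):
        # instantaneous availability of the two-state Markov model
        return mu / (lam + mu) + lam / (lam + mu) * math.exp(-(lam + mu) * t)

    print(f"A_ss = {A_ss:.6f}, A(8) = {A(8):.6f}")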

  7. High Reliability/Availability/Safety • Traditional applications (long-life/life-critical/safety-critical) • Space missions, aircraft control, defense, nuclear systems • New applications (non-life-critical/non-safety-critical, business critical) • Banking, airline reservation, e-commerce applications, web-hosting, telecommunication • Scientific applications (non-critical)

  8. Motivation: High Availability • Scott McNealy, Sun Microsystems Inc.: “We're paying people for uptime. The only thing that really matters is uptime, uptime, uptime, uptime and uptime. I want to get it down to a handful of times you might want to bring a Sun computer down in a year. I'm spending all my time with employees to get this design goal.” • Sun Microsystems – SunUP & RASCAL programs for high availability • Motorola – 5NINES initiative • HP, Cisco, Oracle, SAP – 5nines:5minutes alliance • IBM – Cornhusker clustering technology for high availability, eLiza, autonomic computing • Microsoft – Trustworthy Computing initiative • John Hennessy – in IEEE Computer • Microsoft – regular full-page ads on 99.999% availability in USA Today

  9. Motivation – High Availability

  10. Need for a new term • Reliability is used in a generic sense • Reliability is also the name of a precisely defined mathematical function • To remove the confusion, IFIP WG 10.4 has proposed Dependability as an umbrella term

  11. Dependability – Umbrella term • Trustworthiness of a computer system such that reliance can justifiably be placed on the service it delivers
• Attributes: availability, reliability, safety, confidentiality, integrity, maintainability (security combines confidentiality, integrity, and availability)
• Means: fault prevention, fault removal, fault tolerance, fault forecasting
• Threats: faults, errors, failures

  12. IFIP WG10.4 • Failure occurs when the delivered service no longer complies with the specification • Error is that part of the system state which is liable to lead to subsequent failure • Fault is the adjudged or hypothesized cause of an error • Faults are the cause of errors that may lead to failures: Fault → Error → Failure

  13. Dependability: Reliability, Availability, Safety, Security • Redundancy: hardware (static, dynamic), information, time, software • Fault Types: Permanent (needs repair or replacement), Intermittent (reboot/restart or replacement), Transient (retry), Design (Bohrbugs, Heisenbugs, aging-related bugs) • Fault Detection, Automated Reconfiguration • Imperfect Coverage • Maintenance: scheduled, unscheduled

  14. Software Fault Classification • Many software bugs are reproducible, easily found and fixed during the testing and debugging phase • Other bugs that are hard to find and fix remain in the software during the operational phase • may never be fixed, but if the operation is retried or the system is rebooted, the bugs may not manifest themselves as failures, i.e., their manifestation is non-deterministic and dependent on the software reaching very rare states • Jim Gray: Bohrbugs & Heisenbugs

  15. Software Fault Classification • Bohrbugs: analogous to a deterministic FSM • Heisenbugs and “aging”-related bugs: depend on the state of the environment (e.g., system resources); analogous to a non-deterministic FSM

  16. Software Fault Classification – software (OS, recovery s/w, applications) • Bohrbugs: handled by test/debug in the design/development phase • Heisenbugs: handled operationally by retrying the operation or by design/data diversity • “Aging”-related bugs: handled operationally by restarting the application or rebooting the node

  17. Failure Classification (Cristian) • Failures • Omission failures (Send/receive failures) • Crash failures • Infinite loop • Timing failures • Early • Late (performance or dynamic failures) • Response failures • Value failures • State-transition failures

  18. Dimensions of Software Reliability • FAULT: Bohrbugs, Heisenbugs, aging-related bugs • LAYER: operating system, middleware (recovery software), application software • REDUNDANCY: no redundancy, time redundancy, replication, diversity, information redundancy • PHASE: testing/debugging phase, operational phase • MODEL: measurement-based model, analytical/simulation model

  19. Steady-state reliability estimation • Software to be deployed for operational use • Corresponds to Heisenbugs: assume no bugs are fixed in the field • Estimation method: same as hardware (ch. 10) • Interested in estimating parameters from observed data
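
A sketch of the kind of estimation meant here: for a complete sample of times to failure with a constant failure rate, the maximum-likelihood estimate is lambda_hat = n / (total observed time), and 2*lambda*(total time) follows a chi-square distribution with 2n degrees of freedom, which yields a confidence interval. The data values below are invented:

    # Minimal sketch: MLE of an exponential failure rate from observed
    # times to failure, with a 90% confidence interval (invented data).
    from scipy.stats import chi2

    times = [120.0, 340.0, 95.0, 410.0, 230.0, 180.0]  # hours (assumed)
    n, total = len(times), sum(times)

    lam_hat = n / total          # maximum-likelihood estimate
    mttf_hat = 1 / lam_hat

    # 2 * lam * total ~ chi-square with 2n degrees of freedom
    lo = chi2.ppf(0.05, 2 * n) / (2 * total)
    hi = chi2.ppf(0.95, 2 * n) / (2 * total)

    print(f"lambda_hat={lam_hat:.5f}/hr MTTF={mttf_hat:.0f} hr "
          f"90% CI=({lo:.5f}, {hi:.5f})")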

  20. Software Reliability Growth Models: estimation in the testing/debugging phase • Failure data is collected during testing • Calibrate a reliability growth model using the failure data; this model is then used for prediction • Many SRGMs exist (NHPP, HGRGM, etc.) • (Most models) Assume faults are fixed instantaneously upon detection • See Chapters 8, 10
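
As one concrete instance of an NHPP-type SRGM, a sketch that calibrates the Goel-Okumoto model, whose mean value function is m(t) = a(1 - exp(-b t)), against cumulative failure counts. The data are invented, and the least-squares fit is a simplification; SRGM tools usually fit by maximum likelihood:

    # Minimal sketch: fit the Goel-Okumoto NHPP mean value function
    # m(t) = a * (1 - exp(-b * t)) to cumulative failure data.
    # Data are invented; real SRGM tools typically use MLE instead.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([10, 20, 30, 40, 50, 60], dtype=float)   # test hours
    failures = np.array([12, 21, 27, 31, 33, 34], dtype=float)

    def m(t, a, b):
        return a * (1 - np.exp(-b * t))

    (a_hat, b_hat), _ = curve_fit(m, t, failures, p0=(40.0, 0.05))

    print(f"expected total faults a={a_hat:.1f}, detection rate b={b_hat:.4f}")
    print(f"predicted cumulative failures at t=100: {m(100, a_hat, b_hat):.1f}")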

  21. Security • Security intrusions cause a system to fail • Security Failure • Integrity: Destruction/Unauthorized modification of information • Confidentiality: Theft of information • Availability: e.g., Denial of Services (DoS) • Similarity (as well as differences) between: • Malicious vs. accidental faults • Security vs. reliability/availability • Intrusion tolerance vs. fault tolerance

  22. The Need for Performability Modeling • New technologies, services & standards need new modeling methodologies • Pure performance modeling: too optimistic! Outage-and-recovery behavior not considered • Pure availability modeling: too conservative! Different levels of performance not considered

  23. “ilities” besides performance • Reliability: for a specified operational time • Availability: at any given instant • Survivability: performance under failures • Performability: measures of the system's ability to perform designated functions • R.A.S.-ability concerns grow: high R.A.S. is not only a selling point for equipment vendors and service providers; regulatory outage reports required by the FCC for public switched telephone networks (PSTN) may soon also apply to wireless.
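
A toy performability calculation in this spirit: attach a performance level (reward rate) to each state of a small availability model and compute the expected steady-state reward rate. The two-processor configuration, the rates, and the reward values below are all assumed for illustration:

    # Minimal sketch: steady-state performability of a two-processor
    # system as a Markov reward model (all numbers assumed).
    import numpy as np

    lam, mu = 0.001, 0.1          # per-processor failure rate, repair rate
    # states: 2 up, 1 up, 0 up; CTMC generator matrix Q
    Q = np.array([[-2*lam,      2*lam,  0.0],
                  [    mu, -(mu+lam),   lam],
                  [   0.0,        mu,   -mu]])

    # solve pi Q = 0 with sum(pi) = 1
    A = np.vstack([Q.T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    reward = np.array([2.0, 1.0, 0.0])   # e.g., jobs/sec per state (assumed)
    print(f"pi = {pi}, expected reward rate = {pi @ reward:.4f}")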

  24. Evaluation vs. Optimization • Evaluation of system for desired measures given a set of parameters • Sensitivity Analysis • Bottleneck analysis • Reliability importance • Optimization • Static: linear, integer, geometric, nonlinear, multiobjective; constrained or unconstrained • Dynamic: dynamic programming, Markov decision processes

  25. PURPOSE OF EVALUATION • Understanding a system • Observation: in the operational environment or in a controlled environment • Reasoning: a model is a convenient abstraction • Predicting behavior of a system • Need a model • Accuracy based on degree of extrapolation

  26. PURPOSE OF EVALUATION (Continued) These famous quotes bring out the difficulty of prediction based on models: • “All models are wrong; some models are useful” – George Box • “Prediction is fine as long as it is not about the future” – Mark Twain

  27. MEASURES TO BE EVALUATED • Dependability (“Does it work, and for how long?”) • Reliability: R(t), System MTTF • Availability: Steady-state, Transient • Downtime • Performance (“Given that it works, how well does it work?”) • Throughput, Blocking Probability, Response Time
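
The downtime measure connects directly to the “five nines” language quoted earlier: annual downtime is (1 - A) * 525,600 minutes. A quick sketch of the arithmetic:

    # Downtime per year implied by an availability level
    for nines, A in [(2, 0.99), (3, 0.999), (4, 0.9999), (5, 0.99999)]:
        minutes = (1 - A) * 365 * 24 * 60
        print(f"{nines} nines (A={A}): {minutes:.1f} minutes/year")

Five nines works out to about 5.3 minutes of downtime per year, which is where the 5nines:5minutes slogan comes from.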

  28. MEASURES TO BE EVALUATED (Continued) • Composite Performance and Dependability • Need Techniques and Tools That Can Evaluate Performance, Dependability and Their Combinations “How much work will be done (lost) in a given interval, including the effects of failure/repair/contention?”

  29. Methods of EVALUATION • Measurement-Based: most believable, most expensive; not always possible or cost-effective during system design • Statistical techniques are very important here • Model-Based

  30. Methods of EVALUATION (Continued) • Model-Based: less believable, less expensive 1. Discrete-Event Simulation vs. Analytic 2. State-Space Methods vs. Non-State-Space Methods 3. Hybrid: Simulation + Analytic (SPNP) 4. State Space + Non-State Space (SHARPE)

  31. Methods of EVALUATION(Continued) • Measurements + Models Vaidyanathan et al ISSRE 99

  32. QUANTITATIVE EVALUATION TAXONOMY (taxonomy figure; the leaves distinguish closed-form solution from numerical solution using a tool)

  33. Note that • Both measurements & simulations imply statistical analysis of outputs (ch. 10, 11) • Statistical inference • Hypothesis testing • Design of experiments • Analysis of variance • Regression (linear, nonlinear) • Distribution-driven simulation requires generation of random deviates (variates) (ch. 3, 4, 5)
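
A sketch of the variate-generation step in the last bullet: the inverse-transform method draws U uniform on (0, 1) and returns F^{-1}(U). For the exponential distribution, F(x) = 1 - exp(-lam*x) inverts to x = -ln(1 - U)/lam:

    # Minimal sketch: inverse-transform generation of exponential variates.
    import math
    import random

    def exp_variate(lam, rng=random):
        u = rng.random()                  # U ~ Uniform(0, 1)
        return -math.log(1.0 - u) / lam   # F^{-1}(U)

    random.seed(42)
    sample = [exp_variate(0.5) for _ in range(10000)]
    print(f"sample mean = {sum(sample)/len(sample):.3f} (theory: {1/0.5})")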

  34. Modeling Steps • Model construction • Model parameterization • Model solution • Result interpretation • Model Validation

  35. MODELING AND MEASUREMENTS: INTERFACES • Measurements supply input parameters to models (model calibration or parameterization); confidence intervals should be obtained (Boeing, Draper, Union Switch projects) • Model sensitivity analysis can suggest which parameters to measure more accurately: Blake, Reibman and Trivedi, SIGMETRICS 1988

  36. MODELING AND MEASUREMENTS: INTERFACES • Model Validation 1. Face Validation 2. Input-Output Validation 3. Validation of Model Assumptions (Hypothesis Testing) • Rejection of a hypothesis regarding model assumption based on measurement data leads to an improved model

  37. MODELING AND MEASUREMENTS: INTERFACES • Model Structure Based on Measurement Data • Hsueh, Iyer and Trivedi; IEEE TC, April 1988 • Gokhale et al, IPDS 98; • Vaidyanathan et al, ISSRE99

  38. MODELING TAXONOMY

  39. ANALYTIC MODELING TAXONOMY (taxonomy figure) • Non-state-space modeling techniques: series-parallel (SP) reliability block diagrams, non-series-parallel reliability block diagrams

  40. State Space Modeling Taxonomy • Markovian models: discrete-time Markov chains, continuous-time Markov chains, Markov reward models • Non-Markovian models: semi-Markov processes, Markov regenerative processes, non-homogeneous Markov models
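
A sketch of the numerical solution such models call for: the transient state probabilities of a CTMC satisfy pi(t) = pi(0) e^{Qt}, computed below with a dense matrix exponential. The generator values are invented, and tools such as SHARPE or SPNP would typically use uniformization rather than expm:

    # Minimal sketch: transient solution of a small CTMC,
    # pi(t) = pi(0) @ expm(Q * t). Generator values are assumed.
    import numpy as np
    from scipy.linalg import expm

    Q = np.array([[-0.5,  0.5,  0.0],
                  [ 0.2, -0.7,  0.5],
                  [ 0.0,  1.0, -1.0]])
    pi0 = np.array([1.0, 0.0, 0.0])       # start in state 0

    for t in (1.0, 10.0, 100.0):
        print(t, pi0 @ expm(Q * t))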

  41. MODELING THROUGHOUT SYSTEM LIFECYCLE • System Specification/Design Phase Answer “What-if Questions'' • Compare design alternatives (Bedrock, Wireless handoff) • Performance-Dependability Trade-offs (Wireless Handoff) • Design Optimization (optimizing the number of guard channels)
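
For the guard-channel optimization in the last bullet, a hedged sketch of the usual birth-death model: g of C channels are reserved for handoff calls, new calls are blocked once C - g channels are busy, and the smallest g meeting a handoff-dropping target is chosen. All traffic parameters below are assumed:

    # Minimal sketch: guard channels in a cell, birth-death model.
    # All parameter values are assumed for illustration.
    C = 16                                 # total channels
    lam_new, lam_ho, mu = 8.0, 2.0, 1.0    # new/handoff arrival rates, service rate

    def probs(g):
        # state probabilities p[n] = P(n channels busy) with g guard channels
        p = [1.0]
        for n in range(C):
            lam = lam_new + lam_ho if n < C - g else lam_ho
            p.append(p[-1] * lam / ((n + 1) * mu))
        s = sum(p)
        return [x / s for x in p]

    for g in range(5):
        p = probs(g)
        drop = p[C]              # handoff dropped only when all channels busy
        block = sum(p[C - g:])   # new call blocked once C - g channels busy
        print(f"g={g}: new-call blocking={block:.4f}, handoff dropping={drop:.4f}")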

  42. MODELING THROUGHOUT SYSTEM LIFECYCLE (Continued) • Design Verification Phase Use Measurements + Models E.g. Fault/Injection + Availability Model Union Switch and Signals, Boeing, Draper • Configuration Selection Phase: DEC, HP • System Operational Phase: IDEN Project Workload based adaptive rejuvenation • It is fun!

  43. MODELER'S DILEMMA Should I Use Discrete-Event Simulation? • Point Estimates and Confidence Intervals • How many simulation runs are sufficient? • What Specification Language to use? • C, SIMULA, SIMSCRIPT, MODSIM, GPSS, RESQ, SPNP v6, Bones, SES workbench, ns, opnet

  44. MODELER'S DILEMMA (Continued) • Simulation: + Detailed System Behavior including non-exponential behavior + Performance, Dependability and Performability Modeling Possible - Long Execution Time (Variance Reduction Possible) • Importance Sampling, importance splitting, regenerative simulation. • Parallel and Distributed Simulation - Many users in practice do not realize the need to calculate confidence intervals
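
The last point in code: a t-based confidence interval from independent replications. run_replication() below is a hypothetical stand-in for an actual simulation run:

    # Minimal sketch: confidence interval from independent replications.
    import math
    import random
    from statistics import mean, stdev
    from scipy.stats import t

    def run_replication(rng):
        # hypothetical stand-in: returns one replication's observed mean
        return 2.0 + rng.gauss(0, 0.3)

    rng = random.Random(1)
    obs = [run_replication(rng) for _ in range(20)]
    n, xbar, s = len(obs), mean(obs), stdev(obs)
    half = t.ppf(0.975, n - 1) * s / math.sqrt(n)
    print(f"95% CI: {xbar:.3f} +/- {half:.3f}")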

  45. MODELER'S DILEMMA (Continued) Should I Use Non-State-Space Methods? • Model Solved Without Generating State Space • Use: Order Statistics, Mixing, Convolution (chapters 1-5) • Common Dependability Model Types: also called Combinatorial Models • Series-Parallel Reliability Block Diagrams • Non-Series-Parallel Block Diagrams (or Reliability Graphs) • Fault-Trees Without Repeated Events • Fault-Trees With Repeated Events

  46. Combinatorial analytic models • Reliability block diagrams, Fault trees and Reliability graphs • Commonly used for reliability and availability • These model types are similar in that they capture conditions that make a system fail in terms of the structural relationships between the system components.

  47. RBD example
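
The RBD figure itself is not preserved in this transcript. As a stand-in, a minimal sketch of how a series-parallel RBD is evaluated under statistical independence: block A in series with a parallel pair (B1, B2); the component reliabilities are assumed:

    # Minimal sketch: series-parallel RBD evaluation under independence.
    # Structure (A in series with B1 || B2) and reliabilities are assumed.
    import math

    def series(*rs):
        return math.prod(rs)

    def parallel(*rs):
        return 1 - math.prod(1 - r for r in rs)

    rA, rB = 0.95, 0.90
    R_sys = series(rA, parallel(rB, rB))
    print(f"system reliability = {R_sys:.4f}")   # 0.95 * (1 - 0.1**2) = 0.9405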

  48. Combinatorial Models • Combinatorial modeling techniques like RBDs and FTs are easy to use and, assuming statistical independence, solve for system availability and system MTTF • Each component can have attached to it • A probability of failure • A failure rate • A distribution of time to failure • Steady-state and instantaneous unavailability

  49. Non-State Space Modeling Techniques • Possible to compute (given component failure/repair rates): • System Reliability • System Availability (Steady-state, instantaneous) • System MTTF

  50. Non-State Space Modeling Techniques (Continued) • Assuming: • Failures are statistically independent • As many repair units as needed • Relatively good algorithms are available for solving such models, so that systems with 100 components can be handled.
