
Network Performance Analysis Strategies



  1. Network Performance Analysis Strategies Dr Shamala Subramaniam Dept. Communication Technology & Networks Faculty of Computer Science & IT, UPM e-mail : shamala@fsktm.upm.edu.my

  2. Overview of Performance Evaluation • Intro & Objective • The Art of Performance Evaluation • Professional Organizations, Journals, and Conferences • Performance Projects • Common Mistakes and How to Avoid Them • Selection of Techniques and Metrics

  3. Why?

  4. Intro & Objective • Performance is a key criterion in the design, procurement, and use of computer systems. • Performance must always be weighed against cost. • Thus, computer systems professionals need a basic knowledge of performance evaluation techniques.

  5. Intro & Objective • Objective: • Select appropriate evaluation techniques, performance metrics, and workloads for a system. • Conduct performance measurements correctly. • Use proper statistical techniques to compare several alternatives. • Design measurement and simulation experiments to provide the most information with the least effort. • Perform simulations correctly.

  6. Modeling • Model – a term used to describe almost any attempt to specify a system under study. The everyday connotation is a physical replica of a system. • Scientific usage – a model is a portrayal of the interrelationships of the parts of a system in precise terms. The portrayal can be interpreted in terms of some system attributes and is sufficiently detailed to permit study under a variety of circumstances and to enable the system's future behavior to be predicted.

  7. Usage of Models • Performance evaluation of a transaction processing system (Salsburg, 1988) • A study of the generation and control of forest fires in California (Parks, 1964) • The determination of the optimum labor along a continuous assembly line in a factory (Killbridge and Webster, 1966) • An analysis of ship boilers (Tysso, 1979)

  8. A Taxonomy of Models • Predictability • Deterministic – all data and relationships are given with certainty, e.g. the efficiency of an engine based on temperature, load, and fuel consumption. • Stochastic – at least some of the variables have values that vary in an unpredictable or random fashion, e.g. financial planning. • Solvability • Analytical – the model is simple enough to be solved directly. • Simulation – the model is complicated, or an appropriate equation cannot be found.

  9. A Taxonomy of Models • Variability • Whether time is incorporated into the model. • Static – the model describes a specific point in time (e.g. a financial statement). • Dynamic – the model can be evaluated at any time value (e.g. a food cycle). • Granularity • How finely time is treated in the model. • Discrete-event models – the state changes only at clearly identifiable events (e.g. a packet arrival). • Continuous models – individual events cannot be distinguished; the state changes continuously (e.g. the trajectory of a missile).
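To make the taxonomy concrete, the minimal Python sketch below contrasts a deterministic arrival model (fixed inter-arrival time) with a stochastic one (exponentially distributed inter-arrival times); both take the discrete-event view of packet arrivals. The function names, rates, and seed are illustrative assumptions, not part of the slides.

```python
import random

def deterministic_arrivals(rate, n):
    """Deterministic model: packets arrive exactly 1/rate seconds apart."""
    gap = 1.0 / rate
    return [i * gap for i in range(1, n + 1)]

def stochastic_arrivals(rate, n, seed=42):
    """Stochastic model: exponentially distributed inter-arrival times (Poisson arrivals)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n):
        t += rng.expovariate(rate)   # random gap with mean 1/rate
        times.append(t)
    return times

if __name__ == "__main__":
    # Same average rate (10 packets/s), very different event timing.
    print("deterministic:", [round(t, 3) for t in deterministic_arrivals(10, 5)])
    print("stochastic:   ", [round(t, 3) for t in stochastic_arrivals(10, 5)])
```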

  10. The Art of Performance Modeling • There are three ways to compare the performance of two systems, illustrated by Tables 1.1–1.3. • Table 1.1 – Raw measurements (higher is better)

    System   Workload 1   Workload 2   Average
    A        20           10           15
    B        10           20           15

  11. The Art of Performance Modeling (cont.) • Table 1.2 – System B as the base

    System   Workload 1   Workload 2   Average
    A        2            0.5          1.25
    B        1            1            1

  12. The Art of Performance Modeling (cont.) • Table 1.3 – System A as the base

    System   Workload 1   Workload 2   Average
    A        1            1            1
    B        2            0.5          1.25

  13. The Art of Performance Modeling (cont.) • The Ratio Game – by choosing which system is used as the base for normalization, either system can be made to look better on average, as Tables 1.2 and 1.3 show.
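The ratio game in Tables 1.1–1.3 can be reproduced with a few lines of Python: averaging the raw values makes the systems look identical, while averaging ratios favours whichever system is not chosen as the base. The sketch below is purely illustrative.

```python
# Raw measurements from Table 1.1 (higher is better).
perf = {"A": {"W1": 20, "W2": 10},
        "B": {"W1": 10, "W2": 20}}

def avg(values):
    return sum(values) / len(values)

def ratio_average(system, base):
    """Average of a system's performance normalised to the base system (the ratio game)."""
    return avg([perf[system][w] / perf[base][w] for w in perf[system]])

# Raw averages: both systems score 15, so neither looks better.
print({s: avg(list(w.values())) for s, w in perf.items()})

# Normalising to B makes A look 25% better (Table 1.2) ...
print("A vs base B:", ratio_average("A", "B"))   # 1.25
# ... while normalising to A makes B look 25% better (Table 1.3).
print("B vs base A:", ratio_average("B", "A"))   # 1.25
```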

  14. Performance Projects I hear and forget. I see and I remember. I do and I understand – Chinese Proverb

  15. Performance Projects • The best way to learn a subject is to apply the concepts to a real system. • The project should encompass: • Select a computer subsystem: e.g. network congestion control, security, a database, an operating system. • Perform some measurements. • Analyze the collected data. • Simulate AND analytically model the subsystem. • Predict its performance. • Validate the model.

  16. Professional Organizations, Journals and Conferences • ACM SIGMETRICS – the Association for Computing Machinery's Special Interest Group on measurement and evaluation. • IEEE Computer Society – the Institute of Electrical and Electronics Engineers (IEEE) Computer Society. • IASTED – the International Association of Science and Technology for Development.

  17. Common Mistakes and How to Avoid Them • No Goals • Biased Goals • Unsystematic Approach • Analysis Without Understanding the Problem • Incorrect Performance Metrics • Unrepresentative Workloads • Wrong Evaluation Techniques • Overlooking Important Parameters • Ignoring Significant Factors

  18. Common Mistakes and How to Avoid Them • Inappropriate Experimental Design • Inappropriate Level of Detail • No Analysis • Erroneous Analysis • No Sensitivity Analysis • Ignoring Errors in Input • Improper Treatment of Outliers • Assuming No Change in the Future • Ignoring Variability

  19. Common Mistakes and How to Avoid Them • Too Complex Analysis • Improper Presentation of Results • Ignoring Social Aspects • Omitting Assumptions and Limitations.

  20. A Systematic Approach • State Goals and Define the System • List Services and Outcomes • Select Metrics • List Parameters • Select Factors to Study • Select Evaluation Technique • Select Workload • Design Experiments • Analyze and Interpret Data • Present Results

  21. Selection of Techniques and Metrics

  22. Overview • Key steps in a performance evaluation study • Selecting an evaluation technique • Selecting a metric • Commonly used performance metrics • The problem of specifying performance requirements

  23. Selecting an evaluation technique • Three techniques • Analytical modeling • Simulation • Measurement

  24. Criteria for selection: Life-cycle stage • Measurements are possible only if something similar to the proposed system already exists. • For a new concept, analytical modeling and simulation are the only techniques from which to choose. • Analytical modeling and simulation are more convincing if they are based on previous measurements.

  25. Criteria for selection: Time required • In most situations, results are required yesterday; then analytical modeling is probably the only choice. • Simulations take a long time. • Measurements generally take longer than analytical modeling. • If anything can go wrong during a measurement, it will. • So the time required for measurement is the most variable.

  26. Criteria for selection: Availability of tools • Tools include modeling skills, simulation languages, and measurement instruments. • Many performance analysts are skilled in modeling; they would not touch a real system at any cost. • Others are not as proficient in queuing theory and prefer to measure or simulate. • Lack of knowledge of simulation languages and techniques keeps many analysts away from simulation.

  27. Criteria for selection: Level of accuracy • Analytical modeling requires so many simplifications and assumptions that its accuracy is generally low. • Simulations can incorporate more detail and require fewer assumptions than analytical modeling, so they are often closer to reality.

  28. Criteria for selection: Level of accuracy (cont.) • Measurements may not give accurate results simply because many environmental parameters, such as the system configuration, type of workload, and time of measurement, may be unique to the experiment. • So, with measurement the accuracy of results can vary from very high to none. • Note that level of accuracy and correctness of conclusions are not identical.

  29. Criteria for selection: Trade-off evaluation • The goal of a performance study is to compare different alternatives or to find the optimal parameter value. • Analytical models generally provide the best insight into the effects of various parameters and their interactions.

  30. Criteria for selection: Trade-off evaluation • With simulations it is possible to search the space of parameter values for the optimal combination. • Measurement is the least desirable technique in this respect.

  31. Criteria for selection: Cost • Measurement requires real equipment, instruments, and time; it is the most costly of the three techniques. • Cost is often the reason for simulating complex systems rather than measuring them. • Analytical modeling requires only paper and pencil and is the cheapest technique. • The choice can also be decided by the cost allocated to the project.

  32. Criteria for selection: Saleability • Convincing others of the results is important. • It is easiest to convince others with real measurements. • Most people are skeptical of analytical results because they do not understand the technique.

  33. Criteria for selection: Saleability (cont.) • So validation with another technique is important. • Do not trust the results of a simulation model until they have been validated by analytical modeling or measurements. • Do not trust the results of an analytical model until they have been validated by a simulation model or measurements. • Do not trust the results of a measurement until they have been validated by simulation or analytical modeling.
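As a hedged illustration of such cross-validation, the sketch below compares the textbook analytical result for an M/M/1 queue (mean response time 1/(μ − λ)) with a small simulation of the same queue; close agreement builds confidence in both models. The queue type, parameter values, and function names are assumptions made for this example, not part of the slides.

```python
import random

def mm1_simulation(lam, mu, n_customers=200_000, seed=1):
    """Simulate a single-server FIFO M/M/1 queue and return the mean response time."""
    rng = random.Random(seed)
    arrival = 0.0
    server_free_at = 0.0
    total_response = 0.0
    for _ in range(n_customers):
        arrival += rng.expovariate(lam)          # Poisson arrivals
        start = max(arrival, server_free_at)     # wait if the server is busy
        service = rng.expovariate(mu)            # exponential service time
        server_free_at = start + service
        total_response += server_free_at - arrival
    return total_response / n_customers

if __name__ == "__main__":
    lam, mu = 8.0, 10.0                          # arrival and service rates (per second)
    analytical = 1.0 / (mu - lam)                # M/M/1 mean response time
    simulated = mm1_simulation(lam, mu)
    print(f"analytical: {analytical:.3f} s, simulated: {simulated:.3f} s")
```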

  34. Selecting an evaluation technique: Summary

  35. Selecting Performance Metrics

  36. Selecting performance metrics • For each performance study, a set of performance criteria or metrics must be chosen. • This set can be prepared by listing the services offered by the system. • For each service request, the outcome falls into one of three categories: • The system performs the service correctly • It performs the service incorrectly • It refuses to perform the service

  37. Selecting performance metrics (cont.) • Example: a gateway in a computer network offers the service of forwarding packets to specified destinations on heterogeneous networks. When presented with a packet: • It may forward the packet correctly • It may forward it to the wrong destination • It may be down • Similarly, a database may answer a query correctly, answer it incorrectly, or be down.

  38. Selecting metrics: correct response • If the system performs the service correctly, its performance is measured by: • The time taken to perform the service • The rate at which the service is performed • The resources consumed while performing the service • These three metrics relate to time, rate, and resource for successful performance and are also called responsiveness, productivity, and utilization metrics.

  39. Selecting metrics: correct response • For example, the responsiveness of a network gateway is measured by its response time: the interval between the arrival of a packet and its successful delivery. • The gateway's productivity is measured by its throughput: the number of packets forwarded per unit time. • The utilization gives the percentage of time the resources of the gateway are busy at a given load level.
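The following sketch computes the three gateway metrics just described (response time, throughput, and utilization) from a hypothetical trace of packet arrival, forwarding-start, and departure times; the trace format, values, and observation period are assumptions for illustration only.

```python
# Hypothetical gateway trace: (arrival_time, start_of_forwarding, departure_time) in seconds.
trace = [
    (0.00, 0.00, 0.02),
    (0.01, 0.02, 0.05),
    (0.10, 0.10, 0.13),
    (0.12, 0.13, 0.15),
]

observation_period = 0.20  # seconds

# Responsiveness: mean time from arrival to successful delivery.
response_times = [dep - arr for arr, _, dep in trace]
mean_response = sum(response_times) / len(response_times)

# Productivity: packets forwarded per unit time.
throughput = len(trace) / observation_period

# Utilization: fraction of the period the gateway was busy forwarding.
busy_time = sum(dep - start for _, start, dep in trace)
utilization = busy_time / observation_period

print(f"mean response time: {mean_response * 1000:.1f} ms")
print(f"throughput: {throughput:.0f} packets/s")
print(f"utilization: {utilization:.0%}")
```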

  40. Selecting metrics: incorrect response • If the system performs the service incorrectly, its performance is measured by: • Classifying the errors / packet losses • Determining the probability of each class of error • For example, in the case of a gateway: • We may want to find the probability of single-bit errors, two-bit errors, and so on. • We may also want to determine the probability of a packet being partially delivered.
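A minimal sketch of classifying incorrect outcomes: given a hypothetical list of per-packet results, it estimates the probability of each error class (single-bit error, two-bit error, partial delivery, and so on). The category names and counts are invented for illustration.

```python
from collections import Counter

# Hypothetical per-packet outcomes observed at the gateway.
outcomes = (["ok"] * 9600 +
            ["single_bit_error"] * 250 +
            ["two_bit_error"] * 40 +
            ["partial_delivery"] * 110)

counts = Counter(outcomes)
total = len(outcomes)

# Estimated probability of each class of outcome (correct delivery included).
for outcome, count in counts.most_common():
    print(f"{outcome:18s} {count / total:.4f}")
```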

  41. [Figure: The possible outcomes of a service request. A request for service i can be done correctly (measured by time / response time, rate / throughput, and resource / utilization), done incorrectly (error j: probability of the error, time between errors), or cannot be done (event k: duration of the event, time between events).]

  42. Metrics

  43. Metrics • Most systems offer more than one service, and the number of metrics grows proportionately. • For many metrics the mean value is important; the variability is often important as well. • For computer systems shared by many users, two types of metrics need to be considered: individual and global. • Individual metrics reflect the utility of each user. • Global metrics reflect the system-wide utility. • Resource utilization, reliability, and availability are global metrics.

  44. Metrics • Normally, the decision that optimizes an individual metric is different from the one that optimizes the system-wide metric. • For example, in computer networks performance is measured by throughput (packets per second). If the number of packets allowed in the system is fixed, increasing the number of packets from one source may increase that source's throughput, but it may also decrease someone else's throughput. • So both the system-wide throughput and its distribution among individual users must be studied.
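The sketch below illustrates the individual-versus-global tension with made-up numbers: the global throughput stays fixed at the link capacity, while one source's share grows at the expense of the other. The proportional-sharing rule and all values are simplifying assumptions.

```python
def per_user_throughput(offered_packets, capacity):
    """Share a fixed capacity in proportion to each user's offered load (a simplifying assumption)."""
    total_offered = sum(offered_packets.values())
    scale = min(1.0, capacity / total_offered)
    return {user: load * scale for user, load in offered_packets.items()}

capacity = 100  # packets/s the network can carry in total

before = per_user_throughput({"src1": 50, "src2": 50}, capacity)
after = per_user_throughput({"src1": 150, "src2": 50}, capacity)  # src1 injects more packets

print("before:", before, "global:", sum(before.values()))
print("after: ", after, "global:", sum(after.values()))
# Global throughput is 100 packets/s in both cases, but src2's individual
# throughput drops from 50 to 25 packets/s.
```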

  45. Selection of Metrics • Completeness: the set of metrics included in the study should be complete. • All possible outcomes should be reflected in the set of performance metrics. • For example, in a study comparing different protocols on a computer network, one protocol was chosen as the best until it was found that it also led to the highest number of disconnections. • The probability of disconnection was then added to the set of performance metrics.

  46. Commonly used performance metrics: response time • Response time is defined as the interval between a user's request and the system's response. • This definition is simplistic, since neither requests nor responses are instantaneous. [Figure: timeline showing the user's request, the system's response, and the response time, assuming an instantaneous request and response.]

  47. Throughput • Throughput is defined as the rate (requests per unit of time) at which the requests can be serviced by the system. • For networks, throughput is measured in packets per second or bits per second.

  48. Throughput (cont.) • The throughput of the system increases as the load on the system initially increases. • After a certain load, the throughput stops increasing; in most cases it then starts decreasing. [Figure: throughput and response time versus load, showing the knee capacity and the usable capacity.]
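As a rough illustration of the knee, the sketch below takes hypothetical (load, throughput) measurements and reports the load beyond which additional offered load no longer yields a meaningful throughput gain; the data points and the 5% threshold are assumptions for this example.

```python
# Hypothetical measurements: offered load vs. achieved throughput (packets/s).
measurements = [(10, 10), (20, 20), (40, 39), (60, 55), (80, 56), (100, 54), (120, 48)]

def find_knee(points, min_gain=0.05):
    """Return the load after which throughput stops growing by at least min_gain (relative)."""
    for (load, thr), (_next_load, next_thr) in zip(points, points[1:]):
        if next_thr < thr * (1 + min_gain):
            return load
    return points[-1][0]

print("knee capacity is near a load of", find_knee(measurements), "packets/s")
```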
