Internet Performance Measurements and Measurement Techniques

Presentation Transcript


  1. Internet Performance Measurements and Measurement Techniques Jim Kurose Department of Computer Science University of Massachusetts/Amherst http://www.cs.umass.edu/~kurose

  2. Overview • Introduction • why and what to measure • Measuring per-hop performance • tricks, successes, “failures” • End-to-end measurements • correlation in end-end loss, delay • “confidence” in measurements • What lies ahead?

  3. What “performance” to measure? • packet delay • packet loss • link or path capacity/availability • where? per-hop or end-to-end? • over what time scale? sub-second, minutes, hours?

  4. Why measure? • end-end measurements: • benchmarking, monitoring (e.g., Imeter) • fault identification (e.g., routing instabilities) • understanding end-end perf: misordering, loss (e.g., TCP studies by Paxson) • correlation timescale for end-end loss, delay; use in adaptive applications • per-hop measurements: • network operations (proprietary?) • understanding where in the end-end path performance impairments occur • use in reliable multicast protocols, active services, network modeling

  5. Measuring per-hop performance • Question: what is the loss, delay, capacity at a given hop? • Question: what per-hop delays does a packet see? • Complication: routers do not report performance stats to end users • need to infer performance statistics: cleverly use the little “machinery” we have, and develop an inference methodology

  6. Clever use of existing protocols • traceroute, pathchar: use ICMP packets and the time-to-live (TTL) field • each router decrements TTL on forwarding • TTL = 0 results in an ICMP error msg back to the sender • used to discover all routers on the path to a destination • [figure: probes with ttl=1, ttl=2, ttl=3 drawing ICMP errors from successive routers out to router x]
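To make the TTL trick concrete, here is a minimal traceroute-style sketch (mine, not from the talk): send UDP probes with increasing TTL and listen for the ICMP errors the expiring probes elicit. It assumes a Unix host with raw-socket (root) privileges; 33434 is the conventional traceroute destination port.

```python
import socket

def traceroute(dest, max_hops=30, port=33434, timeout=2.0):
    dest_addr = socket.gethostbyname(dest)
    for ttl in range(1, max_hops + 1):
        # raw socket to catch the ICMP time-exceeded (or port-unreachable) reply
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                             socket.getprotobyname("icmp"))
        recv.settimeout(timeout)
        # UDP probe whose TTL will reach 0 at hop `ttl`
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        send.sendto(b"", (dest_addr, port))
        try:
            _, addr = recv.recvfrom(512)  # sender of the ICMP error = router at hop ttl
            hop = addr[0]
        except socket.timeout:
            hop = "*"
        finally:
            send.close()
            recv.close()
        print(f"{ttl:2d}  {hop}")
        if hop == dest_addr:  # destination reached (port unreachable, not time exceeded)
            break
```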

  7. Clever use of existing protocols (cont.) • the ICMP/TTL-field trick also gives link bandwidth: • find the min round-trip delay to hop x-1 (use many probe pkts) • find the min round-trip delay to hop x • the difference gives the propagation delay plus the transmission delays: 2*prop + d/bw for the d-bit data packet plus r/bw for the r-bit ICMP reply • vary the pkt size d to get the link bandwidth • gives variable queueing delay, loss of path to x • isolating hop x behavior is difficult • [figure: timing diagram of a d-bit data packet to hop x and the r-bit ICMP reply]
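This per-hop bandwidth calculation can be sketched as follows (a hypothetical reconstruction in the spirit of pathchar, not its actual code). The slope of min RTT versus probe size at hop x is the cumulative 1/bandwidth over links 1..x, so the increase in slope from hop x-1 to hop x gives link x's bandwidth. The `min_rtt` input format is assumed for illustration.

```python
import numpy as np

def link_bandwidths(min_rtt):
    """min_rtt[x] maps probe size (bits) -> minimum observed RTT (s) to hop x+1;
    taking the minimum over many probes filters out variable queueing delay."""
    slopes = []
    for hop in min_rtt:
        sizes = np.array(sorted(hop))
        rtts = np.array([hop[s] for s in sizes])
        slope, _ = np.polyfit(sizes, rtts, 1)  # seconds per bit: sum of 1/bw up to this hop
        slopes.append(slope)
    # bandwidth of link x = 1 / (slope at hop x - slope at hop x-1)
    bws, prev = [], 0.0
    for s in slopes:
        bws.append(1.0 / (s - prev))  # bits per second
        prev = s
    return bws
```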

  8. Can we measure per-hop delays? • motivation - a typical modeling paper: “We model the network as a single link ….” • is this a valid assumption? • does a packet generally experience “most” of its delay at one link?

  9. Measuring per-hop delays • send unicast probes along the path • use “IP options” on the probes to gather timestamps: a packet passing through a specified router is timestamped • problem: only 4 timestamps fit in each packet • solution: send multiple probes at one time • [figure: probes 1 and 2 sent together, collecting timestamps ts(x) and ts(y) at routers x and y]

  10. Measuring per-hop delays: options forwarding • problem: IP options packets are treated differently • data packets are forwarded on the fast path • IP options packets are detoured (hopefully briefly) • solution: send a non-option packet along with the probes • only analyze probes when the non-option packet's delay is close to the probe delay (hope: negligible options-processing delays) • [figure: probes 1 and 2 plus a non-option pkt traversing router x]

  11. Analyzing the per-hop data • consider only probes with e-e queueing delays > 100 ms • filter cases where probe and option-pkt delays are “close” (20 ms) • hypothesis: e-e delays of the filtered probes come from the same distribution as all probes • hypothesis rejected with negligible probability of being wrong :-(
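The talk does not name the statistical test used; a two-sample Kolmogorov-Smirnov test is one standard way to check whether the filtered probes' end-to-end delays come from the same distribution as all probes. A minimal sketch:

```python
from scipy.stats import ks_2samp

def same_distribution(all_delays, filtered_delays, alpha=0.05):
    """Two-sample KS test of the hypothesis that both delay samples
    come from the same distribution; False means 'rejected'."""
    _, p_value = ks_2samp(all_delays, filtered_delays)
    return p_value >= alpha, p_value
```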

  12. Can we measure per-hop packet delay/loss? • the timestamping approach is not statistically valid • inspiration (!) from another ongoing effort: multicast loss • question: where in a multicast tree does loss occur? backbone? edges? • implications for the design of reliable multicast protocols

  13. Using multicast to infer per-hop performance • correlation of received mcast pkts provides a glimpse inside • simple loss model: independent loss probability αk on link k • method: • multicast n packets from the source • data: list of packets received at each receiver • check consistency of the data with the independent loss model • analysis: Maximum Likelihood Estimator • find the α that maximizes Prob[data | α] • [figure: two-receiver tree with link loss probabilities α1, α2, α3 and receivers R1, R2]
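For the simplest two-receiver tree, the MLE has a closed form: with g1 and g2 the marginal receipt rates and g12 the joint receipt rate, the shared link's pass probability (1 minus its loss probability) is estimated by g1*g2/g12. A self-contained sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, pass_shared, pass_left, pass_right):
    """Multicast n probes over a two-receiver tree; each link passes a
    probe independently with the given probability (= 1 - loss prob)."""
    root = rng.random(n) < pass_shared            # survives shared link
    left = root & (rng.random(n) < pass_left)     # ...and the left branch
    right = root & (rng.random(n) < pass_right)   # ...and the right branch
    return left, right

def shared_link_pass_prob(left, right):
    """MLE for the shared link: g1 * g2 / g12, exploiting the correlation
    of receipt events across the two receivers."""
    g1, g2 = left.mean(), right.mean()
    g12 = (left & right).mean()
    return g1 * g2 / g12

left, right = simulate(100_000, pass_shared=0.95, pass_left=0.98, pass_right=0.90)
print(shared_link_pass_prob(left, right))  # close to 0.95
```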

  14. Multicast inference: evaluation • through ns simulations • 2-8 receivers • different topologies • TCP, on/off background sources • approach tracks probe loss well • good estimate of background traffic loss

  15. Multicast Inference: to-do list Observations: • multicast-based inference promising for loss • applicable to delays Research questions: • what if topology partially unknown? • can we identify bottleneck links? Potential Applications: • Internet weather map • use in adaptive applications UMass collaboration with AT&T, LBNL

  16. End-End Loss, Delay Characteristics • Question: time correlation of e-e loss, delay? • Applications: • adjustment of FEC for audio, video, data • playout delay adjustment for audio • analytic models: how many “states” are needed in Markovian models? • Approach: collect/analyze point-to-point, multicast traces of periodically generated UDP probes

  17. Analysis Issues: • stationarity of traces: • look for increasing trends in avg, variance over trace • non-stationary traces not considered • removal of clock skew • algorithm for removing constant clock drift • how “confident” are we in the measured value? • 150 hours of measurement data • there’s an exception to every “typical” result

  18. Analysis Metrics • delay autocorrelation: correlation of d_j and d_{j+k} at lag k, where d_j is the measured delay of pkt j • loss autocorrelation: the same, over the loss indicator x_j = 0 if pkt j received, x_j = 1 if pkt j lost • conditional average delay given loss: E[ d_{j+k} | x_j = 1 ], the average delay k packets after a loss
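A sketch of these metrics over a probe trace (function and variable names are mine): `autocorr` serves both the delay and the loss series, and the conditional metric averages the delays observed k packets after each loss.

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation of a series at lags 1..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x)
    return [np.dot(x[:-k], x[k:]) / var for k in range(1, max_lag + 1)]

def cond_delay_given_loss(delays, lost, max_lag):
    """E[d_{j+k} | x_j = 1]: average delay k packets after a loss.
    delays[j] is NaN when packet j itself was lost, hence nanmean."""
    delays = np.asarray(delays, dtype=float)
    lost = np.asarray(lost, dtype=bool)
    return [np.nanmean(delays[k:][lost[:-k]]) for k in range(1, max_lag + 1)]
```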

  19. Delay Autocorrelation • Note: typically autocorrelation dies down quickly

  20. Conditional Delay Given Loss • Interesting behavior! • Loss appears to be a predictor of near-term higher-than-average delays

  21. Loss Autocorrelation • generally: loss correlation timescale < 500 ms • modeling: the lengths of consecutive losses and of successful receptions can be modeled accurately by a 2- or 3-state Markov process

  22. How many states needed in analytic model? • For n-state Markov model, determine transition probabilities from observed data • needed: rigorous hypothesis testing of agreement between model and observed distributions
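Determining the transition probabilities from observed data is transition counting, which is the MLE for a Markov chain. A sketch for the 2-state (Gilbert-style) case, where the state is simply the loss indicator:

```python
import numpy as np

def fit_markov(states, n_states):
    """MLE of an n-state Markov transition matrix: count observed
    transitions, then normalize each row."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return counts / np.where(rows == 0, 1, rows)  # guard rows for unseen states

# state 0 = packet received, 1 = packet lost
loss_seq = [0, 0, 1, 1, 0, 0, 0, 1, 0, 0]
print(fit_markov(loss_seq, 2))
```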

  23. “Confidence” in loss probability estimates • suppose we send 10 packets and see 3 lost • view loss as a random process: is the loss rate “really” 30%? • the true loss rate could be 20% or 50%! • if we sample more, we'd have more “confidence” in the estimate • goal: an interval estimator for the loss rate, e.g., 95% confident that the true loss rate is in the range [p1, p2] • use: adaptive applications (e.g., using RTCP)

  24. Loss probability confidence: model • Bernoulli loss process: each pkt lost independently with probability p • Example: n = 10, k = 3, MLE = k/n = 0.3 • 95% confidence interval around the MLE: find [p1, p2] such that Pr{k or more losses in n pkts | p = p1} = Pr{k or fewer losses in n pkts | p = p2} = 0.025 • result: p1 ≈ 0.07, p2 ≈ 0.65 • [figure: binomial distributions for p = p1 and p = p2 over 0..10 losses]
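These endpoints are the exact (Clopper-Pearson) binomial confidence limits, which can be computed through the beta distribution. A sketch that reproduces the slide's numbers:

```python
from scipy.stats import beta

def loss_rate_ci(k, n, conf=0.95):
    """Exact (Clopper-Pearson) interval for a Bernoulli loss probability
    after observing k losses in n packets."""
    a = (1 - conf) / 2
    lo = beta.ppf(a, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - a, k + 1, n - k) if k < n else 1.0
    return lo, hi

print(loss_rate_ci(3, 10))  # ~ (0.07, 0.65), as on the slide
```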

  25. Loss probability estimation: intervals • [figure: 95% confidence interval width relative to the MLE, plotted against the number of packets sent n (0 to 5000), for MLE values 0.01, 0.02, 0.05, 0.10, and 0.50; the interval narrows as more packets are sent]

  26. What's ahead? • need for statistically rigorous, empirically verified, end-user-oriented performance measurement tools and techniques • research is just beginning • middleware: network-to-user performance feedback? when, and in what form? • informed use of performance measurements in: • adaptive applications • active services

  27. For More Information … • This talk: ftp://gaia.cs.umass.edu/pub/kurose/intel98.ps • Group publications: http://gaia.cs.umass.edu/papers • WWW sites: • Cooperative Association for Internet Data Analysis: www.caida.org • National Laboratory for Applied Network Research: www.nlanr.net
