Engineering for QoS and the limits of service differentiation

Presentation Transcript


  1. IWQoS, June 2000. Engineering for QoS and the limits of service differentiation. Jim Roberts (james.roberts@francetelecom.fr)

  2. The central role of QoS • quality of service: transparency, response time, accessibility • service model: resource sharing, priorities, ... • network engineering: provisioning, routing, ... • tied together by feasible technology and a viable business model

  3. Engineering for QoS: a probabilistic point of view • statistical characterization of traffic • notions of expected demand and random processes • for packets, bursts, flows, aggregates • QoS in statistical terms • transparency: Pr [packet loss], mean delay, Pr [delay > x], ... • response time: E [response time], ... • accessibility: Pr [blocking], ... • QoS engineering, based on a three-way relationship between demand, capacity and performance

  4. Outline • traffic characteristics • QoS engineering for streaming flows • QoS engineering for elastic traffic • service differentiation

  5. Internet traffic is self-similar • a self-similar process • variability at all time scales • due to: • infinite variance of flow size • TCP induced burstiness • a practical consequence: difficult to characterize a traffic aggregate • [Figure: Ethernet traffic trace, Bellcore 1989]
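
A minimal numerical sketch (mine, not from the talk) of why heavy-tailed flow sizes give variability at all time scales: Poisson flow arrivals with Pareto durations (an M/G/∞-style construction, all parameters illustrative) produce an aggregate rate whose variance decays much more slowly than 1/m when averaged over blocks of m bins, which is the signature of long-range dependence.

    import numpy as np

    rng = np.random.default_rng(1)
    n_bins = 2 ** 18
    lam = 2.0                                  # mean flow arrivals per time bin

    # Pareto(alpha) flow durations: infinite variance when alpha < 2
    alpha, dmin = 1.5, 1.0
    n_flows = rng.poisson(lam * n_bins)
    starts = rng.integers(0, n_bins, size=n_flows)
    durations = np.ceil(dmin / rng.random(n_flows) ** (1.0 / alpha)).astype(int)
    ends = np.minimum(starts + durations, n_bins)

    # rate[t] = number of flows active in bin t (each flow sends at unit rate)
    diff = np.zeros(n_bins + 1)
    np.add.at(diff, starts, 1.0)
    np.add.at(diff, ends, -1.0)
    rate = np.cumsum(diff[:-1])

    # variance-time plot: for short-range dependent traffic the variance of the
    # m-aggregated mean decays like 1/m; here it decays markedly more slowly
    for m in (1, 4, 16, 64, 256, 1024):
        agg = rate[: n_bins - n_bins % m].reshape(-1, m).mean(axis=1)
        print(f"m={m:5d}  variance={agg.var():.3f}")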

  6. Traffic on a US backbone link (Thompson et al, 1997) • traffic intensity is predictable ... • ... and stationary in the busy hour

  7. Traffic on a French backbone link • traffic intensity is predictable ... • ... and stationary in the busy hour • [Figure: traffic over one week, Tue through Mon, by time of day]

  10. IP flows • a flow = one instance of a given application • a "continuous flow" of packets • basically two kinds of flow, streaming and elastic • streaming flows • audio and video, real time and playback • rate and duration are intrinsic characteristics • not rate adaptive (an assumption) • QoS ⇒ negligible loss, delay, jitter • elastic flows • digital documents (Web pages, files, ...) • rate and duration are measures of performance • QoS ⇒ adequate throughput (response time)

  12. Flow traffic characteristics • streaming flows • constant or variable rate • compressed audio (O[10^3 bps]) and video (O[10^6 bps]) • highly variable duration • a Poisson flow arrival process (?) • elastic flows • infinite variance size distribution • rate adaptive • a Poisson flow arrival process (??) • [Figure: variable rate video trace]

  13. Modelling traffic demand • stream traffic demand • arrival rate × bit rate × duration • elastic traffic demand • arrival rate × size • a stationary process in the "busy hour" • eg, Poisson flow arrivals, independent flow sizes • [Figure: traffic demand (Mbit/s) by time of day, with the busy hour marked]
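
To make the units concrete, a tiny worked example with purely hypothetical numbers: both kinds of demand come out as a rate in bit/s.

    # hypothetical demand figures, expressed in bit/s
    stream_arrival_rate = 5.0        # streaming flows per second
    stream_bit_rate = 2e6            # 2 Mbit/s per flow
    stream_duration = 120.0          # seconds per flow
    stream_demand = stream_arrival_rate * stream_bit_rate * stream_duration
    print(f"stream demand  = {stream_demand / 1e6:.0f} Mbit/s")    # 1200 Mbit/s

    elastic_arrival_rate = 400.0     # elastic flows (documents) per second
    elastic_mean_size = 5e6          # 5 Mbit per document
    elastic_demand = elastic_arrival_rate * elastic_mean_size
    print(f"elastic demand = {elastic_demand / 1e6:.0f} Mbit/s")   # 2000 Mbit/s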

  14. Outline • traffic characteristics • QoS engineering for streaming flows • QoS engineering for elastic traffic • service differentiation

  15. Open loop control for streaming traffic • a "traffic contract" • QoS guarantees rely on • traffic descriptors + admission control + policing • time scale decomposition for performance analysis • packet scale • burst scale • flow scale • [Figure: contract enforced at the user-network and network-network interfaces]

  19. Packet scale: a superposition of constant rate flows • constant rate flows • packet size / inter-packet interval = flow rate • maximum packet size = MTU • buffer size for negligible overflow? • over all phase alignments... • ...assuming independence between flows • worst case assumptions: • many low rate flows • MTU-sized packets • ⇒ buffer sizing for the M/D_MTU/1 queue • Pr [queue > x] ~ C e^(-rx) • [Figure: log Pr [saturation] vs buffer size, for increasing flow numbers and packet sizes, with the M/D_MTU/1 curve as the limit]
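
A sketch of how such an exponential tail can be turned into a buffer size. It relies on Kingman's bound for the M/G/1 waiting time, Pr [W > x] <= exp(-theta x), with theta the positive root of lambda (e^(theta D) - 1) = theta for deterministic MTU service times; the link rate, load and overflow target below are illustrative assumptions, not figures from the talk.

    from math import exp, log

    def decay_rate(load, D, tol=1e-12):
        """Positive root theta of lam*(exp(theta*D) - 1) = theta, with lam = load/D."""
        lam = load / D
        lo, hi = tol, 1.0 / D
        while lam * (exp(hi * D) - 1.0) - hi < 0.0:   # expand until sign change
            hi *= 2.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if lam * (exp(mid * D) - 1.0) - mid < 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # example: 1 Gbit/s link, 1500-byte MTU packets, 80% load, overflow target 1e-9
    link_bps, mtu_bits, load, target = 1e9, 1500 * 8, 0.8, 1e-9
    D = mtu_bits / link_bps                  # service time of one MTU packet (s)
    theta = decay_rate(load, D)
    x = -log(target) / theta                 # buffer expressed as queueing delay (s)
    print(f"buffer ~ {x * 1e3:.2f} ms = {x * link_bps / mtu_bits:.0f} MTU packets")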

  22. The "negligible jitter conjecture" • constant rate flows acquire jitter • notably in multiplexer queues • conjecture: • if all flows are initially CBR and in all queues: Σ flow rates < service rate • they never acquire sufficient jitter to become worse for performance than a Poisson stream of MTU packets • M/D_MTU/1 buffer sizing remains conservative

  24. Burst scale: fluid queueing models • assume flows have an instantaneous rate • eg, rate of on/off sources • bufferless or buffered multiplexing? • bufferless: Pr [arrival rate > service rate] < ε • buffered: E [arrival rate] < service rate • [Figure: arrival rate fluctuating at packet and burst time scales]

  25. Buffered multiplexing performance: impact of burst parameters • [Figure: log Pr [saturation] vs buffer size; the intercept at buffer size 0 is Pr [rate overload]]

  26. Buffered multiplexing performance: impact of burst parameters • [Figure: log Pr [saturation] vs buffer size, for longer and shorter burst lengths]

  27. Buffered multiplexing performance: impact of burst parameters • [Figure: log Pr [saturation] vs buffer size, for more and less variable burst lengths]

  28. Buffered multiplexing performance: impact of burst parameters • [Figure: log Pr [saturation] vs buffer size, for long range dependent and short range dependent burst lengths]

  31. Choice of token bucket parameters? • the token bucket is a virtual queue • service rate r • buffer size b • non-conformance depends on • burst size and variability • and long range dependence • a difficult choice for conformance • r >> mean rate... • ...or b very large • [Figure: non-conformance probability as a function of bucket size b]
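
A small simulation sketch of the virtual-queue view (the on/off source and the (r, b) values are illustrative assumptions): traffic is conformant as long as a queue served at rate r with buffer b does not overflow, and the overflow fraction only becomes small when r is well above the mean rate or b is made very large.

    import random

    def nonconformance(trace, r, b):
        """Fraction of arriving bits overflowing a virtual queue of rate r, size b."""
        q, dropped, total = 0.0, 0.0, 0.0
        for bits in trace:                    # bits arriving in each time slot
            q = max(q - r, 0.0)               # drain one slot's worth of service
            dropped += max(q + bits - b, 0.0)
            q = min(q + bits, b)
            total += bits
        return dropped / total if total else 0.0

    # on/off source: 10 Mbit/s peak, geometric on/off periods, ~2 Mbit/s mean
    random.seed(0)
    peak, slot = 10e6, 1e-3
    trace, on = [], False
    for _ in range(200_000):
        if random.random() < (0.05 if not on else 0.2):
            on = not on
        trace.append(peak * slot if on else 0.0)

    mean_rate = sum(trace) / (len(trace) * slot)
    for r, b in [(1.1 * mean_rate, 50e3), (1.1 * mean_rate, 500e3), (3.0 * mean_rate, 50e3)]:
        print(f"r={r / 1e6:4.1f} Mbit/s  b={b / 1e3:5.0f} kbit  "
              f"non-conformance={nonconformance(trace, r * slot, b):.4f}")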

  32. Bufferless multiplexing: alias "rate envelope multiplexing" • provisioning and/or admission control to ensure Pr [Lt > C] < ε • performance depends only on the stationary rate distribution • loss rate ≈ E [(Lt - C)+] / E [Lt] • insensitivity to self-similarity • [Figure: combined input rate Lt fluctuating over time around the output rate C]
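
A sketch with N homogeneous on/off sources (all numbers illustrative): the combined rate Lt is then binomial, and both Pr [Lt > C] and the loss estimate E [(Lt - C)+] / E [Lt] follow from that stationary rate distribution alone, with no burst or correlation structure entering.

    from math import comb

    def bufferless(n, peak, p_on, C):
        """Overload probability and loss-rate estimate for n on/off sources."""
        pr_overload, e_excess, e_rate = 0.0, 0.0, 0.0
        for k in range(n + 1):
            pk = comb(n, k) * p_on**k * (1 - p_on) ** (n - k)
            rate = k * peak
            e_rate += pk * rate
            if rate > C:
                pr_overload += pk
                e_excess += pk * (rate - C)
        return pr_overload, e_excess / e_rate

    # 200 sources, 2 Mbit/s peak, 10% activity, multiplexed on a 60 Mbit/s trunk
    pr, loss = bufferless(n=200, peak=2e6, p_on=0.1, C=60e6)
    print(f"Pr[Lt > C] = {pr:.2e}   loss rate ~ {loss:.2e}")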

  35. Efficiency of bufferless multiplexing • small amplitude of rate variations ... • peak rate << link rate (eg, 1%) • ... or low utilisation • overall mean rate << link rate • we may have both in an integrated network • priority to streaming traffic • residual capacity shared by elastic flows

  37. Flow scale: admission control • accept new flow only if transparency preserved • given flow traffic descriptor • current link status • no satisfactory solution for buffered multiplexing • (we do not consider deterministic guarantees) • unpredictable statistical performance • measurement-based control for bufferless multiplexing • given flow peak rate • current measured rate (instantaneous rate, mean, variance,...) • uncritical decision threshold if streaming traffic is light • in an integrated network
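
A sketch of one possible measurement-based acceptance rule (the estimator and the safety factor are my assumptions, not a rule from the talk): accept a flow of declared peak rate p only if the measured mean rate plus a variance-based margin, plus p, still fits under the link rate C.

    from math import sqrt

    def admit(peak, rate_samples, C, k=3.0):
        """Accept iff an estimated rate envelope plus the new peak stays below C."""
        n = len(rate_samples)
        mean = sum(rate_samples) / n
        var = sum((x - mean) ** 2 for x in rate_samples) / n
        envelope = mean + k * sqrt(var)       # crude Gaussian-style rate envelope
        return envelope + peak <= C

    # example: 155 Mbit/s link, recent 1-second rate measurements, 2 Mbit/s flow
    samples = [92e6, 97e6, 95e6, 101e6, 99e6, 96e6]
    print(admit(peak=2e6, rate_samples=samples, C=155e6))   # True: flow accepted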

  39. Provisioning for negligible blocking • "classical" teletraffic theory; assume • Poisson arrivals, rate λ • constant rate per flow r • mean duration 1/μ • ⇒ mean demand A = (λ/μ) r bits/s • blocking probability for capacity C • B = E(C/r, A/r) • E(m,a) is Erlang's formula: • E(m,a) = (a^m / m!) / (Σ_{k=0..m} a^k / k!) • ⇒ scale economies • generalizations exist: • for different rates • for variable rates • [Figure: utilization ρ = a/m achievable for E(m,a) = 0.01, increasing with m from 0 to 100]
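
A sketch of the dimensioning computation and of the scale economies it implies, using the standard stable recursion E(m, a) = a E(m-1, a) / (m + a E(m-1, a)) with E(0, a) = 1; the 1% blocking target matches the figure, the rest is illustrative.

    def erlang_b(m, a):
        """Erlang's blocking formula E(m, a) via the stable recursion."""
        e = 1.0
        for k in range(1, m + 1):
            e = a * e / (k + a * e)
        return e

    def max_load(m, target=0.01):
        """Largest offered load a (erlangs) with E(m, a) <= target, by bisection."""
        lo, hi = 0.0, float(m)
        while erlang_b(m, hi) < target:
            hi *= 2.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if erlang_b(m, mid) < target else (lo, mid)
        return lo

    # utilization at 1% blocking grows with the number of circuits m
    for m in (10, 20, 50, 100):
        a = max_load(m)
        print(f"m={m:4d}  a={a:7.2f} erlangs  utilization={a / m:.2f}")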

  40. Outline • traffic characteristics • QoS engineering for streaming flows • QoS engineering for elastic traffic • service differentiation

  41. Closed loop control for elastic traffic • reactive control • end-to-end protocols (eg, TCP) • queue management • time scale decomposition for performance analysis • packet scale • flow scale

  45. Packet scale: bandwidth and loss rate • a multi-fractal arrival process • but loss and bandwidth are related by TCP congestion avoidance (cf. Padhye et al.): throughput = B(p) • thus p = B^(-1)(bandwidth share): ie, the loss rate a flow sees depends on the bandwidth share it obtains • [Figure: TCP throughput B(p) as a decreasing function of the loss rate p]
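
A sketch using the often-quoted approximation of Padhye et al. for TCP throughput in congestion avoidance (ignoring the receiver-window cap), together with a numerical inverse: since B is decreasing in p, fixing a bandwidth share fixes the loss rate the flow must be experiencing. Packet size, RTT and timeout values below are illustrative.

    from math import sqrt

    def tcp_bw(p, mss=1500 * 8, rtt=0.1, t0=0.4, b=1):
        """Approximate TCP throughput (bit/s) at loss probability p."""
        if p <= 0:
            return float("inf")
        denom = (rtt * sqrt(2 * b * p / 3)
                 + t0 * min(1.0, 3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p))
        return mss / denom

    def loss_for_share(bw, lo=1e-8, hi=0.5, iters=80):
        """Invert B(p): the loss rate at which TCP settles to the given share."""
        for _ in range(iters):
            mid = sqrt(lo * hi)               # bisect on a log scale, B decreasing in p
            lo, hi = (lo, mid) if tcp_bw(mid) < bw else (mid, hi)
        return sqrt(lo * hi)

    for share in (1e6, 5e6, 20e6):            # target bandwidth shares in bit/s
        p = loss_for_share(share)
        print(f"share={share / 1e6:5.1f} Mbit/s  ->  loss rate p ~ {p:.2e}")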

  48. Packet scale: bandwidth sharing • reactive control (TCP, scheduling) shares bottleneck bandwidth unequally • depending on RTT, protocol implementation, etc. • and differentiated services parameters • optimal sharing in a network: objectives and algorithms... • max-min fairness, proportional fairness, maximal utility, ... • ... but response time depends more on the traffic process than on the static sharing algorithm! • [Figure: example, a linear network with routes 0, 1, ..., L]
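
A sketch of max-min fair sharing by progressive filling, assuming the usual linear-network topology behind the figure (route 0 crosses every link, each other route uses a single link); capacities and the number of links are illustrative. Proportional fairness or maximal utility would allocate differently, which is precisely the choice of objective the slide refers to.

    def max_min_fair(routes, capacity):
        """Max-min fair rates by progressive filling; routes[i] = links of flow i."""
        alloc = [0.0] * len(routes)
        cap = dict(capacity)
        active = set(range(len(routes)))
        while active and cap:
            users = {l: [i for i in active if l in routes[i]] for l in cap}
            for l in [l for l in cap if not users[l]]:       # links no active flow uses
                del cap[l]
                del users[l]
            if not cap:
                break
            inc = min(cap[l] / len(users[l]) for l in cap)   # raise rates until a link fills
            for i in active:
                alloc[i] += inc
            for l in list(cap):
                cap[l] -= inc * len(users[l])
                if cap[l] <= 1e-12:                          # saturated: freeze its flows
                    active -= set(users[l])
                    del cap[l]
        return alloc

    # linear network: route 0 crosses all L links, each remaining route uses one link
    L = 3
    routes = [list(range(L))] + [[l] for l in range(L)]
    print(max_min_fair(routes, {l: 1.0 for l in range(L)}))  # [0.5, 0.5, 0.5, 0.5]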

  50. Flow scale: performance of a bottleneck link • assume perfect fair shares • link rate C, n elastic flows ⇒ each flow served at rate C/n • assume Poisson flow arrivals • ⇒ an M/G/1 processor sharing queue • load ρ = arrival rate × size / C • [Figure: a bottleneck link of capacity C with fair shares, modelled as a processor sharing queue]
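
A worked sketch of the processor-sharing numbers (link rate, arrival rate and mean size are illustrative): in M/G/1 processor sharing the conditional mean response time of a flow of size s is s / (C (1 - ρ)), so the expected per-flow throughput is C (1 - ρ) and the mean number of flows in progress is ρ / (1 - ρ), whatever the flow size distribution.

    C = 10e6                      # bottleneck link rate, bit/s
    arrival_rate = 8.0            # elastic flow arrivals per second
    mean_size = 1e6               # mean flow size, bits
    rho = arrival_rate * mean_size / C

    flows_in_progress = rho / (1 - rho)        # E[n] for M/G/1 processor sharing
    per_flow_throughput = C * (1 - rho)        # expected throughput of any flow
    print(f"load rho = {rho:.2f}")
    print(f"E[flows in progress] = {flows_in_progress:.1f}")
    print(f"expected throughput = {per_flow_throughput / 1e6:.1f} Mbit/s per flow")
    print(f"mean response time of an 8 Mbit document = {8e6 / per_flow_throughput:.1f} s")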
