This presentation explores end-to-end performance under traffic aggregation in IP networks. Recognizing the need for Quality of Service (QoS) for mission-critical applications, it compares the Differentiated Services (DiffServ) and Integrated Services architectures. Key insights concern arrival and departure rate configuration, jitter, one-way delay, and metrics for Expedited Forwarding (EF). The analysis combines experimental scenarios with an assessment of prioritization through Priority Queuing (PQ) and Weighted Fair Queuing (WFQ). The findings highlight the scalability benefits of aggregation and the challenges of maintaining performance as the traffic load varies.
End-to-End Performance with Traffic Aggregation
Tiziana Ferrari, Tiziana.Ferrari@cnaf.infn.it
TF-TANT Task Force
TNC 2000, Lisbon, 23 May 2000
Overview
• Diffserv and aggregation
• EF: arrival and departure rate configuration
• Test scenario
• Metrics
• End-to-end performance (PQ):
  • EF load
  • Number of EF streams
  • EF packet size
• WFQ and PQ
• Conclusions
Problem statement
• Support of end-to-end Quality of Service (QoS) for mission-critical applications in IP networks
• Solutions:
  • Per-flow: the Integrated Services architecture
    • Signalling (RSVP)
  • Per-class: the Differentiated Services architecture
    • Classification and marking (QoS policies)
    • Scheduling
    • Traffic conditioning (policing and shaping)
    • DSCP
    • Aggregation
    • Expedited Forwarding and Assured Forwarding
Aggregation
• Benefit: greater scalability, no signalling protocol overhead
• Problem: interaction between flows multiplexed into the same class
  • Jitter: distortion of the per-flow inter-packet gap
  • One-way delay: queuing delay due to non-empty queues
• Requirement: max arrival rate < min departure rate
Arrival and departure rate configuration
• Maximum arrival rate is proportional to the number of input traffic bundles
• One-way delay: with priority queuing, the maximum queuing delay depends on the number of EF streams and can be arbitrarily large (see the sketch below):
  Del = tx_MTU + n · MTU / Dep_rate
  where n is the number of input streams
• Experiments: aggregation without shaping and policing
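A minimal sketch of this worst-case bound: one MTU already in transmission (priority queuing is non-preemptive) plus one MTU-sized packet per input EF stream, all drained at the departure rate. The 2 Mbit/s rate and 1500-byte MTU are illustrative assumptions, not the testbed configuration.

```python
# Worst-case queuing delay under priority queuing:
#   Del = tx_MTU + n * MTU / Dep_rate
# The link rate and MTU below are illustrative assumptions.

def max_queuing_delay(n_streams: int, mtu_bytes: int, dep_rate_bps: float) -> float:
    """Del in seconds: one MTU in service plus one MTU per input stream."""
    tx_mtu = mtu_bytes * 8 / dep_rate_bps          # transmission time of one MTU
    return tx_mtu + n_streams * mtu_bytes * 8 / dep_rate_bps

# Example: 40 EF streams, 1500-byte MTU, 2 Mbit/s departure rate
print(max_queuing_delay(40, 1500, 2e6))            # -> 0.246 s, linear in n
```

The example makes the slide's point concrete: the bound grows linearly with the number of aggregated streams, so without shaping it can become arbitrarily large.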
Test network
Test scenario
Metrics
• One-way delay (RFC 2679): the difference between the wire time at which the last byte of a packet arrives at the destination and the wire time at which its first byte is sent (absolute value)
• Jitter (Instantaneous Packet Delay Variation): for two consecutive packets i and i−1, IPDV = |D_i − D_{i−1}| (see the sketch after this list)
• Max burstiness: the minimum queue length at which no tail drop occurs
• Packet loss percentage
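A minimal sketch of the IPDV metric defined above, computed over a series of measured one-way delays; the delay samples are hypothetical values, not measurements from the testbed.

```python
# IPDV = |D_i - D_{i-1}| for each pair of consecutive packets,
# given a series of one-way delay samples in milliseconds.

def ipdv(delays_ms: list[float]) -> list[float]:
    """Instantaneous packet delay variation between consecutive packets."""
    return [abs(d - prev) for prev, d in zip(delays_ms, delays_ms[1:])]

one_way_delays = [108.1, 108.6, 108.2, 109.0]   # msec, hypothetical samples
print(ipdv(one_way_delays))                      # -> approximately [0.5, 0.4, 0.8]
```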
Traffic profile
• Expedited Forwarding:
  • SmartBits 200, UDP, CBR
  • UDP CBR streams injected from each site
• Background traffic:
  • UDP, CBR
  • Permanent congestion at each hop
  • Packet size according to a real distribution
• Scheduling: priority queuing
Best-effort traffic packet size distribution
Tail drop
EF load
• Constant packet size (40 bytes of payload) and number of streams (40)
• Variable EF load: [10, 50]%
• Delay unit: 108.14 msec
• Observation: burstiness is a linear function of the packet rate (pack/sec)
EF load (2)
• One-way delay: both the average and the distribution are almost independent of the EF rate
• IPDV distribution: moderate improvement with load (tx unit: transmission time of one EF packet, 0.424 msec)
Number of EF streams
• Constant packet size (40 bytes of payload) and EF load (32%)
• Variable number of EF streams: [1, 100]
• Observation: asymptotic convergence as the number of streams grows
EF packet size
• Constant number of streams (40) and EF load (32%)
• Variable EF frame size: 40, 80, 120, 240 bytes (variable pack/sec rate)
• Delay unit: 113.89 msec
• Observations: moderate increase in burstiness ([1632, 1876] bytes); one-way delay increases, IPDV decreases
EF packet size (delay)
• A large packet size means a smaller packet rate, a different composition of the TX queue, and an increase in the time needed to empty the queue, e.g.:
  • 240 bytes: 240 pack/sec, TX queue = BEBEB, queuing time = 16.2 msec
  • 40 bytes: 720 pack/sec, TX queue = BEEEB, queuing time = 11.747 msec
• The longer the transmission queue, the larger the effect of the pack/sec rate (see the sketch below)
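A minimal sketch of this queue-drain reasoning: the time to empty a TX queue depends on its mix of best-effort (B) and EF (E) packets. The wire sizes and the 2 Mbit/s link rate are illustrative assumptions, not the measured testbed values, so the outputs will not reproduce the 16.2/11.747 msec figures above.

```python
# Time to drain a TX queue given its composition of best-effort (B)
# and EF (E) packets. Link rate and wire sizes are assumptions.

LINK_RATE_BPS = 2e6  # assumed bottleneck rate

def drain_time_ms(queue: str, sizes: dict[str, int]) -> float:
    """Time (msec) to transmit every packet currently in the queue."""
    total_bits = sum(sizes[pkt] * 8 for pkt in queue)
    return total_bits / LINK_RATE_BPS * 1e3

# Hypothetical wire sizes: 1500-byte best-effort packets; EF packets of
# 240-byte vs 40-byte payload plus headers.
print(drain_time_ms("BEBEB", {"B": 1500, "E": 268}))  # fewer, larger EF packets
print(drain_time_ms("BEEEB", {"B": 1500, "E": 68}))   # smaller, more frequent EF
```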
EF packet size (IPDV)
• IPDV is inversely proportional to the burst size
• Tradeoff between one-way delay and IPDV
WFQ and PQ: comparison
• Constant number of streams (40)
• Variable EF frame size: 40, 512 bytes; variable rate: [10, 50]%
• WFQ is less burstiness prone (interleaving of BE and EF), as the sketch below illustrates
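To illustrate the interleaving, here is a minimal sketch of a simplified WFQ order over a backlogged snapshot, serving packets by virtual finish time (F = F_prev + size/weight per flow); strict priority would instead send the entire EF backlog first. The packet sizes and weights are illustrative assumptions, not the routers' actual scheduler implementation.

```python
# Simplified WFQ: serve packets in increasing virtual finish time,
# F = F_prev + size/weight per continuously backlogged flow.

def wfq_order(backlog: dict[str, list[int]], weight: dict[str, float]) -> list[str]:
    """Return the per-packet service order for backlogged flows."""
    tagged = []
    for flow, sizes in backlog.items():
        finish = 0.0
        for size in sizes:
            finish += size / weight[flow]  # virtual finish time of this packet
            tagged.append((finish, flow))
    return [flow for _, flow in sorted(tagged)]

backlog = {"EF": [512, 512, 512], "BE": [1500, 1500, 1500]}
print(wfq_order(backlog, {"EF": 0.25, "BE": 0.75}))
# -> ['BE', 'EF', 'BE', 'EF', 'BE', 'EF']: WFQ interleaves the two classes
print(["EF"] * 3 + ["BE"] * 3)  # PQ order: the whole EF burst goes out first
```

The interleaving is what reduces burstiness downstream: EF packets leave spaced out between best-effort packets instead of clustered back to back.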
Conclusions and future work
• Aggregation produces packet loss due to packet clustering and the consequent tail drop
• Load: the primary factor; large effect on burstiness, minor effect on one-way delay
• Rate (pack/sec): large effect on one-way delay
• Number of EF streams: small dependency
• Tradeoff: shaping (at a few key aggregation points) and queue size tuning
• EF-based services: viable, but validation needed (future work)
References
• http://www.cnaf.infn.it/~ferrari/tfng/ds/
• http://www.cnaf.infn.it/~ferrari/tfng/qosmon/
• Report of activities (phase 2): http://www.cnaf.infn.it/~ferrari/tfng/ds/rep2-del.doc
• T. Ferrari, G. Pau, C. Raffaelli, "Priority Queuing Applied to Expedited Forwarding: a Measurement-Based Analysis", Mar 2000, http://www.cnaf.infn.it/~ferrari/tfng/ds/pqEFperf.pdf
• T. Ferrari, P. Chimento, "A Measurement-based Analysis of Expedited Forwarding PHB Mechanisms", IWQoS 2000, Feb 2000, in print, http://www.cnaf.infn.it/~ferrari/tfng/ds/iwqos2ktftant.doc
Overview of diffserv experiments
• Policing: single- and multi-parameter token buckets with TCP traffic
• Traffic metering and packet marking (PHB class selectors)
• Scheduling: WFQ, SCFQ, PQ
  • Capacity allocation between queues, class isolation
  • Queue dimensioning (buffer depth and TCP burst tolerance, TX queue)
  • Per-class service rate configuration
  • One-way delay and instantaneous packet delay variation
• Assured Forwarding: PHB differentiation through WRED
  • Throughput performance: packet drop probability, number of TCP streams per AF PHB, minimum threshold
• Expedited Forwarding:
  • Multiple congestion points
  • Multiple EF aggregation points
  • Variable load, number of streams, and packet size