
Handout # 13: Active Queue Management; Cells and Packets



Presentation Transcript


  1. Handout # 13: Active Queue Management; Cells and Packets CSC 2203 – Packet Switch and Network Architectures Professor Yashar Ganjali, Department of Computer Science, University of Toronto yganjali@cs.toronto.edu http://www.cs.toronto.edu/~yganjali

  2. Announcements • Final presentations • Nov. 28th in class • 15 minutes for each talk (including Q&A) • Final report: Dec. 7th University of Toronto – Fall 2012

  3. Active Queue Management*: Motivation • So far we have focused on admissible traffic. • What if an output link is congested? • Majority of flows use TCP for congestion control. • What if someone does not? * Thanks to Rong Pan, Cisco Systems.

  4. Part I: Outline • Queue Management • Drops as a way to give feedback to TCP sources • Part of a closed loop • Traditional Queue Management • Drop tail • Problems • Active Queue Management • RED • CHOKe

  5. Queue Management: Drops/Marks as a Feedback Mechanism to Regulate TCP End Hosts • End hosts send TCP traffic. • Network elements (switches/routers) generate drops/marks based on their queue sizes. • Drops/marks are regulation messages to end hosts. • TCP sources respond to drops/marks by cutting down their windows, i.e., their sending rate.

  6. Drop Tail Problems • Lock out • Full queue • Bias against bursty traffic • Global synchronization

  7. Tail Drop – Lock Out (figure: a queue at its maximum length, monopolized by a few flows)

  8. Tail Drop – Full Queue • Only drop packets when the queue is full. • Feedback reaches sources when it is too late. • Long steady-state delay.

  9. Bias Against Bursty Traffic • When a smooth flow reaches a full queue, it loses few packets. • A bursty flow, on the other hand, loses many packets.

  10. Tail Drop – Global Synchronization • Different flows see packet drops at (almost) the same time. • Flows react similarly at the same point in time. • Thus we get synchronization.

  11. Alternative Queue Management Schemes • Drop from front on a full queue. • Drop at random on a full queue. • Both solve the lock-out problem. • Both still have the full-queue problem.

  12. Active Queue Management – Goals • Solve tail-drop problems • No lock-out behavior • No global synchronization • No bias against bursty flows • Provide better QoS at a router • Low steady-state delay • Lower packet dropping

  13. Random Early Detection (RED) – decision for each arriving packet: if AvgQsize ≤ Minth, admit the new packet; otherwise, if AvgQsize > Maxth, drop the new packet; otherwise, admit the packet with a probability p.

  14. RED Dropping Curve (figure: drop probability vs. average queue size; the probability is 0 below minth, rises linearly to maxp at maxth, and jumps to 1 beyond maxth)
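The dropping curve can be written as a small helper function; a minimal sketch in Python (the function and parameter names are illustrative, not from the lecture):

```python
def red_drop_probability(avg_q, min_th, max_th, max_p):
    """Piecewise-linear RED dropping curve.

    Below min_th: never drop. Between min_th and max_th: drop
    probability rises linearly from 0 to max_p. At or above
    max_th: always drop.
    """
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

With the lecture's later parameters (minth, maxth) = (100, 200) and, say, maxp = 0.1, an average queue of 150 packets sits halfway up the ramp and is dropped with probability 0.05.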

  15. Effectiveness of RED • Packets are randomly dropped. • Each flow has the same probability of being discarded. • Question. Does it address lock-out and global synchronization?

  16. Effectiveness of RED • Drop packets probabilistically in anticipation of congestion. • Not when the queue is full • Use qavg to decide the packet dropping probability. • Allow instantaneous bursts. • Question. Does it address the full-queue problem and the bias against bursty traffic?
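RED's qavg is conventionally an exponentially weighted moving average of the instantaneous queue length, which is what lets instantaneous bursts through without inflating the average. A minimal sketch (the weight value is an illustrative assumption, not from the slides):

```python
def update_avg_qsize(avg_q, inst_q, weight=0.002):
    """One EWMA step: qavg <- (1 - w) * qavg + w * q.

    A small weight makes qavg track the long-term queue level,
    so short bursts barely move it.
    """
    return (1 - weight) * avg_q + weight * inst_q
```

For example, a sudden jump of the instantaneous queue from 100 to 200 packets moves qavg by only 0.2 packets per step with weight 0.002.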

  17. What QoS does RED Provide? • Lower buffer delay: good interactive service. • qavg is controlled to be small. • Given responsive flows: packet dropping is reduced. • Early congestion indication allows traffic to throttle back before congestion. • Given responsive flows: fair bandwidth allocation.

  18. Bad News – Unresponsive End Hosts (figure: a mix of TCP and UDP flows sharing the Internet, which is connectionless and best-effort)

  19. Scheduling & Queue Management • What do routers want to do? • Isolate unresponsive flows (e.g. UDP) • Provide Quality of Service to all users • Two ways to do it: • Scheduling algorithms • FQ, CSFQ (core-stateless), SFQ (stochastic) • Queue management algorithms • RED, FRED, SRED

  20. Fair Queuing vs. RED • FQ: isolation from non-adaptive flows, but hard/expensive to implement. • RED: easy to implement, but no isolation from non-adaptive flows.

  21. Enhancing AQM: Fairness • Provide isolation from unresponsive flows. • Be as simple as RED.

  22. From RED to CHOKe – CHOKe's decision for each arriving packet: if AvgQsize ≤ Minth, admit the new packet; otherwise, draw a packet at random from the queue; if its flow ID is the same as the new packet's, drop both matched packets; otherwise, if AvgQsize > Maxth, drop the new packet; otherwise, admit the packet with a probability p.
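The CHOKe decision can be sketched in a few lines of Python. This is a simplified model, not the authors' implementation: the function name, the (flow_id, payload) queue representation, and the RED-style linear drop probability in the middle region are all assumptions.

```python
import random

def choke_admit(queue, pkt, avg_q, min_th, max_th, max_p, rng=random):
    """One CHOKe decision. queue is a list of (flow_id, payload)
    tuples; pkt is the arriving (flow_id, payload) tuple.
    Returns True if pkt is admitted (and appended), False if dropped."""
    flow_id, _ = pkt
    if avg_q <= min_th:            # uncongested: plain admit
        queue.append(pkt)
        return True
    if queue:                      # draw a random packet from the queue
        i = rng.randrange(len(queue))
        if queue[i][0] == flow_id: # flow IDs match: drop both packets
            del queue[i]
            return False
    if avg_q > max_th:             # hard-drop region
        return False
    p = max_p * (avg_q - min_th) / (max_th - min_th)
    if rng.random() < p:           # RED-style probabilistic drop
        return False
    queue.append(pkt)
    return True
```

The key property is visible in the match test: a flow occupying many queue slots is both more likely to be sampled and more likely to match its own arrivals, so the unresponsive flow pays twice.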

  23. Random Sampling from Queue • A randomly chosen packet is more likely to be from the unresponsive (e.g. UDP) flow. • An adversary can’t fool the system.

  24. Comparison of Flow ID • Compare the sampled packet's flow ID with that of the incoming packet: • More accurate. • Reduces the chance of dropping packets from TCP-friendly flows.

  25. Dropping Mechanism • Drop packets (both the incoming packet and matching samples). • More arrivals -> more drops. • Gives users a disincentive to send more.

  26. Simulation Setup (figure: m TCP sources S(1)…S(m) and n UDP sources S(m+1)…S(m+n) connect over 10 Mb/s links to a 1 Mb/s bottleneck link, which leads to the corresponding sinks D(1)…D(m+n) over 10 Mb/s links)

  27. Network Setup Parameters • 32 TCP flows, 1 UDP flow • Maximum window size of every TCP flow = 300 • All links have a propagation delay of 1 ms • FIFO buffer size = 300 packets • All packet sizes = 1 Kbyte • RED: (minth, maxth) = (100, 200) packets

  28. 32 TCP, 1 UDP (one sample) (figure: per-flow bandwidth shares; the values 99.6%, 74.1%, and 23.0% are marked)

  29. 32 TCP, 5 UDP (5 samples)

  30. How Many Samples to Take? (figure: the region between minth and maxth split into ranges R1, R2, …, Rk of the average queue size) • Draw a different number of samples for different Qlenavg. • Fewer samples when Qlenavg is close to minth. • More samples when Qlenavg is close to maxth.
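One simple way to realize this heuristic is a linear ramp in the number of samples. The linear shape and the function name are illustrative assumptions; the slide only says that the region between minth and maxth is split into ranges with increasing sample counts.

```python
def num_choke_samples(avg_q, min_th, max_th, k_max):
    """Number of random samples to draw from the queue:
    0 below min_th, ramping up to k_max at max_th."""
    if avg_q <= min_th:
        return 0
    if avg_q >= max_th:
        return k_max
    frac = (avg_q - min_th) / (max_th - min_th)
    return max(1, round(frac * k_max))  # at least one sample once congested
```

Drawing more samples near maxth makes it more likely that a packet from the dominant unresponsive flow is matched and dropped exactly when the queue is most congested.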

  31. Summary • Traditional Queue Management • Drop Tail, Drop Front, Drop Random • Problems: lock-out, full queue, global synchronization, bias against bursty traffic • Active Queue Management • RED: can’t handle unresponsive flows • CHOKe: penalizes unresponsive flows

  32. References • R. Pan, B. Prabhakar, K. Psounis, “CHOKe, a stateless active queue management scheme for approximating fair bandwidth allocation,” Proceedings of IEEE INFOCOM, 2:942-951, Tel Aviv, Israel, March 2000. • A. Tang, J. Wang, S. H. Low, “Understanding CHOKe: throughput and spatial characteristics,” IEEE/ACM Transactions on Networking, Vol. 12, No. 4, 2004.

  33. Cell Switching vs. Packet Switching (figure: N×N input-queued switch; each input i holds virtual output queues VOQi1…VOQiN, connected to outputs 1…N through a switching fabric) • Slotted time • Fixed-size data units (cells) • Input-queued switch • Virtual output queues to avoid HoL blocking

  34. Motivation (figure: packets pass through a splitter into the VOQs, through the switch, and through a combiner at the output) • Packets have different lengths • Splitter module • Combiner module (memory) • Packet delays more important than cell delays • Can we use packet-based scheduling?

  35. Part II: Outline • Cell-based algorithms review: • Stability concept • Maximum weight matching algorithm • Packet-based algorithms • PB-MWM and its stability • PB algorithms classification • Work conserving • Waiting • Waiting PB algorithms

  36. Notation – Arrival Rate (same N×N input-queued switch with VOQs and a switching fabric) • Aij(n): number of cells arrived to VOQij up to time n. • Dij(n): number of cells departed from VOQij up to time n. • Qij(n): number of cells queued at VOQij at time n. • Arrival rate: λij = lim n→∞ Aij(n)/n, which exists almost surely (SLLN).

  37. Admissibility and Rate Stability • The arrival rate matrix is “admissible” iff no input or output is oversubscribed: Σj λij < 1 for every input i, and Σi λij < 1 for every output j. • A switch under a matching algorithm is “stable” (rate stable) if, almost surely, lim n→∞ Dij(n)/n = λij for all i, j.
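The admissibility condition is easy to check directly; a minimal sketch (function name illustrative; strict inequality assumed, matching the no-oversubscription condition above):

```python
def is_admissible(lam):
    """Check admissibility of an N x N arrival-rate matrix lam:
    every row sum (load on an input) and every column sum
    (load on an output) must be strictly less than 1."""
    n = len(lam)
    rows_ok = all(sum(lam[i][j] for j in range(n)) < 1 for i in range(n))
    cols_ok = all(sum(lam[i][j] for i in range(n)) < 1 for j in range(n))
    return rows_ok and cols_ok
```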

  38. MWM Algorithm • A matching pairs each input with at most one output (and each output with at most one input); its weight is the sum of the lengths Qij(n) of the matched VOQs. • MWM: At each time slot, select the matching with maximum weight.
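For small N, the maximum weight matching can be found by exhaustive search over input-to-output permutations. This brute-force sketch is for illustration only; practical schedulers use far more efficient matching algorithms.

```python
from itertools import permutations

def max_weight_matching(q):
    """Brute-force MWM for an N x N queue-length matrix q.
    Returns (perm, weight), where perm[i] is the output matched
    to input i and weight is the sum of the matched Qij."""
    n = len(q)
    best, best_w = None, -1
    for perm in permutations(range(n)):
        w = sum(q[i][perm[i]] for i in range(n))
        if w > best_w:
            best, best_w = perm, w
    return best, best_w
```

On q = [[3, 1], [1, 2]], the identity matching weighs 3 + 2 = 5 while the crossed matching weighs only 1 + 1 = 2, so MWM picks the identity.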

  39. MWM Stability • MWM is stable under i.i.d. Bernoulli traffic. • MWM is stable for any admissible traffic. • Fluid model technique • How about packet-based MWM? N. McKeown, V. Anantharam, and J. Walrand, “Achieving 100% throughput in an input-queued switch,” INFOCOM 1996, pp. 296-302. J. G. Dai and B. Prabhakar, “The throughput of data switches with or without speedup,” INFOCOM 2000, pp. 556-564.

  40. Part II: Outline • Cell-based algorithms review: • Stability concept • Maximum weight matching algorithm • Packet-based algorithms • PB-MWM and its stability • PB algorithms classification • Work conserving • Waiting • Waiting PB algorithms

  41–43. Packet-Based Switching (animation over three slides) • Once the scheduler starts transmitting the first cell of a packet, it continues until the whole packet is received at the output port.

  44. Cell-based to Packet-based • Consider a cell-based algorithm X. • At each time slot: • Busy ports: in the middle of sending a packet. • Free ports: I/O ports that can be assigned freely. • PB-X: • Keep the assignments used by busy ports. • Find a sub-matching for the free ports using algorithm X.
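The PB-X construction can be sketched as a wrapper around any cell-based matcher X. The names and the dict-based schedule representation are assumptions made for illustration:

```python
def pb_schedule_step(busy, q, match_free):
    """One time slot of PB-X.

    busy: dict {input: output} for port pairs mid-packet; these
    assignments are kept untouched. match_free: the cell-based
    algorithm X, called on the free inputs/outputs only; any
    callable (q, free_in, free_out) -> {input: output}."""
    n = len(q)
    free_in = [i for i in range(n) if i not in busy]
    used_out = set(busy.values())
    free_out = [j for j in range(n) if j not in used_out]
    sub = match_free(q, free_in, free_out)  # sub-matching on free ports
    schedule = dict(busy)                   # busy assignments carry over
    schedule.update(sub)
    return schedule
```

Plugging in a maximum-weight matcher restricted to the free ports gives PB-MWM; plugging in any other cell-based algorithm X gives the corresponding PB-X.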

  45. Stability of PB-MWM • PB-MWM is stable under any “regenerative admissible traffic”. • Traffic is called “regenerative” if, on average, it requires a finite time to reach the state where all ports are free, no matter which fixed matching is kept in use. • Bernoulli i.i.d. is a regenerative traffic. M. A. Marsan, A. Bianco, P. Giaccone, E. Leonardi, and F. Neri, “Packet Scheduling in Input-Queued Cell-based Switches,” INFOCOM 2001, pp. 1085-1094.

  46. Proof Outline • Matching m(n) is “k-imperfect” if its weight is within k of the maximum matching weight at time n. • For PB-MWM, the matchings used are k-imperfect for a bounded k on average. • Lemma: A scheduling algorithm is rate stable if the average value of its weight is larger than the maximum matching weight minus a bounded constant.

  47. Question • CB-MWM is stable under any admissible traffic. • PB-MWM is stable under any admissible regenerative traffic. • Is the regenerative condition necessary?

  48–50. Counter-example (animation over three slides; figures only)
