
Presentation Transcript


  1. March 29: Scheduling

  2. What is Packet Scheduling? • Decide when and which packet to send on the output link • Select the next packet for transmission [Figure: a classifier and arbitrator separate arriving traffic into flows 1..n; per-flow queues feed a scheduler that, together with buffer management, picks the next packet for the output link]

  3. Packet Scheduling? • Packet transmission decision (which packet to send): by desired transmission time? by QoS? smallest one first? • Schedulers differ in how they compute desired transmission times • Controls the interaction among: traffic in the same class, traffic in classes with different QoS requirements, and traffic in administrative classes • Packet storage decision (when to drop) is handled by buffer management [Figure: flows 1..n feed the scheduler and buffer management]

  4. Characteristics of a Scheduling Algorithm • Flow isolation: how to guarantee service to one flow independent of the behavior of other flows? • Support for excess traffic and fairness: is the scheduler work-conserving? If a flow sends more than it is entitled to but resources are available, can it take advantage of them, and if so, by how much? • Complexity: computational complexity and management complexity • Efficiency: how does it scale with the number of flows and packets?

  5. Known scheduling algorithms • FIFO, LIFO • Fair Queueing • Max-Min fairness • Bit-by-bit round robin • Weighted bit-by-bit Fair Queueing (WFQ) • WFQ in a fluid flow system → Generalized Processor Sharing (GPS) • Packetized FQ (PGPS)

  6. First-Come First-Served (FCFS) • Algorithm: packets are served in the order they arrive • Departure time is arrival time plus the time to empty the buffer content ahead of the packet • First packet in is the first packet out • Properties: very simple to implement • No flow isolation or bandwidth guarantees • One flow can hog the entire link if unconstrained • Maximum delay is proportional to buffer size, not packet class
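As a rough illustration of the departure-time rule above, here is a minimal sketch (not from the slides; the function and variable names are mine) of an FCFS output link that computes each packet's departure time as its arrival time plus the time needed to drain the backlog ahead of it:

```python
def fcfs_departures(arrivals, link_rate):
    """arrivals: list of (arrival_time_s, size_bytes), sorted by arrival time.
    Returns the departure time of each packet on a link of link_rate bytes/s."""
    departures = []
    link_free_at = 0.0            # time at which the link finishes its current backlog
    for arrival_time, size in arrivals:
        start = max(arrival_time, link_free_at)   # wait for the backlog to drain
        finish = start + size / link_rate         # then transmit this packet
        departures.append(finish)
        link_free_at = finish
    return departures

# Two 1500-byte packets arriving back to back on a 1 Mb/s (125000 B/s) link:
print(fcfs_departures([(0.0, 1500), (0.001, 1500)], 125000))
# departure times of roughly 0.012 s and 0.024 s
```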

  7. Priority Queuing • Multiple FCFS queues, where higher-priority queues are always served before lower-priority ones • Departure time is arrival time plus the time to empty the buffer content, plus a variable time that depends on the buffer content and arrivals in higher-priority queues • Class i is guaranteed better delay than class j for i < j • Lower-priority classes can be starved • Remains simple to implement (for a few classes)
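A minimal sketch of the strict-priority service rule described above (illustrative only; the class layout and names are assumptions, not from the slides):

```python
from collections import deque

class PriorityScheduler:
    """Strict priority: always serve the lowest-numbered non-empty class."""
    def __init__(self, num_classes):
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, cls, packet):
        self.queues[cls].append(packet)

    def dequeue(self):
        # Class 0 is highest priority; lower classes can starve if it stays busy.
        for q in self.queues:
            if q:
                return q.popleft()
        return None  # all queues empty

sched = PriorityScheduler(3)
sched.enqueue(2, "low")
sched.enqueue(0, "high")
print(sched.dequeue(), sched.dequeue())  # 'high' is served before 'low'
```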

  8. Round-Robin • Packets are classified and sent to n queues • Queues are serviced in order 0..n-1 • Problems: cannot offer bandwidth or delay guarantees • Packets can "park" in a queue while empty queues are checked for servicing • Insensitive to packet size (inherently unfair) [Figure: flows 1..n feed the scheduler and buffer management]

  9. Weighted Round-Robin • Also called Windowed Priority Queuing • Each flow has its own queue and weight wi • Packets are sent to n queues, as in Priority Queuing • The server visits each queue in turn and transmits wi packets (or bits) from queue i • A limited number of packets is processed per queue per servicing round: wi packets for queue i [Figure: flows 1..n with weights w1..wn feed the scheduler and buffer management]
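The per-round service rule can be sketched as follows (a simplified packet-count WRR assuming equal-size packets; names are illustrative, not from the slides). Setting every weight to 1 reduces it to the plain round-robin of the previous slide:

```python
from collections import deque

def weighted_round_robin(queues, weights, rounds):
    """queues: list of deques of packets; weights[i]: packets served from queue i per round.
    Returns the service order over the given number of rounds."""
    order = []
    for _ in range(rounds):
        for q, w in zip(queues, weights):
            for _ in range(w):            # up to w packets from this queue per visit
                if q:
                    order.append(q.popleft())
    return order

q1 = deque(["a1", "a2", "a3"])
q2 = deque(["b1", "b2", "b3", "b4"])
print(weighted_round_robin([q1, q2], weights=[1, 3], rounds=2))
# ['a1', 'b1', 'b2', 'b3', 'a2', 'b4']
```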

  10. FIFO Scheduling • How are packets served in the queueing system? [Figure: packets from several flows queue in arrival order in front of a single server, shown along a time axis]

  11. FIFO Scheduling • Is it fair? • FIFO favors the most greedy flow • FIFO makes it hard to control the delay [Figure: per-flow delay and number of served packets plotted against their averages for flow i]

  12. Fair Queueing? • Can provide QoS: • Fairness: make sure that a given flow gets enough transmission opportunities when it has packets waiting to be transmitted (is backlogged) • Delay: ensure an upper bound on the maximum (average) amount of time a packet can wait in the buffer • Jitter: provide a bound on the delay difference of consecutive packet transmissions (for the same flow) • Loss: a function of buffer management • Distribution of excess bandwidth (E) across active sessions: a fair allocation gives each of the N active connections E/N plus its reserved bandwidth

  13. Example 1 • Flows A (entering on a 10 Mb/s link) and B (entering on a 100 Mb/s link) share router R1's 1.1 Mb/s output link toward C; e.g. each is an http flow identified by a given (IP SA, IP DA, TCP SP, TCP DP) • What is the "fair" allocation: (0.55 Mb/s, 0.55 Mb/s) or (1 Mb/s, 0.1 Mb/s)?

  14. Example 2 • Flows A (on a 10 Mb/s link) and B (on a 100 Mb/s link) again share router R1's 1.1 Mb/s output link toward D; flow B is limited to 0.2 Mb/s downstream toward C, and the allocations on the shared link are marked "?" • What is the "fair" allocation?

  15. Max-Min Fairness • How can an Internet router "allocate" different rates to different flows? • First, let's see how a router can allocate the "same" rate to different flows → the Max-Min Fairness Algorithm

  16. Max-Min Fairness Algorithm: a common way to allocate rates to flows • N flows share a link of rate C; flow f wishes to send at rate W(f) and is allocated rate R(f) • 1. Pick the flow f with the smallest requested rate • 2. If W(f) ≤ C/N, then set R(f) = W(f) • 3. If W(f) > C/N, then set R(f) = C/N • 4. Set N = N − 1 and C = C − R(f) • 5. If N > 0, go to step 1
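A direct transcription of this algorithm into Python (a sketch; the function and variable names are mine, not from the slides):

```python
def max_min_allocation(demands, capacity):
    """demands: dict flow_name -> requested rate W(f); capacity: link rate C.
    Returns dict flow_name -> allocated rate R(f) per the max-min algorithm."""
    allocation = {}
    remaining = dict(demands)
    C = capacity
    while remaining:
        N = len(remaining)
        fair_share = C / N
        # Pick the flow with the smallest requested rate.
        f = min(remaining, key=remaining.get)
        # Give it its demand if below the fair share, otherwise the fair share.
        allocation[f] = min(remaining[f], fair_share)
        C -= allocation[f]
        del remaining[f]
    return allocation

# Reproduces Example 1 on the next slide: demands (0.2, 10, 100) sharing C = 1.1 Mb/s
print(max_min_allocation({"f1": 0.2, "f2": 10, "f3": 100}, 1.1))
# f1 gets 0.2, f2 and f3 each get about 0.45
```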

  17. Max-Min Fairness Example 1 • 3 flows share a link of rate 1.1 Mb/s; flow f wishes to send at rate W(f) and is allocated rate R(f); C = 1.1 Mb/s, N = 3 • Sorted demands: W(f1) = 0.2, W(f2) = 10, W(f3) = 100 (the smallest remaining flow is picked each round) • Round 1: W(f1) < 1.1/3 (0.366), so R(f1) = 0.2 • Round 2: W(f2) > 0.9/2 (0.45), so R(f2) = 0.45 • Round 3: W(f3) > 0.45/1 (0.45), so R(f3) = 0.45 • Allocation: (0.2, 0.45, 0.45)

  18. Max-Min Fairness Example 2 • 4 flows share a link of rate 1 Mb/s; flow f wishes to send at rate W(f) and is allocated rate R(f); C = 1 Mb/s, N = 4 • Sorted demands: W(f1) = 0.1, W(f2) = 0.5, W(f4) = 5, W(f3) = 10 (the smallest remaining flow is picked each round) • Round 1: W(f1) < 1/4 (0.25), so R(f1) = 0.1 • Round 2: W(f2) > 0.9/3 (0.3), so R(f2) = 0.3 • Round 3: W(f4) > 0.6/2 (0.3), so R(f4) = 0.3 • Round 4: W(f3) > 0.3/1 (0.3), so R(f3) = 0.3 • Allocation: (0.1, 0.3, 0.3, 0.3)

  19. Max-Min Fairness Example 3 • 3 flows share a link of rate 10 Mb/s; flow f wishes to send at rate W(f) and is allocated rate R(f); C = 10 Mb/s, N = 3 • Demands: W(f1) = 2, W(f2) = 6, W(f3) = 8 (the smallest remaining flow is picked each round) • Round 1: W(f1) < 10/3 (3.33), so R(f1) = 2 • Round 2: W(f2) > 8/2 (4), so R(f2) = 4 • Round 3: W(f3) > 4/1 (4), so R(f3) = 4 • Allocation: R(2) = 2, R(6) = 4, R(8) = 4

  20. Bit-by-Bit Fair Queueing • Packets belonging to a flow are placed in a FIFO; this is called "per-flow queueing" • FIFOs are scheduled one bit at a time, in round-robin fashion • This is called Bit-by-Bit Fair Queueing [Figure: classification places arriving packets into per-flow queues 1..N; bit-by-bit round-robin scheduling gives the service order ... f1, f2, f3, f4, f5, f6, ... fN, ..., f1, ...]

  21. Weighted Bit-by-Bit Fair Queueing • Likewise, flows can be allocated different rates by servicing a different number of bits from each flow during each round • Example: with R(f1) = 1 and R(f2) = R(f3) = R(f4) = 3 sharing the output link C at router R1, the order of service for the four queues is ... f1, f2, f2, f2, f3, f3, f3, f4, f4, f4, f1, ... • Also called "Generalized Processor Sharing (GPS)"

  22. R(f): Fair Rate Computation Example • Associate a weight wi with each flow i • If the link is congested, compute R(f) such that R(fi) = min(W(fi), λ·wi), with λ chosen so that Σi R(fi) = C (each flow gets a share proportional to its weight unless it demands less) • Example: C = 10; demands (weights) are 8 (w1 = 3), 6 (w2 = 1), 2 (w3 = 1); the smallest remaining flow is picked each round • Round 1: the flow with demand 2 asks for no more than its weighted share 10/5 · 1 = 2, so R(2) = 2 • Round 2: the flow with demand 6 exceeds its weighted share 8/4 · 1 = 2, so R(6) = 2 • Round 3: the flow with demand 8 exceeds its weighted share 6/3 · 3 = 6, so R(8) = 6 • Allocation: R(8) = 6, R(6) = 2, R(2) = 2
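The weighted fair-rate computation can be sketched as follows (illustrative code, not from the slides; names are mine). It divides the remaining capacity by the remaining total weight each round and picks the flow with the smallest demand relative to its weight, which reproduces the slide's allocation:

```python
def weighted_max_min(flows, capacity):
    """flows: dict name -> (demand W(f), weight w); capacity: link rate C.
    Returns dict name -> allocated rate R(f)."""
    allocation = {}
    remaining = dict(flows)
    C = capacity
    while remaining:
        total_weight = sum(w for _, w in remaining.values())
        per_weight = C / total_weight                 # capacity per unit of weight
        # Pick the flow whose demand is smallest relative to its weight.
        f = min(remaining, key=lambda k: remaining[k][0] / remaining[k][1])
        demand, w = remaining[f]
        allocation[f] = min(demand, per_weight * w)   # capped at its weighted share
        C -= allocation[f]
        del remaining[f]
    return allocation

# The slide's example: demands 8, 6, 2 with weights 3, 1, 1 on a 10 Mb/s link
print(weighted_max_min({"f8": (8, 3), "f6": (6, 1), "f2": (2, 1)}, 10))
# allocates 6 to f8, 2 to f6, 2 to f2, matching R(8)=6, R(6)=2, R(2)=2
```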

  23. Generalized Processor Sharing: fluid-flow FQ • Link rate = C; six flows with weights 5, 1, 1, 1, 1, 1 • The red session (weight 5) has packets backlogged between time 0 and 10; the other sessions have packets continuously backlogged • While backlogged, the red session receives rate 5C/Σwi and each other session 1C/Σwi [Figure: per-flow service rates over the time axis from 0 to 15]

  24. Generalized Processor Sharing • A work-conserving GPS is defined by Wi(t1, t2) / W(t1, t2) = wi / Σj∈B(t) wj for every flow i backlogged throughout [t1, t2), where • wi – weight of flow i • Wi(t1, t2) – total service received by flow i during [t1, t2) • W(t1, t2) – total service allocated to all flows during [t1, t2) • B(t) – the set of flows backlogged at time t
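From this definition, the instantaneous service rate of a backlogged flow follows directly (a short derivation added here for clarity; it assumes the work-conserving link serves at fixed rate C while backlogged):

```latex
% Over [t_1, t_2) a busy work-conserving link delivers W(t_1, t_2) = C (t_2 - t_1),
% so a flow i that stays backlogged throughout the interval receives
W_i(t_1, t_2) = \frac{w_i}{\sum_{j \in B(t)} w_j} \, C \,(t_2 - t_1)
\quad\Longrightarrow\quad
r_i(t) = \frac{w_i}{\sum_{j \in B(t)} w_j} \, C .
% These are the rates 5C/\sum w_i and 1C/\sum w_i shown on the previous slide.
```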

  25. GPS Example [Figure: input (IN) and output (OUT) service for three flows with weights w1 = 1/2, w2 = 1/3, w3 = 1/6; during the first interval (dt) only flows 2 and 3 are backlogged and receive rates r2 = 2/3 and r3 = 1/3; during the second interval (dt2) all three flows are served at rates 1/2, 1/3, and 1/6]

  26. GPS Example (continued) • Three flows with weights/rates w1 = 1/2, w2 = 1/3, w3 = 1/6 • Initially, only flows 2 and 3 are active (interval dt), so they split the link as r2 = 2/3 and r3 = 1/3 • Eventually flows 1, 2, and 3 are all active (interval dt2) and receive rates 1/2, 1/3, and 1/6

  27. Properties of GPS • End-to-end delay bounds for guaranteed service [Parekh and Gallager ‘93] • Fair allocation of bandwidth for best effort service [Demers et al. ‘89, Parekh and Gallager ‘92] • Work-conserving for high link utilization

  28. Summary of Fluid-Flow Fair Queueing • In a fluid flow system, FQ reduces to bit-by-bit round robin among flows • Each flow receives R(fi), where fi is the flow's arrival rate • Weighted bit-by-bit Fair Queueing (WFQ): associate a weight with each flow [Demers et al. '89] • In a fluid flow system it reduces to weighted bit-by-bit round robin • WFQ in a fluid flow system → Generalized Processor Sharing (GPS) [Parekh & Gallager '92]

  29. Packet vs. Fluid System • GPS is defined in an idealized fluid-flow model: multiple queues can be serviced simultaneously, and there is no non-preemption unit (service is preemptive) • Real systems are packet systems: one queue is served at any given time, and packet transmission is not preempted • Goal: define packet algorithms that approximate the fluid system while maintaining most of its important properties

  30. Packet Approximation of Fluid System • Standard technique for approximating fluid GPS: select the packet that finishes first in GPS (assuming there are no future arrivals) • Important property of GPS: the finishing order of packets currently in the system is independent of future arrivals • Implementation is based on virtual time: assign a virtual finish time to each packet upon arrival, and serve packets in increasing order of virtual finish time

  31. Packetized GPS Algorithm • Problem: we need to serve a whole packet at a time • Solution: 1. Determine the time at which a packet p would complete if we served flows bit-by-bit; call this the packet's finishing time F(p). 2. Serve packets in order of increasing finishing time • Theorem: packet p will depart before F(p) + Trmax • This is "Packetized Generalized Processor Sharing (PGPS)"
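A minimal sketch of the finish-time bookkeeping (illustrative only; it uses a simplified self-clocked virtual time rather than a full fluid GPS emulation, so it approximates rather than exactly reproduces the PGPS order; all class and variable names are mine):

```python
import heapq

class WFQScheduler:
    """Approximate PGPS/WFQ: packets are served in increasing virtual finish time.
    Virtual time is approximated here by the finish tag of the last dequeued packet,
    a self-clocked simplification of the fluid GPS emulator."""
    def __init__(self):
        self.virtual_time = 0.0
        self.last_finish = {}       # flow -> finish tag of its previous packet
        self.heap = []              # (finish_tag, seq, flow, length)
        self.seq = 0

    def enqueue(self, flow, length, weight):
        start = max(self.last_finish.get(flow, 0.0), self.virtual_time)
        finish = start + length / weight          # F = max(F_prev, V) + L / w
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow, length))
        self.seq += 1

    def dequeue(self):
        if not self.heap:
            return None
        finish, _, flow, length = heapq.heappop(self.heap)
        self.virtual_time = finish                # advance the approximate clock
        return flow, length

sched = WFQScheduler()
sched.enqueue("f1", 1000, weight=1)
sched.enqueue("f2", 1000, weight=3)   # larger weight -> smaller finish tag
print(sched.dequeue(), sched.dequeue())  # f2 is served before f1
```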

  32. From Fluid to Packets • Deviation of the packet system (PGPS = WFQ) from the fluid model (GPS), and how to minimize it: • Packet transmission cannot be interrupted once started • Transmission opportunities are allocated at packet granularity • The decision cannot be changed even if higher-priority (higher allocated rate) packets arrive • Approach: emulate the fluid system (GPS) as closely as possible • The desired transmission time is the finish transmission time in the fluid system • Select the packet with the smallest finish transmission time in the fluid system (assuming there are no more arrivals after this time)

  33. Approximating GPS with WFQ • PGPS (also called Weighted Fair Queueing) selects the first packet that finishes in GPS, i.e. serves packets in GPS finishing order [Figure: the ideal fluid GPS service order compared with the WFQ service order based on virtual finish times, for packets 1 through 10 over the time axis 0 to 10]

  34. Packetized GPS (Weighted Fair Queueing) • A GPS emulator takes the arrival times and packet lengths and determines the virtual departure times (VDTs) of packets • Packets are transmitted in order of their VDTs [Figure: flows 1..n share a buffer in front of a link of rate C; the GPS emulator feeds VDTs to the scheduler]

  35. Deterministic Analysis of a Router Queue • Model of a router buffer with cumulative arrival process A(t), cumulative departure (service) process D(t), link rate R, and buffer occupancy B(t) • The FIFO delay d(t) is how long a byte arriving at time t waits before it is served [Figure: cumulative bytes A(t) and D(t) plotted over time; the vertical gap between the curves is the backlog B(t) and the horizontal gap is the delay d(t)]
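Stated as equations (my own summary of the cumulative-arrival picture described above, assuming A(t) and D(t) are cumulative byte counts):

```latex
% Backlog: the vertical distance between the cumulative curves
B(t) = A(t) - D(t)
% FIFO delay: the horizontal distance (time until the departures catch up to A(t))
d(t) = \min\{\Delta \ge 0 : D(t + \Delta) \ge A(t)\}
% The server never gets ahead of the arrivals and drains at most at link rate R
D(t) \le A(t)
```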

  36. • Assume equal packet sizes • If a packet arrives and queue space is available, enqueue it; otherwise drop it • At every tick, if the queue is not empty, dequeue a packet and send it • Example: a 25 MB/s computer and network feeding a router that forwards 2 MB/s in steady state • A 1 MB burst (40 ms), average rate 2 MB/s • ρ = 2 MB/s; C = 1 MB (a 25 MB/s source for 40 ms); bucket capacity 250 KB; tokens allowed at a rate of 2 MB/s

  37. • We want to allow a speed-up when a large burst comes • Tokens arrive at a rate of one per ΔT • Tokens are saved during idle periods, up to a maximum of n • Packets are never discarded; the host is regulated instead

  38. • Token capacity C = 250 KB; tokens arrive at a rate of ρ = 2 MB/s • Assume the bucket is full when the 1 MB burst arrives: can the source send at peak rate for the whole 40 ms? No • While bursting at peak rate M for time S, the source spends the stored tokens plus the newly arriving ones: C + ρS = MS, so S = C/(M − ρ) • S = 250 KB / (25 MB/s − 2 MB/s) = 250/23 ms, which is about 11 ms
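A minimal token-bucket sketch tying these slides together (illustrative; the time step and names are mine, not from the slides). With C = 250 KB, ρ = 2 MB/s, and a peak rate of 25 MB/s, the simulated full-speed burst lasts roughly 11 ms, matching the calculation above:

```python
def full_speed_burst_duration(bucket_bytes, token_rate, peak_rate, dt=1e-4):
    """Simulate a full token bucket drained by a source sending at peak_rate.
    Returns how long (in seconds) the source can keep sending at full speed."""
    tokens = bucket_bytes           # bucket starts full
    t = 0.0
    while tokens + token_rate * dt >= peak_rate * dt:   # enough for another step?
        tokens += token_rate * dt   # tokens keep arriving at rate rho
        tokens -= peak_rate * dt    # the burst consumes them at rate M
        t += dt
    return t

# C = 250 KB, rho = 2 MB/s, M = 25 MB/s  ->  about 0.011 s, i.e. ~11 ms
print(full_speed_burst_duration(250e3, 2e6, 25e6))
```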

  39. Resource Reservation • Packet Scheduling and Integrated Services
