
Packet Scheduling and Buffer Management in Routers (A Step Toward Quality-of-service)



  1. Packet Scheduling and Buffer Management in Routers (A Step Toward Quality-of-service)

  2. Best-effort vs. Guaranteed-service • The current IP network does not provide any guarantee for the service that a packet will receive. • The only guarantee is that the IP network will do its best to deliver your packets to their destinations. • This type of service is called “best-effort” service. Most people are already used to it, e.g., for FTP, email, WWW, etc. • However, some people may want guaranteed service for their performance-critical applications. • For example, NCTU may want a minimum bandwidth at all times between its campus and the Internet.

  3. Some Applications Require Quality-of-service to Work Well • End-to-end bandwidth, delay, delay-jitter, and loss rate are important for some applications’ performance. • For example, a video conference application may require a minimum bandwidth of 1.5 Mbps, a maximum delay of 150 ms, and a maximum packet loss rate of 2% at all times. Otherwise, people won’t want to use it. • Even for best-effort applications, we want the available bandwidth in the network to be fairly shared by all best-effort applications. • If there are N users, we would want each of them to receive 1/N of the bandwidth.

  4. Asking Network Users to Behave Themselves Is Impractical • The most commonly used FIFO packet scheduling and drop-tail buffer management schemes in today’s routers/switches cannot provide any quality-of-service (QoS) guarantee or fairness. • For example, a greedy user (e.g., a UDP sender or a modified, overly aggressive TCP) can easily use up all the bandwidth of a bottleneck link. • Asking all users of the Internet to behave themselves is impractical. • We have to do something (packet scheduling and buffer management) in the network (routers) to enforce fairness and provide QoS.

  5. Packet Scheduling Determines the Order In Which Packets in the Queue Are Served • Think about this! If there is no packet in the queue, do we need to do anything special other than FIFO? • No. We do not need to do anything special. The fact that no packet is queued up in the router means that there is no congestion at all. The bandwidth is larger than the traffic demand, there is no queueing delay, the delay-jitter is 0, and the loss rate is 0%. Why would we need to take special actions? • A packet scheduling scheme can control the service that a packet receives in the router. • E.g., more bandwidth for a user, less delay for a user’s packets, or a lower loss rate for a user’s packets.

  6. Buffer Management Determines Which Packet to Drop When the Buffer Is Full • When a packet arrives at the router and the buffer is full, which packet should be selected to be dropped? • The incoming one: • Then this is the most commonly used drop-tail scheme. • A packet in the queue: • Then, from which user’s connection? • Clearly, buffer management can control the packet loss rate experienced by a user’s connection. • Actually, with single-FIFO scheduling it also affects bandwidth allocation. • Buffer management and packet scheduling are orthogonal; there are many possible combinations.

  7. The Conservation Law • The simplest possible scheduling discipline is first-come-first-served (FCFS or FIFO). • As we noted before, FIFO cannot allocate some connections lower mean queueing delays than others. • A more sophisticated scheduling discipline can achieve this objective. • However, the sum of the mean queueing delays received by the set of multiplexed connections, weighted by their share of the link’s load, is independent of the scheduling discipline (see the formula below). • That is, a scheduling discipline can reduce a particular connection’s mean delay, compared with FIFO, only at the expense of another connection.
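The law can be stated compactly. The notation below is my own (the slide gives only the prose version); this is Kleinrock's conservation law for work-conserving schedulers:

    \sum_{n=1}^{N} \rho_n \, \bar{q}_n = \text{constant}, \qquad \rho_n = \lambda_n \bar{x}_n

where \lambda_n is connection n's packet arrival rate, \bar{x}_n its mean per-packet service time, \rho_n its share of the link's load, and \bar{q}_n its mean queueing delay. Because the weighted sum is fixed, decreasing one connection's \bar{q}_n necessarily increases another's.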

  8. Requirements for a Scheduling Discipline • Ease of implementation • If there are N connections passing through the router, we would want the router to make a scheduling decision in O(1), rather than O(N), steps. • Also, we would want the router to keep only O(1), rather than O(N), scheduling state. • Fairness and protection • We want the router to provide fair service and protection to contending users; e.g., FIFO does not provide protection. • Performance bounds • We want the router to provide performance bounds such as delay, bandwidth, and loss rate. • Ease and efficiency of admission control • We want the router to quickly decide whether it can meet a new connection’s performance bounds without jeopardizing the performance of existing connections.

  9. There Are Many Definitions of Fairness • For example: • Suppose that connections C1 and C2 are contending for link L’s bandwidth. • We may want to let C1 and C2 each get ½ of L’s bandwidth on L. (easy to implement) • However, suppose that C1 uses 3 links whereas C2 uses only 2 links in the network; should C1 get 2/5 and C2 get 3/5 of L’s bandwidth on L? (hard to implement) • Which bandwidth allocation is fair? [Figure: connections C1 and C2 sharing link L]

  10. The Max-min Fairness Definition • Informal rules: • Resources are allocated in order of increasing demand. • No source gets a resource share larger than its demand. • Sources with unsatisfied demands get an equal share of the resource. • Formal definition: • Let x1, x2, …, xn be the resource demands, sorted so that x1 <= x2 <= … <= xn, and let the server have a capacity of C. • Initially we give C/n to each source. • If C/n > x1, distribute the excess (C/n – x1)/(n–1) equally to sources 2, 3, …, n; else stop. • Now source 2 receives C/n + (C/n – x1)/(n–1). If this is larger than its demand, distribute the excess equally to the other n–2 sources; else stop. • Repeat the above process. (A runnable sketch follows the example on the next slide.)

  11. An Example of Max-min Bandwidth Allocation • Compute the max-min fair allocation for a set of four sources s1, s2, s3, and s4 with demands 2, 2.6, 4, and 5 when the resource has a capacity of 10. • 10/4 = 2.5. Because 2.5 > 2 (s1’s demand), we distribute the excess (2.5 – 2)/3 = 0.167 among s2, s3, and s4. • Now s2 receives 2.5 + 0.167 = 2.667, which exceeds its demand of 2.6, so we distribute the excess (2.667 – 2.6)/2 = 0.033 among s3 and s4. • Now s3 receives 2.5 + 0.167 + 0.033 = 2.7, which is less than its demand of 4. • So s4 also receives 2.5 + 0.167 + 0.033 = 2.7.
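A runnable sketch of this procedure in Python (the function name and structure are my own; only the algorithm and the example numbers come from the slides):

    # Max-min fair allocation: visit demands in increasing order, giving
    # every still-unsatisfied source an equal split of what remains.
    def max_min_allocate(demands, capacity):
        order = sorted(range(len(demands)), key=lambda i: demands[i])
        alloc = [0.0] * len(demands)
        remaining, left = capacity, len(demands)
        for i in order:
            share = remaining / left            # equal share of the remainder
            alloc[i] = min(demands[i], share)   # never exceed the demand
            remaining -= alloc[i]
            left -= 1
        return alloc

    # The slide's example: demands 2, 2.6, 4, 5 and capacity 10
    print(max_min_allocate([2, 2.6, 4, 5], 10))  # [2, 2.6, 2.7, 2.7] (up to float rounding)

Splitting the remainder equally among the still-unsatisfied sources is equivalent to the slide's step-by-step redistribution of each excess.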

  12. Why Use Max-min? • Satisfy the users with smaller demands first. • We favor those users who need little bandwidth over those who need much bandwidth. (meeting the demands of the poor before the demands of the rich) • Isn’t this allocation attractive? • We give a source only the bandwidth that it really needs, • so no bandwidth is wasted. • The result is a globally fair bandwidth allocation. • Called “max-min” because it maximizes the minimum share of a source whose demand is not fully satisfied.

  13. An Example of Max-min Bandwidth Allocation [Figure: a network example in which each link’s bandwidth is divided max-min fairly; the labels 1/4, 3/8, and 3/4 are the shares the connections receive on each link.]

  14. Performance Bounds • A bandwidth bound requires that a connection receive at least a minimum bandwidth from the network. • A delay bound is a deterministic or statistical bound on some parameter of the delay distribution (e.g., worst-case delay, mean delay, 99th-percentile delay). • A delay-jitter bound requires that the network bound the difference between the largest and smallest delays received by packets on a connection. • A loss bound requires that the fraction of packets lost on a connection be smaller than some bound.

  15. Definitions of Delay and Delay Jitter [Figure illustrating the definitions of delay and delay jitter]

  16. Delay-jitter Bound Is Important for Elastic Playback Applications • If the delay-jitter is bounded, the receiver can eliminate delay variation in the network by delaying the playback of the first packet by the delay-jitter bound in an elasticity buffer, then playing packets out from the connection a constant time after they were transmitted. • Thus, the larger the delay-jitter bound, the larger the elasticity buffer (the required size is play_rate * delay-jitter bound; see the example below). • Thus, if the delay-jitter bound is too large, it is only useful for non-interactive streaming applications (such as video-on-demand) and not for interactive applications (such as video conferencing).
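For concreteness, a small back-of-the-envelope calculation (the numbers are illustrative assumptions, not from the slides):

    # Sizing the elasticity buffer as play_rate * delay-jitter bound
    play_rate_bps = 1_500_000    # assumed 1.5 Mbps stream
    jitter_bound_s = 0.1         # assumed 100 ms delay-jitter bound
    buffer_bytes = play_rate_bps / 8 * jitter_bound_s
    print(buffer_bytes)          # 18750.0 bytes, roughly 18 KB

Note that a 100 ms jitter bound also adds 100 ms of playback latency, which is why a large bound is tolerable for VOD but not for conferencing.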

  17. Removing Delay-jitter by Delaying the Playback of the First Packet

  18. Four Degrees of Freedom in Designing a Scheduling Discipline • The number of priority levels • Higher-priority packets are always served before lower-priority packets (see the sketch below). • Thus, higher-priority packets can get whatever bandwidth they need, the lowest queueing delay, and a 0% packet loss rate. • On the other hand, low-priority packets may be starved and get no service at all. • Whether each level is work-conserving or non-work-conserving • The degree of aggregation • The service order within a level
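A minimal sketch of the first degree of freedom, strict priority service (the class and names are my own illustration):

    import heapq

    # Strict priority scheduling: the highest-priority non-empty level is
    # always served first, so the lowest levels can be starved.
    class PriorityScheduler:
        def __init__(self):
            self._heap = []   # entries are (priority, seq, packet)
            self._seq = 0     # tie-breaker keeping FIFO order within a level

        def enqueue(self, priority, packet):   # smaller number = higher priority
            heapq.heappush(self._heap, (priority, self._seq, packet))
            self._seq += 1

        def dequeue(self):
            return heapq.heappop(self._heap)[2] if self._heap else None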

  19. Work-conserving vs. Non-work-conserving Disciplines • A work-conserving scheduler is idle only when there is no packet awaiting service. • In contrast, a non-work-conserving scheduler may be idle even if it has packets to serve. • Why use a non-work-conserving scheduler? • By idling away time, it can make the traffic arriving at downstream routers more predictable. • Thus, the required buffer size and delay-jitter can be reduced.

  20. A Work-conserving Scheduler May Cause Smooth Traffic to Become Bursty

  21. Aggregation of Traffic Can Reduce State in Routers • If a router keeps per-connection state, it is able to provide protection and guaranteed service for a connection, but this is not scalable. • Aggregation can reduce the amount of state by classifying several users’ traffic into the same class and treating them the same. • The problem is that different users’ traffic in a class cannot be protected from each other. • Integrated services (per-connection state) vs. differentiated services (per-class state) are two active research areas in the Internet community.

  22. Generalized Processor Sharing (GPS) Can Provide Max-min Bandwidth Allocation • Uses logical per-connection queueing. • Visits each non-empty queue in turn and serves an infinitesimally small amount of data from each queue. • If a connection queue has no packets, the scheduler just skips it and serves the next queue. • Can provide max-min fairness. • A non-backlogged connection queue means that the bandwidth that GPS would give it (1/N) is larger than the connection’s demand. (Take care of the poor first.) • However, because GPS skips empty queues, no bandwidth is wasted. • The saved bandwidth is evenly distributed among all other connections because of the round-robin service order.

  23. (Weighted) Round-robin Scheduling • Although GPS is the ideal scheduling scheme, it cannot be implemented. • This is because a packet, once its transmission has started, cannot be interrupted. • RR can reasonably approximate GPS when all connections have the same weight and all packets have the same size. (See the previous max-min bandwidth allocation.) • If connections have different weights to request different proportions of the shared bandwidth, WRR can be used. • Problems: • If packets can be of different sizes, a WRR server must know each source’s mean packet size in advance. • It is fair only over time scales longer than a round time, which may be very large.

  24. Example 1 of WRR • Suppose that connections A, B, and C have the same packet size and weights 0.5, 0.75, and 1.0. How many packets from each connection should a round-robin server serve in each round? • We normalize the weights so that they are all integers: 2 : 3 : 4. • Then in each round, the server serves 2 packets from connection A, 3 from B, and 4 from C.

  25. Example 2 of WRR • Suppose that connections A, B, and C have mean packet sizes of 50, 500, and 1500 bytes, and weights 0.5, 0.75, and 1.0. How many packets from each connection should a round-robin server serve in each round? • First, we divide the weights by the mean packet sizes to obtain the normalized weights 0.01, 0.0015, and 0.000667. • Second, we convert them to the smallest integers with the same ratio: 60 : 9 : 4 (see the sketch below). • This results in 3000 bytes from A, 4500 bytes from B, and 6000 bytes from C per round, which is exactly 0.5 : 0.75 : 1.0.
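A sketch of this normalization in Python (the helper name is my own; the two steps mirror the slide):

    from fractions import Fraction
    from functools import reduce
    from math import gcd, lcm

    # WRR with variable packet sizes: divide each weight by the mean
    # packet size, then scale the ratios to the smallest integers.
    def wrr_packets_per_round(weights, mean_sizes):
        ratios = [Fraction(str(w)) / s for w, s in zip(weights, mean_sizes)]
        scale = reduce(lcm, (r.denominator for r in ratios))
        counts = [int(r * scale) for r in ratios]
        g = reduce(gcd, counts)
        return [c // g for c in counts]

    # The slide's Example 2: weights 0.5, 0.75, 1.0; sizes 50, 500, 1500
    print(wrr_packets_per_round([0.5, 0.75, 1.0], [50, 500, 1500]))  # [60, 9, 4]

(math.lcm requires Python 3.9 or newer.)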

  26. Deficit Round-robin Scheduling • DRR modifies WRR to let it handle variable packet sizes without knowing the mean packet size of each connection in advance. • Easy to implement (see the sketch below).
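A sketch of DRR (the class and names are mine; the quantum/deficit mechanism follows the published algorithm):

    from collections import deque

    # Deficit Round-Robin: each queue earns a quantum of bytes per round;
    # unused allowance carries over in a deficit counter, so variable-size
    # packets are served fairly without knowing mean sizes in advance.
    class DRRScheduler:
        def __init__(self, quanta):
            self.quanta = quanta                     # bytes added per round
            self.queues = [deque() for _ in quanta]  # entries are (size, data)
            self.deficit = [0] * len(quanta)

        def enqueue(self, i, size, data):
            self.queues[i].append((size, data))

        def run_round(self):
            served = []
            for i, q in enumerate(self.queues):
                if not q:
                    self.deficit[i] = 0              # idle queues keep no credit
                    continue
                self.deficit[i] += self.quanta[i]
                while q and q[0][0] <= self.deficit[i]:
                    size, data = q.popleft()
                    self.deficit[i] -= size
                    served.append(data)
            return served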

  27. Priority Dropping • If congestion occurs and the buffer becomes full, when a higher-priority packet comes in, the router will drop a low-priority packet in the queue and let the high-priority packet enter the queue. • Application 1: multi-layer video stream encoding. • A video stream is composed of an essential layer (high priority) and an enhancement layer (low priority). • Application 2: let guaranteed-service packets be marked as high-priority packets and best-effort packets be marked as low-priority packets. • Then an ISP can provide guaranteed service while keeping the utilization of the network high.
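A sketch of the eviction decision (the types and names are my own illustration):

    from dataclasses import dataclass

    HIGH, LOW = 0, 1

    @dataclass
    class Packet:
        priority: int    # HIGH = essential layer, LOW = enhancement layer
        data: bytes = b""

    def admit(queue, packet, capacity):
        # Priority dropping: a full buffer evicts a queued low-priority
        # packet to make room for an arriving high-priority one.
        if len(queue) < capacity:
            queue.append(packet)
            return True
        if packet.priority == HIGH:
            for i, queued in enumerate(queue):
                if queued.priority == LOW:
                    del queue[i]              # drop a low-priority packet
                    queue.append(packet)      # admit the high-priority arrival
                    return True
        return False                          # the arrival itself is dropped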

  28. Overloaded or Early Drop • Overloaded drop • Drop a packet only when the buffer is full. • Early drop • Start dropping packets before the buffer becomes full. • Early random drop • When the instantaneous queue length exceeds a threshold, start randomly dropping incoming packets. • Random early detection (RED) • Two main improvements: • Use an exponentially weighted average queue length so that bursty traffic can still pass successfully. • The packet dropping probability is a linear function of the average queue length. • RED performs much better than drop-tail (FIFO) because it avoids bursty packet dropping, which can easily cause TCP to time out.
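A simplified sketch of RED's drop decision (the parameter values are illustrative; the full algorithm also spaces successive drops out, which is omitted here):

    import random

    class RED:
        def __init__(self, min_th=5.0, max_th=15.0, max_p=0.1, weight=0.002):
            self.min_th, self.max_th = min_th, max_th
            self.max_p, self.weight = max_p, weight
            self.avg = 0.0    # exponentially weighted average queue length

        def should_drop(self, queue_len):
            # The EWMA reacts slowly, so short bursts pass undropped
            self.avg += self.weight * (queue_len - self.avg)
            if self.avg < self.min_th:
                return False
            if self.avg >= self.max_th:
                return True
            # Drop probability grows linearly between the two thresholds
            frac = (self.avg - self.min_th) / (self.max_th - self.min_th)
            return random.random() < self.max_p * frac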

  29. Drop Positions • Drop the packet at the tail of the queue • Simple, easy to implement. • Fair? (intuitively fair) • Drop the packet at the head of the queue • Simple, easy to implement. • The receiver and sender can detect the packet drop, and thus reduce the sending rate, sooner. • Fair? (intuitively unfair) • Drop a packet at a random position in the queue • Not as easy to implement. • In the queue, packets of a greedy connection are more likely to be dropped. (This is also used as a defense against TCP SYN flooding.) • Fair? (intuitively fair)
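A sketch of the random-position variant (the function is my own): because a greedy connection occupies more of the queue, a uniformly chosen victim is more likely to be one of its packets.

    import random

    def drop_random(queue):
        # 'queue' is a list of buffered packets; remove one chosen uniformly
        if queue:
            del queue[random.randrange(len(queue))]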

  30. Drop From Head vs. Drop From Tail

  31. Conclusions • The many packet scheduling and buffer management schemes discussed here are only local controls that a router can apply to provide some degree of quality-of-service. (This is also called a “per-hop behavior,” PHB, in recent research.) • How to compose a series of PHBs to meet an end-to-end quality-of-service agreement is still an open question.
