
A Survey on TCP-Friendly Congestion Control






Presentation Transcript


  1. A Survey on TCP-Friendly Congestion Control. Prof. 童曉儒, Department of Management Information Systems, National Pingtung University of Science and Technology

  2. Outline • Introduction • TCP and TCP Friendliness • Classification of Congestion Control Schemes • Window-Based vs. Rate-Based • Unicast vs. Multicast • Single-Rate vs. Multi-Rate • End-to-End vs. Router-Supported • Rate Adaptation Protocol (RAP) • Receiver-driven Layered Congestion Control (RLC) • Conclusions

  3. Introduction (1/4) • Not all Internet applications use TCP, and those that do not may not follow TCP's concept of fairly sharing the available bandwidth. • Applications based on TCP • Hypertext Transfer Protocol (HTTP) • Simple Mail Transfer Protocol (SMTP) • File Transfer Protocol (FTP)

  4. Introduction (2/4) • The number of applications generating non-TCP traffic is constantly growing. • Internet audio players • IP telephony • Videoconferencing • Other real-time applications • Upon encountering congestion • All contending TCP flows reduce their data rates in an attempt to resolve the congestion. • The non-TCP flows continue to send at their original rate.

  5. Introduction(3/4) • TCP congestion control • end-to-end mechanism • assumes that end systems correctly follow the protocol • Coexistence of TCP flow and non-TCP flow (or faked TCP flow) • If one is greedy  unfairness • If one is malicious  congestion, DoS

  6. Introduction (4/4) • Since these applications commonly do not integrate TCP-compatible congestion control mechanisms, appropriate rate adaptation rules and mechanisms must be defined for non-TCP traffic that are compatible with the rate adaptation mechanism of TCP. • These rate adaptation rules should make non-TCP applications TCP-friendly and lead to a fair distribution of bandwidth.

  7. What is congestion? (1/2) • What is congestion? • The aggregate demand for bandwidth exceeds the available capacity of a link. • What will occur? • Performance degradation • Multiple packet losses • Low link utilization (low throughput) • High queueing delay • Congestion collapse

  8. What is congestion? (2/2) (Figure: sources on 10 Mb/s and 100 Mb/s links competing for a shared 1.5 Mb/s bottleneck link) • Different sources compete for resources inside the network • Why is it a problem? • Sources are not aware of the current state of resources • Sources are not aware of each other • In many situations this will result in < 1.5 Mb/s throughput (congestion collapse)

  9. RTT and RTO (Figure: host A's original transmission to B is lost; the sample RTT is measured from a transmission to its ACK, and the retransmission is sent once the RTO expires)

  10. Modeling TCP Throughput (1/2) • A basic model that approximates TCP's steady-state throughput T • The throughput of TCP depends mainly on the round-trip time tRTT, the retransmission timeout value tRTO, the segment size s, and the packet loss rate p. Using these parameters, an estimate of TCP's throughput can be derived. • This model is a simplification in that it does not take TCP timeouts into account.
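The basic model itself did not survive in the transcript (it was a slide image); the commonly used simple formula for TCP's steady-state throughput in the parameters named above is:

```latex
T \;\approx\; \frac{s}{t_{RTT}}\sqrt{\frac{3}{2p}}
  \;=\; \frac{1.22\,s}{t_{RTT}\sqrt{p}}
```

Note that tRTO does not appear, which is consistent with the remark that this simple model ignores timeouts.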

  11. Modeling TCP Throughput(2/2) • An example of a more complex model of TCP throughput • b is the number of packets acknowledged by each ACK and Wm is the maximum size of the congestion window. • The complex model takes into account rate reductions due to TCP timeouts.
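The complex model referenced here is also missing from the transcript; the widely cited equation with these parameters (b packets per ACK, maximum window Wm, timeout tRTO) is:

```latex
T \;=\; \min\!\left(
  \frac{W_m}{t_{RTT}},\;
  \frac{s}{\,t_{RTT}\sqrt{\dfrac{2bp}{3}}
    \;+\; t_{RTO}\,\min\!\left(1,\,3\sqrt{\dfrac{3bp}{8}}\right) p\,(1+32p^2)\,}
\right)
```

The second term in the denominator models the rate reductions due to timeouts that the simple model omits.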

  12. Additive Increase Multiplicative Decrease (AIMD) (1/3) • Multiplicative decrease: cut CongWin in half after a loss event • Additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events (probing) (Figure: sawtooth of a long-lived TCP connection)

  13. Additive Increase Multiplicative Decrease (AIMD) (2/3) • AIMD(a,b), with window size W • Increase parameter a, Decrease parameter b • Each RTT increase window to W+a • Upon loss event decrease to (1-b)W • TCP uses AIMD(1, ½) • Increase by 1 every RTT • Decrease by ½ upon loss • Smoother should have b < ½ • TCP-friendly should then have a < 1
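The AIMD(a, b) rule above can be sketched in a few lines; the function name is illustrative, not from the survey.

```python
# Sketch of the AIMD(a, b) update rule (illustrative helper name).
def aimd_update(window, a, b, loss):
    """Per-RTT update: grow to W + a without loss, shrink to (1 - b) * W on loss."""
    if loss:
        return (1 - b) * window
    return window + a

# TCP corresponds to AIMD(1, 1/2): +1 per RTT, halve on loss.
w = 10.0
w = aimd_update(w, a=1, b=0.5, loss=False)  # additive increase: 11.0
w = aimd_update(w, a=1, b=0.5, loss=True)   # multiplicative decrease: 5.5
```

For the smoother variants mentioned above, the standard TCP-friendly pairing is a = 3b / (2 - b), e.g., AIMD(0.2, 1/8), which keeps the long-term throughput equal to TCP's while reacting less violently to each loss.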

  14. Additive Increase Multiplicative Decrease (AIMD) (3/3) (Figure: the window-size sawtooth over round trips)

  15. Slow Start Example (Figure: the window doubles every RTT; 1 packet is sent in round 0, 2 in round 1, 4 in round 2, and 8 in round 3, spaced one packet time apart)

  16. Slow Start and Congestion Avoidance (Figure: cwnd grows exponentially during slow start up to the slow start threshold, then linearly during congestion avoidance; the example assumes that ACKs are not delayed)

  17. Congestion Control: Timeout (Figure: after a timeout with cwnd = 20, ssthresh is set to 10; after a later timeout, ssthresh drops to 8)
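The timeout behaviour on this slide (cwnd = 20 leading to ssthresh = 10) can be sketched as follows, assuming Reno-style rules at per-RTT granularity; the function names are illustrative.

```python
# Minimal sketch of TCP's reaction to a timeout (Reno-style, per-RTT granularity).
def on_timeout(cwnd):
    """Timeout: ssthresh becomes half the current window, cwnd restarts at 1."""
    ssthresh = max(cwnd // 2, 2)
    return 1, ssthresh

def on_rtt(cwnd, ssthresh):
    """One loss-free RTT: double in slow start (capped at ssthresh),
    then add one segment per RTT in congestion avoidance."""
    if cwnd < ssthresh:
        return min(cwnd * 2, ssthresh)
    return cwnd + 1

cwnd, ssthresh = on_timeout(20)  # cwnd = 1, ssthresh = 10, as in the figure
```

From there the window climbs 1, 2, 4, 8, 10 in slow start and then linearly, reproducing the exponential-then-linear shape of the previous slide.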

  18. TCP Saw Tooth Behavior (Figure: the congestion window over time, showing the initial slow start, fast retransmit and recovery, and slow start to pace packets after a timeout; timeouts may still occur)

  19. TCP's Rate Control (Figure: congestion window size (cwnd) over time for Reno/NewReno, with the exponential slow start phase up to the ssthresh limit, followed by the congestion avoidance phase of additive increase and multiplicative decrease (AIMD))

  20. TCP Window Flow Control (Figure: the TCP header of a sent and a received packet (source port, destination port, sequence number, acknowledgment, header length/flags, window, checksum, urgent pointer, options), and the sender buffer partitioned into acknowledged, sent, to-be-sent, and outside-window regions as the application writes data)

  21. Sliding Window Flow Control • The sliding window protocol is performed at the byte level.

  22. TCP Fairness • Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K (Figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R)

  23. TCP Friendliness • TCP friendliness: "their long-term throughput does not exceed the throughput of a conformant TCP connection under the same conditions" • TCP friendliness ensures that coexisting TCP flows are not treated unfairly by non-TCP flows. • What matters is the effect of a non-TCP flow on competing TCP flows, rather than the throughput of the non-TCP flow itself.
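The definition on slide 23 can be checked directly against a throughput model. The sketch below uses the simple model T = (s / tRTT) * sqrt(3 / (2p)); the function names are illustrative.

```python
import math

def tcp_throughput(s, rtt, p):
    """Simple-model estimate of a conformant TCP's throughput (bytes/s)
    for segment size s (bytes), round-trip time rtt (s), loss rate p."""
    return (s / rtt) * math.sqrt(3.0 / (2.0 * p))

def is_tcp_friendly(flow_rate, s, rtt, p):
    """TCP-friendly: the flow's long-term rate does not exceed what a
    conformant TCP connection would achieve under the same conditions."""
    return flow_rate <= tcp_throughput(s, rtt, p)

# Example: 1460-byte segments, 100 ms RTT, 1% loss allow roughly 179 KB/s.
```

A streaming flow sending 100 KB/s under these conditions would be TCP-friendly; one sending 300 KB/s would not.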

  24. TCP-Friendly Flows • Unresponsive flows get an unfair share of network bandwidth, and AQM techniques will punish them. • Streaming flows need to be TCP-friendly. • A TCP-friendly flow's bandwidth is no more than that of a conformant TCP flow running under comparable network conditions.

  25. TCP friendly congestion control (1/3) • TCP-friendly: a protocol that behaves like TCP • It backs off under congestion, uses a fair share of resources, and obeys the TCP long-term throughput relation. • Internet requirement: new transport protocols must be TCP-friendly • This also applies to application layer protocols transmitting over UDP, e.g., real-time telephony or streaming applications. • Rate control is implemented on top of UDP as part of the application.

  26. TCP friendly congestion control(2/3) • Non-TCP friendly: • A protocol that takes more than its fair share of bandwidth (greedy). • May cause fluctuations in network load and result in congestion collapse. • How to protect your protocol against non-TCP friendly greedy protocols? • RED is designed to solve this problem to some extent.

  27. TCP friendly congestion control (3/3) • The average rate is the same as that of TCP travelling along the same data path (rate computed via the throughput equation), but the CM protocol has less rate variance. (Figure: rate over time for a TCP-friendly CM protocol vs. TCP, with equal average rate)

  28. Classification of Congestion Control Schemes • Window-Based vs. Rate-Based • Unicast vs. Multicast • Single-Rate vs. Multi-Rate • End-to-End vs. Router-Supported

  29. Window-Based vs. Rate-Based • Window-Based • Algorithms that belong to the window-based category use a congestion window at the sender or at the receiver(s) to ensure TCP friendliness. • Rate-Based • Rate-based congestion control achieves TCP friendliness by dynamically adapting the transmission rate according to some network feedback mechanism that indicates congestion. • Ex. Simple AIMD schemes mimic the behavior of TCP congestion control.

  30. Unicast vs. Multicast(1/3) • The design of good multicast congestion control protocols is far more difficult than the design of unicast protocols. • Multicast congestion control schemes ideally should scale to large receiver sets and be able to cope with heterogeneous network conditions at the receivers. • For example, if for all receivers the sender transmits packets at the same rate, care has to be taken as to how the sending rate is decreased in case of network congestion.

  31. Unicast vs. Multicast (2/3) • In large multicast sessions, receivers may experience uncorrelated loss, so it is likely that most transmitted packets are lost by at least one receiver. If the sender responded to each of these losses by decreasing the congestion window, the transmission would likely stall after a certain length of time. This problem is known as the loss path multiplicity problem [5].

  32. Unicast vs. Multicast (3/3) • The authors of [6] show that window-based congestion control can be TCP-friendly without knowing the RTT, whereas rate-based congestion control does need this information in order to be TCP-friendly. This is an important insight, since RTTs are difficult to obtain in a scalable fashion for multicast communication without support from the network.

  33. Single-Rate vs. Multi-Rate (1/2) • A common criterion for classifying TCP-friendly multicast congestion control protocols is whether they operate at a single rate or use a multirate approach. • Unicast transport protocols are confined to single-rate schemes.

  34. Single-Rate vs. Multi-Rate(2/2) • Single-Rate • Data is sent to all receivers at the same rate. • This limits the scalability of the mechanism, since all receivers are restricted to the rate that is TCP-friendly for the bottleneck receiver. • Multi-Rate • Allow for a more flexible allocation of bandwidth along the different network paths. • A sender divides the data into several layers and transmits them to different multicast groups. Each receiver can individually select to join as many groups as permitted by the bandwidth bottleneck between that receiver and the sender.

  35. Multi-rate Congestion control • Use layered multicast • A sender divides the data into several layers and transmits them to different multicast groups. • Each receiver can individually select to join as many groups as permitted by the bandwidth bottleneck between that receiver and the sender. • Congestion control is performed indirectly by the group management and routing mechanisms of the underlying multicast protocol.
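The join-as-many-layers-as-the-bottleneck-permits rule described above can be sketched as follows; the layer rates and function name are illustrative, not taken from a specific protocol.

```python
# Sketch of receiver-driven layer subscription in layered multicast.
def layers_to_join(layer_rates, bottleneck):
    """Join layers in order until adding the next one would exceed the
    receiver's bottleneck bandwidth; return (layers joined, total rate)."""
    joined, total = 0, 0
    for rate in layer_rates:
        if total + rate > bottleneck:
            break
        total += rate
        joined += 1
    return joined, total

# A receiver behind a 100 kb/s bottleneck joins only the first two layers.
layers_to_join([32, 32, 64, 128], bottleneck=100)  # (2, 64)
```

Each receiver runs this decision independently, which is what gives multi-rate schemes their flexible per-path bandwidth allocation.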

  36. End-to-End vs. Router-Supported(1/7) • End-to-End • Many of the TCP-friendly schemes proposed are designed for best effort IP networks that do not provide any additional router mechanisms to support the protocols. Thus, they can readily be deployed in today’s Internet. • Separated into sender-based and receiver-based approaches.

  37. End-to-End vs. Router-Supported (2/7) • Sender-based approaches • The receivers only provide feedback, while the responsibility of adjusting the rate or window size to achieve TCP friendliness lies solely with the sender. • Receiver-based approaches • The receivers themselves use congestion information to adjust the rate at which they receive data, e.g., by joining and leaving layers in multi-rate schemes.

  38. End-to-End vs. Router-Supported (3/7) • Router-supported • Congestion control schemes that rely on additional functionality in the network. • The design of congestion control protocols, and particularly fair sharing of resources, can be considerably facilitated by placing intelligence in the network (e.g., in routers or separate agents).

  39. End-to-End vs. Router-Supported(4/7) • router-supported • Ex. Multicast protocols can benefit from additional network functionality such as feedback aggregation, hierarchical RTT measurements, management of (sub)groups of receivers, or modification of the routers’ queuing strategies.

  40. End-to-End vs. Router-Supported(5/7) • Disadvantage • End-to-End • End-to-end congestion control has the disadvantage of relying on the collaboration of the end systems. • Experience in the current Internet has shown that this cannot always be assumed: greedy users or applications may use non TCP-friendly mechanisms to gain more bandwidth. • When a router discovers a flow which does not exhibit TCP-friendly behavior, the router might drop the packets of that flow with a higher probability than the packets of TCP-friendly flows.

  41. End-to-End vs. Router-Supported(6/7) • Disadvantage • router-supported • While ultimately fair sharing of resources in the presence of unresponsive or non-TCP-friendly flows can only be achieved with router support, this mechanism is difficult to deploy, since changes to the Internet infrastructure take time and are costly in terms of money and effort.

  42. End-to-End vs. Router-Supported(7/7)

  43. A classification scheme for TCP-friendly protocols

  44. Rate Adaptation Protocol (RAP): an end-to-end, rate-based, unicast protocol

  45. Rate Adaptation Protocol (RAP)(1/3) • Goal: develop an end-to-end TCP-friendly RAP for semi-reliable rate-based applications (e.g. playback of real-time streams) • RAP employs an additive-increase, multiplicative-decrease (AIMD) algorithm with implicit loss feedback to control congestion • RAP separates congestion control from error control • RAP is fair as long as TCP operates in a predictable AIMD mode • Fine-grain rate adaptation extends range of fairness • RED enhances fairness between TCP and RAP traffic • RAP does not exhibit inherent instability

  46. RAP Architecture (Figure: RAP in a typical end-to-end architecture for real-time playback applications in the Internet)

  47. Rate Adaptation Protocol (RAP) (2/3) • RAP is implemented at the source host • Each ACK packet contains the sequence number of the corresponding delivered data packet • From the ACKs, the RAP source can detect losses and sample the RTT • Decision function: if no congestion is detected, periodically increase the rate; if congestion is detected, immediately decrease the rate • Congestion is detected through timeouts and gaps in the sequence space • The timeout is calculated based on the Jacobson/Karels algorithm using the RTT estimate (SRTT)

  48. Decision Function • RAP couples timer-based loss detection to packet transmission - before sending a new packet, source checks for a potential timeout among outstanding packets using most recent SRTT • A packet is considered lost if an ACK implies delivery of 3 packets after the missing one (cf. fast recovery) • RAP provides robustness against ACK losses by adding redundancy to ACK packets
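The ACK-gap rule above (a packet is considered lost once ACKs imply delivery of three packets sent after it) can be sketched as follows; the function name is illustrative.

```python
# Sketch of RAP's ACK-based loss detection (cf. TCP fast retransmit).
def detect_losses(sent, acked):
    """Return packets presumed lost: unacknowledged packets for which
    at least three later packets are known to have been delivered."""
    acked = set(acked)
    lost = []
    for seq in sent:
        if seq in acked:
            continue
        if sum(1 for a in acked if a > seq) >= 3:
            lost.append(seq)
    return lost

detect_losses([1, 2, 3, 4, 5, 6], acked=[1, 3, 4, 5, 6])  # [2]
```

With only one or two later packets acknowledged, no loss is declared yet, which mirrors TCP's triple-duplicate-ACK threshold.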

  49. Increase/Decrease Algorithm • In absence of packet loss, increase rate additively in a step-like fashion • Upon detecting congestion, decrease rate multiplicatively • Rate controlled by adjusting inter-packet gap (IPG)

  50. Decision Frequency • RAP adjusts the IPG once every SRTT • If the rate is increased by one packet per SRTT, the slope of the rate is inversely related to the square of the SRTT (cf. the linear increase of TCP) • RAP emulates the coarse-grain rate adjustment of TCP • Like TCP, RAP is unfair to flows with longer RTTs
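Putting slides 49 and 50 together, the IPG adjustment can be sketched as below, assuming an additive step of one packet per SRTT and a halving of the rate on congestion; the function name is illustrative.

```python
# Sketch of RAP's rate control through the inter-packet gap (IPG).
# The transmission rate is packet_size / ipg; decisions occur once per SRTT.
def rap_adjust(ipg, srtt, congested):
    if congested:
        return 2.0 * ipg  # doubling the gap halves the rate (multiplicative decrease)
    # Additive increase: one extra packet per SRTT each SRTT,
    # i.e., rate -> rate + 1/srtt, so ipg -> (srtt * ipg) / (srtt + ipg).
    return (srtt * ipg) / (srtt + ipg)

# With SRTT = 1 s and IPG = 1 s (1 pkt/s), one loss-free step yields 2 pkt/s.
```

Because the additive step is 1/SRTT per SRTT, the rate's slope scales as 1/SRTT², which is why RAP shares TCP's bias against long-RTT flows.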
