
Congestion Control Algorithms of TCP in Emerging Networks


Presentation Transcript


  1. Congestion Control Algorithms of TCP in Emerging Networks Sumitha Bhandarkar Under the Guidance of Dr. A. L. Narasimha Reddy September 16, 2005

  2. Motivation Why TCP Congestion Control ? • Designed in the early ’80s • Still the most predominant protocol on the net • Continuously evolves • IETF is developing an RFC to keep track of TCP changes! • Has “issues” in emerging networks • We aim to identify problems and propose solutions for TCP in high-speed networks

  3. Motivation • Link speeds have increased dramatically • 270 TB collected by PHENIX (Pioneering High Energy Nuclear Interaction eXperiment) • Data transferred from Brookhaven National Laboratory, NY, to the RIKEN research center, Tokyo • Typical rate 250 Mbps, peak rate 600 Mbps • OC-48 (2.4 Gbps) from Brookhaven to ESnet, trans-Pacific line (10 Gbps) served by SINET to Japan • Used GridFTP (parallel connections with data striping) Source : CERN Courier, Vol. 45, No. 7

  4. Motivation • Historically, high-speed links present only at the core • High levels of multiplexing (low per-flow rates) • New architectures for high-speed routers • Now, high-speed links are available for transfer between two endpoints • Low levels of multiplexing (high per-flow rates)

  5. Outline • TCP on high-speed links with low multiplexing • Design, analysis and evaluation of aggressive probing mechanism (LTCP) • Impact of high RTT • Impact on router buffers and loss rates (LTCP-RCS) • TCP on high-speed links with high multiplexing • Impact of packet reordering (TCP-DCR) • Future Work

  6. Where We are ... • TCP on high-speed links with low multiplexing • Design, analysis and evaluation of aggressive probing mechanism (LTCP) • Impact of high RTT • Impact on router buffers and loss rates (LTCP-RCS) • TCP on high-speed links with high multiplexing • Impact of packet reordering (TCP-DCR) • Future Work

  7. TCP in High-speed Networks Motivation TCP’s one-per-RTT increase does not scale well: for RTT = 100 ms and a packet size of 1500 bytes, sustaining 10 Gbps requires an average congestion window of roughly 83,000 packets and at most one loss per 5,000,000,000 packets (about one congestion event every 1 2/3 hours). Source : RFC 3649

  8. TCP in High-speed Networks • Design Constraints • More efficient link utilization • Fairness among flows of similar RTT • RTT unfairness no worse than TCP • Retain AIMD behavior

  9. TCP in High-speed Networks • Layered congestion control • Borrow ideas from layered video transmission • Increase layers, if no losses for extended period • Per-RTT window increase more aggressive at higher layers

  10. TCP in High-speed Networks LTCP Concepts (Cont.) • Layering • Start layering when window > WT • Associate each layer K with a step size δK • When the window has grown by δK since the previous layer addition, increment the number of layers • At layer K, increase the window by K per RTT. The number of layers is determined dynamically based on current network conditions.
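
For concreteness, the following is a minimal Python sketch (not the authors' implementation) of the layer bookkeeping described above. W_T and the per-layer step size δK are the quantities named on the slide; the example threshold and the placeholder form of step_size() are assumptions, not the values derived on the later "Determining Parameters" slides.

```python
# Illustrative sketch of LTCP layer bookkeeping (assumptions noted above).

W_T = 50                 # window (in packets) at which layering starts; example value only

def step_size(K):
    # delta_K: how much the window must grow at layer K before layer K+1 is added.
    # The real expression is derived later in the talk; a flat placeholder is used here.
    return 2 * W_T

class LTCPLayering:
    def __init__(self):
        self.K = 1                        # current number of layers (plain TCP below W_T)
        self.window_at_last_layer = W_T   # window size when the last layer was added

    def update_layers(self, cwnd):
        """Increment K whenever cwnd has grown by delta_K since the last layer addition."""
        if cwnd <= W_T:
            self.K = 1
            self.window_at_last_layer = W_T
            return
        while cwnd - self.window_at_last_layer >= step_size(self.K):
            self.window_at_last_layer += step_size(self.K)
            self.K += 1
```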

  11. TCP in High-speed Networks LTCP Concepts [Figure: layers K-1, K, K+1 with their minimum windows WK-1, WK, WK+1 and step sizes δK-1, δK. Number of layers = K when WK ≤ W < WK+1.]

  12. TCP in High-speed Networks Framework Constraint 1 : The rate of increase for a flow at a higher layer should be lower than that for a flow at a lower layer (K1 > K2, for all K1, K2 ≥ 2). [Figure: layers K-1 through K+2 with minimum windows WK-1 through WK+2 and step sizes δK-1 through δK+1. Number of layers = K when WK ≤ W < WK+1.]

  13. TCP in High-speed Networks Framework Constraint 2 : After a loss, the recovery time for a larger flow should be longer than that for a smaller flow (K1 > K2, for all K1, K2 ≥ 2). [Figure: window vs. time for Flow 1 (slope K1', recovery window WR1, recovery time T1) and Flow 2 (slope K2', recovery window WR2, recovery time T2).]

  14. TCP in High-speed Networks Design Choice • Decrease behavior : multiplicative decrease • Increase behavior : additive increase with additive factor equal to the layer number, i.e., W = W + K/W per ACK (an increase of K per RTT)
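
A sketch of the resulting increase/decrease rules, reusing the LTCPLayering helper above. The per-ACK form W = W + K/W is taken from this slide; β = 0.15 anticipates the choice made on slide 18, and the rest is an illustrative assumption.

```python
# Illustrative LTCP increase/decrease rules (not the authors' code).

BETA = 0.15   # multiplicative decrease factor, chosen on slide 18

def on_ack(cwnd, layering):
    """Additive increase: at layer K the window grows by K per RTT, i.e. K/W per ACK."""
    cwnd += layering.K / cwnd
    layering.update_layers(cwnd)
    return cwnd

def on_congestion(cwnd, layering):
    """Multiplicative decrease: reduce the window by beta * W.
    Recomputing the layer after the decrease (at most one layer is dropped
    by design) is omitted in this sketch."""
    return cwnd - BETA * cwnd
```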

  15. TCP in High-speed Networks Determining Parameters • Analyze two flows operating at adjacent layers • Should hold for other cases through induction • Ensure the constraints are satisfied for the worst case • Should work in other cases • After a loss, drop at most one layer • Ensures smooth layer transitions

  16. TCP in High-speed Networks Determining Parameters (Cont.) • Before loss : Flow 1 at layer K, Flow 2 at layer (K-1) • After the loss, four possible cases • For the worst case, W1 is close to WK+1 and W2 is close to WK-1 • Substitute the worst-case values in the constraint on decrease behavior

  17. TCP in High-speed Networks Determining Parameters • The analysis yields an inequality on the step sizes • The larger the chosen value in the inequality, the slower the increase in aggressiveness • We choose the step sizes accordingly • If layering starts at WT, the per-layer windows follow by substitution

  18. TCP in High-speed Networks Choice of β Since, after a loss, at most one layer is dropped, a bound on β follows. By substitution and simplification, we choose β = 0.15

  19. TCP in High-speed Networks Other Analyses • Time to claim bandwidth, when the window corresponding to the BDP is at layer K • For TCP, the corresponding time is T(slowstart) + (W - WT) RTTs (assuming slow start ends when the window = WT)

  20. TCP in High-speed Networks Other Analyses • Packet recovery time • The window reduction after a loss is β·W • After the loss, the per-RTT increase is at least (K-1) • Thus, the time to recover from a loss is about β·W/(K-1) RTTs • For TCP, it is W/2 RTTs • A significant speed-up in packet recovery time
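
A rough worked example of this comparison, plugging assumed values (W = 5000 packets at layer K = 10, with β = 0.15) into the expressions above:

```python
# Worked example of the recovery-time comparison; W and K are assumed values.
W, K, beta = 5000, 10, 0.15

ltcp_recovery = beta * W / (K - 1)   # ~83 RTTs: a drop of beta*W refilled at >= (K-1) per RTT
tcp_recovery = W / 2                 # 2500 RTTs for standard TCP
print(ltcp_recovery, tcp_recovery, tcp_recovery / ltcp_recovery)   # speed-up of about 30x
```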

  21. TCP in High-speed Networks

  22. TCP in High-speed Networks Steady-State Throughput BW = ND / TD, where K' is the layer of the steady-state window

  23. TCP in High-speed Networks Response Curve

  24. Where We are ... • TCP on high-speed links with low multiplexing • Design, analysis and evaluation of aggressive probing mechanism (LTCP) • Impact of high RTT • Impact on router buffers and loss rates (LTCP-RCS) • TCP on high-speed links with high multiplexing • Impact of packet reordering (TCP-DCR) • Future Work

  25. TCP in High-speed Networks Impact of RTT* • Two-fold dependence on RTT • The smaller the RTT, the faster the window grows • The smaller the RTT, the faster the aggressiveness increases • Easy to offset this • Scale K using an “RTT compensation factor” KR • Thus, the increase behavior is W = W + (KR * K) / W • The decrease behavior is unchanged: the window is still reduced by β * W * In collaboration with Saurabh Jain
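
A sketch of what the compensated increase might look like. Only the form W = W + (KR * K)/W and the two scaling options for KR discussed on the next slide come from the talk; the reference RTT and the function names are assumptions.

```python
# Illustrative RTT compensation for LTCP (assumed constants and names).

def rtt_compensation(rtt, rtt_ref=0.1, mode="linear"):
    """Return K_R for a flow with round-trip time rtt (seconds)."""
    if mode == "linear":
        return rtt / rtt_ref                   # K_R proportional to RTT: window independent of RTT
    return (rtt / rtt_ref) ** (1.0 / 3.0)      # K_R proportional to RTT^(1/3): TCP-like unfairness

def on_ack_compensated(cwnd, K, K_R):
    # Increase behavior with compensation: W = W + (K_R * K) / W per ACK.
    return cwnd + (K_R * K) / cwnd
```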

  26. TCP in High-speed Networks Impact of RTT* • The throughput ratio can be expressed in terms of the RTTs and KR • When KR ∝ RTT^(1/3), TCP-like RTT unfairness • When KR ∝ RTT, linear RTT unfairness (window size independent of RTT) * In collaboration with Saurabh Jain

  27. TCP in High-speed Networks Window Comparison

  28. TCP in High-speed Networks Related Work • Highspeed TCP : Modifies AIMD parameters based on different response function (no longer AIMD) • Scalable TCP : Uses MIMD • FAST : Based on Vegas core • BIC TCP : Uses Binary/Additive Increase, Multiplicative Decrease • H-TCP : Modifies AIMD parameters based on “time since last drop” (no longer AIMD)

  29. TCP in High-speed Networks Link Utilization

  30. TCP in High-speed Networks Dynamic Link Sharing

  31. TCP in High-speed Networks Effect of Random Loss

  32. TCP in High-speed Networks Interaction with TCP

  33. TCP in High-speed Networks RTT Unfairness

  34. TCP in High-speed Networks Summary • Why LTCP ? • Current design remains AIMD • Dynamically changes increase factor • Simple to understand/implement • Retains convergence and fairness properties • RTT unfairness similar to TCP

  35. Where We are ... • TCP on high-speed links with low multiplexing • Design, analysis and evaluation of aggressive probing mechanism (LTCP) • Impact of high RTT • Impact on router buffers and loss rates (LTCP-RCS) • TCP on high-speed links with high multiplexing • Impact of packet reordering (TCP-DCR) • Future Work

  36. TCP in High-speed Networks Impact on Packet Losses [Table: summary of bottleneck link buffer statistics] Increased aggressiveness increases congestion events

  37. TCP in High-speed Networks Impact on Router Buffers Increased aggressiveness increases stress on router buffers [Figure: instantaneous queue length at the bottleneck link buffers]

  38. TCP in High-speed Networks Impact on Buffers and Losses Motivation • Important to be aggressive for fast convergence • When link is underutilized • When new flows join/leave • In steady state, aggressiveness should be tamed • Otherwise, self-induced loss rates can be high

  39. TCP in High-speed Networks Impact on Buffers and Losses • Proposed solution • In steady state, use less aggressive TCP algorithms • Use a control switch to turn on/off aggressiveness • Switching Logic • ON when bandwidth is available • OFF when link is in steady state • ON when network dynamics change (sudden decrease or increase in available bandwidth)

  40. TCP in High-speed Networks Impact on Buffers and Losses Using the ack rate for identifying steady state [Figure: raw ack-rate signal for flow 1]

  41. TCP in High-speed Networks Impact on Buffers and Losses • Using the ack rate for switching • The trend of the ack rate works well for our purpose • If (gradient = 0) : aggressiveness OFF; if (gradient ≠ 0) : aggressiveness ON • The responsiveness of the raw signal means large buffers are not required • The noisy raw signal is smoothed using an EWMA
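
A minimal sketch of such a rate-based control switch: the raw ack rate is smoothed with an EWMA, and aggressiveness is switched off while the gradient of the smoothed rate stays near zero. The smoothing gain, the zero-gradient tolerance, and the class interface are illustrative assumptions rather than the implementation evaluated here.

```python
# Illustrative rate-based control switch (assumed constants and interface).

ALPHA = 0.125     # EWMA gain (assumption)
EPSILON = 0.01    # relative |gradient| below this counts as "no trend" (assumption)

class RateControlSwitch:
    def __init__(self):
        self.smoothed_rate = None
        self.aggressive = True    # start aggressive so available bandwidth is claimed quickly

    def update(self, ack_rate):
        """Feed the raw per-interval ack rate; returns True if the aggressive
        (high-speed) increase rules should be used for the next interval."""
        if self.smoothed_rate is None:
            self.smoothed_rate = ack_rate
            return self.aggressive
        prev = self.smoothed_rate
        self.smoothed_rate = (1 - ALPHA) * prev + ALPHA * ack_rate
        gradient = self.smoothed_rate - prev
        # gradient ~ 0  -> link in steady state      -> aggressiveness OFF
        # gradient != 0 -> network dynamics changing -> aggressiveness ON
        self.aggressive = abs(gradient) > EPSILON * max(self.smoothed_rate, 1e-9)
        return self.aggressive
```

A sender could fall back to the standard, non-aggressive increase whenever update() returns False, matching the switching logic on slide 39.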

  42. TCP in High-speed Networks Impact on Buffers and Losses [Figure: instantaneous queue length at the bottleneck link buffers, without vs. with the rate-based control switch]

  43. TCP in High-speed Networks Impact on Buffers and Losses [Table: summary of bottleneck link buffer statistics]

  44. TCP in High-speed Networks Impact on Buffers and Losses Convergence Properties

  45. TCP in High-speed Networks Impact on Buffers and Losses • Other Results • TCP Tolerance slightly improved • RTT Unfairness slightly improved • At higher number of flows, improvement in loss rate is about a factor of 2 • Steady reverse traffic does not impact performance • Highly varying traffic reduces benefits, improvement in loss rate is about a factor of 2

  46. TCP in High-speed Networks Impact on Buffers and Losses Summary • Use of the rate-based control switch • provides an improvement in loss rates ranging from orders of magnitude to a factor of 2 • has low impact on the other benefits of high-speed protocols • The benefits extend to other high-speed protocols (verified for BIC and H-TCP) • Whichever high-speed protocol emerges as the next standard, the rate-based control switch could safely be used with it

  47. Where We are ... • TCP on high-speed links with low multiplexing • Design, analysis and evaluation of aggressive probing mechanism (LTCP) • Impact of high RTT • Impact on router buffers and loss rates (LTCP-RCS) • TCP on high-speed links with high multiplexing • Impact of packet reordering (TCP-DCR) • Future Work

  48. TCP with Non-Congestion Events • TCP behavior : on three dupacks • retransmit the packet • reduce cwnd by half • Caveat : not all 3-dupack events are due to congestion • channel errors in wireless networks • packet reordering, etc. • Result : sub-optimal performance
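
For reference, a minimal sketch of this standard three-dupack response; the connection-state fields and the retransmit() hook are placeholders, not real TCP-stack internals.

```python
# Illustrative standard fast-retransmit behavior on three duplicate ACKs.

DUPACK_THRESHOLD = 3

class ConnState:
    def __init__(self, cwnd):
        self.cwnd = cwnd
        self.dupacks = 0
        self.highest_acked = 0

def retransmit(seq):
    print(f"fast retransmit of segment {seq}")   # placeholder for the real send path

def on_dupack(state):
    state.dupacks += 1
    if state.dupacks == DUPACK_THRESHOLD:
        retransmit(state.highest_acked + 1)   # resend the presumed-lost segment
        state.cwnd /= 2                       # halve cwnd, even if the dupacks were caused
                                              # by reordering rather than congestion
```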

  49. Impact of Packet Reordering • Packet reordering in the Internet • Originally thought to be pathological • caused only by route flapping, router pauses, etc. • Later results claim a higher prevalence of reordering • the reason is attributed to parallelism in Internet components • Newer measurements show • low levels of reordering in most parts of the Internet • high levels of reordering are localized to some links/sites • reordering is a function of network load

  50. Impact of Packet Reordering • Proposed Solution • Delay the time to infer congestion by τ • Essentially a tradeoff between wrongly inferring congestion and promptness of response to congestion • τ is chosen to be one RTT, to allow the maximum time while avoiding an RTO
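
A sketch of how this delayed response might be realized, reusing the placeholders from the previous sketch (with added srtt and dcr_deadline fields). Deferring the inference by τ = one RTT comes from the slide; the timer mechanics are assumptions.

```python
# Illustrative delayed congestion response (TCP-DCR style), not the authors' code.

class DcrConnState(ConnState):
    def __init__(self, cwnd, srtt):
        super().__init__(cwnd)
        self.srtt = srtt              # smoothed RTT estimate, used as tau
        self.dcr_deadline = None      # time at which the deferred inference fires

def on_dupack_dcr(state, now):
    state.dupacks += 1
    if state.dupacks >= DUPACK_THRESHOLD and state.dcr_deadline is None:
        state.dcr_deadline = now + state.srtt    # defer the congestion inference by tau = one RTT

def on_new_ack_dcr(state):
    state.dupacks = 0
    state.dcr_deadline = None                    # hole filled within tau: reordering, not loss

def check_dcr_timer(state, now):
    if state.dcr_deadline is not None and now >= state.dcr_deadline:
        retransmit(state.highest_acked + 1)      # still missing after tau: treat as a real loss
        state.cwnd /= 2
        state.dcr_deadline = None
```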
