Congestion Control Algorithms of TCP in Emerging Networks Sumitha Bhandarkar Under the Guidance of Dr. A. L. Narasimha Reddy September 16, 2005
Motivation Why TCP Congestion Control? • Designed in the early ’80s • Still the most predominant protocol on the net • Continuously evolves • IETF is developing an RFC to keep track of TCP changes! • Has “issues” in emerging networks • We aim to identify problems and propose solutions for TCP in high-speed networks
Motivation • Link speeds have increased dramatically • 270 TB collected by PHENIX (Pioneering High Energy Nuclear Interaction eXperiment) • Data transferred from Brookhaven National Laboratory, NY to the RIKEN research center, Tokyo • Typical rate 250 Mbps, peak rate 600 Mbps • OC48 (2.4 Gbps) from Brookhaven to ESNET, transpacific line (10 Gbps) served by SINET to Japan • Used GridFTP (parallel connections with data striping) Source: CERN Courier, Vol. 45, No. 7
Motivation • Historically, high-speed links present only at the core • High levels of multiplexing (low per-flow rates) • New architectures for high-speed routers • Now, high-speed links are available for transfer between two endpoints • Low levels of multiplexing (high per-flow rates)
Outline • TCP on high-speed links with low multiplexing • Design, analysis and evaluation of aggressive probing mechanism (LTCP) • Impact of high RTT • Impact on router buffers and loss rates (LTCP-RCS) • TCP on high-speed links with high multiplexing • Impact of packet reordering (TCP-DCR) • Future Work
Where We are ... • TCP on high-speed links with low multiplexing • Design, analysis and evaluation of aggressive probing mechanism (LTCP) • Impact of high RTT • Impact on router buffers and loss rates (LTCP-RCS) • TCP on high-speed links with high multiplexing • Impact of packet reordering (TCP-DCR) • Future Work
TCP in High-speed Networks Motivation • TCP’s one-per-RTT increase does not scale well* (*For RTT = 100 ms, packet size = 1500 bytes. Source: RFC 3649)
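To make the scaling problem concrete, here is a back-of-the-envelope sketch (not part of the slide) using the RFC 3649 example parameters of a 10 Gbps link, 100 ms RTT, and 1500-byte packets:

```python
# Back-of-the-envelope numbers behind the scaling problem
# (RFC 3649 example: 10 Gbps link, 100 ms RTT, 1500-byte packets).

link_rate_bps = 10e9      # bottleneck link rate
rtt_s = 0.100             # round-trip time
packet_bytes = 1500

# Congestion window (in packets) needed to keep the pipe full.
bdp_packets = link_rate_bps * rtt_s / (packet_bytes * 8)
print(f"window to fill the pipe : {bdp_packets:,.0f} packets")

# After one loss, standard TCP halves the window and regains it at
# one packet per RTT, so recovery takes roughly W/2 RTTs.
recovery_rtts = bdp_packets / 2
recovery_minutes = recovery_rtts * rtt_s / 60
print(f"time to recover one loss: {recovery_rtts:,.0f} RTTs ≈ {recovery_minutes:.0f} minutes")
```

The required window is roughly 83,000 packets, and growing back from a single halving at one packet per RTT takes on the order of an hour, which is why a more aggressive increase rule is needed.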
TCP in High-speed Networks • Design Constraints • More efficient link utilization • Fairness among flows of similar RTT • RTT unfairness no worse than TCP • Retain AIMD behavior
TCP in High-speed Networks • Layered congestion control • Borrow ideas from layered video transmission • Increase layers if no losses occur for an extended period • Per-RTT window increase is more aggressive at higher layers
TCP in High-speed Networks LTCP Concepts (Cont.) • Layering • Start layering when window > WT • Associate each layer K with a step size δK • When the window has increased by δK since the previous layer addition, increment the number of layers • At layer K, increase the window by K packets per RTT • Number of layers determined dynamically based on current network conditions
TCP in High-speed Networks LTCP Concepts (Figure: layer ladder showing layers K−1, K, K+1 with the minimum window corresponding to each layer, WK−1, WK, WK+1, separated by step sizes δK−1 and δK. Number of layers = K when WK ≤ W < WK+1.)
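A minimal sketch of the layer bookkeeping implied by this figure; the threshold WT = 50 and the step-size rule `delta` are placeholders standing in for the values derived later in the talk:

```python
# Placeholder sketch of the layer bookkeeping in the figure above.
# W_T and the step-size rule are illustrative; the real values follow
# from the parameter analysis later in the talk.

W_T = 50  # window (packets) at which layering starts (assumed)

def layer_boundaries(max_layer, delta):
    """Minimum window W_K for each layer K: W_{K+1} = W_K + delta(K)."""
    boundaries = {2: W_T}                  # layering starts at layer 2
    for K in range(2, max_layer):
        boundaries[K + 1] = boundaries[K] + delta(K)
    return boundaries

def current_layer(window, boundaries):
    """Number of layers = K when W_K <= window < W_{K+1}."""
    if window < W_T:
        return 1                           # below W_T, behave like TCP
    return max(K for K, w_k in boundaries.items() if w_k <= window)

# Example with a made-up step-size rule delta(K) = 10*K:
bounds = layer_boundaries(10, delta=lambda K: 10 * K)   # {2: 50, 3: 70, 4: 100, ...}
print(current_layer(120, bounds))                       # -> 4
```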
TCP in High-speed Networks Framework • Constraint 1: The rate of increase for a flow at a higher layer should be lower than for a flow at a lower layer (for K1 > K2 and K1, K2 ≥ 2). (Figure: layer ladder with layers K−1 through K+2, minimum windows WK−1 … WK+2, and step sizes δK−1 … δK+1. Number of layers = K when WK ≤ W < WK+1.)
TCP in High-speed Networks Framework • Constraint 2: After a loss, the recovery time for a larger flow should be more than for a smaller flow (for K1 > K2 and K1, K2 ≥ 2). (Figure: window vs. time during recovery for two flows — flow 1 with slope K1′ regaining window WR1 in time T1, flow 2 with slope K2′ regaining WR2 in time T2.)
TCP in High-speed Networks Design Choice • Decrease behavior: multiplicative decrease, W = (1 − β) · W on a loss • Increase behavior: additive increase with the additive factor equal to the layer number, W = W + K/W per ACK (i.e., K packets per RTT)
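A sketch of these two rules in code (β = 0.15 is the value chosen a few slides later; this is an illustration, not the reference implementation):

```python
# Sketch of the two LTCP rules above (beta = 0.15 is the value chosen
# later in the talk; this is an illustration, not the implementation).

BETA = 0.15

def on_ack(cwnd, K):
    """Additive increase: K/W per ACK, i.e. K packets per RTT at layer K."""
    return cwnd + K / cwnd

def on_loss(cwnd):
    """Multiplicative decrease by the factor beta."""
    return (1 - BETA) * cwnd
```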
TCP in High-speed Networks Determining Parameters • Analyze two flows operating at adjacent layers • Should hold for other cases through induction • Ensure constraints are satisfied for the worst case • Should work in other cases • After a loss, drop at most one layer • Ensures smooth layer transitions
TCP in High-speed Networks Determining Parameters (Cont.) • Before the loss: Flow 1 at layer K, Flow 2 at layer (K−1) • After the loss, four possible cases • For the worst case, W1 is close to WK+1 and W2 is close to WK−1 • Substitute the worst-case values in the constraint on the decrease behavior
TCP in High-speed Networks Determining Parameters • The analysis yields an inequality on the step sizes • The stronger the inequality, the more slowly the aggressiveness increases • We choose the step sizes accordingly • If layering starts at WT, the per-layer windows follow by substitution
TCP in High-speed Networks Choice of β • Since after a loss at most one layer is dropped, the reduction β · W must be no more than the step size of the layer below • By substitution and simplification, a bound on β follows • We choose β = 0.15
TCP in High-speed Networks Other Analyses • Time to claim bandwidth • The window corresponding to the BDP is at layer K • For TCP, it is T(slow start) + (W − WT) RTTs (assuming slow start ends when window = WT)
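For a sense of scale, a small calculation of the TCP-side expression on this slide, with example numbers (the 83,333-packet window from the earlier 10 Gbps / 100 ms example and an assumed WT of 50):

```python
# Rough size of the TCP-side expression on this slide, with example
# numbers: the 83,333-packet window from the 10 Gbps / 100 ms example
# and an assumed slow-start threshold W_T = 50.

import math

W = 83_333          # target window in packets (example)
W_T = 50            # window at which slow start is assumed to end
rtt_s = 0.100

slow_start_rtts = math.ceil(math.log2(W_T))   # doubling once per RTT
linear_rtts = W - W_T                         # one packet per RTT afterwards
total_s = (slow_start_rtts + linear_rtts) * rtt_s

print(f"TCP: about {slow_start_rtts + linear_rtts:,} RTTs ≈ {total_s / 3600:.1f} hours")
```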
TCP in High-speed Networks Other Analyses • Packet recovery time • The window reduction is β · W • After the loss, the increase is at least (K − 1) packets per RTT • Thus, the time to recover from a loss is about β · W / (K − 1) RTTs • For TCP, it is W/2 RTTs • The speedup in packet recovery time is therefore about (K − 1)/(2β)
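A numerical check of this comparison (a reconstruction using β = 0.15; W and K below are example values, not numbers from the talk):

```python
# Numerical check of the recovery-time comparison (a reconstruction;
# beta = 0.15, and W, K below are example values, not from the talk).

BETA = 0.15

def ltcp_recovery_rtts(W, K):
    # Window drops by beta*W, then grows back by at least K-1 per RTT.
    return BETA * W / (K - 1)

def tcp_recovery_rtts(W):
    # TCP halves the window and grows back by 1 packet per RTT.
    return W / 2

W, K = 80_000, 10
print(ltcp_recovery_rtts(W, K))                          # ~1,333 RTTs
print(tcp_recovery_rtts(W))                              # 40,000 RTTs
print(tcp_recovery_rtts(W) / ltcp_recovery_rtts(W, K))   # speedup = (K-1)/(2*beta) = 30
```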
TCP in High-speed Networks Steady State Throughput • BW = ND / TD, where ND is the number of packets delivered between loss events, TD is the time between them, and K′ is the layer corresponding to the steady-state window
TCP in High-speed Networks Response Curve
Where We are ... • TCP on high-speed links with low multiplexing • Design, analysis and evaluation of aggressive probing mechanism (LTCP) • Impact of high RTT • Impact on router buffers and loss rates (LTCP-RCS) • TCP on high-speed links with high multiplexing • Impact of packet reordering (TCP-DCR) • Future Work
TCP in High-speed Networks Impact of RTT* • Two-fold dependence on RTT • The smaller the RTT, the faster the window grows • The smaller the RTT, the faster the aggressiveness increases • Easy to offset this • Scale K using an “RTT compensation factor” KR • Thus, the increase behavior is W = W + (KR · K) / W • The decrease behavior is still W = (1 − β) · W * In collaboration with Saurabh Jain
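A sketch of the compensated increase rule; the reference RTT used to normalize KR is made up for illustration, and the exponent corresponds to the design options discussed on the next slide:

```python
# Sketch of the RTT-compensated increase.  The reference RTT used to
# normalize K_R is made up for illustration; the exponent corresponds
# to the design options discussed on the next slide.

BASE_RTT = 0.040   # hypothetical reference RTT (40 ms)

def rtt_compensation(rtt, exponent=1.0):
    """K_R grows with RTT: exponent 1 makes window size RTT-independent,
    exponent 1/3 gives TCP-like RTT unfairness."""
    return (rtt / BASE_RTT) ** exponent

def on_ack(cwnd, K, rtt):
    """Increase rule from the slide: W = W + (K_R * K) / W per ACK."""
    return cwnd + (rtt_compensation(rtt) * K) / cwnd
```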
TCP in High-speed Networks Impact of RTT* • The throughput ratio can be expressed in terms of the RTTs and KR • When KR ∝ RTT^(1/3), the RTT-unfairness is TCP-like • When KR ∝ RTT, the RTT-unfairness is linear (window size independent of RTT) * In collaboration with Saurabh Jain
TCP in High-speed Networks Window Comparison
TCP in High-speed Networks Related Work • HighSpeed TCP: modifies AIMD parameters based on a different response function (no longer AIMD) • Scalable TCP: uses MIMD • FAST: based on the Vegas core • BIC TCP: uses binary/additive increase, multiplicative decrease • H-TCP: modifies AIMD parameters based on “time since last drop” (no longer AIMD)
TCP in High-speed Networks Link Utilization
TCP in High-speed Networks Dynamic Link Sharing
TCP in High-speed Networks Effect of Random Loss
TCP in High-speed Networks Interaction with TCP
TCP in High-speed Networks RTT Unfairness
TCP in High-speed Networks Summary • Why LTCP ? • Current design remains AIMD • Dynamically changes increase factor • Simple to understand/implement • Retains convergence and fairness properties • RTT unfairness similar to TCP
Where We are ... • TCP on high-speed links with low multiplexing • Design, analysis and evaluation of aggressive probing mechanism (LTCP) • Impact of high RTT • Impact on router buffers and loss rates (LTCP-RCS) • TCP on high-speed links with high multiplexing • Impact of packet reordering (TCP-DCR) • Future Work
TCP in High-speed Networks Impact on Packet Losses • Increased aggressiveness increases congestion events (Table: summary of bottleneck link buffer statistics)
TCP in High-speed Networks Impact on Router Buffers • Increased aggressiveness increases stress on router buffers (Figure: instantaneous queue length at the bottleneck link buffers)
TCP in High-speed Networks Impact on Buffers and Losses Motivation • Important to be aggressive for fast convergence • When link is underutilized • When new flows join/leave • In steady state, aggressiveness should be tamed • Otherwise, self-induced loss rates can be high
TCP in High-speed Networks Impact on Buffers and Losses • Proposed solution • In steady state, use less aggressive TCP algorithms • Use a control switch to turn on/off aggressiveness • Switching Logic • ON when bandwidth is available • OFF when link is in steady state • ON when network dynamics change (sudden decrease or increase in available bandwidth)
TCP in High-speed Networks Impact on Buffers and Losses • Using the ACK rate for identifying steady state (Figure: raw ACK-rate signal for flow 1)
TCP in High-speed Networks Impact on Buffers and Losses • Using the ACK rate for switching • The trend of the ACK rate works well for our purpose • If (gradient = 0): aggressiveness OFF; if (gradient ≠ 0): aggressiveness ON • The responsiveness of the raw signal does not require large buffers • The noisy raw signal is smoothed using an EWMA
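A minimal sketch of how such a switch might be wired up, assuming an EWMA-smoothed ACK-rate sample and a small tolerance for treating the gradient as zero (both constants here are illustrative, not the values used in the evaluation):

```python
# Minimal sketch of the rate-based control switch: smooth the ACK-rate
# samples with an EWMA and gate the aggressive behavior on its gradient.
# ALPHA and FLAT_TOL are illustrative values, not those used in the talk.

ALPHA = 0.1        # EWMA weight (assumed)
FLAT_TOL = 0.01    # relative change treated as "gradient = 0" (assumed)

class RateSwitch:
    def __init__(self):
        self.smoothed = None
        self.aggressive = True

    def on_ack_rate_sample(self, rate):
        if self.smoothed is None:
            self.smoothed = rate
            return self.aggressive
        prev = self.smoothed
        self.smoothed = (1 - ALPHA) * prev + ALPHA * rate
        gradient = (self.smoothed - prev) / prev
        # Flat trend -> steady state -> turn aggressiveness OFF;
        # a non-zero trend -> bandwidth change -> turn it back ON.
        self.aggressive = abs(gradient) > FLAT_TOL
        return self.aggressive
```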
TCP in High-speed Networks Impact on Buffers and Losses (Figures: instantaneous queue length at the bottleneck link buffers, without and with the rate-based control switch)
TCP in High-speed Networks Impact on Buffers and Losses Summary of Bottleneck Link Buffer Statistics
TCP in High-speed Networks Impact on Buffers and Losses Convergence Properties
TCP in High-speed Networks Impact on Buffers and Losses • Other Results • TCP Tolerance slightly improved • RTT Unfairness slightly improved • At higher number of flows, improvement in loss rate is about a factor of 2 • Steady reverse traffic does not impact performance • Highly varying traffic reduces benefits, improvement in loss rate is about a factor of 2
TCP in High-speed Networks Impact on Buffers and Losses Summary • Use of the rate-based control switch • provides improvement in loss rates ranging from orders of magnitude to a factor of 2 • has low impact on the other benefits of high-speed protocols • Benefits extend to other high-speed protocols (verified for BIC and HTCP) • Whichever high-speed protocol emerges as the next standard, the rate-based control switch could be safely used with it
Where We are ... • TCP on high-speed links with low multiplexing • Design, analysis and evaluation of aggressive probing mechanism (LTCP) • Impact of high RTT • Impact on router buffers and loss rates (LTCP-RCS) • TCP on high-speed links with high multiplexing • Impact of packet reordering (TCP-DCR) • Future Work
TCP with Non-Congestion Events • TCP behavior: on three duplicate ACKs (dupacks) • retransmit the packet • reduce cwnd by half • Caveat: not all 3-dupack events are due to congestion • channel errors in wireless networks • reordering, etc. • Result: sub-optimal performance
Impact of Packet Reordering • Packet reordering in the Internet • Originally thought to be pathological • caused only by route flapping, router pauses, etc. • Later results claim a higher prevalence of reordering • reason attributed to parallelism in Internet components • Newer measurements show • low levels of reordering in most parts of the Internet • high levels of reordering are localized to some links/sites • reordering is a function of network load
Impact of Packet Reordering • Proposed solution • Delay the time to infer congestion by an interval τ • Essentially a tradeoff between wrongly inferring congestion and the promptness of the response to congestion • τ is chosen to be one RTT, to allow the maximum time while avoiding an RTO
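A sketch of the delayed-response idea (event-handler scaffolding only; the class and callback names are hypothetical, and the real TCP-DCR changes live inside the sender's TCP state machine):

```python
# Sketch of the delayed congestion response in TCP-DCR.  The class and
# callbacks are hypothetical scaffolding; the real changes sit inside
# the sender's TCP state machine.

class DcrSketch:
    def __init__(self, srtt):
        self.tau = srtt            # delay the response by one RTT
        self.pending_since = None  # time the third dupack was seen

    def on_third_dupack(self, now):
        # Classic TCP would retransmit and halve cwnd here; DCR only
        # notes the time and keeps transmitting (window permitting).
        self.pending_since = now

    def on_new_cumulative_ack(self):
        # The hole was filled (reordering, link-layer recovery): cancel.
        self.pending_since = None

    def maybe_respond(self, now, retransmit, halve_cwnd):
        # After tau without progress, treat it as a genuine loss.
        if self.pending_since is not None and now - self.pending_since >= self.tau:
            retransmit()
            halve_cwnd()
            self.pending_since = None
```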