
FAST TCP

FAST TCP. Presenter: Farrukh Shahzad. Core slides adapted from IETF presentation slides. Link: http://netlab.caltech.edu/FAST/index.html. Topic: congestion control.



Presentation Transcript


  1. FAST TCP • Presenter: Farrukh Shahzad • Slides (core) adapted from IETF presentation slides • Link: http://netlab.caltech.edu/FAST/index.html

  2. Congestion control • Essential strategy: the TCP host sends packets into the network without resource reservation and then reacts to observable events. Originally, TCP assumed FIFO queuing in the network. • Basic idea: each source determines how much capacity is available to a given flow in the network. ACKs are used to "pace" the transmission of packets, so that TCP is "self-clocking".

  3. AIMD (Additive Increase / Multiplicative Decrease) • CongestionWindow (cwnd) is a variable maintained by the TCP source for each connection. • cwnd is set based on the perceived level of congestion: the host receives implicit (packet drop) or explicit (packet mark) indications of internal congestion. • MaxWindow = min(CongestionWindow, AdvertisedWindow) • EffectiveWindow = MaxWindow − (LastByteSent − LastByteAcked)
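The two window formulas above can be sketched directly in Python. This is a toy illustration: the names mirror the slide's variables, not any real TCP implementation.

```python
def effective_window(cwnd, advertised_window, last_byte_sent, last_byte_acked):
    """Bytes the sender may still inject into the network."""
    max_window = min(cwnd, advertised_window)       # MaxWindow
    in_flight = last_byte_sent - last_byte_acked    # unacknowledged bytes
    return max_window - in_flight                   # EffectiveWindow

# The receiver advertises 6000 bytes, 4000 bytes are in flight:
print(effective_window(cwnd=8000, advertised_window=6000,
                       last_byte_sent=10000, last_byte_acked=6000))  # 2000
```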

  4. Additive Increase • Additive increase is a reaction to perceived available capacity. • Linear increase, basic idea: for each "cwnd's worth" of packets sent, increase cwnd by one packet (w = w + 1). • In practice, cwnd is incremented fractionally for each arriving ACK: increment = MSS × (MSS / cwnd); cwnd = cwnd + increment

  5. Multiplicative Decrease • The key assumption is that a dropped packet and the resultant timeout are due to congestion at a router/switch. • Multiplicative decrease: TCP reacts to a timeout by halving cwnd (w/2). • Although cwnd is defined in bytes, the literature often discusses congestion control in terms of packets (more formally, in MSS, the Maximum Segment Size). • cwnd is not allowed to fall below the size of a single packet. • (Figure: loss probability and queuing delay.)
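The per-ACK increase from slide 4 and the halving rule above can be combined into a small AIMD sketch. The MSS value is illustrative, and the window is kept in bytes as the slides describe.

```python
MSS = 1460  # bytes; illustrative value

def on_ack(cwnd):
    # Additive increase: roughly one MSS per cwnd's worth of ACKed data,
    # applied fractionally on each arriving ACK.
    return cwnd + MSS * (MSS / cwnd)

def on_loss(cwnd):
    # Multiplicative decrease: halve cwnd, but never below one packet.
    return max(cwnd / 2, MSS)

cwnd = 4 * MSS
for _ in range(4):          # roughly one cwnd's worth of ACKs
    cwnd = on_ack(cwnd)
# cwnd is now a little under 5 * MSS (it grew during the round)
cwnd = on_loss(cwnd)        # timeout: cut the window in half
```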

  6. Congestion control • Congestion control problem: how to adjust the sending rates of the data sources to make sure that the bandwidth B of the bottleneck link is not exceeded? • B is unknown to the data sources and possibly time-varying.

  7. Congestion control Congestion control problem: How to adjust the sending rates of the data sources to make sure that the bandwidth B of the bottleneck link is not exceeded?

  8. Congestion control • When Σi ri exceeds B, the queue fills and data is lost: a drop (discrete event). • Event-based control: the sources adjust their rates based on the detection of drops.
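The drop event can be seen in a toy fluid model of the bottleneck. All numbers here (rates, bandwidth, buffer size) are made up for illustration.

```python
def step(queue, rates, B, buffer_size):
    # Net inflow per time unit: aggregate source rate minus bottleneck bandwidth B.
    queue = max(queue + sum(rates) - B, 0.0)
    dropped = max(queue - buffer_size, 0.0)   # overflow => drop (discrete event)
    return min(queue, buffer_size), dropped

queue, drops = 0.0, 0.0
for _ in range(10):
    queue, d = step(queue, rates=[6.0, 6.0], B=10.0, buffer_size=15.0)
    drops += d
# Sum of rates (12) exceeds B (10): the queue fills, hits the buffer
# limit, and drops begin.
```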

  9. Congestion control • wi (window size) = number of packets that can remain unacknowledged by the destination

  10. Congestion control • Negative feedback loop: queue gets full → longer RTT → rate decreases → queue gets empty. • wi (window size) = number of packets that can remain unacknowledged by the destination. • This mechanism is still not sufficient to prevent a collapse of the network if the sources set wi too large.

  11. TCP RENO • TCP Reno is a good solution for low-speed networks but not a viable solution for high-speed networks: it is too conservative, not stable, and requires an extremely small equilibrium loss probability. • Philosophy of Reno: the packet level was designed and implemented first; the flow level was understood afterwards. • Flow-level dynamics determine the equilibrium (performance, fairness) and stability. • Lesson: design flow-level equilibrium and stability first, then implement the flow-level goals at the packet level.

  12. TCP RENO • A congestion control algorithm can be designed at two levels: 1) the flow-level (macroscopic) design aims to achieve high utilization, low queuing delay and loss, fairness, and stability; 2) the packet-level design implements these flow-level goals within the constraints imposed by end-to-end control. • TCP Reno's equilibrium problem: at the packet level, additive increase is too slow and multiplicative decrease too drastic; at the flow level, the required loss probability is too small. • Dynamic problem: at the packet level, the window must oscillate on a binary signal; at the flow level, it is unstable at large window sizes.

  13. TCP Flow Dynamics: RENO • Packet level: on each ACK, W ← W + 1/W; on each loss, W ← W − 0.5W. • (Figure: the resulting flow-level dynamics.)

  14. TCP Variant Performance • RENO: ACK: W ← W + 1/W; Loss: W ← W − 0.5W • High Speed TCP: ACK: W ← W + a(w)/W; Loss: W ← W − b(w)·W • Scalable TCP: ACK: W ← W + 0.01; Loss: W ← W − 0.125·W
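A quick sketch of how two of these per-ACK rules play out over one round-trip time, during which roughly W ACKs arrive. Reno grows by about one packet per RTT (additive), while Scalable TCP grows by about 1% of the window per RTT (multiplicative); the window value 100 is illustrative.

```python
def reno_ack(w):  return w + 1.0 / w   # Reno: ~+1 packet per RTT overall
def stcp_ack(w):  return w + 0.01      # Scalable TCP: ~+1% per RTT overall

def one_rtt(w, ack_rule):
    # Roughly w ACKs arrive during one round-trip time.
    for _ in range(int(w)):
        w = ack_rule(w)
    return w

print(one_rtt(100.0, reno_ack))   # just under 101.0
print(one_rtt(100.0, stcp_ack))   # 101.0
```

At W = 100 the two look alike, but Scalable TCP's growth scales with the window, which is what makes it viable at very large windows where Reno's +1 per RTT is far too slow.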

  15. Flow level: Reno, HSTCP, STCP, FAST • Generic flow-level dynamics: window adjustment = control gain × flow-level goal. • Different gain k and utility Ui: they determine stability, responsiveness, and equilibrium. • Different congestion measure pi: loss probability (Reno, HSTCP, STCP) or queueing delay (Vegas, FAST). • Even though Reno, HSTCP, STCP, and FAST look different at the packet level, they have similar equilibrium and dynamic structures at the flow level.

  16. FAST TCP - Introduction • FAST TCP is a new TCP congestion avoidance algorithm especially targeted at high-speed, long-distance links, developed at Netlab, California Institute of Technology, and now being commercialized by Fastsoft. It is compatible with existing TCP algorithms, requiring modification only to the computer that is sending data. • The name stands for FAST AQM Scalable TCP, where AQM is Active Queue Management and TCP is Transmission Control Protocol. • Like TCP Vegas, FAST TCP uses queueing delay, along with loss probability, as a congestion signal. • It seeks to maintain a constant number of packets in queues throughout the network. The number of packets in queues is estimated by measuring the difference between the observed RTT and the base RTT (the minimum observed RTT for the connection, i.e., the RTT with no queuing). If too few packets are queued, the sending rate is increased; if too many are queued, the rate is decreased.
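The queue estimate described above can be sketched as rate times queueing delay. The RTT values are illustrative, and the window is taken in packets for simplicity.

```python
def queued_packets(cwnd, rtt, base_rtt):
    # Queueing delay is the observed RTT minus the propagation-only baseRTT;
    # packets sitting in router queues ≈ sending rate × queueing delay.
    rate = cwnd / rtt                  # packets per second
    return rate * (rtt - base_rtt)

# 100-packet window, 120 ms observed RTT, 100 ms baseRTT:
print(queued_packets(cwnd=100, rtt=0.120, base_rtt=0.100))
```

If the result is above FAST's target, the rate is decreased; below it, the rate is increased.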

  17. FAST - Architecture • Data control: determines which packets to transmit, selecting from three pools: 1) new packets, 2) lost (negatively acknowledged) packets, and 3) transmitted but not yet acknowledged packets. • Window control: determines how many packets to transmit; it regulates packet transmission at the RTT timescale. • Burstiness control: determines when to transmit; it smooths out the transmission of packets in a fluid-like manner to track the available bandwidth.

  18. FAST – Architecture (cont.) • Burstiness reduction: limits the number of packets that can be sent when an ACK advances the congestion window by a large amount. • Window pacing: determines how to increase the congestion window over the idle time of a connection toward the target determined by the window-control component. It reduces burstiness with a reasonable amount of scheduling overhead.

  19. Window Control Algorithm • FAST reacts to both queueing delay and packet loss. • Under normal network conditions, FAST TCP periodically updates the congestion window based on the average RTT and average queueing delay provided by the estimation component. • Here γ ∈ (0, 1], baseRTT is the minimum RTT observed, and α is a positive protocol parameter that determines the total number of packets queued in routers in equilibrium along the flow's path. • The window update period is 20 ms.
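The update rule itself appeared only as an equation image on the original slide; the published FAST rule, in the notation above, is w ← min{2w, (1 − γ)·w + γ·((baseRTT/RTT)·w + α)}. A sketch of it follows, with the RTT held fixed for illustration (in a real network the queueing delay itself depends on the windows):

```python
GAMMA = 0.5    # γ ∈ (0, 1]
ALPHA = 100.0  # α: target number of packets queued along the path

def fast_update(w, rtt, base_rtt):
    # w <- min{2w, (1-γ)·w + γ·((baseRTT/RTT)·w + α)}
    return min(2 * w, (1 - GAMMA) * w + GAMMA * ((base_rtt / rtt) * w + ALPHA))

w = 1000.0
for _ in range(200):                  # one update every ~20 ms
    w = fast_update(w, rtt=0.120, base_rtt=0.100)
# At equilibrium w = (baseRTT/RTT)·w + α, i.e. exactly α packets are
# queued; with these RTTs, w converges to α·RTT/(RTT − baseRTT) = 600.
```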

  20. Window Control Algorithm: Network Model • A network is modeled as a set of resources with finite capacities cl, e.g., transmission links, processing units, memory, etc. • di denotes the round-trip propagation delay of source i. • R is the routing matrix, where Rli = 1 if source i uses link l, and 0 otherwise. • pl(t) denotes the queueing delay at link l at time t. • qi(t) = Σl Rli pl(t) is the round-trip queueing delay; in vector notation, q(t) = R^T p(t). • Then the round-trip time of source i is Ti(t) := di + qi(t).
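The notation above works out concretely for a tiny example: two links and two sources, where source 0 traverses both links and source 1 only the second. All delay values are made up for illustration.

```python
R = [[1, 0],            # R[l][i] = 1 iff source i uses link l
     [1, 1]]
p = [0.002, 0.005]      # p_l: queueing delay at link l (seconds)
d = [0.040, 0.060]      # d_i: round-trip propagation delay of source i

# q_i = sum_l R_li * p_l   (in vector form, q = R^T p)
q = [sum(R[l][i] * p[l] for l in range(len(p))) for i in range(len(d))]
T = [d[i] + q[i] for i in range(len(d))]   # T_i = d_i + q_i
```

Source 0 accumulates queueing delay on both links (q0 = 7 ms), source 1 only on link 1 (q1 = 5 ms).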

  21. Self-Clocking • A key difference of FAST TCP from algorithms in the literature is that it assumes a source's send rate, defined as xi(t) := wi(t)/Ti(t), cannot exceed the throughput it receives. • This is justified by self-clocking: within one round-trip time after a congestion window is increased, packet transmission will be clocked at the same rate as the throughput the flow receives. • A consequence of this assumption is that the link queueing delay vector p(t) is determined implicitly by the instantaneous window sizes in a static manner.

  22. Dynamic Sharing: 3 Flows • (Figure: throughput of three flows over time under FAST, RENO, HSTCP, and STCP.)

  23. Queuing Delay Comparison

  24. Overall Comparison

  25. Thanks
