
Chapter 3 Transport Layer – part C








  1. Chapter 3 Transport Layer – part C • Adapted from Computer Networking: A Top Down Approach, 6th edition, Jim Kurose, Keith Ross, Addison-Wesley, March 2012 Transport Layer

  2. Outline • TCP Performance • Beyond TCP Transport Layer

  3. TCP throughput • What’s the average throughput of TCP as a function of window size and RTT? • Ignore slow start • Let W be the window size when loss occurs • When window is W, throughput is W/RTT • Just after loss, window drops to W/2, throughput to W/2RTT • Average throughput: 0.75 W/RTT Transport Layer

  4. TCP throughput • avg TCP throughput = (3/4) · W / RTT bytes/sec • W: window size (measured in bytes) where loss occurs • avg. window size (# in-flight bytes) is ¾ W, as the window sawtooths between W/2 and W Transport Layer
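The sawtooth average can be checked numerically; a minimal sketch (window in segments per RTT, example values mine):

```python
# Sketch: verify the sawtooth average is (3/4) * W / RTT.
# In congestion avoidance the window ramps linearly from W/2 to W
# (one segment per RTT), then a loss halves it again; throughput in
# each RTT is window/RTT.
def avg_sawtooth_throughput(W, rtt):
    windows = range(W // 2, W + 1)   # one congestion-avoidance cycle
    return sum(w / rtt for w in windows) / len(windows)

W, rtt = 1000, 0.1
avg = avg_sawtooth_throughput(W, rtt)   # equals 0.75 * W / rtt
```

The average of a linear ramp from W/2 to W is (W/2 + W)/2 = 0.75 W, which is where the ¾ factor on the slide comes from.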

  5. TCP Futures: TCP over “long, fat pipes” • example: 1500 byte segments, 100ms RTT, want 10 Gbps throughput • requires W = 83,333 in-flight segments • throughput in terms of segment loss probability, L [Mathis 1997]: TCP throughput = 1.22 · MSS / (RTT · √L) ➜ to achieve 10 Gbps throughput, need a loss rate of L = 2·10-10 – a very small loss rate! • new versions of TCP for high-speed Transport Layer
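Inverting the Mathis formula reproduces the numbers on the slide; a small sketch:

```python
# Sketch: from throughput = 1.22 * MSS / (RTT * sqrt(L)), solve for the
# loss rate L needed to sustain a target throughput.
def required_loss_rate(mss_bytes, rtt_s, target_bps):
    mss_bits = mss_bytes * 8
    return (1.22 * mss_bits / (rtt_s * target_bps)) ** 2

# Slide's example: 1500-byte segments, 100 ms RTT, 10 Gbps target.
L = required_loss_rate(1500, 0.100, 10e9)   # ~2e-10
W = 10e9 * 0.100 / (1500 * 8)               # in-flight segments: ~83,333
```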

  6. Delay modeling • Q: How long does it take to receive an object from a Web server after sending a request? • Ignoring congestion, delay is influenced by: TCP connection establishment, data transmission delay, slow start • Notation, assumptions: one link between client and server of rate R; S: MSS (bits); O: object size (bits); no retransmissions (no loss, no corruption) • Window size: first assume a fixed congestion window of W segments; then a dynamic window, modeling slow start Transport Layer

  7. Fixed congestion window (1) • First case: WS/R > RTT + S/R: ACK for first segment in window returns before window’s worth of data sent • delay = 2RTT + O/R Transport Layer

  8. Fixed congestion window (2) • Second case: WS/R < RTT + S/R: server waits for an ACK after sending each window’s worth of data • delay = 2RTT + O/R + (K − 1)[S/R + RTT − WS/R], where K = O/(WS) is the number of windows that cover the object Transport Layer
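The two cases can be sketched as one function (variable names follow the slides; the example values are illustrative, not from the slides):

```python
import math

# Sketch of the fixed-congestion-window delay model.
# O: object size (bits), S: MSS (bits), R: link rate (bps),
# RTT: round-trip time (s), W: window size (segments).
def object_delay(O, S, R, RTT, W):
    if W * S / R > RTT + S / R:
        # case 1: ACKs return before the window empties -> send continuously
        return 2 * RTT + O / R
    # case 2: server stalls after each window; K windows cover the object
    K = math.ceil(O / (W * S))
    stall = S / R + RTT - W * S / R
    return 2 * RTT + O / R + (K - 1) * stall

d1 = object_delay(O=40_000, S=4_000, R=1e6, RTT=0.1, W=100)  # case 1
d2 = object_delay(O=40_000, S=4_000, R=1e6, RTT=0.1, W=4)    # case 2
```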

  9. TCP Fairness • fairness goal: if K TCP sessions share same bottleneck link of bandwidth R, each should have average rate of R/K • TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R Transport Layer

  10. Why is TCP fair? • two competing sessions: additive increase gives slope of 1 as throughput increases; multiplicative decrease decreases throughput proportionally • congestion avoidance: additive increase; loss: decrease window by factor of 2 • plotting connection 1 throughput against connection 2 throughput, the trajectory converges toward the equal-bandwidth-share line Transport Layer
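The convergence argument can be illustrated with a toy simulation (starting rates and capacity are assumed for illustration):

```python
# Sketch: two AIMD flows sharing a bottleneck converge toward equal shares.
# Additive increase keeps their rate difference constant; each synchronized
# multiplicative decrease halves it, so the gap shrinks toward zero.
def aimd_two_flows(x1, x2, capacity, rtts):
    for _ in range(rtts):
        x1, x2 = x1 + 1, x2 + 1          # additive increase each RTT
        if x1 + x2 > capacity:           # loss at the shared bottleneck
            x1, x2 = x1 / 2, x2 / 2      # multiplicative decrease
    return x1, x2

# Very unequal start; after many RTTs the two rates are nearly equal.
x1, x2 = aimd_two_flows(1, 80, capacity=100, rtts=1000)
```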

  11. Fairness (more) • Fairness and UDP: multimedia apps often do not use TCP – they do not want their rate throttled by congestion control; instead they use UDP: send audio/video at constant rate, tolerate packet loss • Fairness and parallel TCP connections: an application can open multiple parallel connections between two hosts; web browsers do this • e.g., link of rate R with 9 existing connections: new app asking for 1 TCP gets rate R/10; new app asking for 11 TCPs gets R/2 Transport Layer

  12. TCP Vegas • Idea: source watches for some sign that some router's queue is building up and congestion will happen soon; e.g., • RTT is growing • sending rate flattens Transport Layer

  13. Intuition • “Driving on ice”: time-series plots (over ~8.5 seconds) of the source’s congestion window (KB), the average send rate at the source (KBps), and the average queue length in the router show the router queue building up well before loss signals congestion Transport Layer

  14. Algorithm • Let BaseRTT be the minimum of all measured RTTs (commonly the RTT of the first packet) • if not overflowing the connection, then ExpectedRate = CongestionWindow / BaseRTT • source calculates current sending rate (ActualRate) once per RTT • source compares ActualRate with ExpectedRate: Diff = ExpectedRate – ActualRate • if Diff < α → increase CongestionWindow linearly • else if Diff > β → decrease CongestionWindow linearly • else → leave CongestionWindow unchanged Transport Layer

  15. Parameters • α: 1 packet • β: 3 packets • Even faster retransmit: keep fine-grained timestamps for each packet; check for timeout on first duplicate ACK Transport Layer
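Putting slides 14 and 15 together, a minimal sketch of the per-RTT Vegas decision (rates are in packets/second; converting Diff into packets by multiplying by BaseRTT is my reading of how it is compared against α and β):

```python
# Sketch of TCP Vegas's per-RTT window adjustment with the slide's
# parameters alpha = 1 packet, beta = 3 packets.
def vegas_update(cwnd, base_rtt, actual_rate, alpha=1, beta=3):
    expected_rate = cwnd / base_rtt                  # packets/second
    diff_pkts = (expected_rate - actual_rate) * base_rtt  # backlog in packets
    if diff_pkts < alpha:
        return cwnd + 1      # too few packets queued: speed up
    if diff_pkts > beta:
        return cwnd - 1      # queue building up: slow down
    return cwnd              # between alpha and beta: hold steady

# ~5 packets queued in the network (diff > beta): window shrinks
w = vegas_update(cwnd=100, base_rtt=0.1, actual_rate=950)
```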

  16. Example • TCP Vegas trace: actual throughput plotted against expected throughput Transport Layer

  17. Vegas Details • The throughput expected with no congestion is compared to the current (actual) throughput • If the difference is small, increase window size linearly • If the difference is large, decrease window size linearly • The change in the Slow Start mechanism consists of doubling the window every other RTT, rather than every RTT, and of using a threshold on the difference between throughputs to exit the Slow Start phase, rather than a window-size value Transport Layer

  18. TCP Performance • Utilization of a link with 5 TCP connections: TCP cannot fully utilize the huge capacity of high-speed networks! • NS-2 simulation (100 sec): link capacity = 155 Mbps, 622 Mbps, 2.5 Gbps, 5 Gbps, 10 Gbps; drop-tail routers, 0.1 BDP buffer; 5 TCP connections, 100 ms RTT, 1000-byte packet size Transport Layer

  19. TCP Congestion Control • The instantaneous throughput of TCP is controlled by a variable cwnd • TCP transmits approximately cwnd packets per RTT (round-trip time) • Slow start, then congestion avoidance: each RTT, cwnd = cwnd + 1; on packet loss, cwnd = cwnd · (1 − 1/2) Transport Layer

  20. TCP over High-Speed Networks • A TCP connection with 1250-byte packet size and 100 ms RTT is running over a 10 Gbps link (assuming no other connections, and no buffers at routers): filling the link requires cwnd = 100,000 packets • Each packet loss triggers a big decrease, halving cwnd to 50,000 (5 Gbps); the slow additive increase of 1 packet per RTT then takes 50,000 RTTs ≈ 1.4 hours to climb back to 10 Gbps Transport Layer
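The 1.4-hour figure follows directly from the slide's numbers; a small sketch:

```python
# Sketch: time for standard TCP's additive increase (+1 packet/RTT) to
# refill the pipe after a single loss halves the window.
def recovery_hours(link_bps, pkt_bytes, rtt_s):
    full_cwnd = link_bps * rtt_s / (pkt_bytes * 8)   # packets to fill the pipe
    rtts_needed = full_cwnd / 2                      # climb back from cwnd/2
    return full_cwnd, rtts_needed * rtt_s / 3600

# Slide's scenario: 10 Gbps link, 1250-byte packets, 100 ms RTT.
full_cwnd, hours = recovery_hours(10e9, 1250, 0.1)   # 100,000 pkts, ~1.4 h
```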

  21. STCP (Scalable TCP) • STCP adaptively increases cwnd, and decreases cwnd by 1/8 • Each RTT: cwnd = cwnd + 0.01·cwnd (standard TCP: cwnd = cwnd + 1) • On packet loss: cwnd = cwnd · (1 − 1/8) (standard TCP: cwnd = cwnd · (1 − 1/2)) Transport Layer

  22. HSTCP (High Speed TCP) • HSTCP adaptively increases cwnd, and adaptively decreases cwnd • Each RTT: cwnd = cwnd + inc(cwnd) (standard TCP: cwnd = cwnd + 1) • On packet loss: cwnd = cwnd · (1 − dec(cwnd)) (standard TCP: cwnd = cwnd · (1 − 1/2)) • The larger the cwnd, the larger the increment and the smaller the decrement Transport Layer
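To see why a multiplicative increase matters at this scale, a toy comparison of doubling times under TCP's +1/RTT versus STCP's +1% per RTT:

```python
# Sketch: RTTs needed to double cwnd from 50,000 packets under TCP's
# additive increase vs. STCP's multiplicative increase (+1% per RTT).
def rtts_to_double(cwnd, grow):
    target, n = 2 * cwnd, 0
    while cwnd < target:
        cwnd, n = grow(cwnd), n + 1
    return n

tcp_rtts = rtts_to_double(50_000, lambda c: c + 1)       # 50,000 RTTs
stcp_rtts = rtts_to_double(50_000, lambda c: c * 1.01)   # ~70 RTTs
```

At 100 ms per RTT that is roughly 1.4 hours for TCP versus about 7 seconds for STCP, matching the previous slides.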

  23. Some Measurements of Throughput • CERN – SARA, using the GÉANT backup link; 1 GByte file transfers (blue: data, red: TCP ACKs) • Standard TCP: average throughput 167 Mbit/s; users see 5 – 50 Mbit/s! • High-Speed TCP: average throughput 345 Mbit/s • Scalable TCP: average throughput 340 Mbit/s Transport Layer

  24. TCP Vegas • Packet losses offer binary feedback to the end user. • Binary feedback induces oscillations. • Need multi-bit feedback to improve performance. • Idea: source watches for some sign that some router's queue is building up and congestion will happen soon; e.g., • RTT is growing • sending rate flattens Transport Layer

  25. Algorithm • Let BaseRTT be the minimum of all measured RTTs (commonly the RTT of the first packet) • if not overflowing the connection, then ExpectedRate = CongestionWindow / BaseRTT • source calculates current sending rate (ActualRate) once per RTT • source compares ActualRate with ExpectedRate: Diff = ExpectedRate – ActualRate • if Diff < α → increase CongestionWindow linearly (+1) • else if Diff > β → decrease CongestionWindow linearly (−1) • else → leave CongestionWindow unchanged Transport Layer

  26. Parameters • α: 1 packet • β: 3 packets • Even faster retransmit: keep fine-grained timestamps for each packet; check for timeout on first duplicate ACK Transport Layer

  27. Example • TCP Vegas trace: actual throughput plotted against expected throughput Transport Layer

  28. TCP FAST • Packet losses give binary feedback to the end user. • Binary feedback induces oscillations. • Need multi-bit feedback to improve performance. • Like TCP Vegas, FAST TCP uses delays to infer congestion. • The window is updated as follows. Transport Layer
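A hedged sketch of the delay-based window update used by FAST-style protocols; α (target backlog in packets) and γ (smoothing gain) are assumed parameters, not values from the slide:

```python
# Hedged sketch of a FAST-style periodic window update: the window is
# scaled by baseRTT/RTT (multi-bit delay feedback) plus a target of
# alpha packets queued in the network; gamma smooths the step.
def fast_update(w, base_rtt, rtt, alpha=200, gamma=0.5):
    return (1 - gamma) * w + gamma * (base_rtt / rtt * w + alpha)

# With a fixed observed RTT the window converges to the point where
# about alpha packets are buffered: w* = alpha / (1 - base_rtt/rtt).
w = 100.0
for _ in range(100):
    w = fast_update(w, base_rtt=0.100, rtt=0.125)   # converges near 1000
```

Because the feedback is a continuous delay signal rather than a loss bit, the update settles at an equilibrium instead of oscillating in a sawtooth.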

  29. SC2002 Network • Topology with OC-48 and OC-192 links (Sylvain Ravot, Caltech) Transport Layer

  30. FAST throughput (averaged over 1 hr) • Average utilization on a 1G path: Linux TCP 19% (txq=100), Linux TCP 16% (txq=10000), FAST 95% • On a 2G path: Linux TCP 27% (txq=100), Linux TCP 48% (txq=10000), FAST 92% Transport Layer

  31. Explicit Control Protocol (XCP) (Congestion Control for High Bandwidth-Delay Product Environments) Transport Layer

  32. Explicit Control Protocol (XCP) • Proposed by Katabi et al., Sigcomm 2002 • Explicit feedback on congestion from the network • Flows receive precise feedback on window increment/decrement • Routers do detailed per-packet calculations Transport Layer

  33. TCP congestion control performs poorly as bandwidth or delay increases • Shown analytically in [Low01] and via simulations: avg. TCP utilization vs. bottleneck bandwidth (Mb/s) with RTT = 80 ms, and vs. round-trip delay (sec) with BW = 155 Mb/s; 50 flows in both directions, buffer = BW × delay • Because TCP lacks fast response: when spare bandwidth is available, TCP increases by only 1 pkt/RTT even if spare bandwidth is huge; when a TCP starts, it increases exponentially → too many drops → flows ramp up by 1 pkt/RTT, taking forever to grab the large bandwidth Transport Layer

  34. Proposed Solution: Decouple Congestion Control from Fairness • Congestion control goal: high utilization, small queues, few drops • Fairness goal: bandwidth allocation policy Transport Layer

  35. Proposed Solution: Decouple Congestion Control from Fairness • Coupled because a single mechanism controls both; example: in TCP, Additive-Increase Multiplicative-Decrease (AIMD) controls both • How does decoupling solve the problem? To control congestion: use MIMD, which shows fast response; to control fairness: use AIMD, which converges to fairness Transport Layer

  36. Characteristics of XCP Solution • Improved Congestion Control (in high bandwidth-delay & conventional environments): • Small queues • Almost no drops • Improved Fairness • Scalable (no per-flow state) • Flexible bandwidth allocation: min-max fairness, proportional fairness, differential bandwidth allocation,… Transport Layer

  37. XCP: An eXplicit Control Protocol • Congestion Controller • Fairness Controller Transport Layer

  38. How does XCP Work? • Each packet carries a congestion header with the flow’s round-trip time, congestion window, and a feedback field, e.g., feedback = +0.1 packet Transport Layer

  39. How does XCP Work? • Routers along the path can reduce the feedback in the congestion header, e.g., from feedback = +0.1 packet to feedback = −0.3 packet Transport Layer

  40. How does XCP Work? Congestion Window = Congestion Window + Feedback XCP extends ECN and CSFQ Routers compute feedback without any per-flow state Transport Layer

  41. How Does an XCP Router Compute the Feedback? • Congestion Controller – goal: matches input traffic to link capacity & drains the queue; looks at aggregate traffic & queue; MIMD; algorithm: aggregate traffic changes by Φ, where Φ = α · davg · Spare − β · Queue (Φ ~ spare bandwidth, Φ ~ − queue size) • Fairness Controller – goal: divides Φ between flows to converge to fairness; looks at a flow’s state in the congestion header; AIMD; algorithm: if Φ > 0, divide Φ equally between flows; if Φ < 0, divide Φ between flows proportionally to their current rates Transport Layer

  42. Getting the devil out of the details … • Congestion Controller: Φ = α · davg · Spare − β · Queue; Theorem: system converges to optimal utilization (i.e., stable) for any link bandwidth, delay, number of sources if 0 < α < π/(4√2) and β = α²√2 (proof based on Nyquist criterion) → no parameter tuning • Fairness Controller: if Φ > 0, divide Φ equally between flows; if Φ < 0, divide Φ between flows proportionally to their current rates • Need to estimate number of flows N: each packet’s congestion header carries RTTpkt (round-trip time) and Cwndpkt (congestion window), and over a counting interval T each flow’s packets contribute T in total, so N can be estimated as (1/T) · Σpkts (RTTpkt / Cwndpkt) → no per-flow state Transport Layer
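A toy sketch of the congestion controller's aggregate feedback; α = 0.4 and β = 0.226 are the constants reported in the XCP paper, and the units here are simplified to packets and seconds for illustration:

```python
# Sketch of XCP's aggregate feedback: proportional to spare bandwidth,
# minus a term that drains any persistent queue.
def aggregate_feedback(input_rate, capacity, queue_pkts, avg_rtt,
                       alpha=0.4, beta=0.226):
    spare = capacity - input_rate                 # packets/sec of spare bandwidth
    return alpha * avg_rtt * spare - beta * queue_pkts  # packets per interval

# Underloaded link, empty queue: positive feedback (flows speed up).
under = aggregate_feedback(input_rate=8_000, capacity=10_000,
                           queue_pkts=0, avg_rtt=0.1)
# Fully loaded link with a standing queue: negative feedback (drain it).
over = aggregate_feedback(input_rate=10_000, capacity=10_000,
                          queue_pkts=500, avg_rtt=0.1)
```

The fairness controller then shuffles this aggregate across flows (equal shares when positive, proportional when negative) using only the per-packet congestion-header fields.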

  43. Implementation • Implementation uses few multiplications & additions per packet – practical! • Liars? Policing agents at edges of the network, or statistical monitoring; easier to detect than in TCP • Gradual deployment: XCP can co-exist with TCP and can be deployed gradually Transport Layer

  44. Performance Transport Layer

  45. Subset of Results • Simulation topology: senders S1, S2, …, Sn share a bottleneck to receivers R1, R2, …, Rn • Similar behavior over other topologies Transport Layer

  46. XCP Remains Efficient as Bandwidth or Delay Increases • Avg. utilization shown as a function of bottleneck bandwidth (Mb/s) and as a function of round-trip delay (sec) Transport Layer

  47. XCP Remains Efficient as Bandwidth or Delay Increases • α and β chosen to make XCP robust to delay • XCP increases proportionally to spare bandwidth • Avg. utilization shown as a function of bottleneck bandwidth (Mb/s) and of round-trip delay (sec) Transport Layer

  48. XCP Shows Faster Response than TCP • When 40 flows start and later stop, XCP shows fast response! Transport Layer

  49. XCP Deals Well with Short Web-Like Flows • Average utilization, average queue, and drops shown as a function of arrivals of short flows/sec Transport Layer

  50. XCP is Fairer than TCP • Avg. throughput per flow ID, with flows having the same RTT and with different RTTs (RTTs ranging from 40 ms to 330 ms) Transport Layer
