
ICTCP: Incast Congestion Control for TCP in Data Center Networks∗

Haitao Wu⋆, Zhenqian Feng⋆†, Chuanxiong Guo⋆, Yongguang Zhang⋆ {hwu, v-zhfe, chguo, ygz}@microsoft.com, ⋆Microsoft Research Asia, China


Presentation Transcript


  1. ICTCP: Incast Congestion Control for TCP in Data Center Networks∗ Haitao Wu⋆, Zhenqian Feng⋆†, Chuanxiong Guo⋆, Yongguang Zhang⋆ {hwu, v-zhfe, chguo, ygz}@microsoft.com, ⋆Microsoft Research Asia, China †School of Computer, National University of Defense Technology, China Presenter: B99106017 謝宗昊 (Library and Information Science, Year 3)

  2. Outline • Background • Design Rationale • Algorithm • Implementation • Experimental results • Discussion and conclusion

  3. Outline • Background • Design Rationale • Algorithm • Implementation • Experimental results • Discussions, related work and conclusion

  4. Background • In distributed file systems, files are stored at multiple servers. • TCP does not work well for the many-to-one traffic pattern on high-bandwidth, low-latency networks.

  5. Background • Three conditions in data centers • Data center networks are well structured and layered to achieve high bandwidth and low latency; the buffer size of ToR (top-of-rack) switches is usually small • The barrier-synchronized many-to-one traffic pattern is common in data center networks • The transmission data volume for such a traffic pattern is usually small

  6. Background • TCP incast collapse • Caused by multiple connections overflowing the Ethernet switch buffer in a short period of time • Intense packet losses, and thus TCP retransmissions and timeouts • Previous solutions • Reducing the waiting time before retransmitting lost packets • Controlling switch buffer occupation to avoid overflow, using ECN and a modified TCP at both the sender and receiver side

  7. Background • This paper focuses on: • Avoiding packet losses before incast congestion happens • Modifying the TCP receiver only • The receiver side knows the throughput of all TCP connections and the available bandwidth

  8. Background • Controlling the receive window well is challenging • The receive window should be small enough to avoid incast congestion • It should also be large enough for good performance in non-incast cases • A good setting for one scenario may not fit others

  9. Background • The technical novelties in this paper: • Use the available bandwidth as a quota to coordinate the receive-window increase of all connections • Per-flow congestion control is performed independently, in slotted time of per-connection RTT • The receive-window adjustment is based on the ratio of the difference between measured and expected throughput over the expected throughput

  10. Background • TCP incast congestion • Happens when multiple sending servers under the same ToR switch send to one receiver server simultaneously • TCP throughput is severely degraded under incast congestion • “Goodput” is the throughput obtained and observed at the application layer

  11. Background • TCP goodput, receive window and RTT • A small static TCP receive buffer may prevent TCP incast congestion collapse → but it cannot adapt dynamically • Requires either losses or ECN marks to trigger a window decrease

  12. Background • TCP goodput, receive window and RTT • TCP Vegas assumes that an increase of RTT is only caused by packet queuing at the bottleneck buffer • Unfortunately, the increase of TCP RTT in high-bandwidth, low-latency networks does not follow this assumption

  13. Outline • Background • Design Rationale • Algorithm • Implementation • Experimental results • Discussion and conclusion

  14. Design Rationale • Goal • Improve TCP performance for incast congestion • No new TCP options or modifications to the TCP header

  15. Design Rationale • Three observations form the basis of ICTCP • The available bandwidth at the receiver side is the signal for the receiver to do congestion control • The frequency of receive-window-based congestion control should be set according to each flow's feedback loop independently • A receive-window-based scheme should adjust the window according to both the link congestion status and the application requirement • Set a proper receive window for all TCP connections sharing the same last hop, since parallel TCP connections may belong to the same job

  16. Outline • Background • Design Rationale • Algorithm • Implementation • Experimental results • Discussion and conclusion

  17. Algorithm • Available bandwidth • C: the link capacity of the interface on the receiver server • BW_T: the bandwidth of the total incoming traffic observed on that interface • α (α ∈ [0, 1]): a parameter to absorb potential oversubscription during window adjustment • BW_A: the quota of all incoming connections for increasing their receive windows for higher throughput

  18. Algorithm • Available bandwidth • BW_A = max(0, α·C − BW_T)
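
A minimal sketch of this quota computation in Python, assuming throughput is sampled once per time slot; all names and the α value here are illustrative, not taken from the paper's driver code:

    # Sketch: receiver-side available-bandwidth quota (illustrative names).
    ALPHA = 0.9  # assumed value for the oversubscription parameter, 0 <= ALPHA <= 1

    def available_bandwidth(link_capacity_bps, total_incoming_bps):
        # BW_A = max(0, ALPHA * C - BW_T): the quota that window
        # increases of all incoming connections may consume.
        return max(0.0, ALPHA * link_capacity_bps - total_incoming_bps)

    # Example: a 1 Gbps link carrying 800 Mbps of observed incoming
    # traffic leaves a 100 Mbps quota (0.9 * 1000 - 800).
    print(available_bandwidth(1e9, 8e8))  # 1e8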

  19. Algorithm • Window adjustment on a single connection • b_i^m: measured incoming throughput of connection i • b_i^s: sample of the current throughput on connection i • b_i^m is updated by an exponential filter over the samples: b_i^m = max(b_i^s, β·b_i^m + (1−β)·b_i^s)

  20. Algorithm • Window adjustment on a single connection • b_i^e = max(b_i^m, rwnd_i / RTT_i): expected throughput of connection i • rwnd_i: receive window of connection i • The max procedure ensures b_i^m ≤ b_i^e

  21. Algorithm • Window adjustment on a single connection • d_i^b = (b_i^e − b_i^m) / b_i^e: the ratio of throughput difference of connection i • b_i^m ≤ b_i^e, thus 0 ≤ d_i^b ≤ 1
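
These per-connection quantities are straightforward to track; below is a minimal Python sketch under the definitions above (β, the MSS value, and all names are assumptions for illustration):

    # Sketch: per-connection throughput bookkeeping (illustrative).
    MSS = 1460     # bytes, assumed segment size
    BETA = 0.875   # assumed smoothing factor for the exponential filter

    class FlowStats:
        def __init__(self, rtt_s):
            self.rtt = rtt_s          # latest RTT estimate (seconds)
            self.rwnd = 2 * MSS       # receive window (bytes)
            self.b_measured = 0.0     # b_i^m, smoothed measured throughput (B/s)

        def update_measured(self, b_sample):
            # b_i^m = max(b_i^s, beta*b_i^m + (1-beta)*b_i^s); the max
            # keeps the estimate from lagging behind a sudden rise.
            self.b_measured = max(
                b_sample, BETA * self.b_measured + (1 - BETA) * b_sample)

        def expected(self):
            # b_i^e = max(b_i^m, rwnd_i / RTT_i), so b_i^m <= b_i^e holds.
            return max(self.b_measured, self.rwnd / self.rtt)

        def diff_ratio(self):
            # d_i^b = (b_i^e - b_i^m) / b_i^e, always in [0, 1].
            b_e = self.expected()
            return (b_e - self.b_measured) / b_e

    # Example: a flow with 100 us RTT whose measured throughput lags.
    f = FlowStats(rtt_s=1e-4)
    f.update_measured(5e6)
    print(round(f.diff_ratio(), 3))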

  22. Algorithm • Window adjustment on a single connection • Two thresholds γ1, γ2 (γ2 > γ1) differentiate three cases (see the sketch below): • d_i^b ≤ γ1 or d_i^b ≤ MSS_i/rwnd_i → increase the receive window, if in the global second sub-slot and there is enough quota of available bandwidth • d_i^b > γ2 → decrease the receive window by one MSS, if this condition holds for three continuous RTTs • Otherwise, keep the current receive window • A newly established or long-idle connection is initiated in slow start • A connection goes into congestion avoidance when the second or third case is met, or when the first case is met but there is not enough quota
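
A hedged sketch of one congestion-avoidance adjustment step implementing the three cases; the threshold values, the 2·MSS floor, and the bookkeeping style are assumptions for illustration:

    # Sketch: one receive-window adjustment step for connection i.
    MSS = 1460                      # bytes, assumed
    GAMMA1, GAMMA2 = 0.1, 0.5       # example thresholds, GAMMA2 > GAMMA1

    def adjust_rwnd(rwnd, d_b, in_second_subslot, quota_ok, over_gamma2_rtts):
        """Returns (new_rwnd, new_over_gamma2_rtts)."""
        if d_b <= GAMMA1 or d_b <= MSS / rwnd:
            # Case 1: measured throughput is close to expected ->
            # increase, but only in the global second sub-slot and if
            # there is enough available-bandwidth quota.
            if in_second_subslot and quota_ok:
                rwnd += MSS
            return rwnd, 0
        if d_b > GAMMA2:
            # Case 2: expected throughput is far above measured ->
            # decrease by one MSS once the condition has held for
            # three continuous RTTs.
            over_gamma2_rtts += 1
            if over_gamma2_rtts >= 3:
                return max(2 * MSS, rwnd - MSS), 0  # assumed 2*MSS floor
            return rwnd, over_gamma2_rtts
        # Case 3: otherwise keep the current receive window.
        return rwnd, 0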

  23. Algorithm • Fairness controller for multiple connections • Fairness is only considered among low-latency flows • For window decrease, cut the receive window by one MSS for connections whose receive window is larger than the average (a minimal sketch follows) • Window increase fairness is achieved automatically by the per-connection algorithm above
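
A minimal sketch of the decrease side of this fairness step, assuming the per-flow receive windows are held in a list (the 2·MSS floor is an assumption):

    # Sketch: fairness decrease among low-latency flows (illustrative).
    MSS = 1460

    def fairness_decrease(rwnds):
        # Cut one MSS from every connection whose receive window is
        # above the current average; leave the rest untouched.
        avg = sum(rwnds) / len(rwnds)
        return [max(2 * MSS, w - MSS) if w > avg else w for w in rwnds]

    print(fairness_decrease([3 * 1460, 6 * 1460, 9 * 1460]))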

  24. Outline • Background • Design Rationale • Algorithm • Implementation • Experimental results • Discussion and conclusion

  25. Implementation • ICTCP is developed as an NDIS driver on Windows OS • It naturally supports the virtual machine case • The incoming throughput on a very short time scale can easily be obtained • It does not touch the TCP/IP implementation in the Windows kernel

  26. Implementation • Incoming packets are redirected to the header parser module • The packet header is parsed and the information in the flow table is updated • The algorithm module is responsible for receive-window calculation • When a TCP ACK packet is sent out, the header modifier changes the receive window field in the TCP header if needed (sketched below)
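
A schematic of this packet path in Python; this only illustrates the module flow, not actual NDIS driver code, and every name here is invented for the sketch:

    # Schematic of the ICTCP driver's packet path (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class TcpAck:
        flow_key: tuple   # (src_ip, src_port, dst_ip, dst_port)
        rwnd_field: int   # advertised receive window in the TCP header

    flow_table = {}       # flow_key -> per-flow state kept by the header parser

    def on_outgoing_ack(ack, computed_rwnd):
        # Header modifier: the algorithm module's window is written into
        # the ACK only when it is smaller than what the stack advertised,
        # so the driver can shrink but never enlarge the real buffer.
        if computed_rwnd < ack.rwnd_field:
            ack.rwnd_field = computed_rwnd
        return ack

    print(on_outgoing_ack(TcpAck(("10.0.0.1", 80, "10.0.0.2", 5000), 65535), 8 * 1460))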

  27. Implementation • Support for virtual machines • The total capacity of the virtual NICs is typically configured higher than that of the physical NIC, as most virtual machines won't be busy at the same time • The observed virtual link capacity and available bandwidth therefore do not represent the real values • There are two solutions • Change the setting so that the total capacity of the virtual NICs equals that of the physical NIC • Deploy an ICTCP driver on the virtual machine host server

  28. Implementation • Obtaining fine-grained RTT at the receiver • The reverse RTT is defined as the RTT after an exponential filter at the TCP receiver side • The reverse RTT can be obtained from data traffic when both directions carry data • The data traffic in the reverse direction may not be enough to keep obtaining a live reverse RTT → use the TCP timestamp option • In the implementation, the timestamp counter is modified to 100 ns granularity
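
A minimal sketch of the exponential filter over reverse-RTT samples; the smoothing weight mirrors TCP's SRTT style and is an assumed value:

    # Sketch: smoothed reverse RTT at the receiver (illustrative).
    class ReverseRtt:
        def __init__(self, weight=0.875):   # assumed smoothing weight
            self.weight = weight
            self.srtt = None                # smoothed reverse RTT (100 ns ticks)

        def on_sample(self, sample_ticks):
            # The first sample initializes the estimate; later samples
            # are low-pass filtered: srtt = w * srtt + (1 - w) * sample.
            if self.srtt is None:
                self.srtt = float(sample_ticks)
            else:
                self.srtt = self.weight * self.srtt + (1 - self.weight) * sample_ticks
            return self.srtt

    rtt = ReverseRtt()
    for s in (1000, 1200, 900):
        print(rtt.on_sample(s))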

  29. Outline • Background • Design Rationale • Algorithm • Implementation • Experimental results • Discussion and conclusion

  30.–37. Experimental results (figure-only slides; the charts are not reproduced in this transcript)

  38. Outline • Background • Design Rationale • Algorithm • Implementation • Experimental results • Discussion and conclusion

  39. Discussion and Conclusion • Three issues are discussed • Scalability: what if the number of connections becomes extremely large → switch the receive window between several values • How to handle congestion when the sender and receiver are not under the same switch → use ECN to obtain congestion information • Whether ICTCP works for future high-bandwidth, low-latency networks → the switch buffer should be enlarged correspondingly, and the MSS should be enlarged

  40. Discussion and Conclusion • Conclusion • Focus on receiver-based congestion control to prevent packet loss • Adjust the TCP receive window based on the ratio of the difference between achieved and expected per-connection throughput • Experimental results show that ICTCP is effective in avoiding incast congestion

  41. Thanks for listening
