
FAST TCP



Presentation Transcript


  1. FAST TCP Cheng Jin David Wei Steven Low netlab.CALTECH.edu GNEW, CERN, March 2004

  2. Acknowledgments • Caltech • Bunn, Choe, Doyle, Jin, Newman, Ravot, Singh, J. Wang, Wei • UCLA • Paganini, Z. Wang • CERN/DataTAG • Martin, Martin-Flatin • Internet2 • Almes, Shalunov • SLAC • Cottrell, Mount • Cisco • Aiken, Doraiswami, Yip • Level(3) • Fernes • LANL • Wu

  3. FAST project • Theory: performance, stability, fairness, TCP/IP, noise, randomness • Implement (NSF ITR, 2001): Linux TCP kernel, other platforms, monitoring, debugging • Experiment (NSF STI, 2002): DummyNet, HEP networks, Abilene, PlanetLab, WAN in Lab, UltraLight testbed • Deployment (NSF RI, 2003): HEP networks, TeraGrid, Abilene, IETF, GGF

  4. Outline • Experiments • Results • Future plan • Status • Open issues • Code release mid 04 • Unified framework • Reno, FAST, HSTCP, STCP, XCP, … • Implementation issues

  5. Aggregate throughput • FAST, standard MTU • Utilization averaged over > 1hr • [Chart: average utilization of 88-95% for 1, 2, 7, 9, and 10 flows, over runs of 1hr to 6hr] • DataTAG: CERN – StarLight – Level3/SLAC (Jin, Wei, Ravot, etc., SC2002)

  6. Dynamic sharing: 3 flows • Dynamic sharing on Dummynet • capacity = 800Mbps • delay = 120ms • 3 flows • iperf throughput • Linux 2.4.x (HSTCP: UCL) • [Panels: FAST, Linux throughput traces]

  7. Dynamic sharing: 3 flows • Steady throughput • [Panels: FAST, Linux, HSTCP, STCP]

  8. Dynamic sharing on Dummynet • capacity = 800Mbps • delay = 120ms • 14 flows • iperf throughput • Linux 2.4.x (HSTCP: UCL) • [Panels: 30min traces of queue, loss, and throughput for FAST, Linux, HSTCP, STCP]

  9. Room for mice! • [Panels: 30min traces of queue, loss, and throughput for FAST, Linux, HSTCP, STCP]

  10. Aggregate throughput • ideal performance • Dummynet: cap = 800Mbps; delay = 50-200ms; #flows = 1-14; 29 expts

  11. Aggregate throughput • small window: 800 pkts; large window: 8000 pkts • Dummynet: cap = 800Mbps; delay = 50-200ms; #flows = 1-14; 29 expts

  12. Fairness • Jain's index • HSTCP ~ Reno • Dummynet: cap = 800Mbps; delay = 50-200ms; #flows = 1-14; 29 expts

  13. Stability • stable in diverse scenarios • Dummynet: cap = 800Mbps; delay = 50-200ms; #flows = 1-14; 29 expts

  14. Outline • Experiments • Results • Future plan • Status • Open issues • Code release • Unified framework • Reno, FAST, HSTCP, STCP, XCP, … • Implementation issues

  15. Benchmarking TCP • Not just static throughput • Dynamic sharing, what the protocol does to the network, … • Tests to zoom in on specific properties • Throughput, delay, loss, fairness, stability, … • Critical for basic design • Test scenarios may not be realistic • Tests with realistic scenarios • Same performance metrics • Critical for refinement for deployment • Just started • Input solicited • What’s realistic for your applications?

  16. Open issues: well understood • baseRTT estimation • route changes, dynamic sharing • does not upset stability • Small network buffer • behaves at least like TCP • adapt α on a slow timescale, but how? • TCP-friendliness • friendly at least at small window • tunable, but how to tune? • Reverse path congestion • should it react? rare for large transfers?

  17. Status: code release • Source release mid 2004 • For any non-profit purposes • Re-implementation of FAST TCP completed • Extensive testing to complete by April 04 • Pre-release trials • CFP for high-performance sites! • Incorporate into Web100 • with Matt Mathis

  18. Status: IPR Caltech will license royalty-free if FAST TCP becomes IETF standard • IPR covers more broadly than TCP • Leave all options open

  19. Outline • Experiments • Results • Future plan • Status • Open issues • Code release mid 04 • Unified framework • Reno, FAST, HSTCP, STCP, XCP, … • Implementation issues

  20. Packet & flow level: Reno TCP • Packet level • ACK: W ← W + 1/W • Loss: W ← W − 0.5W • Flow level • Equilibrium (Mathis formula) • Dynamics
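
  The "Mathis formula" named above is the flow-level equilibrium implied by these packet-level rules; as a sketch (the equation itself is not preserved in the transcript), it reads

      x \;=\; \frac{\mathrm{MSS}}{T}\,\sqrt{\frac{3}{2p}} \;\approx\; \frac{1.225\,\mathrm{MSS}}{T\,\sqrt{p}}

  where x is the equilibrium throughput, T the round-trip time, and p the loss probability.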

  21. Reno TCP • Packet level • Designed and implemented first • Flow level • Understood afterwards • Flow level dynamics determines • Equilibrium: performance, fairness • Stability • Design flow level equilibrium & stability • Implement flow level goals at packet level

  22. Reno TCP • Packet level • Designed and implemented first • Flow level • Understood afterwards • Flow level dynamics determines • Equilibrium: performance, fairness • Stability Packet level design of FAST, HSTCP, STCP, H-TCP, … guided by flow level properties

  23. Packet level • Reno AIMD(1, 0.5) • ACK: W ← W + 1/W • Loss: W ← W − 0.5W • HSTCP AIMD(a(w), b(w)) • ACK: W ← W + a(w)/W • Loss: W ← W − b(w)W • STCP MIMD(a, b) • ACK: W ← W + 0.01 • Loss: W ← W − 0.125W • FAST
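
  A minimal Python sketch of the three loss-based rules above (function names and the calling convention are illustrative; the HSTCP increase/decrease tables a(w), b(w) are passed in rather than reproduced here):

      # w is the congestion window in packets.

      def reno_on_ack(w):                 # AIMD(1, 0.5)
          return w + 1.0 / w

      def reno_on_loss(w):
          return w - 0.5 * w

      def hstcp_on_ack(w, a):             # AIMD(a(w), b(w)); a, b are the HSTCP table functions
          return w + a(w) / w

      def hstcp_on_loss(w, b):
          return w - b(w) * w

      def stcp_on_ack(w):                 # MIMD(a, b) with a = 0.01, b = 0.125
          return w + 0.01

      def stcp_on_loss(w):
          return w - 0.125 * w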

  24. Flow level: Reno, HSTCP, STCP, FAST • Similar flow-level equilibrium (throughput in MSS/sec) • α = 1.225 (Reno), 0.120 (HSTCP), 0.075 (STCP)
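
  The equilibrium the slide refers to is, in the form used in the Infocom 2004 paper cited on slide 30 (reconstructed here; the exponents come from that paper, not from the transcript),

      x_i = \frac{\alpha_i}{T_i\, q_i^{1/2}} \;(\text{Reno}), \qquad
      x_i = \frac{\alpha_i}{T_i\, q_i^{0.835}} \;(\text{HSTCP}), \qquad
      x_i = \frac{\alpha_i}{T_i\, q_i} \;(\text{STCP}), \qquad
      x_i = \frac{\alpha_i}{q_i} \;(\text{FAST})

  with x_i in MSS/sec, T_i the round-trip time, q_i the congestion measure (loss probability for Reno/HSTCP/STCP, queueing delay for FAST), and the α values listed above.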

  25. Flow level: Reno, HSTCP, STCP, FAST • Common flow-level dynamics: window adjustment = control gain × flow-level goal • Different gain κ and utility Ui • They determine equilibrium and stability • Different congestion measure pi • Loss probability (Reno, HSTCP, STCP) • Queueing delay (Vegas, FAST)
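
  One way to write the common dynamics the slide annotates (a reconstruction following the companion paper's presentation, not text preserved in the transcript):

      \dot{w}_i(t) \;=\; \kappa_i(t)\,\Bigl(1 \;-\; \frac{p_i(t)}{U_i'(x_i(t))}\Bigr)

  where κ_i(t) plays the role of the control gain and the bracketed term the flow-level goal: the adjustment vanishes exactly when the congestion measure matches the marginal utility, p_i = U_i'(x_i).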

  26. FAST TCP • Reno, HSTCP, and FAST have common flow-level dynamics: window adjustment = control gain × flow-level goal • Equation-based • Need to estimate “price” pi(t) • pi(t) = queueing delay • Easier to estimate at large window • κ(t) and U’i(t) explicitly designed for • Performance • Fairness • Stability
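
  A sketch of how the queueing-delay price can be estimated from RTT samples (class name and the smoothing choice are illustrative assumptions, not the released FAST code):

      # Estimate p(t) = queueing delay = (smoothed RTT) - (baseRTT),
      # where baseRTT is the minimum RTT observed so far.

      class PriceEstimator:
          def __init__(self, smoothing=1.0 / 8):
              self.base_rtt = float("inf")   # propagation-delay estimate
              self.avg_rtt = None            # smoothed RTT
              self.smoothing = smoothing

          def on_rtt_sample(self, rtt):
              self.base_rtt = min(self.base_rtt, rtt)
              if self.avg_rtt is None:
                  self.avg_rtt = rtt
              else:
                  self.avg_rtt += self.smoothing * (rtt - self.avg_rtt)
              return max(self.avg_rtt - self.base_rtt, 0.0)   # queueing delay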

  27. Window control algorithm • Full utilization • regardless of bandwidth-delay product • Globally stable • exponential convergence • Intra-protocol fairness • weighted proportional fairness • parameter α
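
  The algorithm these properties refer to is, as published in the Infocom 2004 paper cited on slide 30 (the slide's own equation is not preserved in this transcript),

      w \;\leftarrow\; \min\Bigl\{\,2w,\;\;(1-\gamma)\,w \;+\; \gamma\Bigl(\frac{\mathrm{baseRTT}}{\mathrm{RTT}}\,w + \alpha\Bigr)\Bigr\}

  applied once per RTT, with γ ∈ (0, 1]. The parameter α is the number of packets each flow keeps buffered in the network, which is what yields the weighted proportionally fair equilibrium mentioned above.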

  28. Goal: • Less delay • Less jitter • [Figure: TCP oscillates; FAST is stabilized and tunes to the knee]

  29. Window adjustment FAST TCP
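
  The window-adjustment equation on this slide did not survive the transcript; the following Python sketch illustrates the per-RTT update described in the companion paper (the α, γ values and the fixed-RTT loop are illustrative assumptions):

      # Per-RTT FAST TCP window update (sketch):
      #   w <- min(2w, (1 - gamma) * w + gamma * ((baseRTT / RTT) * w + alpha))

      def fast_window_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
          target = (base_rtt / rtt) * w + alpha
          return min(2.0 * w, (1.0 - gamma) * w + gamma * target)

      # Example with the RTT held fixed at 125 ms over a 100 ms path:
      w = 100.0
      for _ in range(50):
          w = fast_window_update(w, base_rtt=0.100, rtt=0.125)
      # w approaches alpha * RTT / (RTT - baseRTT) = 1000 packets,
      # i.e. the flow keeps about alpha packets queued in the network.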

  30. netlab.caltech.edu/FAST • FAST TCP: motivation, architecture, algorithms, performance (IEEE Infocom, March 2004) • FAST TCP: from theory to experiments (submitted for publication, April 2003)

  31. Panel 1: Lessons in Grid Networking

  32. Metrics • Performance • Throughput, loss, delay, jitter, stability, responsiveness • Availability, reliability • Simplicity • Application • Management • Evolvability, robustness

  33. Constraints • Scientific community • Small & fixed set of major sites • Few & large transfers • Relatively simple traffic characteristics and quality requirements • General public • Large, dynamic sets of users • Diverse set of traffic characteristics & quality requirements • Evolving/unpredictable applications

  34. Mechanisms and timescales • Fiber infrastructure: months - years • Lightpath configuration: minutes - days • Resource provisioning: per service, sec - hrs • Traffic engineering, admission control: per flow, sec - mins • Congestion/flow control: per RTT, ms - sec • Timescale: desired, instead of feasible • Balance: cost/benefit, simplicity, evolvability
