
FAST TCP


Presentation Transcript


  1. FAST TCP • Cheng Jin, David Wei, Steven Low • netlab.CALTECH.edu

  2. Acknowledgments • Caltech • Bunn, Choe, Doyle, Hegde, Jayaraman, Newman, Ravot, Singh, X. Su, J. Wang, Xia • UCLA • Paganini, Z. Wang • CERN • Martin • SLAC • Cottrell • Internet2 • Almes, Shalunov • MIT Haystack Observatory • Lapsley, Whitney • TeraGrid • Linda Winkler • Cisco • Aiken, Doraiswami, McGugan, Yip • Level(3) • Fernes • LANL • Wu

  3. Outline • Motivation & approach • FAST architecture • Window control algorithm • Experimental evaluation • (theoretical foundation skipped)

  4. Congestion control • Example congestion measures pl(t): • Loss probability (Reno) • Queueing delay (Vegas)

  5. TCP/AQM • Congestion control is a distributed asynchronous algorithm to share bandwidth • It has two components • TCP (e.g. Reno, Vegas): adapts sending rate xi(t) (window) to congestion • AQM (e.g. DropTail, RED, REM/PI, AVQ): adjusts & feeds back the congestion measure pl(t) • They form a distributed feedback control system • Equilibrium & stability depend on both TCP and AQM • And on delay, capacity, routing, #connections

  6. Difficulties at large window • Equilibrium problem • Packet level: additive increase (AI) too slow, multiplicative decrease (MD) too drastic • Flow level: required loss probability too small • Dynamic problem • Packet level: must oscillate on binary signal • Flow level: unstable at large window

  7. Packet & flow level: Reno TCP • Packet level: ACK: W ← W + 1/W; Loss: W ← W − 0.5W • Flow level equilibrium: Mathis formula, throughput ≈ 1.225/(T·√p) pkts/sec • Flow level dynamics: how the window evolves between these per-packet events

  8. Reno TCP • Packet level • Designed and implemented first • Flow level • Understood afterwards • Flow level dynamics determines • Equilibrium: performance, fairness • Stability • Design flow level equilibrium & stability • Implement flow level goals at packet level

  9. Reno TCP • Packet level • Designed and implemented first • Flow level • Understood afterwards • Flow level dynamics determines • Equilibrium: performance, fairness • Stability Packet level design of FAST, HSTCP, STCP guided by flow level properties

  10. Packet level • Reno AIMD(1, 0.5): ACK: W ← W + 1/W; Loss: W ← W − 0.5W • HSTCP AIMD(a(w), b(w)): ACK: W ← W + a(w)/W; Loss: W ← W − b(w)·W • STCP MIMD(a, b): ACK: W ← W + 0.01; Loss: W ← W − 0.125·W • FAST: delay-based update (sketched below)
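
A rough Python sketch of the per-ACK and per-loss rules listed above (illustrative only, not the authors' implementation; the HSTCP a(w), b(w) tables are represented by placeholder callables, and the FAST rule is sketched separately after slide 19):

    # Per-ACK / per-loss window rules from slide 10 (one update per event).

    def reno_update(w, loss):            # AIMD(1, 0.5)
        return w - 0.5 * w if loss else w + 1.0 / w

    def hstcp_update(w, loss, a, b):     # AIMD(a(w), b(w)); a, b are
        return w - b(w) * w if loss else w + a(w) / w   # window-dependent tables

    def stcp_update(w, loss):            # MIMD(0.01, 0.125)
        return w - 0.125 * w if loss else w + 0.01

    # Example: Reno grows by ~1 packet per RTT (1/w per ACK, ~w ACKs per RTT),
    # then halves the window on a loss.
    w = 10.0
    for _ in range(10):
        w = reno_update(w, loss=False)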

  11. Flow level: Reno, HSTCP, STCP, FAST • Similar flow level equilibrium: throughput in pkts/sec of Mathis-formula form (see below), with α = 1.225 (Reno), 0.120 (HSTCP), 0.075 (STCP)
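
For reference, the equilibrium the slide alludes to can be written as below. The Reno values (α = 1.225 = √(3/2), exponent 0.5) are the Mathis formula; the HSTCP and STCP loss exponents are the standard response-function exponents and are an assumption here, not shown on the slide.

    % Equilibrium throughput; T_i is the RTT, q_i the loss probability.
    \[
      x_i \;=\; \frac{\alpha_i}{T_i\,q_i^{\beta_i}} \quad \text{pkts/sec},
      \qquad
      \begin{aligned}
        &\text{Reno:}  & \alpha &= 1.225, & \beta &= 0.5 \ \text{(Mathis formula)}\\
        &\text{HSTCP:} & \alpha &= 0.120, & \beta &\approx 0.835\\
        &\text{STCP:}  & \alpha &= 0.075, & \beta &= 1
      \end{aligned}
    \]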

  12. Flow level: Reno, HSTCP, STCP, FAST • Common flow level dynamics: window adjustment = control gain × flow level goal (sketched after this slide) • Different gain κ and utility Ui • They determine equilibrium and stability • Different congestion measure pi • Loss probability (Reno, HSTCP, STCP) • Queueing delay (Vegas, FAST)
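
A hedged sketch of the "window adjustment = control gain × flow level goal" structure, written so that the equilibrium is Ui'(xi) = pi; the exact equation on the original slide is not recoverable from this transcript.

    % Common flow-level dynamic (sketch): kappa_i is the control gain,
    % U_i the utility, x_i the rate, p_i the congestion measure fed back.
    \[
      \dot{w}_i(t) \;=\; \kappa_i(t)\,
        \Bigl(1 - \frac{p_i(t)}{U_i'\bigl(x_i(t)\bigr)}\Bigr),
      \qquad
      \text{equilibrium: } U_i'(x_i) = p_i .
    \]

Different choices of the gain κi and utility Ui then determine the equilibrium (performance, fairness) and the stability, as the slide states; the protocols differ further in whether pi is a loss probability or a queueing delay.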

  13. Implementation strategy • Common flow level dynamics: window adjustment = control gain × flow level goal • One strategy: small adjustment when close to the target, large when far away • Needs to estimate how far the current state is from the target • Scalable • The alternative: window adjustment independent of pi • Depends only on the current window • Difficult to scale

  14. Outline • Motivation & approach • FAST architecture • Window control algorithm • Experimental evaluation • (theoretical foundation skipped)

  15. Architecture [diagram: components operating below the RTT timescale, at the RTT timescale, and at the loss-recovery timescale]

  16. Architecture Each component • designed independently • upgraded asynchronously

  17. Architecture • Each component • designed independently • upgraded asynchronously [diagram highlight: Window Control]

  18. FAST TCP basic idea • Uses delay as congestion measure • Delay provides finer congestion info • Delay scales correctly with network capacity • Can operate with low queueing delay [figure: loss-based TCP vs. FAST operating points on a queueing-delay/loss vs. window curve, capacity C]

  19. Window control algorithm (sketched below) • Full utilization • regardless of bandwidth-delay product • Globally stable • exponential convergence • Fairness • weighted proportional fairness • parameter α
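
A minimal Python sketch of the periodic (per-RTT) window update described in the FAST TCP paper (netlab.caltech.edu/FAST), assuming the form w ← min(2w, (1−γ)·w + γ·((baseRTT/RTT)·w + α)). The variable names (gamma, alpha, base_rtt, avg_rtt) are illustrative, and the pacing/burstiness handling of the real implementation is omitted.

    # Sketch of FAST's per-RTT window update. Queueing delay (avg_rtt - base_rtt)
    # is the congestion measure; alpha is the fairness parameter from slide 19;
    # gamma in (0, 1] is a gain. The window never more than doubles per update.

    def fast_update(w, base_rtt, avg_rtt, alpha, gamma=0.5):
        target = (base_rtt / avg_rtt) * w + alpha
        new_w = (1.0 - gamma) * w + gamma * target
        return min(2.0 * w, new_w)

    # Example: 10 ms propagation delay, 12 ms measured average RTT (held
    # fixed here for illustration; in reality it changes as the queue builds).
    w = 100.0
    for _ in range(50):
        w = fast_update(w, base_rtt=0.010, avg_rtt=0.012, alpha=200)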

  20. Outline • Motivation & approach • FAST architecture • Window control algorithm • Experimental evaluation • Abilene-HENP network • Haystack Observatory • DummyNet

  21. Abilene Test [network map: OC48 and OC192 links] • Periodic losses every 10 mins (Yang Xia, Harvey Newman, Caltech)

  22. Periodic losses every 10 mins (Yang Xia, Harvey Newman, Caltech)

  23. FAST backs off to make room for Reno • Periodic losses every 10 mins (Yang Xia, Harvey Newman, Caltech)

  24. Haystack Experiments (Lapsley, MIT Haystack)

  25. Haystack: 1 flow (Atlanta → Japan) • Iperf used to generate traffic • Sender is a Xeon 2.6 GHz • Window was constant • Burstiness in rate due to host processing and ACK spacing (Lapsley, MIT Haystack)

  26. Haystack: 2 flows from 1 machine (Atlanta → Japan) (Lapsley, MIT Haystack)

  27. Linux Loss Recovery • 1. All outstanding packets marked as lost; SACKs reduce the number of lost packets • 2. Lost packets retransmitted slowly as cwnd is capped at 1 (bug), leading to timeout

  28. DummyNet Experiments • Experiments using an emulated network • 800 Mbps emulated bottleneck in DummyNet • Sender PC: dual Xeon 2.6 GHz, 2 GB, Intel GbE, Linux 2.4.22 • Receiver PC: dual Xeon 2.6 GHz, 2 GB, Intel GbE, Linux 2.4.22 • DummyNet PC: dual Xeon 3.06 GHz, 2 GB, FreeBSD 5.1

  29. Dynamic sharing: 3 flows [throughput plots: FAST, Linux TCP] • Dynamic sharing on Dummynet • capacity = 800 Mbps • delay = 120 ms • 3 flows • iperf throughput • Linux 2.4.x (HSTCP: UCL)

  30. Dynamic sharing: 3 flows [throughput plots: FAST, Linux TCP, HSTCP, BIC; annotation: steady throughput]

  31. Dynamic sharing on Dummynet [30-min queue, loss, and throughput plots: FAST, Linux TCP, HSTCP, STCP] • capacity = 800 Mbps • delay = 120 ms • 14 flows • iperf throughput • Linux 2.4.x (HSTCP: UCL)

  32. [30-min queue, loss, and throughput plots: FAST, Linux TCP, HSTCP, BIC; annotation: room for mice!]

  33. Average Queue vs Buffer Size • Dummynet • capacity = 800 Mbps • delay = 200 ms • 1 flow • buffer size: 50, …, 8000 pkts (S. Hegde, B. Wydrowski, et al., Caltech)
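
One way to read this plot, assuming FAST reaches the equilibrium in which each flow keeps roughly α packets queued at the bottleneck (a hedged reading based on the FAST design, not stated on the slide): with rate xi and queueing delay qi,

    % Hedged: FAST's equilibrium keeps a fixed per-flow backlog, so the
    % average queue is ~ sum_i alpha_i packets, independent of buffer size
    % (provided the buffer can hold that many packets).
    \[
      x_i \, q_i \;=\; \alpha_i
      \quad\Longrightarrow\quad
      \text{average backlog} \;\approx\; \sum_i \alpha_i \ \text{packets}.
    \]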

  34. Is a large queue necessary for high throughput?

  35. netlab.caltech.edu/FAST • FAST TCP: motivation, architecture, algorithms, performance. IEEE Infocom, March 2004 • β-release: April 2004 • Source freely available for any non-profit use

  36. Aggregate throughput [plot; annotation: ideal performance] • Dummynet: capacity = 800 Mbps; delay = 50-200 ms; #flows = 1-14; 29 experiments

  37. Aggregate throughput [plot; annotations: small window (800 pkts), large window (8000 pkts)] • Dummynet: capacity = 800 Mbps; delay = 50-200 ms; #flows = 1-14; 29 experiments

  38. Fairness: Jain's index [plot; annotation: HSTCP ~ Reno] • Dummynet: capacity = 800 Mbps; delay = 50-200 ms; #flows = 1-14; 29 experiments
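
Jain's fairness index used in this plot is the standard definition; a small Python helper for reference (names and example numbers are illustrative, not data from the slide):

    # Jain's fairness index over per-flow throughputs x_1..x_n:
    # J = (sum x_i)^2 / (n * sum x_i^2); J = 1 means perfectly equal shares.
    def jains_index(throughputs):
        n = len(throughputs)
        total = sum(throughputs)
        return (total * total) / (n * sum(x * x for x in throughputs))

    # Example: three flows sharing 800 Mbps unevenly.
    print(jains_index([400, 250, 150]))  # about 0.87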

  39. Stability [plot; annotation: stable in diverse scenarios] • Dummynet: capacity = 800 Mbps; delay = 50-200 ms; #flows = 1-14; 29 experiments

  40. netlab.caltech.edu/FAST • FAST TCP: motivation, architecture, algorithms, performance. IEEE Infocom, March 2004 • β-release: April 2004 • Source freely available for any non-profit use

  41. BACKUP Slides

  42. IP Rights • Caltech owns IP rights • applicable more broadly than TCP • leave all options open • IP freely available if FAST TCP becomes IETF standard • Code available on FAST website for any non-commercial use

  43. NSF WAN in Lab • Caltech: John Doyle, Raj Jayaraman, George Lee, Steven Low (PI), Harvey Newman, Demetri Psaltis, Xun Su, Yang Xia • Cisco: Bob Aiken, Vijay Doraiswami, Chris McGugan, Steven Yip • netlab.caltech.edu

  44. Key Personnel • Raj Jayaraman, CS • Xun Su, Physics • Yang Xia, Physics • George Lee, CS • 2 grad students • 3 summer students • Cisco engineers • Steven Low, CS/EE • Harvey Newman, Physics • John Doyle, EE/CDS • Demetri Psaltis, EE Cisco • Bob Aiken • Vijay Doraiswami • Chris McGugan • Steven Yip

  45. Spectrum of tools [chart: log(cost) vs log(abstraction), spanning live networks, WAN in Lab, emulation, simulation, math] • Live networks: Abilene, NLR, DataTAG, CENIC, WAIL, etc. • Emulation: DummyNet, EmuLab, ModelNet, WAIL, PlanetLab • Simulation: NS, SSFNet, QualNet, JavaSim • Math: Mathis formula, optimization, control theory, nonlinear model, stochastic model • …we use them all

  46. Spectrum of tools [chart: live networks, WAN in Lab, emulation, simulation, math] • Critical in development, e.g. Web100

  47. Goal: state-of-the-art hybrid WAN • High speed, large distance • 2.5G → 10G • 50 – 200 ms • Wireless devices connected by optical core • Controlled & repeatable experiments • Reconfigurable & evolvable • Built-in monitoring capability

  48. WAN in Lab • 5-year plan • 6 Cisco ONS 15454 • 4 routers • 10s of servers • Wireless devices • 800 km fiber • ~100 ms RTT • V. Doraiswami (Cisco), R. Jayaraman (Caltech)

  49. WAN in Lab • Year-1 plan • 3 Cisco ONS 15454 • 2 routers • 10s of servers • Wireless devices • V. Doraiswami (Cisco), R. Jayaraman (Caltech)

  50. Hybrid Network • Scenarios: • Ad hoc network • Cellular network • Sensor network • How does the optical core support wireless edges? (X. Su, Caltech)
