
MB-NG Review: High Performance Network Demonstration, 21 April 2004






Presentation Transcript


  1. MB-NG Review: High Performance Network Demonstration, 21 April 2004. Richard Hughes-Jones, The University of Manchester, UK

  2. It works? So what's the Problem with TCP?
  • TCP has 2 phases: Slow-start and Congestion Avoidance
  • AIMD and High Bandwidth – Long Distance networks: poor performance of TCP in high-bandwidth wide-area networks is due in part to the TCP congestion control algorithm (cwnd = congestion window)
  • For each ACK in an RTT without loss: cwnd → cwnd + a/cwnd (Additive Increase, a = 1)
  • For each window experiencing loss: cwnd → cwnd − b·cwnd (Multiplicative Decrease, b = ½)
  • Time to recover from 1 packet loss at ~100 ms rtt is very long (illustrated in the sketch below)
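
As a rough illustration of why a single loss is so costly, the sketch below steps the AIMD rules above on an assumed 10 Gbit/s, 100 ms rtt path with 1500-byte segments (illustrative numbers, not figures from the slides):

```python
# Rough illustration of standard TCP (Reno/AIMD) recovery after a single loss.
# Assumed, illustrative values: 10 Gbit/s path, 100 ms rtt, 1500-byte segments.
link_bps  = 10e9    # hypothetical link speed
rtt_s     = 0.100   # round-trip time
seg_bytes = 1500

bdp_segments = link_bps * rtt_s / (seg_bytes * 8)  # cwnd needed to fill the pipe
cwnd = bdp_segments / 2                            # multiplicative decrease, b = 1/2

rtts = 0
while cwnd < bdp_segments:                         # additive increase: +1 segment per rtt
    cwnd += 1
    rtts += 1

print(f"pipe-filling cwnd : {bdp_segments:.0f} segments")
print(f"rtts to recover   : {rtts}")
print(f"recovery time     : {rtts * rtt_s / 60:.0f} minutes")
```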

  3. Investigation of new TCP Stacks
  • HighSpeed TCP: a and b vary with the current cwnd, using a table
  • a increases more rapidly at larger cwnd – the sender returns to the 'optimal' cwnd for the network path sooner
  • b decreases less aggressively, so cwnd (and hence throughput) does not drop as far on loss
  • Scalable TCP: a and b are fixed adjustments for the increase and decrease of cwnd (update rules sketched below)
  • a = 1/100 – the increase is greater than TCP Reno's
  • b = 1/8 – the decrease on loss is smaller than TCP Reno's
  • Scalable over any link speed
  • FAST TCP: uses round-trip time as well as packet loss to indicate congestion, with rapid convergence to a fair equilibrium throughput
  • HSTCP-LP: High Speed (Low Priority) – backs off if the rtt increases
  • BiC-TCP: additive increase at large cwnd; binary search at small cwnd
  • H-TCP: standard behaviour after congestion, then a switch to high-performance mode
  • ●●●
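
A minimal sketch of the window-update rules, using the Scalable TCP constants quoted above (a = 1/100, b = 1/8) alongside standard Reno for contrast. The HighSpeed TCP table is omitted, and real stacks also include slow-start and mode-switching logic not shown here:

```python
# Window-update rules as quoted on the slide, in segments.
# Scalable TCP: a = 1/100 per ACK, b = 1/8 on loss.  Reno: +1/cwnd per ACK, halve on loss.
def on_ack(cwnd, stack):
    if stack == "reno":
        return cwnd + 1.0 / cwnd      # ~ +1 segment per rtt in total
    if stack == "scalable":
        return cwnd + 0.01            # fixed per-ACK increment, so growth scales with cwnd
    raise ValueError(stack)

def on_loss(cwnd, stack):
    if stack == "reno":
        return cwnd * 0.5             # b = 1/2
    if stack == "scalable":
        return cwnd * (1 - 0.125)     # b = 1/8: much smaller back-off
    raise ValueError(stack)

cwnd = 20000.0                        # a large window on a long fat pipe
for stack in ("reno", "scalable"):
    print(f"{stack:9s} cwnd after one loss: {on_loss(cwnd, stack):8.0f} segments")
```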

  4. Comparison of TCP Stacks
  • TCP Response Function: throughput vs loss rate – the steeper the curve, the faster the recovery (see the Mathis-formula sketch below)
  • Loss emulated by dropping packets in the kernel
  • Test paths: MB-NG rtt 6 ms; DataTAG rtt 120 ms
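
For reference, the Reno response function plotted here is commonly approximated by the Mathis formula, throughput ≈ 1.22 · MSS / (rtt · √p); the formula is background knowledge rather than something stated on the slide. A quick evaluation for the two test-bed rtts:

```python
# Mathis approximation of the standard TCP (Reno) response function:
# throughput ~ 1.22 * MSS / (rtt * sqrt(p)).  Background formula, not from the slide.
from math import sqrt

def reno_throughput_bps(mss_bytes, rtt_s, loss_rate):
    return 1.22 * mss_bytes * 8 / (rtt_s * sqrt(loss_rate))

for label, rtt_ms in (("MB-NG", 6), ("DataTAG", 120)):
    for p in (1e-4, 1e-6):
        mbps = reno_throughput_bps(1460, rtt_ms / 1000, p) / 1e6
        print(f"{label:8s} rtt {rtt_ms:3d} ms, loss rate {p:g}: {mbps:8.1f} Mbit/s")
```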

  5. Multi-Gigabit flows at the SC2003 Bandwidth Challenge
  • Three server systems with 10 Gigabit Ethernet NICs
  • Used the DataTAG altAIMD stack, 9000 byte MTU
  • Sent memory-to-memory iperf TCP streams (window/rtt ceilings sketched below) from the SLAC/FNAL booth in Phoenix to:
  • Chicago Starlight – rtt 65 ms, window 60 MB, Phoenix CPU 2.2 GHz, 3.1 Gbit/s HSTCP, I = 1.6%
  • Amsterdam SARA – rtt 175 ms, window 200 MB, Phoenix CPU 2.2 GHz, 4.35 Gbit/s HSTCP, I = 6.9%
  • The new TCP stacks are very stable
  • Both flows used Abilene to Chicago
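
A quick sanity check (not on the original slide) that the quoted TCP windows are large enough for the measured rates: a single stream cannot exceed window/rtt.

```python
# Bandwidth-delay ceiling implied by the quoted TCP windows and rtts.
def ceiling_gbps(window_bytes, rtt_s):
    return window_bytes * 8 / rtt_s / 1e9

print(f"Chicago Starlight: {ceiling_gbps(60e6, 0.065):.1f} Gbit/s ceiling (3.1 Gbit/s achieved)")
print(f"Amsterdam SARA   : {ceiling_gbps(200e6, 0.175):.1f} Gbit/s ceiling (4.35 Gbit/s achieved)")
```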

  6. Transfer Applications – Throughput [1]
  • 2 GByte file transferred, RAID0 disks, Manchester – UCL
  • GridFTP: throughput alternates between 600–800 Mbit/s and zero
  • Apache web server + curl-based client: steady 720 Mbit/s

  7. Transfer Applications – Throughput [2]
  • 2 GByte file transferred, RAID5 (4 disks), Manchester – RAL
  • bbcp: mean ~710 Mbit/s
  • GridFTP: many periods of zero throughput, mean ~620 Mbit/s

  8. Topology of the MB-NG Network
  [Network diagram: Manchester, UCL and RAL domains (hosts man01–03, lon01–03, ral01–02) connected over the UKERNA Development Network via Cisco 7609 boundary/edge routers. Key: Gigabit Ethernet, 2.5 Gbit POS Access, MPLS, Admin. Domains]

  9. High Throughput Demo
  [Diagram: man03 (Manchester, dual Xeon 2.2 GHz) to lon01 (London, dual Xeon 2.2 GHz), each attached by 1 GEth to a Cisco 7609, across Cisco GSRs and the 2.5 Gbit SDH MB-NG core. Send data with TCP, drop packets, monitor TCP with Web100]

  10. Standard to HS-TCP • No loss, but output queue filled by sender

  11. HS-TCP to Scalable • No loss, but output queue filled by sender

  12. Standard, HS-TCP, Scalable • Drop 1 in 25,000

  13. Standard Reno TCP • Drop 1 in 10⁶

  14. Focus on Helping Real Users: Throughput CERN – SARA
  • Using the GÉANT backup link
  • 1 GByte disk-to-disk transfers
  • Blue is the data; red is the TCP ACKs
  • Standard TCP: average throughput 167 Mbit/s
  • Users see 5–50 Mbit/s!
  • High-Speed TCP: average throughput 345 Mbit/s
  • Scalable TCP: average throughput 340 Mbit/s
  • Technology link to EU projects: DataGrid, DataTAG & GÉANT

  15. BaBar Case Study: Host, PCI & RAID Controller Performance
  • RAID0 (striped) & RAID5 (striped with redundancy)
  • Controllers tested: 3Ware 7506 Parallel 66 MHz, 3Ware 7505 Parallel 33 MHz, 3Ware 8506 Serial ATA 66 MHz, ICP Serial ATA 33/66 MHz
  • Tested on a dual 2.2 GHz Xeon Supermicro P4DP8-G2 motherboard
  • Disks: Maxtor 160 GB 7200 rpm, 8 MB cache
  • Read-ahead kernel tuning: /proc/sys/vm/max-readahead (see the sketch below)
  [Plots: Disk–Memory Read Speeds, Memory–Disk Write Speeds]
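
A hedged sketch of the read-ahead tuning mentioned above: write a new limit to /proc/sys/vm/max-readahead (a Linux 2.4 kernel parameter) and time a large sequential read. The chosen value and the test-file path are placeholders, not the settings used in the study:

```python
# Raise the Linux 2.4 read-ahead limit named on the slide and time a sequential read.
# The value 512 and the test-file path are illustrative assumptions.
import time

with open("/proc/sys/vm/max-readahead", "w") as f:    # needs root, 2.4.x kernels
    f.write("512\n")

t0, nbytes = time.time(), 0
with open("/raid0/testfile", "rb") as f:               # hypothetical test file
    while chunk := f.read(1 << 20):                    # read in 1 MByte blocks
        nbytes += len(chunk)

elapsed = time.time() - t0
print(f"disk-to-memory read: {nbytes * 8 / elapsed / 1e6:.0f} Mbit/s")
```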

  16. Topology of the MB-NG Network
  [Network diagram as on slide 8, with hardware RAID attached to the Manchester and RAL hosts. Key: Gigabit Ethernet, 2.5 Gbit POS Access, MPLS, Admin. Domains]

  17. BaBar Data: Throughput on MB-NG kit
  • RAID5 (4 disks), RAL – Manchester
  • Data set includes small files (~KBytes)
  • Plots: bbftp 1 stream with compression; bbftp 6 streams (with bb diag); bbftp 1 stream without compression; bbftp 1 stream, files ≥ 1 MByte
  • 10 × 2 GByte files – each peak is a 20 GByte transfer

  18. Helping Real Users: Radio Astronomy VLBI – PoC with NRNs & GÉANT
  • 1024 Mbit/s, 24 × 7, NOW

  19. VLBI Project: Throughput, Jitter & 1-way Delay
  • 1472 byte packets, Manchester → JIVE
  • Jitter FWHM 22 µs (back-to-back: 3 µs) – extraction sketched below
  • 1472 byte packets, Manchester → Dwingeloo (JIVE)
  • 1-way delay – note the packet loss (points with zero 1-way delay)
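
A sketch of how the jitter figure (FWHM of the inter-arrival-time histogram) can be extracted from a packet capture; the file name, 1 µs bin width and use of NumPy are assumptions for illustration, not the tools used in the project:

```python
# Estimate packet jitter as the FWHM of the inter-arrival-time histogram.
import numpy as np

arrivals_us = np.loadtxt("arrivals.txt")      # hypothetical: one receive timestamp (us) per packet
gaps = np.diff(arrivals_us)

counts, edges = np.histogram(gaps, bins=np.arange(gaps.min(), gaps.max() + 1.0, 1.0))
half_max = counts.max() / 2.0
above = np.nonzero(counts >= half_max)[0]     # bins at or above half maximum
fwhm_us = edges[above[-1] + 1] - edges[above[0]]
print(f"inter-arrival FWHM ~ {fwhm_us:.1f} us")
```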

  20. Case Study: ATLAS LHC
  • Tests streaming of built events from the Level-3 trigger to a remote compute farm in real time
  • 500 Mbit/s to 1 Gbit/s, CERN – Manchester
  • Investigation of the use of new high-performance TCPs
  • Testing concepts in the ATLAS offline computing model – more mesh than star:
  • CERN Tier 0 to Tier 1s
  • Tier 2s to all Tier 1s
  • Tests planned over production networks:
  • Lancaster – Manchester over NNW and SuperJANET4
  • Lancaster – Manchester to CERN

  21. [figure]

  22. Scalable TCP, DataTAG • Drop 1 in 10⁶

  23. HS-TCP, DataTAG • Drop 1 in 10⁶

  24. Standard Reno TCP, DataTAG • Drop 1 in 10⁶ • Transition from high-speed to standard TCP at 520 s

  25. Summary
  • Multi-gigabit transfers are possible and stable
  • Demonstrated that the new TCP stacks help performance
  • DataTAG has made major contributions to the understanding of high-speed networking
  • There has been significant technology transfer between DataTAG and other projects
  • Now reaching out to real users
  • But there is still much research to do:
  • Achieving performance – protocol vs implementation issues
  • Stability / sharing issues
  • Optical transports & hybrid networks

  26. 10 Gigabit: Tuning PCI-X
  • 16080 byte packets every 200 µs
  • Intel PRO/10GbE LR adapter
  • PCI-X bus occupancy vs mmrbc (maximum memory read byte count) – see the sketch below
  • Measured times, and times based on PCI-X timings from the logic analyser
  • Expected throughput ~7 Gbit/s; 5.7 Gbit/s reached at mmrbc 4096 bytes
  [Traces for mmrbc 512, 1024, 2048 and 4096 bytes, showing CSR Access, PCI-X Sequence, Data Transfer, Interrupt & CSR Update]
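
One way to see why a larger mmrbc helps: each PCI-X read sequence pays a roughly fixed overhead (arbitration, CSR access, split-completion turnaround) that is amortised over the mmrbc payload. The bus rate and overhead figure in the sketch below are assumptions for illustration, not numbers taken from the logic-analyser traces:

```python
# Illustrative PCI-X efficiency model: fixed per-burst overhead amortised over
# the mmrbc payload.  The 133 MHz x 64-bit bus and ~40 overhead cycles per read
# sequence are assumptions, not values from the measurements.
BUS_HZ, BUS_BYTES = 133e6, 8
RAW_GBPS = BUS_HZ * BUS_BYTES * 8 / 1e9        # ~8.5 Gbit/s raw bus rate
OVERHEAD_CYCLES = 40                           # assumed per PCI-X read sequence

for mmrbc in (512, 1024, 2048, 4096):
    data_cycles = mmrbc / BUS_BYTES
    efficiency = data_cycles / (data_cycles + OVERHEAD_CYCLES)
    print(f"mmrbc {mmrbc:4d} bytes: ~{efficiency * RAW_GBPS:.1f} Gbit/s usable")
```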

  27. DataTAG Testbed

  28. BaBar Case Study: Disk Performance
  • BaBar disk server: Tyan Tiger S2466N motherboard, one 64-bit 66 MHz PCI bus, Athlon MP2000+ CPU, AMD-760 MPX chipset
  • 3Ware 7500-8 RAID5 with 8 × 200 GB Maxtor IDE 7200 rpm disks
  • Note the VM parameter readahead max
  • Disk to memory (read): max throughput 1.2 Gbit/s (150 MBytes/s)
  • Memory to disk (write): max throughput 400 Mbit/s (50 MBytes/s) [not as fast as RAID0]

  29. RAID Controller Performance
  [Plots: write speed and read speed for RAID0 and RAID5]

  30. BaBar: Serial ATA RAID Controllers, RAID5
  • ICP, 66 MHz PCI
  • 3Ware, 66 MHz PCI

  31. VLBI Project: Packet Loss Distribution
  • Measure the time between lost packets in the time series of packets sent
  • Lost 1410 packets in 0.6 s
  • Is it a Poisson process? Assume the process is stationary: λ(t) = λ
  • Use the probability density function P(t) = λ·e^(−λt)
  • Mean rate λ = 2360 /s [mean gap 426 µs]
  • Log plot: slope −0.0028, expected −0.0024
  • An additional process could be involved (a fitting sketch is given below)
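
A minimal sketch of the test described above: treat the gaps between lost packets as samples of an exponential distribution P(t) = λ·e^(−λt), estimate λ from the mean gap, and compare the fitted log-histogram slope with −λ. The input file and use of NumPy are assumptions; the recorded loss times would be substituted:

```python
# Do the gaps between lost packets look exponential (stationary Poisson losses)?
import numpy as np

loss_times_s = np.loadtxt("loss_times.txt")   # hypothetical: timestamp of each lost packet (s)
gaps_us = np.diff(loss_times_s) * 1e6

lam_per_us = 1.0 / gaps_us.mean()             # rate estimate for an exponential distribution
counts, edges = np.histogram(gaps_us, bins=50)
centres = 0.5 * (edges[:-1] + edges[1:])
ok = counts > 0
slope = np.polyfit(centres[ok], np.log(counts[ok]), 1)[0]

print(f"lambda = {lam_per_us * 1e6:.0f} /s")
print(f"expected log-histogram slope {-lam_per_us:.4f} per us, fitted {slope:.4f}")
```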

  32. BaBar Case Study: RAID BW & PCI Activity – the performance of the end host / disks
  • 3Ware 7500-8 RAID5, parallel EIDE
  • The 3Ware controller forces the PCI bus to 33 MHz
  • BaBar Tyan to MB-NG SuperMicro, network memory-to-memory: 619 Mbit/s
  • Disk-to-disk throughput with bbcp: 40–45 MBytes/s (320–360 Mbit/s)
  • The PCI bus is effectively full!
  • User throughput ~250 Mbit/s
  [Plots: read from RAID5 disks, write to RAID5 disks]
