
10Gbit between GridKa and openlab (and their obstacles)








  1. 10Gbit between GridKa and openlab (and their obstacles) Forschungszentrum Karlsruhe GmbH Institute for Scientific Computing P.O. Box 3640 D-76021 Karlsruhe, Germany http://www.gridka.de Bruno Hoeft

  2. Outline: LAN of GridKa – structure, planned installation through 2008; WAN – history (1 G / 2003), current 10 G testbed; challenges of crossing multiple NRENs (National Research and Education Networks); quality and quantity evaluation of the GridKa–openlab network connection; file transfer.

  3. [Site photo] Main building, tape storage, buildings IWR 441/442.

  4. Projects at GridKa – computing jobs with "real" data: LHC experiments (CERN) such as Atlas, and non-LHC experiments (SLAC, USA; FermiLab, USA); 1.5 million jobs and 4.2 million hours of computation in 2004.

  5. The LHC multi-Tier Computing Model (LHC Computing Grid Project – LCG): Tier 0 centre at CERN; Tier 1 centres in Germany (FZK), France (IN2P3), Italy (CNAF), UK (RAL) and the USA (Fermi, BNL); Tier 2 (university and lab computing centres); Tier 3 (institute computers); Tier 4 (desktops). The virtual organisations (ATLAS, CMS, LHCb, …) map onto working groups across labs and universities. GridKa is the Grid Computing Centre Karlsruhe.

  6. PetaByte – 10^15 bytes. Can you imagine what a PetaByte is? DSL runs at 1024 kbit/s, so you can transfer 128 kB per second: approx. 40 MB in 5 minutes, 1 GB in 2 hours, 1 TB in 90 days, 1 PB in 248 years. 1 PB is the content of 1.4 million CDs – stacked, a tower 1.4 km high!
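The slide's transfer times follow from simple division; a minimal back-of-envelope check, assuming the slide's DSL rate of 1024 kbit/s (= 128 kB/s):

```python
# Verify the PetaByte slide's transfer times at 1024 kbit/s DSL.
RATE = 128e3  # bytes per second (1024 kbit/s / 8)

def transfer_seconds(nbytes):
    """Seconds needed to move nbytes at 128 kB/s."""
    return nbytes / RATE

print(transfer_seconds(1e9) / 3600)              # 1 GB -> ~2.2 hours
print(transfer_seconds(1e12) / 86400)            # 1 TB -> ~90 days
print(transfer_seconds(1e15) / (86400 * 365.25)) # 1 PB -> ~248 years
```

The 248-year figure on the slide checks out exactly once leap years are included.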

  7. Gradual extension of GridKa resources (* internet connection for the service challenge). As of April 2005: biggest Linux cluster within the German science community; largest online storage at a single installation in Germany; strongest internet connection in Germany; available on the Grid alongside over 100 installations in Europe.

  8. [Diagram] GridKa network installation: a DFN 10 Gbit/s uplink (service challenge only) and a 1 Gbit/s uplink (general GridKa traffic) into the border router; service-challenge nodes attached at 1 Gbit/s each with direct internet access to SAN storage; login servers behind a PIX firewall; NAS file servers, SAN file servers and compute nodes on the private network behind switches. Link legend: Ethernet 100 Mbit, 320 Mbit, 1 Gbit, 10 Gbit; FibreChannel 2 Gbit.

  9. [Diagram] Network installation including the management network: the same layout as the previous slide, plus a master controller running Ganglia, Nagios and Cacti, reaching remote-management (RM) modules on the file servers, switches and SAN.

  10. GridKa – WAN connectivity planning: 34 Mbps (2001), 155 Mbps (2002), 2 Gbps (2003), 10 Gbps (2004–2005), 20 Gbps? and a 10 Gbps lightpath to CERN towards 2007/2008 – start discussions with DFN/Dante! 10 G internet connection since Oct 2004; production from mid to end of 2005; X-Win partner of DFN; lightpath between CERN and GridKa via X-Win and Géant2.

  11. [Diagram] WAN connection and Gigabit test with CERN: GridFTP servers at Karlsruhe and Geneva, linked via DFN (2.4 Gbps) through Frankfurt onto Géant (10 Gbps); GridFTP tested over 1 Gbps (2× 1 Gbps paths), reaching 98% of 1 Gbit.

  12. [Diagram] WAN connection, 10 Gigabit test with CERN. Path (10 Gbps end to end): GridFTP server at CERN (Geneva) – swiCE3-G4-3.switch.ch / swiCE2-P6-1.switch.ch – it.ch1.ch.geant.net – de.it1.it.geant.net – dfn.de1.de.geant.net (Frankfurt) – cr-karlsruhe1-po12-0.g-win.dfn.de – ar-karlsruhe1-ge5-2-700.g-win.dfn.de – r-internet.fzk.de – GridFTP server at GridKa (Karlsruhe).

  13. [Map] The Géant network topology.

  14. [Diagram] WAN connection, 10 Gigabit test with CERN via an MPLS tunnel through France: GridFTP server at CERN (Geneva) – swiCE3-G4-3.switch.ch – fr.ch1.ch.geant.net – de.fr1.fr.geant.net – dfn.de1.de.geant.net (Frankfurt) – cr-karlsruhe1-po12-0.g-win.dfn.de – ar-karlsruhe1-ge5-2-700.g-win.dfn.de – r-internet.fzk.de – GridFTP server at GridKa (Karlsruhe).

  15. WAN connection, 10 Gigabit test with CERN (Karlsruhe – Frankfurt – Geneva, 10 Gbps via DFN and Géant). Test programme: bandwidth evaluation (TCP/UDP); MPLS (MultiProtocol Label Switching) routing via France; LBE (Least Best Effort) traffic class; GridFTP server pool, HD to HD and storage to storage; SRM.

  16. Hardware: various dual Xeon 2.8 and 3.0 GHz IBM x-series machines (Intel and Broadcom NICs); recently added 3.0 GHz EM64T (800 MHz FSB); Cisco 6509 with 4× 10 Gb ports and many 1 Gb ports; storage: DataDirect 2A8500 with 16 TB; Linux RH ES 3.0 (U2 and U3), GPFS; 10 GE link to Géant via DFN (least best effort), used for the 10 Gbit/s service challenge only, via the router to the sc nodes and SAN. TCP/IP stack tuning: 4 MB buffer, 2 MB window size.
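The 2 MB window size is roughly what the bandwidth-delay product demands for a single 1 Gbit/s stream on this path; a minimal sketch, where the ~15 ms Karlsruhe–Geneva RTT is an illustrative assumption, not a figure from the slide:

```python
# Bandwidth-delay product: bytes that must be in flight to keep a
# pipe of the given rate full across the given round-trip time.
def bdp_bytes(rate_bit_per_s, rtt_s):
    return rate_bit_per_s * rtt_s / 8

# 1 Gbit/s stream, assumed ~15 ms RTT -> ~1.9 MB window needed,
# consistent with the slide's 2 MB window setting.
print(bdp_bytes(1e9, 0.015) / 1e6)
```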

  17. Quality evaluation (UDP streams), hosts 10gkt111 and 10gkt113 (GridKa LAN) and oplapro73 (openlab, across the WAN); TCP Reno stack; ooo = out of order.
  LAN, symmetric:
  [10gkt111] iperf -s -u / [10gkt113] iperf -c 192.108.46.111 -u -b 1000M → 957 Mbit/s, jitter 0.021 ms, ooo 0
  [10gkt113] iperf -s -u / [10gkt111] iperf -c 192.108.46.113 -u -b 1000M → 957 Mbit/s, jitter 0.028 ms, ooo 0
  WAN, asymmetric:
  [oplapro73] iperf -s -u / [10gkt113] iperf -c 192.16.160.13 -u -b 1000M → 953 Mbit/s, jitter 0.018 ms, ooo 215 (0.008%)
  [10gkt113] iperf -s -u / [oplapro73] iperf -c 192.108.46.113 -u -b 1000M → 884 Mbit/s, jitter 0.015 ms, ooo 4239 (0.018%)
  The WAN path is scalable but asymmetric: 953 Mbit/s in one direction versus 884 Mbit/s in the other.

  18. Bandwidth evaluation (TCP streams), Reno: five single streams with iperf between 10gkt101–10gkt105 (GridKa) and oplapro71–oplapro75 (openlab); a single stream reaches 348 Mbit/s over the WAN.

  19. Bandwidth evaluation, Reno, over time: 5 nodes at 112 MByte/s each using 24 parallel iperf threads, 10gkt10[1-5] (GridKa) to oplapro7[1-5] (openlab).
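The aggregate of the parallel-stream result above can be checked directly: 5 node pairs at 112 MByte/s each comes to roughly 4.5 Gbit/s across the WAN.

```python
# Aggregate throughput of the parallel-stream test:
# 5 node pairs, each sustaining 112 MByte/s over 24 Reno threads.
nodes = 5
per_node_mbyte = 112                       # MByte/s per node pair
total_mbyte = nodes * per_node_mbyte       # 560 MByte/s
total_gbit = total_mbyte * 8 / 1000        # convert to Gbit/s
print(total_mbyte, total_gbit)             # 560 4.48
```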

  20. [Map] Weather map of Géant (link-utilisation view).

  21. Evaluation of maximum throughput, 9 nodes on each site: 8 streams at 845 Mbit/s plus 1 stream at 540 Mbit/s; pushing one stream to higher speed results in packet loss. [Plot: throughput in Mbit/s over time, 18:00–20:00, y-axis 700–7000 Mbit/s]
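Summing the per-stream rates gives the nominal aggregate of this test:

```python
# Nominal aggregate of the maximum-throughput test:
# 8 streams at 845 Mbit/s plus one throttled stream at 540 Mbit/s.
streams = [845] * 8 + [540]   # Mbit/s per stream
total = sum(streams)
print(total)                  # 7300
```

That nominal 7.3 Gbit/s sits just above the ~7 Gbit/s the plot shows as sustained, consistent with the slide's note that driving any one stream faster causes packet loss.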

  22. GridFTP SC1 throughput (to hd/SAN versus to /dev/null): SC1 sustained 500 MByte/s using 19 nodes – 15 worker nodes at 20 MByte/s each (IDE/ATA HD), 1 file server at 50 MByte/s (SCSI HD), and 3 file servers at 50 MByte/s each (SAN). [Plot: throughput in Mbit/s, 4th–8th Feb 2005, y-axis 700–4900 Mbit/s]
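The SC1 node budget adds up to the quoted sustained rate; a quick check of the slide's per-node figures:

```python
# SC1 node budget: per-node disk rates summed to the sustained total.
worker_nodes = 15 * 20   # 15 worker nodes, 20 MByte/s each (IDE/ATA HD)
scsi_fs      = 1 * 50    # 1 file server, 50 MByte/s (SCSI HD)
san_fs       = 3 * 50    # 3 file servers, 50 MByte/s each (SAN)
total = worker_nodes + scsi_fs + san_fs
print(total)             # 500  (MByte/s sustained)
```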

  23. [Plot] SC2: approximately 1/5 of the aggregated CERN load.

  24. SC2: five nodes at GridKa, GridFTP to GPFS.

  25. Conclusion: GridKa is the Tier-1 centre for 4 production and 4 LHC experiments. The multi-NREN 10 Gbit link delivers up to 7.0 Gbit/s usable. The service challenges test the whole chain from tape over WAN to tape in different steps; SC2, with approx. 1/5 of the aggregated CERN load, has just finished. A lightpath via X-Win (DFN) / Géant2 to CERN will be realised in 2006.

  26. Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft – thanks for your attention. Questions?
