
CCS Network Roadmap


Presentation Transcript


1. CCS Network Roadmap. Steven M. Carter, Network Task Lead, Center for Computational Sciences.

2. CCS network roadmap
Summary: Hybrid Ethernet/InfiniBand (IB) network to provide both high-speed wide-area connectivity and ultra-high-speed local-area data movement.
FY 2006–FY 2007:
• Scaled Ethernet to meet wide-area needs and current-day local-area data movement
• Developed wire-speed, low-latency perimeter security to fully utilize 10 G production and research WAN connections
• Began IB LAN deployment
FY 2007–FY 2008:
• Continue building out the IB LAN infrastructure to satisfy the file system needs of the Baker system
• Test Lustre on IB/WAN for possible deployment (see the configuration sketch after this list)
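As a rough, illustrative sketch of the Lustre-on-IB/WAN item above (not taken from the slides), a Lustre client on a hybrid fabric like this would typically declare both an InfiniBand and an Ethernet LNET network and mount the file system over the IB side. The host name mgs-node and file system name fsname below are placeholders:

    # /etc/modprobe.conf on the client (illustrative): expose both fabrics to LNET
    options lnet networks="o2ib0(ib0),tcp0(eth0)"

    # Mount the file system over the InfiniBand network (illustrative)
    mount -t lustre mgs-node@o2ib0:/fsname /mnt/fsname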

3. NCCS network roadmap summary
[Diagram: an Ethernet core, on the order of 10 GB/s, scaled to match wide-area connectivity and the archive, and an InfiniBand core, on the order of 100 GB/s, scaled to match the central file system and data transfer. Attached resources include Lustre, Jaguar, Baker, the gateway, the visualization (Viz) resource, and the High-Performance Storage System (HPSS).]
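As a rough cross-check of those two orders of magnitude (my arithmetic, not from the slides), the figures correspond to a handful of 10 Gigabit Ethernet links versus a few dozen 4x DDR InfiniBand links, assuming roughly 1.25 GB/s of payload per 10 GbE link and about 2 GB/s per DDR link after 8b/10b encoding:

    # Back-of-the-envelope check of the O(10 GB/s) and O(100 GB/s) figures.
    # Assumed per-link payload rates (not from the slides):
    #   10 Gigabit Ethernet: 10 Gbit/s                 -> ~1.25 GB/s per link
    #   4x DDR InfiniBand:   20 Gbit/s signaling,
    #                        ~16 Gbit/s after 8b/10b   -> ~2.0 GB/s per link
    eth_link_GBps = 10 / 8            # ~1.25 GB/s per 10 GbE link
    ib_ddr_link_GBps = 20 * 0.8 / 8   # ~2.0 GB/s per 4x DDR link

    print(8 * eth_link_GBps)      # ~10 GB/s from eight 10 GbE links
    print(50 * ib_ddr_link_GBps)  # ~100 GB/s from fifty 4x DDR links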

4. CCS WAN overview
[Diagram of external connectivity: Ciena CoreStream and CoreDirector optical equipment, Juniper T320 and T640 routers, a Force10 E600, and the CCS Cisco 6509; OC-192 circuits to Internet2, ESnet, TeraGrid, and NSF CHEETAH; 2 x OC-192 to the DOE UltraScience Network (USN); internal 10 G and 1 G links to CCS and the NSTG.]

5. CCS network 2007
[Diagram of the 2007 hybrid fabric: InfiniBand and Ethernet cores interconnecting Jaguar, Spider10, Spider60, the Viz resource, HPSS, the development (Devel) cluster, and end-to-end (E2E) resources, with SDR and DDR InfiniBand port counts annotated on each aggregated link.]

6. CCS IB network 2008
[Diagram of the planned 2008 InfiniBand fabric: Jaguar, Baker, Spider10, Spider240, the Viz resource, HPSS, the development (Devel) cluster, and E2E resources, connected by aggregated links including 48-96 SDR (16-32 SDR per link) and 300 DDR (50 DDR per link).]

7. FY 2007 milestones
• Nov 2006 - Completed IB WAN testing
• Jan 2007 - Completed 10 G WAN upgrade
• Jan 2007 - IPoIB firewall installation
• Mar 2007 - Force10 P10 upgrade
• Apr 2007 - Completed VL/QoS testing
• Jun 2007 - IPoIB/SDP decision point

8. Lustre end-to-end IB over WAN testing
Placed 2 x Obsidian Longbow devices between a Voltaire 9024 and a Voltaire 9288 switch, carrying 4x InfiniBand SDR over OC-192 SONET through Ciena CD-CI equipment at ORNL and Sunnyvale (SNV). Provisioned loopback circuits of various lengths on the DOE UltraScience Network and ran tests.
RDMA test results:
• Local (Longbow to Longbow): 7.5 Gbps
• ORNL <-> ORNL (0.2 miles): 7.5 Gbps
• ORNL <-> Chicago (1,400 miles): 7.46 Gbps
• ORNL <-> Seattle (6,600 miles): 7.23 Gbps
• ORNL <-> Sunnyvale (8,600 miles): 7.2 Gbps
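One way to read those numbers (my estimate, not from the slides) is through the bandwidth-delay product: sustaining roughly 7 Gbps over thousands of miles means keeping a very large amount of data in flight, which is what the Longbows' buffering has to cover. A minimal sketch, assuming light travels about 124 miles per millisecond in fiber and ignoring switching delay:

    # Rough bandwidth-delay-product estimate for the longest test above.
    def in_flight_megabytes(one_way_miles, gbit_per_s):
        rtt_ms = 2 * one_way_miles / 124.0            # round-trip time, fiber only
        bits_in_flight = gbit_per_s * 1e9 * (rtt_ms / 1e3)
        return bits_in_flight / 8 / 1e6

    # ORNL <-> Sunnyvale: ~139 ms round trip, so roughly 125 MB must be
    # in flight at any instant to sustain 7.2 Gbps end to end.
    print(in_flight_megabytes(8600, 7.2))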

9. [Diagram: a server on Spider (Linux cluster) connected to Rizzo (XT3) through a Voltaire 9024 InfiniBand switch.]
Utilizing a server on Spider (a commodity x86_64 Linux cluster), we were able to show the first successful InfiniBand implementation on the XT3. The basic RDMA test, as provided by the OpenFabrics Enterprise Distribution, allows us to see the maximum bandwidth we could achieve (about 900 MB/s unidirectionally) from the XT3's 133 MHz PCI-X bus.
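For context (my arithmetic, not from the slides, and assuming a standard 64-bit PCI-X slot), ~900 MB/s is already close to the raw bandwidth of a 133 MHz PCI-X bus:

    # Theoretical peak of a 64-bit, 133 MHz PCI-X bus versus the measured rate.
    pci_x_peak_MBps = 64 * 133e6 / 8 / 1e6   # ~1064 MB/s raw bus bandwidth
    measured_MBps = 900
    print(measured_MBps / pci_x_peak_MBps)   # ~0.85, i.e. about 85% of the bus peak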

10. Contact
Steven Carter
Tech Integration, Center for Computational Sciences
(865) 576-2672
scarter@ornl.gov
