
ESnet Joint Techs, Feb. 2005


Presentation Transcript


  1. ESnet Joint Techs, Feb. 2005. William E. Johnston, ESnet Dept. Head and Senior Scientist; R. P. Singh, Federal Project Manager; Michael S. Collins, Stan Kluz, Joseph Burrescia, and James V. Gagliardi, ESnet Leads; Gizella Kapus, Resource Manager; and the ESnet Team. Lawrence Berkeley National Laboratory

  2. ESnet’s Mission
  • Support the large-scale, collaborative science of DOE’s Office of Science
  • Provide high-reliability networking to support the operational traffic of the DOE Labs
  • Provide network services to other DOE facilities
  • Provide leading-edge network and Grid services to support collaboration
  • ESnet is a component of the Office of Science infrastructure critical to the success of its research programs (funded through the Office of Advanced Scientific Computing Research / MICS; managed and operated by ESnet staff at LBNL)

  3. ESnet Physical Network – mid 2005: High-Speed Interconnection of DOE Facilities and Major Science Collaborators
  [Network map: 42 end-user sites – Office of Science sponsored (22), NNSA sponsored (12), laboratory sponsored (6), joint sponsored (3), and other sponsored (NSF LIGO, NOAA) – connected by the ESnet IP core (packet over SONET optical ring and hubs) and the ESnet Science Data Network (SDN) core, with IP core hubs, SDN core hubs, peering points, and high-speed peering points. Link types: international (high speed), 10 Gb/s SDN core, 10 Gb/s and 2.5 Gb/s IP core, MAN rings (> 10 Gb/s), OC12 ATM (622 Mb/s), OC12 / GigEthernet, OC3 (155 Mb/s), and 45 Mb/s and below. International connections include SInet (Japan), Japan–Russia (BINP), GEANT (Germany, France, Italy, UK, etc.), CERN (DOE link), CA*net4, GLORIAD, Kreonet2, MREN, Netherlands, StarTap, TANet2 (Taiwan), ASCC (Taiwan), Singaren, and Australia.]

  4. ESnet Logical Network: Peering and Routing Infrastructure
  [Map of ESnet peering points (connections to other networks), with per-exchange peer counts: commercial and general exchanges (NY-NAP, Chicago NAP, MAE-E, MAE-W, PAIX-W, Equinix Ashburn, Equinix San Jose), R&E exchanges (STARLIGHT, PNW-GPOP, MAX GPOP, FIX-W, NGIX, CalREN2, CENIC, Distributed 6TAP), Abilene, and international networks (GEANT, SInet/KEK, CA*net4, CERN, GLORIAD, Kreonet2, TANet2/ASCC, Singaren, and others).]
  • ESnet supports collaboration by providing full Internet access
  • manages the full complement of global Internet routes (about 150,000 IPv4 routes from 180 peers) at 40 general/commercial peering points
  • high-speed peerings with Abilene and the international R&E networks
  • This is a lot of work and is very visible, but it provides full Internet access for DOE.
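
As a rough illustration of the scale behind "about 150,000 IPv4 routes from 180 peers," the sketch below tallies prefixes per peer ASN from a plain-text route dump. The one-line-per-route "prefix,peer_asn" format and the file name are hypothetical, not an ESnet or router-vendor format; this is a minimal sketch, not how ESnet's peering is actually operated.

    # Minimal sketch: count announced prefixes per peer ASN from a
    # hypothetical text dump with one "prefix,peer_asn" line, e.g.
    #   198.128.0.0/16,293
    from collections import Counter

    def prefixes_per_peer(path):
        counts = Counter()
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                prefix, peer_asn = line.split(",")
                counts[peer_asn.strip()] += 1
        return counts

    if __name__ == "__main__":
        counts = prefixes_per_peer("routes.txt")  # hypothetical input file
        print("peers:", len(counts), "total prefixes:", sum(counts.values()))
        for asn, n in counts.most_common(5):
            print(f"AS{asn}: {n} prefixes")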

  5. Drivers for the Evolution of ESnet
  • August 2002 workshop organized by the Office of Science – Mary Anne Scott (chair), Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, Vicky White; workshop panel chairs: Ray Bair, Deb Agarwal, Bill Johnston, Mike Wilde, Rick Stevens, Ian Foster, Dennis Gannon, Linda Winkler, Brian Tierney, Sandy Merola, and Charlie Catlett
  • The network and middleware requirements to support DOE science were developed by the OSC science community, representing major DOE science disciplines: climate simulation, the Spallation Neutron Source facility, macromolecular crystallography, high energy physics experiments, magnetic fusion energy sciences, chemical sciences, bioinformatics, and (nuclear physics)
  • The network is essential for:
  • long-term (final stage) data analysis
  • "control loop" data analysis (influencing an experiment in progress)
  • distributed, multidisciplinary simulation
  • Available at www.es.net/#research

  6. Evolving Quantitative Science Requirements for Networks

  7. ESnet is Currently Transporting About 350 Terabytes/Month
  [Chart: ESnet monthly accepted traffic, Jan. 1990 – Dec. 2004, in TBytes/month. Growth over the past five years has averaged about 2.0x per year.]
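
A back-of-the-envelope extrapolation, assuming the ~350 TBytes/month and ~2x/year growth figures from the slide hold, shows why these numbers matter for capacity planning; the sketch below is illustrative arithmetic, not an ESnet forecast.

    # Illustrative only: extrapolate monthly traffic under the ~2x/year
    # growth rate cited on the slide, starting from ~350 TBytes/month
    # at the end of 2004. Not an ESnet planning model.
    base_tb_per_month = 350.0   # TBytes/month, Dec. 2004 (from the slide)
    growth_per_year = 2.0       # ~2x annual growth (from the slide)

    for years_ahead in range(0, 6):
        traffic = base_tb_per_month * growth_per_year ** years_ahead
        # average rate implied by that monthly volume, in Gb/s
        gbps = traffic * 1e12 * 8 / (30 * 24 * 3600) / 1e9
        print(f"{2004 + years_ahead}: ~{traffic:,.0f} TBy/month (~{gbps:.1f} Gb/s average)")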

  8. A Small Number of Science Users Account for a Significant Fraction of All ESnet Traffic
  [Chart: top ESnet host-to-host flows, 2 months, 30-day averaged, in TBytes/month, broken out into categories such as DOE Lab – international R&E, Lab – U.S. R&E, domestic Lab – Lab, and international.]
  • Total ESnet traffic (Dec. 2004) = 330 TBy
  • Top 100 host-to-host flows = 99 TBy (about 30% of the total)
  • Note that this data does not include intra-Lab traffic: ESnet ends at the Lab border routers, so science traffic on the Lab LANs is invisible to ESnet.

  9. Top Flows – ESnet Host-to-Host, 2 Months, 30-Day Averaged
  [Chart of the largest host-to-host flows, in TBytes/month. The top flows include: Fermilab (US) → IN2P3 (FR); SLAC (US) → INFN CNAF (IT); SLAC (US) → RAL (UK); SLAC (US) → IN2P3 (FR); Fermilab (US) → WestGrid (CA); BNL (US) → IN2P3 (FR); FNAL → Karlsruhe (DE); LIGO → Caltech; NERSC → NASA Ames; FNAL → Johns Hopkins; LLNL → NCAR; LBNL → U. Wisc.; FNAL → MIT; FNAL → SDSC; NERSC → LBNL (several flows); BNL → LLNL (several flows); and one unidentified flow to LBNL.]

  10. ESnet Traffic
  • Since BaBar (the SLAC high energy physics experiment) went into production, the top 100 ESnet flows have consistently accounted for 30% – 50% of ESnet’s total monthly traffic
  • As LHC (the CERN high energy physics accelerator) data starts to move, this will increase substantially (200–2000 times); see the rough sizing sketch below
  • Both U.S. LHC Tier 1 centers (the primary U.S. experiment data centers) are at DOE Labs – Fermilab and Brookhaven
  • U.S. Tier 2 centers (experiment data analysis) will be at universities – when they start pulling data from the Tier 1 centers, the traffic distribution will change significantly
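
To make the 200x–2000x figure concrete, the illustrative arithmetic below scales the 99 TBy/month top-100-flow volume from the previous slide and converts it to an average data rate; it is a rough sizing exercise, not an ESnet projection.

    # Illustrative arithmetic only (not an ESnet projection): scale the
    # current top-100-flow volume by the 200x-2000x LHC-era increase the
    # slide anticipates, and convert to an average rate.
    top_flows_tby_per_month = 99.0          # TBy/month, Dec. 2004 (slide 8)
    for factor in (200, 2000):
        volume_tby = top_flows_tby_per_month * factor
        avg_gbps = volume_tby * 1e12 * 8 / (30 * 24 * 3600) / 1e9
        print(f"x{factor}: ~{volume_tby/1000:,.0f} PBy/month, ~{avg_gbps:,.0f} Gb/s average")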

  11. Monitoring DOE Lab ↔ University Connectivity
  • Current monitor infrastructure (shown in red and green on the map) and target infrastructure
  • Uniform distribution around ESnet and around Abilene
  [Map of ESnet and Abilene network hubs (SEA, CHI, NYC, DC, ATL, IND, KC, DEN, SNV, LA, SDG, ALB, ELP, HOU, NCS) showing DOE Labs with monitors, universities with monitors, initial site monitors, high-speed ESnet ↔ Internet2/Abilene cross connects, and connections to CERN, Europe, Japan, and Asia-Pacific; sites shown include LBNL, FNAL, BNL, ORNL, OSU, and SDSC.]
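
The slides do not name the measurement tools used by the monitors; purely as an illustrative sketch of active end-to-end probing between site monitors, the following measures TCP connection setup time from one monitor to a list of remote endpoints. The endpoint hostnames and port are hypothetical.

    # Purely illustrative sketch of active end-to-end monitoring between
    # site monitors; not the tooling ESnet/Abilene actually deployed.
    import socket, time

    # hypothetical endpoints (host, port) a site monitor might probe
    ENDPOINTS = [("monitor.example-lab.gov", 7), ("monitor.example-univ.edu", 7)]

    def tcp_rtt(host, port, timeout=3.0):
        """Return TCP connect time in milliseconds, or None on failure."""
        start = time.time()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (time.time() - start) * 1000.0
        except OSError:
            return None

    if __name__ == "__main__":
        for host, port in ENDPOINTS:
            rtt = tcp_rtt(host, port)
            status = f"{rtt:.1f} ms" if rtt is not None else "unreachable"
            print(f"{host}:{port} -> {status}")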

  12. ESnet Evolution
  [Diagram of the current ESnet core ring – Chicago (CHI), New York (AOA), Washington, DC (DC), Atlanta (ATL), El Paso (ELP), Sunnyvale (SNV) – with DOE sites attached.]
  • With the current architecture ESnet cannot address:
  • the increasing reliability requirements – Labs and science experiments are insisting on network redundancy
  • the long-term bandwidth needs – LHC will need dedicated 10/20/30/40 Gb/s into and out of FNAL and BNL; specific planning drivers include HEP, climate, SNS, ITER, and SNAP, among others
  • The current core ring cannot handle the anticipated large science data flows at affordable cost
  • The current point-to-point tail circuits are neither reliable nor scalable to the required bandwidth

  13. ESnet Strategy – A New Architecture
  • Goals derived from science needs:
  • Fully redundant connectivity for every site (see the illustrative redundancy check below)
  • High-speed access to the core for every site (at least 20 Gb/s)
  • 100 Gbps national bandwidth by 2008
  • Three-part strategy:
  1) Metropolitan Area Network (MAN) rings to provide dual site connectivity and much higher site-to-core bandwidth
  2) A Science Data Network (SDN) core for large, high-speed science data flows; multiply connecting MAN rings for protection against hub failure; a platform for provisioned, guaranteed-bandwidth circuits; and an alternate path for production IP traffic
  3) A high-reliability IP core (e.g., the current ESnet core) to address Lab operational requirements
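
As a toy illustration of the "fully redundant connectivity" goal (not ESnet's planning tooling), the sketch below uses the networkx package to check whether each site on a made-up topology has at least two edge-disjoint paths to every core hub, i.e. survives any single link failure.

    # Toy illustration (not ESnet's planning tooling): check that each site
    # on a made-up topology sketch has at least two edge-disjoint paths to
    # every core hub. Topology and site names are hypothetical.
    import networkx as nx

    G = nx.Graph()
    # hypothetical core ring
    core = ["CHI", "NYC", "DC", "ATL", "ELP", "SNV"]
    G.add_edges_from(zip(core, core[1:] + core[:1]))
    # hypothetical site attachments: one dual-homed site, one tail circuit
    G.add_edges_from([("SiteA", "CHI"), ("SiteA", "NYC")])  # MAN-ring style
    G.add_edge("SiteB", "SNV")                              # single tail circuit

    for site in ("SiteA", "SiteB"):
        k = min(nx.edge_connectivity(G, site, hub) for hub in core)
        print(f"{site}: {'redundant' if k >= 2 else 'single point of failure'}")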

  14. ESnet MAN Architecture
  [Diagram: a metropolitan area ring of 2–4 x 10 Gbps channels connecting Lab sites (site gateway routers, site equipment, site LANs) to ESnet core routers (T320s) with R&E and international peerings. The ring carries the ESnet production IP service, ESnet-managed λ / circuit services (including circuits tunneled through the IP backbone), and ESnet management and monitoring; switches managing multiple lambdas connect to the ESnet SDN core and the ESnet production IP core.]

  15. New ESnet Strategy: Science Data Network + IP Core + MANs
  [Map of the proposed architecture: the ESnet IP core plus a second Science Data Network (SDN) core, joined by metropolitan area rings and core loops. Existing IP core hubs at Seattle (SEA), Chicago (CHI), New York (AOA), Washington, DC (DC), Atlanta (ATL), Albuquerque (ALB), El Paso (ELP), and Sunnyvale (SNV); SDN hubs, new hubs, and possible new hubs are marked, along with primary DOE Labs and connections to CERN, GEANT (Europe), and Asia-Pacific.]

  16. Tactics for Meeting Science Requirements – 2007/2008
  • 10 Gbps enterprise IP traffic
  • 40–60 Gbps circuit-based transport
  [Map of the planned 2007/2008 topology: the production IP ESnet core (> 10 Gbps), the ESnet Science Data Network second core (30–50 Gbps, on National Lambda Rail), metropolitan area rings, high-speed cross connects with Internet2/Abilene, major DOE Office of Science sites, ESnet hubs (SEA, CHI, NYC, DC, DEN, SNV, ALB, ATL, SDG, ELP), and major international connections (CERN, Europe, Japan, Australia, Asia-Pacific). Legend entries: production IP ESnet core (10 Gb/s, 30 Gb/s, 40 Gb/s), high-impact science core (2.5 Gb/s, 10 Gb/s), Lab-supplied links, and future phases.]

  17. ESnet Services Supporting Science Collaboration
  • In addition to high-bandwidth network connectivity for the DOE Labs, ESnet provides several other services critical for collaboration
  • That is, ESnet provides several "science services" – services that support the practice of science:
  • Access to collaborators ("peering")
  • Federated trust: identity authentication, PKI certificates, crypto tokens
  • Human collaboration – video, audio, and data conferencing
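
As an illustration of the kind of certificate a federated-trust service issues, the sketch below reads an X.509 certificate's subject, issuer, and validity period using the third-party Python cryptography package; the file name is hypothetical, and this is not DOEGrids tooling.

    # Illustrative sketch only: inspect an X.509 certificate such as one
    # issued by a Grid CA, using the third-party "cryptography" package.
    # The file name is hypothetical; this is not DOEGrids tooling.
    from cryptography import x509

    def describe_cert(pem_path):
        with open(pem_path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())
        print("subject:", cert.subject.rfc4514_string())
        print("issuer: ", cert.issuer.rfc4514_string())
        print("valid:  ", cert.not_valid_before, "to", cert.not_valid_after)

    if __name__ == "__main__":
        describe_cert("usercert.pem")  # hypothetical path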

  18. DOEGrids CA Usage Statistics
  [Chart of DOEGrids CA usage statistics. FusionGRID CA certificates are not included here. Report as of Jan. 11, 2005.]

  19. DOEGrids CA Usage – Virtual Organization Breakdown
  [Chart of DOEGrids CA usage broken down by virtual organization (* = DOE–NSF collaboration).]

  20. ESnet Collaboration Services: Production Services
  • Web-based registration and audio/data bridge scheduling
  • Ad hoc H.323 and H.320 videoconferencing
  • Streaming on the Codian MCU using QuickTime or Real
  • "Guest" access to the Codian MCU via the worldwide Global Dialing System (GDS)
  • Over 1,000 registered users worldwide

  21. ESnet Collaboration Services: H.323 Video Conferencing
  • Radvision and Codian
  • 70 ports on Radvision available at 384 kbps
  • 40 ports on Codian at 2 Mbps, plus streaming
  • Usage has leveled off, but an increase is expected in early 2005 as new groups join ESnet Collaboration
  • Radvision capacity to increase to 200 ports at 384 kbps by mid-2005
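
For a rough sense of the bandwidth these bridge ports represent, the back-of-the-envelope calculation below multiplies port counts by per-port rates from the slide; it ignores protocol overhead and streaming viewers.

    # Back-of-the-envelope only: aggregate video bandwidth if every bridge
    # port on the slide were in use simultaneously (worst case, ignoring
    # protocol overhead and streaming viewers).
    radvision = 70 * 384e3      # 70 ports at 384 kbps
    codian    = 40 * 2e6        # 40 ports at 2 Mbps
    planned   = 200 * 384e3     # planned Radvision capacity, mid-2005
    print(f"Radvision now: {radvision/1e6:.1f} Mb/s")   # ~26.9 Mb/s
    print(f"Codian now:    {codian/1e6:.1f} Mb/s")      # 80.0 Mb/s
    print(f"Radvision planned: {planned/1e6:.1f} Mb/s") # ~76.8 Mb/s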

  22. Conclusions
  • ESnet is an infrastructure critical to DOE’s science mission, and it serves all of DOE
  • ESnet is working to meet the networking requirements of DOE mission science through several new initiatives and a new architecture
  • ESnet today is very different from the past in its planning, its business approach, and its goals
