
ESnet4: Networking for the Future of DOE Science. ESnet R&D Roadmap Workshop, April 23-24, 2007. William E. Johnston, ESnet Department Head and Senior Scientist, Lawrence Berkeley National Laboratory. wej@es.net, www.es.net


Presentation Transcript


  1. ESnet4: Networking for the Future of DOE Science. ESnet R&D Roadmap Workshop, April 23-24, 2007. William E. Johnston, ESnet Department Head and Senior Scientist, Lawrence Berkeley National Laboratory. wej@es.net, www.es.net

  2. ESnet is an important, though somewhat specialized, part of the US Research and Education infrastructure • The Office of Science (SC) is the single largest supporter of basic research in the physical sciences in the United States, … providing more than 40 percent of total funding … for the Nation’s research programs in high-energy physics, nuclear physics, and fusion energy sciences. (http://www.science.doe.gov) • SC supports 25,500 PhDs, PostDocs, and Graduate students, and half of the 21,500 users of SC facilities come from universities • Almost 90% of ESnet’s 1+ Petabyte/month of traffic flows to and from the R&E community

  3. DOE Office of Science and ESnet – the ESnet Mission • ESnet’s primary mission is to enable the large-scale science that is the mission of the Office of Science (SC) and that depends on: • Sharing of massive amounts of data • Supporting thousands of collaborators world-wide • Distributed data processing • Distributed data management • Distributed simulation, visualization, and computational steering • Collaboration with the US and International Research and Education community • ESnet provides network and collaboration services to Office of Science laboratories and many other DOE programs in order to accomplish its mission

  4. What ESnet Is • A large-scale IP network built on a national circuit infrastructure with high-speed connections to all major US and international research and education (R&E) networks • An organization of 30 professionals structured for the service • An operating entity with an FY06 budget of $26.6M • A tier 1 ISP (direct peerings with all major networks) • The primary DOE network provider • Provides production Internet service to all of the major DOE Labs* and most other DOE sites • Based on DOE Lab populations, it is estimated that between 50,000 and 100,000 users depend on ESnet for global Internet access • additionally, each year more than 18,000 non-DOE researchers from universities, other government agencies, and private industry use Office of Science facilities * PNNL supplements its ESnet service with commercial service

  5. Office of Science US Community Drives ESnet Design for Domestic Connectivity • Institutions supported by SC, Major User Facilities, DOE Specific-Mission Laboratories, DOE Program-Dedicated Laboratories, and DOE Multiprogram Laboratories: Pacific Northwest National Laboratory, Idaho National Laboratory, Ames Laboratory, Argonne National Laboratory, Brookhaven National Laboratory, Fermi National Accelerator Laboratory, Lawrence Berkeley National Laboratory, Stanford Linear Accelerator Center, Princeton Plasma Physics Laboratory, Lawrence Livermore National Laboratory, Thomas Jefferson National Accelerator Facility, General Atomics, Oak Ridge National Laboratory, Los Alamos National Laboratory, Sandia National Laboratories, National Renewable Energy Laboratory

  6. Footprint of Largest SC Data Sharing Collaborators Drives the International Footprint that ESnet Must Support • Top 100 data flows generate 50% of all ESnet traffic (ESnet handles about 3×10^9 flows/mo.) • 91 of the top 100 flows are from the Labs to other institutions (shown) (CY2005 data)

  7. What Does ESnet Provide? - 1 • An architecture tailored to accommodate DOE’s large-scale science • Move huge amounts of data between a small number of sites that are scattered all over the world • Comprehensive connectivity • High bandwidth access to DOE sites and DOE’s primary science collaborators: Research and Education institutions in the US, Europe, Asia Pacific, and elsewhere • Full access to the global Internet for DOE Labs • ESnet is a tier 1 ISP managing a full complement of Internet routes for global access • Highly reliable transit networking • Fundamental goal is to deliver every packet that is received to the “target” site

  8. What Does ESnet Provide? - 2 • A full suite of network services • IPv4 and IPv6 routing and address space management • IPv4 multicast (and soon IPv6 multicast) • Primary DNS services • Circuit services (layer 2 e.g. Ethernet VLANs), MPLS overlay networks (e.g. SecureNet when it was ATM based) • Scavenger service so that certain types of bulk traffic can use all available bandwidth, but will give priority to any other traffic when it shows up • Prototype guaranteed bandwidth and virtual circuit services
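
As an illustration of how an application might opt its bulk transfers into a scavenger (less-than-best-effort) class, the sketch below marks a TCP socket with DSCP CS1 (decimal 8), a common convention for scavenger traffic. Whether ESnet's scavenger service keys on this particular marking is an assumption made for the example, not something stated on the slide.

```python
# Hypothetical sketch: mark a bulk-transfer socket as scavenger traffic.
# The DSCP CS1 marking, and the assumption that the network classifies on
# it, are illustrative; they are not taken from the ESnet slides.
import socket

SCAVENGER_TOS = 0x20  # DSCP CS1 (decimal 8) shifted into the IP TOS byte

def open_bulk_connection(host: str, port: int) -> socket.socket:
    """Open a TCP connection whose outgoing packets carry the scavenger marking."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, SCAVENGER_TOS)
    s.connect((host, port))
    return s
```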

  9. What Does ESnet Provide? - 3 • New network services • Guaranteed bandwidth services • Via a combination of QoS, MPLS overlay, and layer 2 VLANs • Collaboration services and Grid middleware supporting collaborative science • Federated trust services / PKI Certification Authorities with science oriented policy • Audio-video-data teleconferencing • Highly reliable and secure operation • Extensive disaster recovery infrastructure • Comprehensive internal security • Cyberdefense for the WAN

  10. What Does ESnet Provide? - 4 • Comprehensive user support, including “owning” all trouble tickets involving ESnet users (including problems at the far end of an ESnet connection) until they are resolved – 24x7x365 coverage • ESnet’s mission is to enable the network-based aspects of OSC science, and that includes troubleshooting network problems wherever they occur • A highly collaborative and interactive relationship with the DOE Labs and scientists for planning, configuration, and operation of the network • ESnet and its services evolve continuously in direct response to OSC science needs • Engineering services for special requirements

  11. ESnet History (transition in progress)

  12. ESnet3 Today Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators (Early 2007) • (Map: the ESnet3 topology, showing 42 end-user sites (22 Office of Science sponsored, 13 NNSA sponsored, 3 jointly sponsored, 6 laboratory sponsored, plus other sponsored sites such as NSF LIGO and NOAA), the 10 Gb/s and 2.5 Gb/s IP core (packet over SONET optical ring and hubs), the 10 Gb/s Science Data Network (SDN) core, MAN rings (≥10 Gb/s), lab-supplied links, and peerings with Internet2/Abilene, international R&E networks (GÉANT, SINet, CA*net4, GLORIAD, Kreonet2, AARNet, TANet2, AMPATH, USLHCnet/CERN, etc.), and commercial peering points)

  13. ESnet is a Highly Reliable Infrastructure • “5 nines” (>99.995%), “4 nines” (>99.95%), and “3 nines” availability classes are shown (the chart highlights dually connected sites). Note: these availability measures cover only the ESnet infrastructure; they do not include site-related problems. Some sites, e.g. PNNL and LANL, provide circuits from the site to an ESnet hub, so the ESnet-site demarc is at the ESnet hub (there is no ESnet equipment at the site). In this case, circuit outages between the ESnet equipment and the site are considered site issues and are not included in the ESnet availability metric.
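
To make the "nines" concrete, the short sketch below converts the availability classes quoted on the slide into allowed downtime per year; this is plain arithmetic, not an ESnet tool, and the 99.9% figure used for "three nines" is the conventional value (the slide does not state it explicitly).

```python
# Convert the slide's availability classes into allowed downtime per year.
HOURS_PER_YEAR = 365 * 24

def downtime_minutes_per_year(availability: float) -> float:
    """Allowed downtime (minutes/year) for a given availability fraction."""
    return (1.0 - availability) * HOURS_PER_YEAR * 60

for label, avail in [("5 nines", 0.99995), ("4 nines", 0.9995), ("3 nines", 0.999)]:
    print(f"{label} (>{avail:.3%}): ~{downtime_minutes_per_year(avail):.0f} min/year")
# 5 nines -> ~26 min/year, 4 nines -> ~263 min/year, 3 nines -> ~526 min/year
```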

  14. ESnet is an Organization Structured for the Service • Network operations and user support (24x7x365, end-to-end problem resolution) • Deployment and WAN maintenance • Network engineering, routing and network services, WAN security • Applied R&D for new network services (circuit services and end-to-end monitoring) • Science collaboration services (Public Key Infrastructure certification authorities, AV conferencing, email lists, Web services) • Internal infrastructure, disaster recovery, security • Management, accounting, compliance • 30.7 FTE (full-time staff) total

  15. ESnet FY06 Budget is Approximately $26.6M • Approximate budget categories • Funding sources (total funds: $26.6M): SC operating $20.1M, SC special projects $1.2M, SC R&D $0.5M, Other DOE $3.8M, carryover $1M • Expenses (total expenses: $26.6M): circuits & hubs $12.7M, internal infrastructure, security, and disaster recovery $3.4M, engineering & research $2.9M, WAN equipment $2.0M, collaboration services $1.6M, special projects (Chicago and LI MANs) $1.2M, operations $1.1M, target carryover $1.0M, management and compliance $0.7M

  16. Planning the Future Network - ESnet4 There are many stakeholders for ESnet • SC programs • Advanced Scientific Computing Research • Basic Energy Sciences • Biological and Environmental Research • Fusion Energy Sciences • High Energy Physics • Nuclear Physics • Office of Nuclear Energy • Major scientific facilities • At DOE sites: large experiments, supercomputer centers, etc. • Not at DOE sites: LHC, ITER • SC supported scientists not at the Labs (mostly at US R&E institutions) • Other collaborating institutions (mostly US, European, and AP R&E) • Other R&E networking organizations that support major collaborators • Mostly US, European, and Asia Pacific networks • Lab operations (e.g. conduct of business) and general population • Lab networking organizations (These account for 85% of all ESnet traffic)

  17. Planning the Future Network - ESnet4 • Requirements of the ESnet stakeholders are primarily determined by 1) Data characteristics of instruments and facilities that will be connected to ESnet • What data will be generated by instruments coming on-line over the next 5-10 years? • How and where will it be analyzed and used? 2) Examining the future process of science • How will the process of doing science change over 5-10 years? • How do these changes drive demand for new network services? 3) Studying the evolution of ESnet traffic patterns • What are the trends based on the use of the network in the past 2-5 years? • How must the network change to accommodate the future traffic patterns implied by the trends?

  18. (1) Requirements from Instruments and Facilities. DOE SC facilities that are, or will be, the top network users (*14 of 22 are characterized by current case studies): • Advanced Scientific Computing Research: National Energy Research Scientific Computing Center (NERSC) (LBNL)*, National Leadership Computing Facility (NLCF) (ORNL)*, Argonne Leadership Class Facility (ALCF) (ANL)* • Basic Energy Sciences: National Synchrotron Light Source (NSLS) (BNL), Stanford Synchrotron Radiation Laboratory (SSRL) (SLAC), Advanced Light Source (ALS) (LBNL)*, Advanced Photon Source (APS) (ANL), Spallation Neutron Source (ORNL)*, National Center for Electron Microscopy (NCEM) (LBNL)*, Combustion Research Facility (CRF) (SNLL)* • Biological and Environmental Research: William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) (PNNL)*, Joint Genome Institute (JGI), Structural Biology Center (SBC) (ANL) • Fusion Energy Sciences: DIII-D Tokamak Facility (GA)*, Alcator C-Mod (MIT)*, National Spherical Torus Experiment (NSTX) (PPPL)*, ITER • High Energy Physics: Tevatron Collider (FNAL), B-Factory (SLAC), Large Hadron Collider (LHC: ATLAS, CMS) (BNL, FNAL)* • Nuclear Physics: Relativistic Heavy Ion Collider (RHIC) (BNL)*, Continuous Electron Beam Accelerator Facility (CEBAF) (JLab)*

  19. The Largest Facility: Large Hadron Collider at CERN • LHC CMS detector: 15 m x 15 m x 22 m, 12,500 tons, $700M (photo shows a human for scale)

  20. (2) Requirements from Examining the Future Process of Science • In a major workshop [1], and in subsequent updates [2], requirements were generated by asking the science community how their process of doing science will / must change over the next 5 and next 10 years in order to accomplish their scientific goals • Computer science and networking experts then assisted the science community in • analyzing the future environments • deriving middleware and networking requirements needed to enable these environments • These were compiled as case studies that provide specific 5 & 10 year network requirements for bandwidth, footprint, and new services

  21. Science Networking Requirements Aggregation Summary

  22. Science Network Requirements Aggregation Summary Immediate Requirements and Drivers

  23. (3) These Trends are Seen in the Observed Evolution of Historical ESnet Traffic Patterns • ESnet Monthly Accepted Traffic (terabytes/month), January 2000 – June 2006, with the top 100 site-to-site workflows highlighted • ESnet is currently transporting more than 1 petabyte (1000 terabytes) per month • More than 50% of the traffic is now generated by the top 100 sites: large-scale science dominates all ESnet traffic
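
For a sense of scale, 1 petabyte per month corresponds to an average rate of roughly 3 Gb/s (a back-of-the-envelope conversion, assuming a 30-day month):

$$\frac{10^{15}\,\text{bytes} \times 8\,\text{bits/byte}}{30 \times 24 \times 3600\,\text{s}} \approx 3.1\ \text{Gb/s}$$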

  24. ESnet Traffic has Increased by10X Every 47 Months, on Average, Since 1990 Apr., 2006 1 PBy/mo. Nov., 2001 100 TBy/mo. 53 months Jul., 1998 10 TBy/mo. 40 months Oct., 1993 1 TBy/mo. 57 months Terabytes / month Aug., 1990 100 MBy/mo. 38 months Log Plot of ESnet Monthly Accepted Traffic, January, 1990 – June, 2006
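
A minimal sketch of what the 10x-every-47-months trend implies, using the April 2006 level of ~1 PB/month from the slide as the starting point; the growth factor is derived directly from the slide, and everything else is straightforward arithmetic rather than an ESnet projection tool.

```python
# Project ESnet monthly traffic forward from the 10x-per-47-months trend.
monthly_factor = 10 ** (1 / 47)        # ~1.050, i.e. ~5% growth per month
annual_factor = monthly_factor ** 12   # ~1.8x growth per year

def projected_pb_per_month(months_after_apr_2006: int, base_pb: float = 1.0) -> float:
    """Traffic (PB/month) projected from the April 2006 level of ~1 PB/month."""
    return base_pb * monthly_factor ** months_after_apr_2006

for years in (1, 2, 4):
    print(f"+{years} yr: ~{projected_pb_per_month(12 * years):.1f} PB/month")
# roughly 1.8, 3.2, and 10.5 PB/month, i.e. about 10x after ~4 years
```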

  25. Requirements from Network Utilization Observation • In 4 years, we can expect a 10x increase in traffic over current levels without the addition of production LHC traffic • Nominal average load on busiest backbone links is ~1.5 Gbps today • In 4 years that figure will be ~15 Gbps based on current trends • Measurements of this type are science-agnostic • It doesn’t matter who the users are, the traffic load is increasing exponentially • Predictions based on this sort of forward projection tend to be conservative estimates of future requirements because they cannot predict new uses • Bandwidth trends drive requirement for a new network architecture • New architecture/approach must be scalable in a cost-effective way

  26. Large-Scale Flow Trends, June 2006 (subtitle: “Onslaught of the LHC”) • Traffic volume of the top 30 AS-AS flows, June 2006, in terabytes (AS-AS = mostly Lab to R&E site, a few Lab to R&E network, a few “other”) • FNAL -> CERN traffic is comparable to BNL -> CERN, but it rides on layer 2 flows that are not yet monitored for traffic (this will change soon)

  27. Traffic Patterns are Changing Dramatically • While the total traffic is increasing exponentially • Peak flow (that is, system-to-system) bandwidth is decreasing • The number of large flows is increasing • (Figure: total traffic in TBy and the largest individual flows for 1/05, 7/05, 1/06, and 6/06, with a 2 TB/month level marked in each panel)

  28. The Onslaught of Grids Question: Why is peak flow bandwidth decreasing while total traffic is increasing? Answer: Most large data transfers are now done by parallel / Grid data movers • In June, 2006 72% of the hosts generating the top 1000 flows were involved in parallel data movers (Grid applications) • This is the most significant traffic pattern change in the history of ESnet • This has implications for the network architecture that favor path multiplicity and route diversity plateaus indicate the emergence of parallel transfer systems (a lot of systems transferring the same amount of data at the same time)
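
The sketch below (generic Python, not any particular Grid data mover) illustrates the pattern this slide describes: one large transfer is split across several concurrent streams, so no single flow peaks as high even though the aggregate volume keeps growing. The `make_stream` factory is a hypothetical stand-in for however the individual connections are opened.

```python
# Generic illustration of a parallel data mover: move one file as N
# concurrent byte-range streams instead of a single large flow.
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4 * 1024 * 1024  # 4 MB reads; illustrative only

def send_range(path, offset, length, send):
    """Send one byte range of the file over its own stream ('send' is a stand-in)."""
    with open(path, "rb") as f:
        f.seek(offset)
        remaining = length
        while remaining > 0:
            data = f.read(min(CHUNK, remaining))
            if not data:
                break
            send(data)
            remaining -= len(data)

def parallel_copy(path, size, make_stream, streams=8):
    """Split [0, size) into 'streams' ranges and move them concurrently."""
    per = -(-size // streams)  # ceiling division
    with ThreadPoolExecutor(max_workers=streams) as pool:
        for i in range(streams):
            offset = i * per
            length = min(per, size - offset)
            if length > 0:
                pool.submit(send_range, path, offset, length, make_stream(i))
```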

  29. What is the High-Level View of ESnet Traffic Patterns? • ESnet Inter-Sector Traffic Summary, Mar. 2006 • (Figure: ingress and egress percentages between the DOE sites and the commercial, R&E (mostly universities), and international (almost entirely R&E) peering sectors, plus ESnet inter-Lab traffic of ~10%; percentages are of total ingress or egress traffic) • Traffic notes • more than 90% of all traffic is Office of Science • less than 10% is inter-Lab

  30. Requirements from Traffic Flow Observations • Most of ESnet science traffic has a source or sink outside of ESnet • Drives requirement for high-bandwidth peering • Reliability and bandwidth requirements demand that peering be redundant • Multiple 10 Gbps peerings today, must be able to add more bandwidth flexibly and cost-effectively • Bandwidth and service guarantees must traverse R&E peerings • Collaboration with other R&E networks on a common framework is critical • Seamless fabric • Large-scale science is now the dominant user of the network • Satisfying the demands of large-scale science traffic into the future will require a purpose-built, scalable architecture • Traffic patterns are different than commodity Internet

  31. Changing Science Environment → New Demands on Network: Requirements Summary • Increased capacity • Needed to accommodate a large and steadily increasing amount of data that must traverse the network • High network reliability • Essential when interconnecting components of distributed large-scale science • High-speed, highly reliable connectivity between Labs and US and international R&E institutions • To support the inherently collaborative, global nature of large-scale science • New network services to provide bandwidth guarantees • Provide for data transfer deadlines for • remote data analysis, real-time interaction with instruments, coupled computational simulations, etc.

  32. ESnet4 - The Response to the Requirements I) A new network architecture and implementation strategy • Rich and diverse network topology for flexible management and high reliability • Dual connectivity at every level for all large-scale science sources and sinks • A partnership with the US research and education community to build a shared, large-scale, R&E managed optical infrastructure • a scalable approach to adding bandwidth to the network • dynamic allocation and management of optical circuits II) Development and deployment of a virtual circuit service • Develop the service cooperatively with the networks that are intermediate between DOE Labs and major collaborators to ensure end-to-end interoperability
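
As a rough sketch of what a guaranteed-bandwidth virtual circuit request might carry, the fragment below defines a hypothetical reservation record and a toy admission check. The class and field names are illustrative assumptions, not the actual ESnet service interface.

```python
# Hypothetical virtual-circuit reservation record and admission check;
# names and fields are illustrative, not the real ESnet service API.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CircuitRequest:
    src_endpoint: str      # e.g. a DOE Lab edge port
    dst_endpoint: str      # e.g. a collaborator reached via an R&E peering
    bandwidth_gbps: float  # guaranteed rate requested along the path
    start: datetime        # reservations cover a scheduled time window
    end: datetime

def admit(request: CircuitRequest, unreserved_gbps: float) -> bool:
    """Toy admission control: accept only if unreserved capacity covers the request."""
    return 0 < request.bandwidth_gbps <= unreserved_gbps
```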

  33. Next Generation ESnet: I) Architecture and Configuration • Main architectural elements and the rationale for each element 1) A High-reliability IP core (e.g. the current ESnet core) to address • General science requirements • Lab operational requirements • Backup for the SDN core • Vehicle for science services • Full service IP routers 2) Metropolitan Area Network (MAN) rings to provide • Dual site connectivity for reliability • Much higher site-to-core bandwidth • Support for both production IP and circuit-based traffic • Multiply connecting the SDN and IP cores 2a) Loops off of the backbone rings to provide • For dual site connections where MANs are not practical 3) A Science Data Network (SDN) core for • Provisioned, guaranteed bandwidth circuits to support large, high-speed science data flows • Very high total bandwidth • Multiply connecting MAN rings for protection against hub failure • Alternate path for production IP traffic • Less expensive router/switches • Initial configuration targeted at LHC, which is also the first step to the general configuration that will address all SC requirements • Can meet other unknown bandwidth requirements by adding lambdas

  34. ESnet Target Architecture: IP Core + Science Data Network Core + Metro Area Rings • (Map: production IP core and Science Data Network core rings with 10-50 Gbps circuits spanning Seattle, Sunnyvale, LA, San Diego, Albuquerque, Denver, Chicago, Cleveland, Atlanta, New York, and Washington DC; metropolitan area networks or backbone loops for Lab access; IP core hubs, SDN hubs, primary DOE Labs, possible hubs, and multiple international connections; ring spans marked at 2700 miles / 4300 km and 1625 miles / 2545 km)

  35. ESnet4 • Internet2 has partnered with Level 3 Communications Co. and Infinera Corp. for a dedicated optical fiber infrastructure with a national footprint and a rich topology - the “Internet2 Network” • The fiber will be provisioned with Infinera Dense Wave Division Multiplexing equipment that uses an advanced, integrated optical-electrical design • Level 3 will maintain the fiber and the DWDM equipment • The DWDM equipment will initially be provisioned to provide 10 optical circuits (lambdas - λs) across the entire fiber footprint (80 λs is the maximum) • ESnet has partnered with Internet2 to: • Share the optical infrastructure • Develop new circuit-oriented network services • Explore mechanisms that could be used for the ESnet Network Operations Center (NOC) and the Internet2/Indiana University NOC to back each other up for disaster recovery purposes
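
Assuming 10 Gb/s per lambda (consistent with the 10 Gb/s circuits noted elsewhere in this deck, though the slide does not state the per-lambda rate), the initial and maximum capacities per fiber path work out to roughly:

$$10\,\lambda \times 10\ \text{Gb/s} = 100\ \text{Gb/s}\ \text{initially},\qquad 80\,\lambda \times 10\ \text{Gb/s} = 800\ \text{Gb/s}\ \text{maximum}$$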

  36. ESnet4 • ESnet will build its next generation IP network and its new circuit-oriented Science Data Network primarily on the Internet2 circuits (λs) that are dedicated to ESnet, together with a few National Lambda Rail and other circuits • ESnet will provision and operate its own routing and switching hardware that is installed in various commercial telecom hubs around the country, as it has done for the past 20 years • ESnet’s peering relationships with the commercial Internet, various US research and education networks, and numerous international networks will continue and evolve as they have for the past 20 years

  37. Internet2 and ESnet Optical Node • (Diagram: a shared optical node on the Internet2/Level3 national optical infrastructure (Infinera DTN, with fiber east/west and north/south), including a Ciena CoreDirector grooming device, the ESnet SDN core switch, and ESnet IP core routers (M320, T640) connecting to ESnet metro-area networks, direct optical connections to RONs, and network testbeds; dynamically allocated and routed waves and access to the control plane are shown as future capabilities; support devices include measurement, out-of-band access, monitoring, and security, plus various equipment and experimental control plane management systems)

  38. ESnet4 • ESnet4 will also involve an expansion of the multi-10Gb/s Metropolitan Area Rings in the San Francisco Bay Area, Chicago, Long Island, Newport News (VA / Washington, DC area), and Atlanta • provide multiple, independent connections for ESnet sites to the ESnet core network • expandable • Several 10Gb/s links provided by the Labs will be used to establish multiple, independent connections to the ESnet core • currently PNNL and ORNL

  39. ESnet Metropolitan Area Network Ring Architecture for High Reliability Sites • MAN fiber ring: 2-4 x 10 Gbps channels provisioned initially, with expansion capacity to 16-64 • ESnet-managed virtual circuit services tunneled through the IP backbone • (Diagram: a large science site connects through MAN site switches and its site gateway router to both the ESnet production IP core hub and the ESnet SDN core hub, receiving the ESnet production IP service, ESnet-managed λ / circuit services, and SDN circuits to site systems; also shown are USLHCnet switches, an ESnet MAN switch with an independent port card supporting multiple 10 Gb/s line interfaces, the IP core router (T320), the site LAN, and the site edge router)

  40. ESnet4 Roll Out: ESnet4 IP + SDN Configuration, mid-September 2007 • (Map: ESnet IP core, ESnet Science Data Network core, and SDN/NLR links with hubs including Seattle, Portland, Boise, Sunnyvale, LA, San Diego, Salt Lake City, Denver, Albuquerque, El Paso, KC, Tulsa, Houston, Baton Rouge, Chicago, Indianapolis, Nashville, Atlanta, Jacksonville, Raleigh, Cleveland, Pittsburgh, Philadelphia, NYC, Boston, and Washington DC; legend distinguishes IP switch/router hubs, IP switch only hubs, SDN switch hubs, lab-supplied links, LHC-related links, MAN links, international IP connections, lab sites, and layer 1 optical nodes at eventual or not-yet-planned ESnet points of presence; all circuits are 10 Gb/s unless noted)

  41. ESnet4 Metro Area Rings, 2007 Configurations • (Map: 2007 configurations of the Long Island, West Chicago, San Francisco Bay Area, Newport News (ELITE), and Atlanta MANs, connecting sites and hubs including BNL, 32 AoA NYC, USLHCNet, FNAL, ANL, Starlight, 600 W. Chicago, LBNL, SLAC, NERSC, JGI, LLNL, SNLL, MATP Wash. DC, JLab, ODU, ORNL, Nashville, 56 Marietta (SOX), and 180 Peachtree, overlaid on the ESnet IP and SDN cores; all circuits are 10 Gb/s)

  42. LHC Tier 0, 1, and 2 Connectivity Requirements Summary • Direct connectivity T0-T1-T2 • USLHCNet to ESnet to Abilene • Backup connectivity • SDN, GLIF, VCs • (Map: CERN (Tier 0) reaches the Tier 1 centers TRIUMF (Atlas T1, Canada) via CANARIE, and BNL (Atlas T1) and FNAL (CMS T1) via USLHCNet and the ESnet SDN and IP cores; Tier 2 sites are reached through Internet2 / GigaPoP nodes / RONs and GÉANT; virtual circuits, ESnet-Internet2 cross connects, ESnet IP core and SDN/NLR hubs, and USLHC nodes are marked)

  43. ESnet4 2007-8 Estimated Bandwidth Commitments • (Map: committed bandwidth in Gb/s overlaid on the planned ESnet4 IP core, SDN core, NLR links, MAN links, lab-supplied links, LHC-related links, and international connections; the largest commitments are LHC-driven, from CERN via USLHCNet at Starlight / 600 W. Chicago and 32 AoA NYC toward FNAL and ANL (West Chicago MAN) and BNL (Long Island MAN); all circuits are 10 Gb/s)

  44. Aggregate Estimated Link Loadings, 2007-08 • (Map: estimated aggregate loadings in Gb/s on the ESnet4 core links, with the busiest segments at roughly 9-13 Gb/s and existing site-supplied circuits at 2.5 Gb/s; committed bandwidths are marked per link, with legend as in the preceding configuration maps)

  45. ESnet4 IP + SDN, 2008 Configuration • (Map: the 2008 build-out, showing one to two 10 Gb/s lambdas per core segment, Internet2 circuit numbers, and the usual legend of IP and SDN hubs, lab-supplied links, LHC-related links, MAN links, and international IP connections)

  46. ESnet4 2009 Configuration (some of the circuits may be allocated dynamically from a shared pool) • (Map: the 2009 build-out, showing one to three 10 Gb/s lambdas per core segment, Internet2 circuit numbers, and the usual hub and link legend)

  47. Aggregate Estimated Link Loadings, 2010-11 • (Map: estimated aggregate loadings in Gb/s for 2010-11, with the busiest segments in the 20-50 Gb/s range and three to five 10 Gb/s lambdas per core segment; legend as in the preceding configuration maps)

  48. ESnet4 2010-11 Estimated Bandwidth Commitments • (Map: per-link bandwidth commitments in Gb/s for 2010-11, dominated by LHC traffic, with commitments on the order of 40-65 Gb/s from CERN via USLHCNet and large flows into FNAL and ANL (West Chicago MAN) and BNL (Long Island MAN); core segments carry three to five 10 Gb/s lambdas)

  49. ESnet4 IP + SDN, 2011 Configuration • (Map: the 2011 build-out, showing three to five 10 Gb/s lambdas per core segment, Internet2 circuit numbers, and the usual hub and link legend)

  50. Typical ESnet4 Hub
