
ESnet Update, Joint Techs Meeting, Minneapolis, MN. Joe Burrescia, ESnet General Manager, 2/12/2007


Presentation Transcript


  1. ESnet Update, Joint Techs Meeting, Minneapolis, MN. Joe Burrescia, ESnet General Manager, 2/12/2007

  2. Office of Science Collaborators (or “Why I am Here in the Frozen North”) SC National User Facilities – User Affiliations • The Office of Science (SC) supports (in FY2008) the research of about 25,500 Ph.D.s, Postdoctoral Research Associates, and Graduate Students • Half of the more than 21,500 users of SC’s scientific facilities in FY 2008 will come from universities • (Source: Dr. Orbach’s FY2008 Budget Request for the Office of Science)

  3. Collaborative Effort: OSCARS • On-Demand Secure Circuits and Advance Reservation System (OSCARS) • Collaborative effort status • Working with Internet2 and DRAGON to support interoperability between OSCARS/BRUW and DRAGON • Working with Internet2, DRAGON, and Terapaths to determine an appropriate interoperable AAL framework (in conjunction with GEANT2's JRA5) • Working with the DICE (Dante, Internet2, CANARIE, ESnet) Control Plane group to determine schema and methods of distributing topology and reachability information • Completed porting OSCARS from Perl to Java to better support web services; this is now the common code base for OSCARS and Internet2's BRUW (a sketch of a reservation request appears below)
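Since the slide notes that OSCARS was ported to Java specifically to better support web services, the following is a minimal sketch of what a circuit reservation request might look like from a client's point of view. The endpoint, element names, and device names here are hypothetical illustrations only, not the actual OSCARS web-service schema.

```python
# Hypothetical sketch of an OSCARS-style circuit reservation request.
# Field names and XML layout are illustrative; they are NOT the real OSCARS schema.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CircuitRequest:
    source: str          # ingress edge device (hypothetical name)
    destination: str     # egress edge device (hypothetical name)
    bandwidth_mbps: int  # requested guaranteed bandwidth
    start: datetime      # reservation start time (UTC)
    end: datetime        # reservation end time (UTC)
    description: str = ""

def to_request_body(req: CircuitRequest) -> str:
    """Render the reservation as a generic XML payload (made-up element names)."""
    return f"""<createReservation>
  <source>{req.source}</source>
  <destination>{req.destination}</destination>
  <bandwidth>{req.bandwidth_mbps}</bandwidth>
  <startTime>{req.start.isoformat()}</startTime>
  <endTime>{req.end.isoformat()}</endTime>
  <description>{req.description}</description>
</createReservation>"""

if __name__ == "__main__":
    now = datetime.utcnow()
    req = CircuitRequest(
        source="fnal-edge",          # hypothetical FNAL edge router
        destination="uslhcnet-chi",  # hypothetical USLHCnet handoff in Chicago
        bandwidth_mbps=10_000,       # a 10 GE circuit
        start=now,
        end=now + timedelta(hours=12),
        description="LHC data transfer (illustrative)",
    )
    print(to_request_body(req))
```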

  4. Collaborative Effort: perfSONAR • perfSONAR is a global collaboration to design, implement, and deploy a network measurement framework • Collaborators • ARNES, Belnet, CARNet, CESNET, DANTE, University of Delaware, DFN, ESnet, FCCN, FNAL, GARR, GEANT2, Georgia Tech, GRNET, Internet2, IST, Poznan Supercomputing Center, RedIRIS, RENATER, RNP, SLAC, SURFnet, SWITCH, UNINETT, and others… • ESnet Deployed Services • Link Utilization Measurement Archive • Virtual Circuit Status • In Development • Active Latency and Bandwidth Tests • Topology Service • Additional visualization capabilities (a sketch of a measurement-archive query appears below)
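The Link Utilization Measurement Archive listed above is the kind of service a perfSONAR client polls for interface counters. Below is one plausible client-side query, assuming a hypothetical JSON-over-HTTP endpoint; the perfSONAR services of this era actually spoke SOAP/NM-WG XML, so the URL, parameters, and response layout here are purely illustrative.

```python
# Sketch of polling a link-utilization measurement archive (MA).
# The endpoint, query parameters, and JSON layout are hypothetical.
import json
import urllib.request
from urllib.parse import urlencode

MA_URL = "https://ma.example.net/utilization"   # placeholder endpoint

def fetch_utilization(interface: str, start_epoch: int, end_epoch: int):
    """Return a list of (timestamp, bits_per_second) samples for one interface."""
    query = urlencode({"interface": interface,
                       "start": start_epoch,
                       "end": end_epoch})
    with urllib.request.urlopen(f"{MA_URL}?{query}") as resp:
        samples = json.load(resp)
    return [(s["ts"], s["bps"]) for s in samples]

def peak_utilization(samples, link_capacity_bps=10e9):
    """Peak utilization as a fraction of a 10 Gb/s link."""
    if not samples:
        return 0.0
    return max(bps for _, bps in samples) / link_capacity_bps
```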

  5. Current ESnet Network Status • Since the July Joint Techs meeting… • ESnet provided transit for LHC traffic between FNAL and CERN (via USLHCnet) for the first time this month • 10 GE NLR connection between Seattle and Chicago • In production, currently carrying LHC traffic to FNAL • Backup for IP circuits • 10 GE NLR connection between Chicago and Washington, DC • Has been accepted • Will serve as one transition point between the current ESnet backbone and ESnet4 • Chicago MAN dark fiber physically in place • Direct peering between ESnet and Latin America on both coasts • CUDI in San Diego • AMPATH at MANLAN

  6. Current ESnet Topology [continental network map] • Production IP core: Qwest supplied 10 Gbps backbone • NLR core: NLR supplied 10 Gbps circuits • Metro Area Networks, lab supplied links, international connections, backbone hubs, primary DOE Labs, and major research and education (R&E) network peering points • Backbone hubs: Seattle, Sunnyvale, San Diego, Albuquerque, Chicago, Atlanta, New York, and Washington, DC • International peerings include Canada, CERN, Europe, Asia-Pacific, Australia, Russia and China, and Latin/South America • Map scale: ~2700 miles / 4300 km east-west, ~1200 miles / 1900 km north-south

  7. ESnet Site Availability • “5 nines” (>=99.995%) • “4 nines” (>99.95%) • “3 nines” (>99.5%) • Dually connected sites
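For reference, the availability classes on this slide imply the following allowed downtime per year; this is just the standard uptime arithmetic, not data from the slide.

```python
# Allowed annual downtime implied by each availability class.
MINUTES_PER_YEAR = 365 * 24 * 60

for label, availability in [('"5 nines"', 0.99995),
                            ('"4 nines"', 0.9995),
                            ('"3 nines"', 0.995)]:
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{label:10s} {availability:.5%} -> about {downtime_min:,.0f} min/yr")

# "5 nines"  99.99500% -> about 26 min/yr
# "4 nines"  99.95000% -> about 263 min/yr
# "3 nines"  99.50000% -> about 2,628 min/yr
```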

  8. ESnet4 Status • ESnet4 progress is being made: • All hardware needed to deploy Phase 1 has been ordered, at a cost of about $4M • This hardware continues to arrive at LBNL • A transition plan and schedule are in place; the move off the Qwest backbone circuits is scheduled to be complete by September 2007 • ~30 new 10G circuits in 2007 (WAN and MAN), i.e. 1 new 10GE circuit every 12 days • ~40 new 10G circuits in 2008 (WAN and MAN), i.e. 1 new 10GE circuit every 9 days (the rate arithmetic is sketched below) • The first ESnet4 Science Data Network switch has just been installed in New York City; it is connected to the Level3 Infinera gear
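The quoted deployment pace follows directly from the circuit counts; a one-line check:

```python
# Quick check of the circuit deployment rates quoted on the slide.
def days_per_circuit(circuits_per_year: int, days: int = 365) -> float:
    return days / circuits_per_year

print(f"2007: ~30 circuits -> one every {days_per_circuit(30):.0f} days")  # ~12
print(f"2008: ~40 circuits -> one every {days_per_circuit(40):.0f} days")  # ~9
```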

  9. ESnet4 IP + SDN Configuration, April 2007 [planned network map] • Legend: ESnet IP core, ESnet Science Data Network (SDN) core, ESnet SDN core over existing NLR links, lab supplied links, LHC related links, MAN links, and international IP connections • Hub types: IP switch/router hubs, IP switch only hubs, and SDN switch hubs; Layer 1 optical nodes at eventual ESnet points of presence and optical nodes not currently in ESnet plans; lab sites • Hubs shown: Seattle, Portland, Boise, Sunnyvale, LA, San Diego, Salt Lake City, Denver, Albuquerque, El Paso, KC, Tulsa, Houston, Baton Rouge, Chicago, Indianapolis, Nashville, Cleveland, Pittsburgh, Atlanta, Jacksonville, Raleigh, Washington DC, Philadelphia, NYC, and Boston • All circuits are 10 Gb/s

  10. ESnet4 IP + SDN Configuration, September 2007 [planned network map] • Same legend and hub locations as the April 2007 map • All circuits are 10 Gb/s, with one OC48 segment

  11. ESnet4 IP + SDN Configuration, September 2008 [planned network map] • Same legend and hub locations as the earlier configuration maps • All circuits are 10 Gb/s, or multiples thereof, with one OC48 segment

  12. ESnet4 IP + SDN, 2011 Configuration [planned network map] • Same legend and hub locations as the earlier configuration maps • Numeric link labels (mostly 3 to 5) give the planned number of 10 Gb/s circuits per segment; one OC48 segment remains

  13. ESnet4 Built Out [planned network map] • Core networks reach 50-60 Gbps by 2009-2010 (10 Gb/s circuits) and 500-600 Gbps by 2011-2012 (100 Gb/s circuits) • Production IP core (10 Gbps) • Science Data Network core (20-30-40-50 Gbps) • MANs (20-60 Gbps) or backbone loops for site access • International connections, including CERN via USLHCNet (30+ Gbps), Canada (CANARIE), Europe (GEANT), Asia-Pacific, Australia, GLORIAD (Russia and China), and South/Latin America (AMPATH) • High speed cross-connects with Internet2/Abilene • Primary DOE Labs, IP core hubs, SDN hubs, and possible hubs • Hubs shown include Seattle, Boise, Sunnyvale, LA, San Diego, Denver, Albuquerque, Kansas City, Tulsa, Houston, Chicago, Cleveland, Atlanta, Jacksonville, Washington DC, New York, and Boston • Core network fiber path is ~14,000 miles / 24,000 km; map scale annotations of 1625 miles / 2545 km and 2700 miles / 4300 km
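A rough sanity check of the build-out numbers: the 2009-2010 and 2011-2012 core capacities correspond to roughly five or six parallel circuits per core path at 10 Gb/s and 100 Gb/s respectively. The wave counts below are inferred from the totals, not stated on the slide.

```python
# Inferred circuit counts behind the aggregate core capacities on the slide.
def aggregate_gbps(circuits: int, gbps_per_circuit: int) -> int:
    return circuits * gbps_per_circuit

print(aggregate_gbps(5, 10), "-", aggregate_gbps(6, 10), "Gbps (2009-2010)")    # 50 - 60
print(aggregate_gbps(5, 100), "-", aggregate_gbps(6, 100), "Gbps (2011-2012)")  # 500 - 600
```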
