SURA IT Committee Regional Infrastructure Discussion July 27, 2006 Atlanta


Presentation Transcript


  1. SURA IT Committee Regional Infrastructure Discussion, July 27, 2006, Atlanta. Don Riley, SURA IT Fellow

  2. SURA RII Background Concepts for the SURA Regional Infrastructure Initiative were developed by the SURA Crossroads Architecture Working Group (AWG) in an Aug. 2001 white paper: • Enhance the research capacity of the Southeast, enabling it to participate in national research initiatives and opportunities; • Inclusive - special focus on connectivity for non-metropolitan SURA member institutions; • Avoid limitations on the use of the infrastructure imposed by restrictive use policies; • Support networked services at the lowest possible cost and create new services where needed; • Benefit the broadest community by aggregating connectivity at carrier hotels where possible; • Desire to significantly disrupt the existing pricing models for network infrastructure.

  3. Technical Capability Set Articulated by the AWG • A complete set of fiber-connected campuses • Each campus can establish “waves” to any other Crossroads campus/PoP (see the sketch below) • First and last OEO demarc is on campus • Transport core is optically transparent • Core service centers aggregate and route packets/waves • Traditional voice services • Commodity IP services • High performance/Internet2 services • Custom telecom requirements (e.g. TeraGrid)
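
To make the "any campus to any other campus/PoP" wave model concrete, here is a minimal, hypothetical Python sketch of the capability set: campuses with on-campus OEO demarcs and an optically transparent core that records provisioned waves. The names (Campus, Wave, TransparentCore, provision_wave) are illustrative assumptions, not part of any SURA or Crossroads system.

```python
from dataclasses import dataclass, field

@dataclass
class Campus:
    """A fiber-connected campus with an on-campus OEO demarcation point."""
    name: str
    demarc: str  # e.g. the campus facility housing the first/last OEO conversion

@dataclass
class Wave:
    """A dedicated wavelength between two campuses riding the transparent core."""
    a_end: Campus
    z_end: Campus
    gbps: int = 10

@dataclass
class TransparentCore:
    """The optical transport core: carries waves end to end without OEO conversion."""
    waves: list = field(default_factory=list)

    def provision_wave(self, a: Campus, z: Campus, gbps: int = 10) -> Wave:
        # In the AWG model any campus can establish a wave to any other
        # campus/PoP, so the core simply records the provisioned wave.
        wave = Wave(a, z, gbps)
        self.waves.append(wave)
        return wave

core = TransparentCore()
w = core.provision_wave(Campus("Campus A", "Telecom Bldg"), Campus("Campus B", "CO Room"))
print(f"{w.a_end.name} <-> {w.z_end.name}: {w.gbps} Gb/s wave")
```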

  4. AWG Conceptual Regional InfrastructureAugust 2001

  5. What happened 2001 to 2004 • Geo Study of fiber possibilities for SURA • NSF DTF/ETF • Abilene upgrade • International connectivity • NLR • Regional optical initiatives • IEEAF - international connectivity collaboration • AT&T GridFiber Agreement

  6. SURA Region Map

  7. AT&T NexGen Fiber: 6,000 donated miles

  8. Snapshot:

  9. Planning Group Charge • Review the recently approved SURA IT Strategic Plan (attached below) • Review the original August 2001 SURA RII plan and architectural overview (http://www.sura.org/info_tech/infrastructure/infrastructure.html) • Review the AT&T GridFiber agreement and the recently developed ‘principles’ governing the development of recommendations for allocation of the AT&T assets • Review and understand the current state of significant network infrastructure initiatives that contribute to the development of state, regional, national, and international high performance computational and data grids, with particular attention to those involving dark fiber acquisition. This includes state networks and plans related to National LambdaRail participation/utilization, etc.

  10. Planning Group Charge • Based on the above inputs, develop and recommend an updated SURA Regional Infrastructure Plan: a framework for strategies and investment for SURA. This plan should, among other things, include the following items: • Develop and document a plan for utilizing the services and assets made available to SURA through the AT&T GridFiber Agreement. • The plan should encourage the leveraging of the AT&T GridFiber Agreement to secure additional assets/services in support of SURA’s strategic goals. • The plan should clearly identify how each major component supports one or more of the goals identified in 1-4 above. • The plan should include a financial analysis, clearly identifying costs and sources of funds associated with each major component. • Support SURA in the development of the AT&T GridWaves Agreement

  11. Conceptual Framework • IT Strategic Plan Drivers • High Performance Computing and national labs • SCOOP • Medicine and BioInformatics • Grids • … and Opportunities • Extra-regional resources and collaborations

  12. High Performance Computing and National Labs • Center for Computational Sciences, University of Kentucky, Dr. John Connolly, Director • East Carolina University, Dr. Ernest Marshburn, Director • CSIT, Florida State University, Dr. Joe Travis, Director • Computer and Information Technology Institute (CITI), Rice University, Jan Odegard, Director • Georgia Tech High Performance Computing, Ron Hutchins, CTO • Mississippi State University Engineering Research Center, Dr. David Marcum, Director • North Carolina Supercomputing Center, Dr. Eric Sills, Director* (* Note: North Carolina State University is going to take over as CASC member) • Center for Computational Sciences, Oak Ridge National Laboratory, Dr. Thomas Zacharia, Director • Texas A&M Supercomputer Center, Dr. Tom Putnam, Director • Texas Advanced Computing Center, UT Austin, Dr. Jay Boisseau, Director • Texas Learning & Computation Center, University of Houston, Dr. Lennart Johnson, Director • High Performance Computing Center, Texas Tech University, Dr. Phil Smith, Director • University of Florida, Gainesville, Dr. Chuck Frazier, Director • Louisiana State University, Ed Seidel, Director • …and then along comes Virginia Tech with their Apple

  13. SCOOP sites • Chesapeake Bay Observing System (CBOS) • Coastal Ocean Monitoring and Prediction System (COMPS) for West Florida • Gulf of Mexico Coastal Ocean Observing System (GCOOS) • NGLI (Mississippi bight) • South Atlantic Bight Synoptic Offshore Observational Network (SABSOON) • Southeast Atlantic Coastal Ocean Observing System (SEA-COOS) • SEAKEYS (Sustained Ecological Research Related to Management of the Florida Keys Seascape) • SFOMC (South Florida Ocean Measurement Center) • Texas Automated Buoy System (TABS) • Texas Coastal Ocean Observation Network (TCOON) • WAVCIS (wave-current information system; Louisiana Coast)

  14. Medicine and Bioinformatics: Top 25 in NIH funding (SURA region)* • Hopkins • Duke • UNC (CH) • UAB • Vanderbilt • Emory • UVa • UT - Dallas • U Md (Baltimore) • UT - MD Anderson • Miami • Florida • UT - Houston • Wake Forest • UT - Galveston • UT - San Antonio • South Carolina • GW • Kentucky • VCU • Georgetown • Arkansas • Tulane • Tennessee • UT-Austin • * Not all are SURA members

  15. NCI Centers participating in Grid Initiative (CaBig) • Duke • Georgetown • Hopkins • MD Anderson • South Florida • UAB • UNC • Vanderbilt • VCU • Wake Forest

  16. Grids • Extended Terascale Facility • Distributed Terascale Facility • GriPhyN (now Open Science Grid) • NC Bio-Grid • NCI Cancer Grid (CaBig) • Advanced Medical Detection Grid • SCOOP data grid • + Lots more since - including SURAgrid

  17. Collaborations and extra-regional opportunities • National LambdaRail • National backbone • Southeastern aggregators • MATP, NC, Ga, Fla, LEARN • Abilene • Lariat • National labs • Jefferson Lab nuclear physics partners • Argonne, Fermi, CERN • National buyer's club activities • Northeast, Upper Midwest

  18. Abilene Network 10 Gbps Upgrade

  19. SURA Member Locations [map of SURA member locations, markers numbered (1)–(15)]

  20. SURA Members, Nat’l Labs and CASC Locations [map; legend: SURA member, National Laboratory, SURA HPC sites (CASC member), SURA Medical Research sites, SURA Grid computing sites, SURA member in SCOOP, CASC member (non-SURA)]

  21. National Lambda Rail Phase I Network [map; NLR Phase I overlaid on the SURA member/lab locations, same legend as slide 20]

  22. Potential use of AT&T Network by NLR (red line) [map; same legend as slide 20]

  23. Examples of Regional-State Initiatives Overlaid with AT&T Network (red lines) [map; same legend as slide 20; initiatives shown: Northern Tier Initiative, NEREN, NYSERNet, Iowa, Lariat, MATP, OneNet, NCREN, GA Statewide Initiatives, FLR, LEARN, LA Statewide Initiatives]

  24. SURA IT Strategy 2005: Looking Back • SURAnet • Crossroads Regional GigaPop Support (SOX, MAX) • SURA RII • Video Conferencing / Voice over IP (ViDe) • National Middleware Initiative (NMI Testbed) • SURA CyberSecurity Working Group • Internet2; AT&T USAWaves; National LambdaRail • Refocused SURA IT Strategy (2003; again 2005) • Facilitating Collaboration on Science (eScience) • Strengthening the base: Enabling the Possible • Positioning SouthEast for Lead in IT and Science

  25. SURA IT Strategy 2006: Looking Forward • Foundation (Infrastructure) Building • Connectivity • Regional: Crossroads/SoX/MAX + MATP/SLR/FLR/LONI/LEARN etc. • National (AT&T GridFiber, NLR, Internet2) • International Opportunities (IEEAF, IRNC, Atlantic Wave) • Network Research • Regional Grid/HPC Infrastructure and Capacity Building • Middleware • High Performance Computing • Data storage • Identifying and Enabling “Science Communities” • Focused on Support for SURA Science Drivers • Jefferson Laboratory • SCOOP - Coastal Research • Bio-Informatics / Medical Research / Rural Telehealth

  26. NLR Today

  27. Internet2 Tomorrow - “NewNet”

  28. Use of SURA Donated AT&T Fiber (AT&T NexGen Routes) [map; legend: Red = AT&T route not yet available, Green = implementation complete, Yellow = implementation in process, Black = possible future deployment]

  29. International: Atlantic Wave

  30. SURAgrid Participants (As of April 2006) [map; legend: resources on-grid, SURA member] • Bowie State • GMU • UMD • UMich • UKY • UVA • UArk • GPN • Vanderbilt • ODU • UAH • USC • NCState • OleMiss • TTU • SC • UNCC • TACC • UAB • UFL • TAMU • LSU • GSU • ULL • Tulane

  31. Connecting DOE Labs to the World’s R&E and Commercial Nets: ESnet’s Domestic, Commercial, and International Connectivity (Spring 2006) [ESnet connectivity map; shows ESnet core hubs, IP and SDN cores, commercial and R&E peering points, high-speed peering points with Abilene (including PacificWave, MAX GPoP, SoX GPoP, StarLight, Equinix, MAE-East/West, NGIX-E/W), and international links to GÉANT (France, Germany, Italy, UK, etc.), CERN (USLHCnet, CERN+DOE funded), SINet (Japan), CA*net4 (Canada), AARNet (Australia), GLORIAD (Russia, China), Kreonet2 (Korea), TANet2 (Taiwan), Singaren, Netherlands, MREN, UltraLight, and AMPATH (S. America)] • ESnet provides: • High-speed peerings with Abilene, CERN, and US and international R&E networks • Management of the full complement of global Internet routes (about 180,000 unique IPv4 routes) in order to provide DOE scientists rich connectivity to all Internet sites (see the sketch below)
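
The "180,000 unique IPv4 routes" above are simply the number of distinct prefixes ESnet carries in its routing tables. As a rough, hypothetical illustration of that notion (not ESnet tooling; the dump format and field layout are assumptions), a short Python sketch that counts distinct prefixes in a textual route dump:

```python
import ipaddress

def count_unique_prefixes(lines):
    """Count distinct IPv4 prefixes in an iterable of routing-table lines.
    The first whitespace-separated field of each line is assumed to be the
    prefix, e.g. '198.51.100.0/24 via 192.0.2.1'."""
    prefixes = set()
    for line in lines:
        fields = line.split()
        if not fields:
            continue
        try:
            prefixes.add(ipaddress.ip_network(fields[0], strict=False))
        except ValueError:
            continue  # skip headers or malformed lines
    return len(prefixes)

sample = [
    "198.51.100.0/24  via 192.0.2.1",
    "203.0.113.0/24   via 192.0.2.1",
    "198.51.100.0/24  via 192.0.2.5",  # same prefix, different next hop
]
print(count_unique_prefixes(sample))  # -> 2 unique routes
```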

  32. ESnet Target Architecture: IP Core + Science Data Network Core + Metro Area Rings [map; IP core and SDN core hubs at Seattle, Sunnyvale, LA, San Diego, Albuquerque, Denver, Chicago, Atlanta, New York, and Washington, DC; metropolitan area rings that loop off the backbone; 10-50 Gb/s circuits; international connections to Europe (GEANT), CERN, Canada (CANARIE), Asia-Pacific, Australia, and South America (AMPATH); primary DOE Labs and possible hubs marked]

  33. ESnet3: A National IP Network Built on Various Circuit Infrastructure [map; Qwest-supplied 10 Gbps backbone, NLR-supplied 10 Gbps circuits, metro area networks, lab-supplied links, R&E peering points, and international connections (Canada, CERN, Europe, Russia and China, Asia-Pacific, Australia, AMPATH (S. America)); scale annotations of 2700 miles / 4300 km and 1200 miles / 1900 km] • ESnet network architecture consists of: • Circuits • Circuit interconnects • Hubs with routers and switches • Connected sites • Connections to other networks: US R&E, international, and commercial

  34. ESnet3 Layer 2 Architecture Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators (spring, 2006) [map of the ESnet IP core (packet over SONET optical ring and hubs) and the ESnet Science Data Network (SDN) core, with 42 end user sites: Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), laboratory sponsored (6), other sponsored (NSF LIGO, NOAA); link legend: international (high speed), 10 Gb/s SDN core, 10 Gb/s IP core, 2.5 Gb/s IP core, MAN rings (≥ 10 Gb/s), lab supplied links, OC12 ATM (622 Mb/s), OC12 / GigEthernet, OC3 (155 Mb/s), 45 Mb/s and less; commercial and R&E peering points and high-speed peering points with Internet2/Abilene marked] (see the line-rate sketch below)
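
For readers unused to the SONET labels in the legend above, the OC-n figures follow a simple rule: each OC-n runs at n times the 51.84 Mb/s OC-1 rate, which is where the 155 Mb/s (OC-3) and 622 Mb/s (OC-12) numbers come from. A minimal Python sketch of that arithmetic (the function name is illustrative only):

```python
# SONET optical carrier levels: the OC-n line rate is n x 51.84 Mb/s (the OC-1 rate).
OC1_MBPS = 51.84

def oc_rate_mbps(n: int) -> float:
    """Raw SONET line rate for OC-n, in Mb/s (usable payload is somewhat lower)."""
    return n * OC1_MBPS

for n in (3, 12, 48, 192):
    print(f"OC-{n}: {oc_rate_mbps(n):,.2f} Mb/s")
# OC-3 ~ 155.52, OC-12 ~ 622.08, OC-48 ~ 2488.32, OC-192 ~ 9953.28 Mb/s
```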

  35. LHC Tier 0, 1, and 2 Connectivity Requirements Summary [map; CERN Tier 0 links over USLHCNet, Tier 1 centers (TRIUMF - Atlas, Canada; BNL - Atlas; FNAL - CMS), Tier 2 sites, USLHC nodes, the ESnet IP core and SDN/NLR hubs, the Abilene/GigaPoP footprint, GÉANT, CANARIE, virtual circuits, and cross connects between ESnet and Internet2/Abilene] • Direct connectivity T0-T1-T2 • USLHCNet to ESnet to Abilene • Backup connectivity • SDN, GLIF, VCs

  36. Quilt Fiber-based RON Map

  37. SURA Region Fiber-based RON Map

  38. Internet2 SEGP Map

  39. EPSCOR States

  40. Level3/Wiltel/Progress Fiber Footprint

  41. Level3/Wiltel/Progress SURA Fiber Footprint

  42. Road Tour: State/RON updates

  43. Delaware

  44. Maryland-DC-Virginia

  45. MAX – MATP Combined Regional Infrastructure [diagram of the combined MAX and MATP DWDM/POS infrastructure across DC, Maryland, and Virginia, with MATP VORTEX DWDM nodes and sites at Baltimore, College Park (MAX T640), UMBC, JHU, UMB, GWU, UMD, GU, McLean (LVL3), Charlottesville, Lynchburg, Richmond, Arlington, Ashburn, Roanoke, and Norfolk (E-LITE); rings complete via Pittsburgh and via Durham; key: MAX Members & Projects, NGIX, MATP Members & Resources, BoSSnet, Abilene, GEANT, NREN, DREN, ESnet, NASA NISN, OC-48 POS, NLR, 10 GE, ATDnet, HOPI, DRAGON, Network Virginia] • Primary Members (MAX and MATP): GMU, GU, GWU, JeffLab/SURA, ODU, NASA, UMD, UVA, VCU, VT, W&M • Selected Connectors (of 39): Library of Congress, NASA GSFC, Nat. Archives & Records Adm., NSF, Nat. Inst. of Health, ATDnet, Nat. Library of Medicine, USGS, NOAA, USDA, Naval Research Lab, US Census, Howard Hughes Med. Inst., JHU, Smithsonian Institution, ISI East, UCAID, World Bank, Univ. System of Maryland (11 campuses), NetworkVirginia (651 campuses)

  46. VT Private Dark Fiber/DWDM Program with MBC [diagram: the VT Blacksburg campus connects over the Mid-Atlantic Broadband Cooperative's private DWDM system (Infinera DWDM on XO fiber, under development Fall 06) using VORTEX 10 Gbps LANPHY (1 wave lit, 3 waves dark), reaching NLR/MCNC Raleigh/Durham, NLR/SOX Atlanta, and MATP Washington DC for National LambdaRail, Internet2 Abilene/HOPI, and Mid-Atlantic Crossroads] (see the capacity sketch below)
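
As a quick back-of-the-envelope on the VORTEX system described above (figures taken from the slide: 10 Gbps LANPHY waves, 1 lit and 3 dark), a tiny Python sketch of lit versus potential capacity:

```python
# Figures from the slide: 10 Gb/s LAN PHY waves, 1 currently lit and 3 still dark.
WAVE_GBPS = 10
LIT_WAVES = 1
DARK_WAVES = 3

lit = LIT_WAVES * WAVE_GBPS
potential = (LIT_WAVES + DARK_WAVES) * WAVE_GBPS
print(f"Lit today: {lit} Gb/s; potential with all waves lit: {potential} Gb/s")
```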

  47. Jefferson Laboratory Connectivity [diagram showing the JLab site switch, Eastern LITE (E-LITE) at Old Dominion University (Lovitt, Bute St CO), ODU, W&M, VMASC, JTASC, NASA, an ESnet router, the MATP/Virginia Tech and MAX GigaPoP in DC, and the ESnet core toward NYC and Atlanta; link types include 10GE, OC192, and OC48]

  48. North Carolina

  49. NCNI Dark Fiber Network, Summer 2006 [map of Research Triangle dark fiber: connected locations include Duke, RENCI, MCNC, UNC, EPA, NIEHS, Cisco, ITS RCH, Level3, Qwest, and NCSU (Hillsborough and Centennial campuses), with some sites not yet connected; fiber shown from Progress Telecom, Level(3), DukeNet (?), ITC-Deltacom, and campus fiber]

  50. NCREN rPoP [diagram of a regional PoP: Cisco 7609 and Cisco 12xxx routers deliver NCREN IP transport service for R&E users and commercial (ISP) services, aggregating universities, regional users, and commercial users over fiber and ILEC/LEC access, with uplinks to other NCREN PoPs]
