Update on National Networks: Status, Application Targets, Challenges & Opportunities


Presentation Transcript


  1. Update on National Networks: Status, Application Targets, Challenges & Opportunities
  Charlie Catlett, Senior Fellow, Computation Institute, University of Chicago and Argonne National Laboratory; Director, NSF TeraGrid Initiative

  2. [Map: the DOE Office of Science complex and the US research community, showing universities supported by SC, major user facilities, SC program sites, and DOE specific-mission, program-dedicated, and multiprogram laboratories: Ames, Argonne, Brookhaven, Fermilab, Lawrence Berkeley, SLAC, Princeton Plasma Physics, Lawrence Livermore, Thomas Jefferson, General Atomics, Oak Ridge, Los Alamos, Sandia, NREL, Pacific Northwest, and Idaho.] Bill Johnston, ESnet

  3. TeraGrid PIs by Institution (as of May 2006)
  [Map legend: Blue: 10 or more PIs; Red: 5-9 PIs; Yellow: 2-4 PIs; Green: 1 PI]

  4. TeraGrid Science Gateways Initiative: Community Interface to Grids
  • Common web portal or application interfaces (database access, computation, storage, workflow, etc.)
  • “Back-end” use of TeraGrid computation, information management, visualization, or other services
  • Standard approaches so that science gateways may readily access resources in any cooperating grid (TeraGrid, Grid-X, Grid-Y) without technical modification (a sketch of the pattern follows)
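The gateway pattern can be pictured as a thin front end that accepts community requests through one common interface and forwards them to whichever grid backs the gateway. A minimal sketch, assuming a hypothetical REST job-submission endpoint; the URLs, field names, and submit_job helper are illustrative and not part of the actual TeraGrid gateway software.

```python
# Hypothetical gateway front end: one common interface, multiple
# cooperating grids as interchangeable back ends. Standard library only.
import json
from urllib import request

# Assumed endpoints; real gateways used their own portal and grid middleware.
GRID_ENDPOINTS = {
    "teragrid": "https://gateway.example.org/teragrid/jobs",
    "grid-x": "https://gateway.example.org/grid-x/jobs",
}

def submit_job(grid: str, application: str, inputs: dict) -> str:
    """Submit a community job to a cooperating grid; return a job handle."""
    payload = json.dumps({"application": application, "inputs": inputs})
    req = request.Request(GRID_ENDPOINTS[grid], data=payload.encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["job_id"]

# The community sees the same call no matter which grid does the work:
# submit_job("teragrid", "namd", {"structure": "example.pdb"})
```

The point of the sketch is the endpoint table: switching grids changes a lookup key, not the interface the science community programs against.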

  5. TeraGrid Science Gateway Partner Sites
  [Map: TG-SGW partner sites] 21 Science Gateway partners (and growing); over 100 partner institutions

  6. Some Observations
  • While many challenges and concepts are not new…
  • The need for huge pipes remains constant; the definition of “huge” continues to grow
  • There is a large dynamic range between average and peak bandwidth requirements; this too is not new, but the prefixes change (Mb -> Gb -> Tb…)
  • Switched circuits are older than the Internet, certainly not new, but the names and the bandwidth both change (POTS, ATM SVCs, switched lambdas…)
  • The nature of applications really does seem to be changing:
    • Service-oriented architectures
    • Web services
    • More direct control in the hands of application developers… for everything but the network

  7. Examples
  • Many flows leading to huge loads: YouTube
    • After one year, YouTube drives 20 Gb/s
    • Traffic growth is roughly 20% per month
  • Web services enabling growth, driven by embedding services within applications
    • Amazon S3: many clients, and you can write your own since it is a web service; it also supports BitTorrent (see the sketch below)
  • We will “big brother” ourselves: more video, larger email attachments!
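The S3 point is that storage is reachable through a plain web-service API, so any application can embed it. A minimal sketch using the boto3 SDK, which postdates this talk (2006-era clients spoke the REST or SOAP interface directly); the bucket and key names are placeholders.

```python
# Store and fetch an object through S3's web-service API.
# Requires `pip install boto3` and AWS credentials configured locally.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"  # placeholder; the bucket must already exist

# Upload: one authenticated HTTP PUT is the whole protocol.
s3.put_object(Bucket=BUCKET, Key="data/report.txt",
              Body=b"traffic grew again this month\n")

# Download it back; the body streams over HTTP.
obj = s3.get_object(Bucket=BUCKET, Key="data/report.txt")
print(obj["Body"].read().decode())
```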

  8. Drivers for User-Controlled Networks
  • More and more organizations are acquiring their own fiber networks: universities, schools, hospitals, businesses
  • Long-haul fiber is very expensive to obtain and light
    • An alternative is “dim fiber”: point-to-point wavelengths
    • But users want the flexibility to do configuration and change management as they can with dark fiber
  • Increasingly, science needs dedicated networks for specific applications and disciplines, to support high-data-volume grids
  • Users want to be able to manipulate the network in the same way they can manipulate the application
  Bill St. Arnaud (CANARIE)

  9. Application Requirements Driving ESnet Architecture. Bill Johnston, ESnet

  10. Next Generation ESnet
  • Architecture and capacity are driven by the SC program requirements
  • Main architectural elements and the rationale for each element:
    1) A high-reliability IP core (e.g., the current ESnet core) to address general science requirements, Lab operational requirements, backup for the SDN core, a vehicle for science services, and full-service IP routers
    2) Metropolitan Area Network (MAN) rings to provide dual site connectivity for reliability, much higher site-to-core bandwidth, support for both production IP and circuit-based traffic, and multiple connections between the SDN and IP cores
    2a) Loops off the backbone rings to provide dual site connections where MANs are not practical
    3) A Science Data Network (SDN) core for provisioned, guaranteed-bandwidth circuits to support large, high-speed science data flows; very high total bandwidth; multiple connections to MAN rings for protection against hub failure; an alternate path for production IP traffic; and less expensive routers/switches. The initial configuration is targeted at the LHC, which is also the first step toward the general configuration that will address all SC requirements; other, as-yet-unknown bandwidth requirements can be met by adding lambdas
  • A toy model of the dual-connectivity rationale follows this list
  Bill Johnston, ESnet
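The reliability rationale for the MAN rings can be checked in miniature: model a site, its two ring hubs, and the two cores as a graph, then confirm the site still reaches both cores when either hub fails. A toy model with invented node names, not ESnet's actual topology.

```python
# Toy reachability check for the dual-connectivity rationale above.
edges = {("site", "hub-a"), ("site", "hub-b"),        # MAN ring: dual homing
         ("hub-a", "ip-core"), ("hub-b", "sdn-core"),
         ("ip-core", "sdn-core")}                     # multiply connected cores

def reachable(edges, start):
    """All nodes reachable from start over undirected edges."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for a, b in edges:
            if node in (a, b):
                nxt = b if node == a else a
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

# Fail each hub in turn; the site should still reach both cores.
for hub in ("hub-a", "hub-b"):
    surviving = {e for e in edges if hub not in e}
    ok = {"ip-core", "sdn-core"} <= reachable(surviving, "site")
    print(f"lose {hub}: both cores still reachable = {ok}")
```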

  11. [Map: the National LambdaRail footprint, showing NLR-owned fiber (fiber on the SAND-LOSA-SUNN path belongs to CENIC); PoP types (WaveNet/FrameNet/PacketNet PoPs; “MetaPoPs” serving as members’ primary connection points; PoPs placed for signal regeneration but usable for secondary member connections; PoPs established for members’ regional needs; PoPs at exchange points); and per-segment allocated-wave utilization (percent used / waves free) across roughly 30 city PoPs from SEAT to JACK.] Tom West, NLR

  12. [Diagram: Layer 1 topology with the IP network. Provisional topology, subject to discussion.] Rick Summerhill

  13. Virtual Circuit Network Services
  • A top priority of the science community
  • Today: primarily to support bulk data transfer with deadlines
  • In the near future:
    • Support for widely distributed Grid workflow engines
    • Real-time instrument operation
    • Coupled, distributed applications
  • To get an idea of how circuit services might be used to support the current trends, look at the one-year history of the flows that are currently the top 20, and estimate from the flow history what the characteristics of a circuit set up to manage each flow would be (a toy version of that estimate follows)
  Bill Johnston, ESnet
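That estimation step can be made concrete: given a flow's transfer history, derive what a circuit dedicated to it would look like. A toy sketch with invented flow records; the record format and the peak/average/duty-cycle heuristic are illustrative, not ESnet's actual analysis.

```python
# Toy circuit estimate from a flow's transfer history.
# Each record: (start_hour, end_hour, gigabytes moved). Data is invented.
flow_history = [(0, 4, 900), (26, 29, 700), (50, 51, 400)]

def circuit_estimate(history):
    """Suggest circuit size and duty cycle for a recurring bulk flow."""
    start = min(s for s, _, _ in history)
    end = max(e for _, e, _ in history)
    busy_hours = sum(e - s for s, e, _ in history)
    total_gbits = 8 * sum(gb for _, _, gb in history)
    peak_gbps = max(8 * gb / ((e - s) * 3600) for s, e, gb in history)
    return {
        "peak_gbps": round(peak_gbps, 2),    # what the circuit must carry
        "avg_gbps": round(total_gbits / ((end - start) * 3600), 3),
        "duty_cycle": round(busy_hours / (end - start), 2),
    }

print(circuit_estimate(flow_history))
# A low duty cycle with a high peak argues for scheduling circuits
# rather than nailing up permanent bandwidth.
```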

  14. CANARIE Approach: UCLP
  • User Controlled LightPaths: a configuration and provisioning tool built around grid technology using web services
  • A third party can concatenate cross-connects from various links, routers, and switches to produce a wide-area network that is under their control (see the sketch below)
  • Articulated Private Network (APN): a next-generation VPN
  • Uses a Service-Oriented Architecture (SOA), so the network can be integrated with other web-service applications
  Bill St. Arnaud (CANARIE)
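The UCLP idea, the network itself exposed as composable services, can be sketched as follows. Everything here (the LightPath class, the concatenate helper, the city names) is hypothetical; the real tool exposed network elements as web services, and this only shows the shape of third-party composition.

```python
# Hypothetical sketch of UCLP-style composition: a third party chains
# cross-connects from different providers into one end-to-end lightpath.
from dataclasses import dataclass

@dataclass
class LightPath:
    """One provisioned wavelength segment between two endpoints."""
    src: str
    dst: str

def concatenate(segments: list[LightPath]) -> LightPath:
    """Join segments end to end, verifying that they actually chain."""
    for a, b in zip(segments, segments[1:]):
        if a.dst != b.src:
            raise ValueError(f"{a.dst} does not connect to {b.src}")
    # In UCLP this is where each element's web service would be invoked
    # to set the physical cross-connect; here we just compose the path.
    return LightPath(segments[0].src, segments[-1].dst)

# A user-assembled wide-area path spanning two providers' segments:
path = concatenate([LightPath("ottawa", "chicago"),
                    LightPath("chicago", "seattle")])
print(path)  # LightPath(src='ottawa', dst='seattle')
```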

  15. Extending Networks into Applications
  [Diagram: a user’s network extends into the end systems. Interface cards and ports carry URI addresses (http://10.0.0.1 through http://10.0.0.6); the VPN extends into the computer to specific processes; Layer 3 web-service virtual routers, web-service-wrapped routing daemons, and time slices or software processes exposed as web services ride over a DWDM network, with a single computer or web-service instance orchestrating each user’s (User A, User B) network.] Bill St. Arnaud (CANARIE)

  16. Hybrid Networks: A Blast from the Past (1991!)
  • Technical issues: rapid growth of the Internet; new applications increasing bandwidth requirements and network usage; end-to-end performance
  • Applications: distributed computing and digital libraries require peak bursts of bandwidth several orders of magnitude greater than average traffic
  • Proposed architecture: “The Internet architecture should, wherever possible, combine the advantages of packet switched networks and of virtual circuit networks [to support] the ability to increase dynamically in capacity...”
  From “Life After Internet: Making Room for New Applications” (Larry Smarr and Charlie Catlett), in Building Information Infrastructure, ed. Brian Kahin, McGraw-Hill.

  17. Conclusions
  • National networks will do on-demand today, and better on-demand next year
    • Today it takes days or weeks; in 2007/8 it will take minutes
    • Today the granularity is per site; multiple efforts are trying to move to per application: CANARIE UCLP, ESnet OSCARS, Internet2 HOPI
  • Issues will remain
    • Getting a switched lambda to a desktop is not a national backbone issue; it falls to regional networks and campus/lab networks
  • Maybe we are doing OK!
