
T0/T1/T2 Networking Issues, May 2005. David Foster, Communications Systems Group Leader, CERN.






  1. EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH / EUROPEAN LABORATORY FOR PARTICLE PHYSICS. T0/T1/T2 Networking Issues, May 2005. David Foster, Communications Systems Group Leader, CERN.

  2. CERN Networking for LCG
  • LCG will require:
    • Several thousand Gigabit Ethernet ports in the computer center
    • Hundreds of Ten Gigabit Ethernet connections in the computer center
    • 10+ Ten Gigabit Ethernet links to the T1s (WAN)
    • 8+ Ten Gigabit Ethernet links to the experiments
  • Challenges:
    • Operation of the system as ONE entity
    • Ensuring security and protection of the system
    • Good monitoring to understand how the network is being used
    • T1/T2 campus infrastructures

  3. LCG cluster network (diagram)
  • 2.4 Tbps core, with a distribution layer feeding the campus network, the experimental areas and the WAN (one double link not shown)
  • Links are Gigabit Ethernet, Ten Gigabit Ethernet and double Ten Gigabit Ethernet
  • Distribution switches fan a 10 Gbps uplink out to 88×1 Gbps or 32×1 Gbps server ports; one module fans 10 Gbps out to 10×10 Gbps (an oversubscription sketch follows below)
  • ~2000 tape and disk servers, ~6000 CPU servers
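The fan-out figures in the diagram imply heavy oversubscription at the distribution layer. Below is a minimal sketch, in Python, of that arithmetic for the two port counts quoted on the slide (88 and 32 gigabit ports behind a 10 Gb/s uplink); the helper function and its name are purely illustrative and not part of any CERN tooling.

```python
# Rough oversubscription estimate for the LCG distribution layer sketched on
# the slide. The port counts (88*1 Gb/s or 32*1 Gb/s behind a 10 Gb/s uplink)
# come from the diagram; the helper is illustrative only.

def oversubscription(server_ports: int, port_gbps: float, uplink_gbps: float) -> float:
    """Ratio of aggregate downstream capacity to uplink capacity."""
    return (server_ports * port_gbps) / uplink_gbps

if __name__ == "__main__":
    for ports in (88, 32):
        ratio = oversubscription(ports, 1.0, 10.0)
        print(f"{ports} x 1 Gb/s behind a 10 Gb/s uplink -> {ratio:.1f}:1 oversubscription")
```

Ratios well above 1:1 are typical for server access layers; the design relies on the aggregate traffic from the CPU and disk servers staying well below the sum of their port speeds.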

  4. The future ???
  • Coming faster than we imagine … ???
  • 3 cores
  • 1 TFlop??
  • Liquid cooled
  • Networked, 20G disk
  • <$300??
  • And then there is the PS3 …

  5. T0/T1/T2 Interconnectivity (diagram)
  • T1 sites shown: GridKa, IN2P3, TRIUMF, Brookhaven, ASCC, Nordic, Fermilab, CNAF, SARA, PIC, RAL
  • T2s and T1s are inter-connected by the general purpose research networks
  • Dedicated 10 Gbit links connect the T0 to the T1s
  • Any Tier-2 may access data at any Tier-1

  6. LHC T1 Networking
  • Needs to provide high bandwidth production connections to the T0
    • For the first time for HEP, the WAN is an integral part of the computing system
    • Dedicated 10 Gb/sec links
  • Is the combination of a number of initiatives:
    • GÉANT networking deployed in Europe (GÉANT2): SURFnet, SARA; UKERNA, RAL; RENATER, IN2P3; DFN, FZK; NORDUnet, Nordic T1; RedIRIS, PIC; GARR, CNAF (Bologna)
    • Dedicated transatlantic and transpacific links: TRIUMF, ASCC, FNAL, BNL
    • Networking initiatives: NetherLight, UKLight, GLIF, GLORIAD, UltraLight and many others interconnecting China, Asia Pacific, North America, South America, etc.

  7. LHC T2 Networking
  • Needs to provide connectivity between T2s, and from T2 to T1; no particular access pattern is assumed
  • T2s are expected to have good (1 Gb/sec -> 10 Gb/sec) access to national and international research networks:
    • GÉANT
    • ESnet
    • Abilene
    • …

  8. GÉANT2 Topology
  • Up to 15 of the 30 consortium partners will be connected to dark fibre (DF)
  • Selection of preferred providers is expected to be completed on 9 May 2005 in Pisa
  • On 31 March, all DF routes relevant to the 7 European T1s were selected

  9. GÉANT2 DF Topology as relevant to LHC (map)
  • Nodes shown: SNIC, CPH, SARA, AMS, RAL, LON, FRA, FZK, PAR, GVA (LHC), IN2P3, PIC, MIL, MAD, BOL
  • Up to 6 further Central European locations

  10. Cost Model
  • GÉANT2 does not have prices; it shares cost
  • The NRENs on the dark fibre cloud subscribe to a GEANT+ service: about 2 M€ per NREN per year
    • 10 Gb/s IP and 10 Gb/s worth of point-to-point services
  • This subscription finances the DF backbone
  • Extra wavelengths for projects at marginal cost
  • Opportunities for more direct connections (T2-T1) than first thought? (a rough sizing sketch follows below)
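A back-of-envelope reading of the two figures quoted on slides 8 and 10 (up to 15 NRENs on the dark fibre cloud, roughly 2 M€ per NREN per year) gives an idea of the size of the subscription pool. The sketch below is purely illustrative arithmetic under those assumptions, not a statement of the actual GÉANT2 budget or of what the DF backbone costs.

```python
# Back-of-envelope sizing of the GEANT+ subscription pool, using only the
# figures quoted on the slides: up to 15 NRENs on the dark fibre cloud and
# roughly 2 MEUR per NREN per year. Purely illustrative.

NRENS_ON_DF_CLOUD = 15          # "up to 15 of 30 consortium partners"
SUBSCRIPTION_MEUR_PER_YEAR = 2  # "about 2 MEUR per NREN per year"

pool = NRENS_ON_DF_CLOUD * SUBSCRIPTION_MEUR_PER_YEAR
print(f"Subscription pool financing the DF backbone: ~{pool} MEUR/year")
print("Each subscription buys 10 Gb/s IP plus 10 Gb/s of point-to-point services.")
```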

  11. GLIF map (source: GLIF)

  12. ESnet Goal, 2007/2008 (map of major DOE Office of Science sites)
  • 10 Gbps enterprise IP traffic
  • 40-60 Gbps circuit-based transport
  • ESnet Science Data Network as a second core (30-50 Gbps, over National Lambda Rail)
  • ESnet IP core at ≥10 Gbps, plus metropolitan area rings and new ESnet hubs
  • High-speed cross connects with Internet2/Abilene
  • Lab-supplied and major international links to CERN, Europe, Japan, Asia Pacific and Australia

  13. Tier-0/1 Network Topology, 2005/2006 evolution (diagram)
  • CERN (Tier-0) at the centre, with T1s reached through GÉANT, NetherLight, UKLight, StarLight, ManLan, ESnet and CANARIE
  • T1 sites shown: IN2P3, TRIUMF, BNL, GridKa, PIC, CNAF, SARA, Nordic, RAL, ASCC, FNAL
  • NRENs shown: UKERNA, DFN, RedIRIS, GARR, SURFnet, RENATER, NORDUnet
  • Links are marked as 10 Gb/sec or < 10 Gb/sec (e.g. 1G, 2x1G, 6x1G, n x 1G)

  14. Basic Network Issues
  • Collections of circuits are not a network
    • GÉANT will provide diverse backup routes for the dedicated circuits
    • DOE/CERN will provide circuits to New York (BNL) and Chicago (Fermilab), with additional transit between New York and Chicago
    • Backup connectivity for TRIUMF (Canada) and ASCC (Taipei) is still being discussed
  • Testbeds and production are not the same thing
    • Many issues to be resolved need funding; testbeds are important
    • We need to evolve a clear plan for production infrastructures for LHC
  • The LHC network design is being discussed
    • The "T0/T1 Networking" sub-group of the Grid Deployment Board is preparing an architecture document
    • It aims to reach agreement on how the IP network on the dedicated circuits will be designed
    • It will indicate what type of equipment will be required: "who should put what, where"
  • Some technology investigations are underway
    • The use of long-distance WAN-PHY links, given OC192 interface costs
    • UCLP
    • GFP and VCAT technologies

  15. Basic Operational Issues
  • Many parties are involved in operations support
    • At the network layer there are a number of partners with different spheres of influence: GÉANT, the NRENs, commercial links, and the T0 and T1 centers
    • At the grid layer there are operations centers: Regional Operations Center (ROC), Grid Operations Center (GOC), Core Infrastructure Center (CIC), etc.
    • For the end user there is Global Grid User Support (GGUS)
    • Performance Enhancement and Response Teams (PERT) are emerging in some NRENs
  • The process for resolving end-user problems has yet to be fully defined
  • We still need to decide on a monitoring strategy and deploy appropriate monitoring tools (a small utilisation sketch follows below)
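The monitoring strategy is left open on the slide. As one concrete illustration, the sketch below derives link utilisation from two readings of a 64-bit interface octet counter (in the style of SNMP's ifHCInOctets); the function name, sample values and 300-second interval are assumptions for the example, not part of any agreed tool set.

```python
# Minimal sketch of per-link monitoring: deriving utilisation from two
# readings of an interface octet counter (e.g. SNMP ifHCInOctets).
# The counter values and the 10 Gb/s link speed are made up for
# illustration; no real monitoring tool or API is implied.

def utilisation(octets_t0: int, octets_t1: int, seconds: float,
                link_bps: float, counter_bits: int = 64) -> float:
    """Fraction of link capacity used between two counter samples."""
    delta = (octets_t1 - octets_t0) % (1 << counter_bits)  # handle counter wrap
    return (delta * 8) / (seconds * link_bps)

if __name__ == "__main__":
    # Hypothetical samples taken 300 s apart on a 10 Gb/s T0-T1 link.
    u = utilisation(octets_t0=1_000_000_000_000,
                    octets_t1=1_300_000_000_000,
                    seconds=300.0,
                    link_bps=10e9)
    print(f"Link utilisation over the interval: {u:.1%}")
```

Per-link figures of this kind, collected at the T0 and T1 edges, are one way to answer the "how is the network being used" question raised on slide 2.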

  16. Far-reaching Issues
  • There is tremendous momentum behind world-wide networking initiatives, and it is important that this continues
  • What we want to do today … "requirements"
    • A typical approach: size needs according to current understanding
  • What we can do tomorrow … "opportunity"
    • Affordability of high capacity end-to-end networking: CPU-memory-disk, bus-NIC, NIC-campus, campus-WAN
    • Opportunity for "business transformation": conceive of new ways of collaborating and sharing resources
    • Current grid use is largely off-line, batch-like
    • Continued advancement in "on-demand" end-to-end networking will provide for increasingly cost-effective, real-time and interactive usage
  • World class networking available to everyone will bring dramatic changes and opportunities
    • Digital divide issues need to be addressed to make cost-effective access to high performance networking accessible to everyone
    • Pervasive high performance networking is needed to realise the vision of pervasive high performance grid services
