A follow-up on network projects

Presentation Transcript


  1. HEPiX Fall 2013: A follow-up on network projects. Sebastien.Ceuterickx@cern.ch. Co-authors: Edoardo.Martelli@cern.ch, David.Gutierrez@cern.ch, Carles.Kishimoto@cern.ch, Aurelie.Pascal@cern.ch. IT/Communication Systems.

  2. Agenda • Latest network evolution • Network connectivity at Wigner • Business Continuity • Status of IPv6 deployment • TETRA deployment

  3. Latest network development

  4. Upgrade of the Geneva data center • Migration to Brocade routers completed • A two-year project • No service interruption • Benefits: • 100 Gbps links • Simplified topology (from 22 to 13 routers) • Lower power consumption per port • Margin for scalability • Enhanced features (MPLS, virtual routing)

  5. CERN Data Center today [Diagram: the CORE network, built on 100 Gbps links with MPLS, interconnects the LCG, GPN and External Network domains (switching fabrics of 5.28 Tbps, 1 Tbps and 1.36 Tbps respectively), the firewall, the Internet and Wigner uplinks, and the racks / distribution / backbone layers; aggregate capacities shown: ∑ 200, 140, 60, 20 and 12 Gbps]

  6. Scaling the Data Center [Diagram: backbone and distribution routers interconnected by 10s of 100 Gbps links; distribution routers fan out to storage and CPU servers over 100s of 10 Gbps links; the intent is to try to skip the 40 Gbps interface generation]

  7. Scaling the Top of the Rack • Service capacity depends on service purpose • Blocking factor: 2 for CPU, 5 for storage (arithmetic sketched below) [Diagram: storage and CPU servers attach at 1 Gbps or 10 Gbps (10GBase-T?) to top-of-rack switches, which uplink to the distribution router with n x 10 Gbps or m x 100 Gbps (40 Gbps?)]
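  The blocking factor quoted above is simply the ratio of server-facing capacity to uplink capacity at the top-of-rack switch. A minimal sketch in Python (my illustration; the port counts are hypothetical, not figures from the slides):

  def blocking_factor(servers, nic_gbps, uplinks, uplink_gbps):
      """Downlink capacity divided by uplink capacity; 1.0 means non-blocking."""
      return (servers * nic_gbps) / (uplinks * uplink_gbps)

  # Hypothetical CPU rack: 40 servers with 1 Gbps NICs behind 2 x 10 Gbps
  # uplinks gives the blocking factor of 2 quoted for CPU racks.
  print(blocking_factor(40, 1, 2, 10))    # 2.0

  # Hypothetical storage rack: 10 servers with 10 Gbps NICs behind the same
  # 2 x 10 Gbps uplinks gives the factor of 5 quoted for storage racks.
  print(blocking_factor(10, 10, 2, 10))   # 5.0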

  8. Extending the Tier0 to Wigner [Diagram: the CORE network, backbone, distribution, racks and Internet access in Geneva, Switzerland, extended to Budapest, Hungary over ∑ 240 Gbps of inter-site capacity]

  9. WLCG Tier0 [Diagram: CERN and Wigner each host racks, distribution and backbone routers, joined through the CORE / MPLS network and the Internet]

  10. Business Continuity

  11. Wigner for Business Continuity [Diagram: an extended CORE spanning CERN and Wigner using virtual routers and MPLS; each site has its own LCG, GPN, External Network, firewall and Internet access]

  12. 2nd Network Hub at CERN [Diagram: today's layout, with the CORE network, firewall, External Network, Internet and Wigner uplinks and the LCG / GPN domains (∑ 140, 20, 12, 200 and 60 Gbps) all concentrated in a single building]

  13. 2nd Network Hub at CERN [Diagram: the same services duplicated across two CORE hubs, each with its own firewall, External Network and LCG / GPN connectivity, so that no single building carries all traffic]

  14. IPv6 deployment at CERN

  15. [Timeline] Network Database: schema and data IPv6-ready; Admin Web: IPv6 integrated • 2012: Configuration Manager supports IPv6 routing; gradual deployment on the routing infrastructure starts; the Data Center is dual-stack; NTPv6 and DNSv6 • 2013: DHCPv6; automatic DNS AAAA configuration • Today: the infrastructure is dual-stack; Firewallv6 automated configuration; User Web and SOAP integrate IPv6 (a resolver check is sketched below)
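  Concretely, once the automatic AAAA configuration is in place, a registered dual-stack host resolves over both address families. A minimal check in Python using the standard resolver (my example, not CERN tooling; the host name is only an illustration):

  import socket

  def addresses(host, family):
      """All addresses of the given family that the DNS returns for host."""
      try:
          return sorted({ai[4][0] for ai in socket.getaddrinfo(host, None, family)})
      except socket.gaierror:
          return []  # no record published for this address family

  # A dual-stack host should report non-empty lists for both families.
  print(addresses("www.cern.ch", socket.AF_INET))   # A records (IPv4)
  print(addresses("www.cern.ch", socket.AF_INET6))  # AAAA records (IPv6)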

  16. IPv4 / IPv6: same portfolio • Identical performance, common tools and services • Dual stack, dual routing • OSPFv2 / OSPFv3 • BGP IPv4 and IPv6 peers • Service managers decide when they are ready for IPv6 • Devices must be registered • IPv6 autoconfiguration (SLAAC) disabled • RAs announce the default gateway and IPv6 prefixes flagged no-autoconfig • DHCPv6 • MAC addresses as DUIDs: painful without RFC 6939 (see the encoding sketch below) • ISC has helped a lot (beta code implementing classes for IPv6) • DHCPv6 clients might not work out of the box
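  Background on the DUID pain point: DHCPv6 identifies clients by DUID rather than by MAC address, so a registration database keyed on MACs only works if clients send a DUID that embeds the MAC (DUID-LL), or if relays supply the client's link-layer address (the RFC 6939 option). A minimal sketch in Python (illustrative only) of the DUID-LL encoding defined in RFC 3315:

  import struct

  def duid_ll(mac):
      """DUID-LL: 16-bit type (3), 16-bit hardware type (1 = Ethernet), MAC bytes."""
      mac_bytes = bytes(int(octet, 16) for octet in mac.split(":"))
      return struct.pack("!HH", 3, 1) + mac_bytes

  # A made-up MAC address for illustration:
  print(duid_ll("00:16:3e:12:34:56").hex())  # -> 0003000100163e123456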

  17. Lots of VMs • The current VM adoption plan will cause IPv4 depletion during 2014. Two options: • A) VMs with only public IPv6 addresses: + unlimited number of VMs; - several applications don't run over IPv6 today (PXE, AFS, ...); - very few remote sites have IPv6 enabled (limited remote connectivity); + will push IPv6 adoption in the WLCG community • B) VMs with private IPv4 and public IPv6: + works flawlessly inside the CERN domain; - no connectivity with remote IPv4-only hosts (NAT solutions not supported or recommended). A reachability check for option A is sketched below.
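  For option A, the practical question per remote peer is whether it is reachable over IPv6 at all. A minimal Python sketch (a hypothetical helper, not part of any CERN provisioning system) that answers it with a plain TCP connect:

  import socket

  def reachable_over_ipv6(host, port=80, timeout=3.0):
      """True if some AAAA address of host accepts a TCP connection on port."""
      try:
          infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
      except socket.gaierror:
          return False  # no AAAA record: the host is invisible to an IPv6-only VM
      for *_, sockaddr in infos:
          with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
              s.settimeout(timeout)
              if s.connect_ex(sockaddr) == 0:
                  return True
      return False

  print(reachable_over_ipv6("www.cern.ch"))  # True on a dual-stack network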

  18. TETRA deployment

  19. What is TETRA? • A digital professional radio technology • ETSI standard (UHF band, 410-430 MHz) • Uses "walkie-talkie" handsets • Voice services • Message services • Data and other services • Designed for daily safety and security operations

  20. The project • Update the radio system used by the CERN Fire Brigade • A fully secured radio network (unlike GSM) • Complete surface and underground coverage • Cooperation with the French and Swiss authorities • Enhanced features and services • Fully operational since early 2013 • 2.5 years of work

  21. Which services? • Interconnection with other networks • Distinct or overlapping user communities: security, transport, experiments, maintenance teams… • Outdoor and indoor geolocation • Lone-worker protection

  22. Conclusion • The network is ready to cope with ever-increasing needs • Wigner is fully integrated • Business Continuity is under development • Before end-2013, IPv6 will be fully deployed and available to the CERN community • The TETRA system provides CERN with an advanced, fully secured radio network

  23. Thank you! Questions?

  24. Some links • A short introduction to the Worldwide LCG, Maarten Litmaath: https://espace.cern.ch/cern-guides/Documents/WLCG-intro.pdf • Physics computing at CERN, Helge Meinhard: https://openlab-mu-internal.web.cern.ch/openlab-mu-internal/03_Documents/4_Presentations/Slides/2011-list/H.Meinhard-PhysicsComputing.pdf • WLCG – Beyond the LHCOPN, Ian Bird: http://www.glif.is/meetings/2010/plenary/bird-lhcopn.pdf • LHCONE – LHC use case: http://lhcone.web.cern.ch/node/23 • LHC Open Network Environment, Bos-Fisk paper: http://lhcone.web.cern.ch/node/19 • Introduction to the CERN Data Center, Frederic Hemmer: https://indico.cern.ch/getFile.py/access?contribId=87&resId=1&materialId=slides&confId=243569 • The invisible Web: http://cds.cern.ch/journal/CERNBulletin/2010/49/News%20Articles/1309215?ln=en • CERN LHC technical infrastructure monitoring: http://cds.cern.ch/record/435829/files/st-2000-018.pdf • Computing and network infrastructure for Controls: http://epaper.kek.jp/ica05/proceedings/pdf/O2_009.pdf

  25. Extra Slides

  26. [image-only slide]

  27. LHCONE • LHC Open Network Environment • Enables high-volume data transport between T1s, T2s and T3s • Separates large LHC flows from the general-purpose routed R&E infrastructures • Provides access locations that serve as entry points into a network private to the LHC T1/T2/T3 sites • Complements the LHCOPN
