
High Energy Physics Networking and Computing in Europe


Presentation Transcript


  1. High Energy Physics Networking and Computing in Europe EUSO Meeting Jorge Gomes - jorge@lip.pt (LIP Computer Centre) Palermo 4-May-2001

  2. LHC Computing • The CERN LHC will be pushing the computing and networking requirements of High Energy Physics in the coming years. • After filtering, the LHC experiments will record data at a rate of 100 MegaByte/sec. • Each LHC experiment foresees a recorded raw data volume of one PetaByte/year at the start of LHC operation. • The demands for processor power, storage and networking are at least 2 to 3 orders of magnitude beyond what we know how to handle today. • The complexity of accessing and processing this data is increased substantially by: • the size and global span of the experiments; • limited wide area network bandwidth.
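A quick back-of-the-envelope check shows how the two quoted figures fit together. The effective running time of roughly 10^7 seconds per year is an assumption, not stated on the slide:

```python
# Rough consistency check of the quoted LHC data rates.
recording_rate_bytes_per_s = 100e6   # 100 MB/s after the online filter
seconds_of_data_taking = 1e7         # assumed effective data taking per year

raw_data_per_year = recording_rate_bytes_per_s * seconds_of_data_taking
print(f"Raw data per experiment per year: {raw_data_per_year / 1e15:.1f} PB")
# -> about 1 PB/year, matching the figure quoted above
```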

  3. Monarc Regional Centre Model • The CERN MONARC project has established the LHC computing requirements and architecture. • The architecture will be based on a hierarchical structure of distributed regional centres, with the following objectives: • Maximise the intellectual contribution of physicists all over the world. • Bringing computing facilities geographically closer to the home institutes enables cheaper and higher-bandwidth connections. • A hierarchy of centres with data storage will ensure that network problems do not interfere with physics analysis. • Allow the usage of the expertise and resources residing in computer centres throughout the world. • Maximise contributions from each member country.

  4. Monarc Regional Centre Model • Diagram of the tier hierarchy: • Tier 0 (main centre): CERN • Tier 1 (regional centres): e.g. IN2P3, RAL, FNAL, linked to Tier 0 at 622 Mbps to 2.5 Gbps • Tier 2 (local regional centres): e.g. Uni n, linked to Tier 1 at 155 to 622 Mbps • Tier 3 (department): e.g. Lab a, Lab c, Uni b • Tier 4 (desktop)
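Purely as an illustration of the hierarchy sketched in the diagram above (the site names and link capacities are just the nominal figures shown there), the tier structure could be captured in a small data structure:

```python
# Minimal sketch of the MONARC tier hierarchy from the diagram above.
# Sites and link capacities are illustrative values taken from the slide.
monarc_tiers = {
    "Tier 0": {"sites": ["CERN (main centre)"], "uplink": None},
    "Tier 1": {"sites": ["IN2P3", "RAL", "FNAL"], "uplink": "622 Mbps - 2.5 Gbps to Tier 0"},
    "Tier 2": {"sites": ["local regional centres"], "uplink": "155 - 622 Mbps to Tier 1"},
    "Tier 3": {"sites": ["department servers (labs, universities)"], "uplink": "to Tier 2"},
    "Tier 4": {"sites": ["desktops"], "uplink": "to Tier 3"},
}

for tier, info in monarc_tiers.items():
    print(tier, "-", ", ".join(info["sites"]))
```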

  5. Regional Centre Model Requirements • The RC model must be: • Scalable: capable of dealing with: • Thousands of processors • Petabytes of data • Terabits/second of I/O bandwidth • Distributed: capable of dealing with: • Integration of heterogeneous collections of regional centres • Multiple policies • Data distribution, replication and synchronisation (a toy illustration follows below) • To implement this model a new distributed computing concept known as a Computational Grid will be used.
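To make the data distribution and replication point more concrete, here is a toy sketch of a replica catalogue that maps a logical file name to physical copies and picks the replica closest to the requesting site. All names and the distance table are hypothetical; real Grid middleware provides this as a service:

```python
# Toy replica catalogue: logical file name -> physical replicas at several sites.
replica_catalogue = {
    "lfn:/atlas/raw/run0001.dat": [
        {"site": "CERN",  "url": "gsiftp://cern.example/raw/run0001.dat"},
        {"site": "IN2P3", "url": "gsiftp://in2p3.example/raw/run0001.dat"},
    ],
}

# Hypothetical "network distance" between sites (smaller means closer).
site_distance = {("LIP", "CERN"): 2, ("LIP", "IN2P3"): 3}

def nearest_replica(lfn: str, requesting_site: str) -> str:
    """Return the URL of the replica closest to the requesting site."""
    replicas = replica_catalogue[lfn]
    best = min(replicas, key=lambda r: site_distance.get((requesting_site, r["site"]), 99))
    return best["url"]

print(nearest_replica("lfn:/atlas/raw/run0001.dat", "LIP"))
```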

  6. The Grid Concept • The Grid concept was introduced in the already famous book "The Grid: Blueprint for a New Computing Infrastructure". • The Grid concept was born from the shift of meta-computing activities from super-computers to smaller, cheaper, interconnected systems. • A computational grid provides a metaphor for a coherent set of computing resources physically distributed across a number of geographically separated sites. • A computational grid exhibits a uniform interface to its resources, providing a dependable and pervasive service. • A computational grid works like an electrical power grid, in which resources are provided to users through a simplified interface that hides the underlying complexity of the network.

  7. The Grid Vision • The vision is easy access to shared wide-area distributed computing facilities, without the user having to know the underlying details. • You submit your work • And the grid: • Finds a convenient place for it to be run • Organises efficient access to your data through: • caching, migration, replication • Deals with authentication to the different sites • Interfaces with local site resource allocation mechanisms • Runs your job • Monitors the progress • Recovers from problems if necessary • Tells you when the job completes • It can also decompose the job into execution units if there is scope for parallelism (a conceptual sketch of this job life cycle follows below).
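The list above is essentially a job life cycle. A minimal, purely conceptual sketch of what a grid broker does with a submitted job might look like the following; the sites, the success rate and every function here are stand-ins, not a real middleware API:

```python
# Conceptual sketch of the grid job life cycle described above.
import random

SITES = ["CERN", "RAL", "IN2P3", "LIP"]

def find_convenient_site(job):
    # A real broker would query the information service instead of guessing.
    return random.choice(SITES)

def run_on_grid(job):
    site = find_convenient_site(job)
    print(f"authenticating to {site} and staging input data for {job}")
    attempts = 0
    while True:                             # submit, monitor, recover from failures
        attempts += 1
        if random.random() > 0.2:           # pretend 80% of submissions succeed
            print(f"{job} completed at {site} after {attempts} attempt(s)")
            return
        site = find_convenient_site(job)    # recover: reschedule at another site

run_on_grid("higgs-analysis-job")
```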

  8. Grid Software • To build a grid, a software layer above the existing software infrastructures is required. • Most grid projects are using the Globus toolkit. • The Globus toolkit provides basic middleware for: • Authentication based on certificates (GSI; an illustrative certificate example follows below) • Information service based on LDAP (MDS) • Resource management (GRAM) • Data management and transfer (GASS, GSIFTP) • Communication (Globus I/O, Nexus) • Process monitoring (HBM) • Globus is the result of a large community effort: • Core development by Argonne, USC, NCSA and SDSC • Contributors: NASA, DOE (SNL, LBNL, LLNL) and many others • Globus is an on-going project with many features still being developed.
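GSI authentication is built on X.509 certificates issued by trusted certification authorities. As an illustration only, the sketch below inspects such a certificate with today's `cryptography` Python package; it is not the Globus GSI library itself, and the file path is hypothetical:

```python
# Inspect an X.509 user certificate of the kind GSI authentication relies on.
from cryptography import x509
from cryptography.hazmat.backends import default_backend

with open("usercert.pem", "rb") as f:   # hypothetical PEM-encoded certificate
    cert = x509.load_pem_x509_certificate(f.read(), default_backend())

print("Subject:", cert.subject.rfc4514_string())  # the user's distinguished name
print("Issuer: ", cert.issuer.rfc4514_string())   # the certification authority (CA)
print("Expires:", cert.not_valid_after)           # validity checked at every site
```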

  9. Globus Architecture • Applications • Application Toolkits: GlobusView, Testbed Status, DUROC, MPI, Condor-G, HPC++, Nimrod/G, globusrun • Grid Services: Nexus, GRAM, GSI-FTP, I/O, HBM, GASS, GSI, MDS • Grid Fabric: Condor, MPI, TCP, UDP, DiffServ, Solaris, LSF, PBS, NQE, Linux, NT

  10. Grids Using Globus • NSF PACIs National Technology Grid • NSF ITR GriPhyN Grid Physics Network • Indiana (ATLAS), Caltech (CMS), LIGO (Wisconsin-Milwaukee, UT-B, Caltech), Johns Hopkins (SDSS) • Chicago, USC/ISI, UW-Madison, Caltech • NASA Information Power Grid • DOE ASCI • EMERGE: Advanced reservation and QoS • GUSTO: Globus Ubiquitous Supercomputing Testbed Organization • Particle Physics Data Grid (PPDG) • LHC, BaBar, Run II, RHIC • ANL, BNL, Caltech, Fermilab, U. Florida, JLAB, LBNL, SDSC, SLAC, U. Wisconsin • European Grids

  11. Globus Applications • Online instruments (remote control) • Collaborative engineering • Tele-immersion • Distributed supercomputing (one parallel task in many CPUs) • High-Throughput computing (many tasks in many CPUs) • Problem solving environments (computational chemistry)

  12. The DATAGRID Project • Partially funded by the EU. • Three-year project, started in January 2001. • The partners are: • CERN, CNRS, ESA/ESRIN, INFN, NIKHEF and PPARC. • Industrial partners: IBM UK, Datamat and Compagnie des Signaux. • Many associated partners from France, Italy, the Netherlands, Sweden, Germany, Spain, the Czech Republic, Finland and Hungary. • Industry & Research (I&R) forum with: Denmark, Greece, Israel, Japan, Norway, Poland, Portugal (LIP), Russia and Switzerland. • Involved sciences: High Energy Physics, Earth observation and Biology.

  13. DATAGRID Objectives • The main objectives are: • Establish a research network that will enable the development of the technology components essential for the implementation of a large-scale GRID. • Demonstrate the effectiveness of this technology through the deployment of experiment applications involving real users. • Demonstrate the ability to build, connect and manage distributed clusters based on commodity components, capable of scaling to large computing requirements. • A large-scale testbed will be implemented for development, testing and demonstration purposes.

  14. DATAGRID Applications • High Energy Physics • The four LHC experiments (ATLAS, CMS, LHCb, ALICE) • Live testbed for the Regional Centre model • Earth Observation • ESA-ESRIN • KNMI (Dutch Meteo): climatology • Processing of atmospheric ozone data derived from the ERS GOME and ENVISAT SCIAMACHY sensors • Biology • CNRS (France), Karolinska (Sweden) • Processing of data acquired by gene sequencers • Determination of three-dimensional macromolecular structures

  15. DATAGRID Work Packages

  16. DATAGRID Status • Infrastructure deployment has started at individual institutes. • The authentication infrastructure is being deployed. • The DATAGRID project will be based on the Globus Grid Toolkit. • Installation kits are being tested. • Testbed 0 will be running soon. • Initial testbed participants: CERN, RAL, INFN (several sites), IN2P3-Lyon, ESRIN (ESA-Italy), SARA/NIKHEF (Amsterdam), ZUSE Institut (Berlin), CESNET (Prague), IFAE (Barcelona), LIP (Lisbon), IFCA (Santander) • The network infrastructure will be provided by the EU TEN-155 and Geant projects. • Co-ordination with Geant is ongoing. • Activities on quality of service, network monitoring, security and high-performance file transfer have started.

  17. The CROSSGRID Project • Submitted to the EU for partial funding in April 2001. • A complement to the DATAGRID project. • CROSSGRID will work closely with DATAGRID; both testbeds will interoperate seamlessly. • If approved, it will be a three-year project starting in January 2002. • The partners are from: • Poland, the Netherlands, Slovakia, Austria, Germany, Cyprus, Ireland, Spain, Greece and Portugal (LIP). • Industrial partners: Datamat, AlgoSystems. • Applications: High Energy Physics analysis, biomedical simulation, weather forecasting, pollution modelling and flood crisis management.

  18. CROSSGRID Objectives • The main objectives are: • Extend the Grid environment to: • New applications, focusing on: • Interactive applications (data and computing intensive) • Visualisation • Data mining • Application portals and user-friendly access to applications • Agents • New countries • Development of tools for: • Grid performance prediction, analysis and monitoring • Verification of parallel source code • Application performance monitoring • Efficient distributed data access • Resource management • Methodologies and generic architectures for application development

  19. CROSSGRID Work Packages • Person-months per work package: • WP1 CrossGrid Application Development: 676 (474) • WP2 Grid Application Development Environments: 260 (182) • WP3 New Grid Services and Tools: 521 (337) • WP4 International Testbed Organisation: 891 (626) • WP5 Information Dissemination and Exploitation: 144 (120) • WP6 Project Management: 144 (72)

  20. The EUROGRID Project • Partially funded by the EU. • Three-year project, started in November 2000. • The partners are from: • France, Norway, Germany, Switzerland, Poland and the UK. • Objectives: • To establish a European GRID network of leading High Performance Computing centres from different European countries. • To operate and support the EUROGRID software infrastructure. The EUROGRID software will use the existing Internet network and will offer seamless and secure access for the EUROGRID users. • To develop important GRID software components and to integrate them into EUROGRID. • To demonstrate distributed simulation codes from different application areas (biomolecular simulations, weather prediction, coupled CAE simulations, structural analysis, real-time data processing). • To contribute to the international GRID development and to liaise with the leading international GRID projects.

  21. Other Grid Projects • Besides the EU-funded grid projects, several European countries have their own national grid initiatives: • France: • IDRIS, IN2P3, CINES, CRIHAN, UREC, Renater, EDF, CS, etc. • Applications: HEP, bioinformatics and earth observation • Italy: • INFN-GRID (26 sites, 3 years, 70 FTEs) • Applications: HEP (LHC, Virgo and APE), Earth observation (ESRIN) • UK: • UK HEP Grid: CLRC (RAL + DL) • Netherlands: • NIKHEF, SARA, KNMI, U. Nijmegen, SurfNET • Applications: HEP, Earth observation

  22. Networking for The Grid • High performance networks are a crucial element for successful grid deployment: • In Europe bandwidth will be provided by: • National research networks (NRENs) • TEN-155 and its successor Geant • In the US by: • Abilene Internet 2 advanced network (Universities) • vBNS+ Very high-performance Backbone Network Service (NSF) • ESnet Energy Sciences Network (DOE) • NREN NASA Research and Education Network • NISN NASA Integrated Services Network • DREN Dept of Defense Research and Education Network • STARTAP Science Technology and Research Transit Point

  23. Networking Requirements • ICFA-Network Task Force Bandwidth requirements summary in Mbps

  24. Networking Evolution • Trans-Atlantic pricing and capacity provided by DANTE • DANTE is a consortium of European NRENs • The DANTE mandate is: "... to rationalise the management of otherwise fragmented, uncoordinated, expensive and inefficient trans-national services and operational facilities." • DANTE is responsible for the coordination of TEN-155 (Trans-European Network) and Geant

  25. TEN-155 in September 2000

  26. TEN-155 in March 2001

  27. Geant • Will replace TEN-155 • Support for any research traffic • Connections to other world regions (Asia-Pacific, North America, South America and the Mediterranean region) • Complementary to the NRENs • One PoP in each country • End users' connections are the NRENs' responsibility • Multigigabit core (2-10 Gbit/s at the start) • The core will start with 6-10 locations and will expand to 20 locations in 2003 • Capacity will grow 2-4 times per year • Non-core locations will connect at 34-622 Mbit/s • ~210 MEuro over 4 years (62% from the NRENs, 38% from the EU)

  28. Geant Scope • Austria • Belgium • Bulgaria • Cyprus • Croatia • Czech Republic • Estonia • France • Germany • Greece • Hungary • NorduNet (Denmark, Finland, Iceland, Norway, Sweden) • Ireland • Israel • Italy • Latvia • Lithuania • Luxembourg • Netherlands • Poland • Portugal • Romania • Slovenia • Slovakia • Spain • Switzerland • United Kingdom

  29. Geant Services • Extensive VPN capability (extremely important for grids) • ATM • MPLS / DiffServ • λ (wavelength) allocation • Several levels of service • Best-effort IP (standard service) • Premium IP service (priority over best effort) • Guaranteed capacity service (consistent high performance) • Multicast • Support for IPv6

  30. Geant Status • The closing date for bids was 29 September 2000. • Responses: • 17 offers for Gbit/s circuits • 14 offers for lower-capacity circuits • 37 offers for network management • 17 offers for facilities management • Technical review of offers and clarification. • Selection: • Out of the 17 Gbit/s offers, 6 have been selected (3 possible, 3 probable) • Final selection of the main supplier(s) in April (11) • Delivery of the infrastructure is expected for Summer 2001.

  31. National Research Networks • Besides the Geant initiative, there are several national projects to build high-performance Gigabit backbone networks: • Italy – GARR-G • France – Renater 2 bis • Nordic countries – upgrade of NORDUnet • Netherlands – SURFNET upgrade • UK – SuperJANET upgrade • Poland – Optical Internet Testbed

  32. Internet Access @ CERN • Diagram of CERN's Internet access: links to Canada, ESnet, Japan, Internet2/Abilene, STARTAP, vBNS, MREN, SURFNET, JANET, DFN, SWITCH, TEN-155 and the commodity Internet, via the CERN CIXP and a CERN PoP in the USA; several of the links are mission oriented.

  33. CERN • US Line Consortium • CERN, US/HEP, Canada HEP • IN2P3 (CCPN Lyon) • World Health Organization • Japan • NACSIS (4 Mbit/s ATM/VP over the TEN-155 MBS) • Genesis & JEG (Japan Europe Gamma Project) • CIXP (CERN Internet Exchange Point) • TEN-155

  34. CERN • WAN technology is well ahead of LAN technology; the state of the art is 10 Gbps (WAN) against 1 Gbps (LAN). • Prices are less of an issue, as they are falling very rapidly. • Multiple circuits from CERN to the Tier 1 regional centres at up to 2.5 Gbps (i.e. STM-16/OC-48c) will be possible by 2003-2005. • Cost may be problematic (1-3 MCHF per circuit). • Very high speed LANs are implied. • Gigabit/second file transfer on high bandwidth*delay paths may still be problematic (see the estimate below). • The public Internet, as well as the national research networks, are evolving in a way nobody can predict.
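The bandwidth*delay concern can be made concrete: to keep a single TCP stream full, the sender needs a window of at least bandwidth times round-trip time. A rough estimate, assuming a 2.5 Gbit/s transatlantic circuit and a round-trip time of about 100 ms (the RTT figure is an assumption, not from the slide):

```python
# Bandwidth-delay product for a single TCP stream on a long fat pipe.
bandwidth_bits_per_s = 2.5e9   # 2.5 Gbit/s circuit (STM-16/OC-48c)
rtt_seconds = 0.100            # assumed typical transatlantic round-trip time

window_bytes = bandwidth_bits_per_s * rtt_seconds / 8
print(f"Required TCP window: about {window_bytes / 1e6:.0f} MB")
# -> roughly 31 MB, far beyond the default TCP window sizes of the time
```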

  35. END

  36. Current Grid Activities at LIP • Hardware • Tests of CPU and I/O solutions • Tests of rack-mount solutions • Software • Evaluation of scheduling systems (PBS, LSF, Condor) • Implementation of a LIP certification authority • Installation and testing of the Globus grid toolkit • Security and X.509 certificates • Information services and LDAP • Job submission • Resource management • Remote file management • I/O • Co-ordination with the DATAGRID project

  37. Future Grid Test-bed Setup at LIP • Diagram: farm nodes and storage servers with attached disk storage interconnected by a switch, plus tape storage, connected through a router to the National Academic and Research Network.

  38. Software – Globus at LIP • Installation of the Globus 1.1.3 toolkit on the LIP farm. • Each node has a fork service, and lflip04 has a PBS service. • A distributed Globus information service (MDS) with GIIS has been established. • Institutional MDS service “dc=lip, dc=pt, o=Grid” (lflip04:3890); an example query is sketched below. • Local MDS service in each system • GSI authentication through the LIP CA. • Problems: • Documentation is dispersed, confusing and out of date; there is almost no documentation on MDS. • Some odd behaviours of certain tools are not fully understood. • Several bugs were found, especially in GASS (the remote cache system). • Debugging Globus is not an easy task. • Two short documents on Globus at LIP have been produced: • Configuring and managing certificates and the LIP CA. • Installing and deploying Globus at LIP.
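Since MDS is just an LDAP directory, the institutional service mentioned above can in principle be queried with any LDAP client. A hedged sketch using the `python-ldap` package; the host, port and base DN are the ones quoted on the slide, and anonymous read access is assumed:

```python
# Query LIP's institutional MDS (an LDAP directory) and list the registered entries.
import ldap  # python-ldap package

con = ldap.initialize("ldap://lflip04:3890")   # host:port exactly as quoted on the slide
con.simple_bind_s()                            # assuming the directory allows anonymous reads

results = con.search_s("dc=lip, dc=pt, o=Grid", ldap.SCOPE_SUBTREE, "(objectclass=*)")
for dn, attrs in results:
    print(dn)                                  # one line per registered resource entry
```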
