LHC Computing Grid Project - LCG

  1. LHC Computing Grid Project - LCG
The LHC Computing Grid: First Steps towards a Global Computing Facility for Physics
18 September 2003
Les Robertson – LCG Project Leader
CERN – European Organization for Nuclear Research, Geneva, Switzerland
les.robertson@cern.ch

  2. LHC Computing Grid Project
The LCG Project is a collaboration of:
• the LHC experiments
• the Regional Computing Centres
• physics institutes
.. working together to prepare and deploy the computing environment that will be used by the experiments to analyse the LHC data. This includes:
• support for applications – provision of common tools, frameworks, environment, data persistency
• the development and operation of a computing service – exploiting the resources available to LHC experiments in computing centres, physics institutes and universities around the world, and presenting these as a reliable, coherent environment for the experiments

  3. Project Areas
• Applications Area – development environment, joint projects, data management, distributed analysis
• Middleware Area – provision of a base set of grid middleware: acquisition, development, integration, testing, support
• CERN Fabric Area – large cluster management, data recording, cluster technology, networking; the computing service at CERN
• Grid Deployment Area – establishing and managing the Grid Service: middleware certification, security, operations, registration, authorisation, accounting (joint with EGEE)
• Operational management of the project

  4. Applications Area Projects
• Software Process and Infrastructure (SPI) (A. Aimar) – librarian, QA, testing, developer tools, documentation, training, …
• Persistency Framework (POOL) (D. Duellmann) – POOL hybrid ROOT/relational data store
• Core Tools and Services (SEAL) (P. Mato) – foundation and utility libraries, basic framework services, object dictionary and whiteboard, maths libraries, (grid-enabled services)
• Physicist Interface (PI) (V. Innocente) – interfaces and tools by which physicists directly use the software: interactive analysis, visualisation, (distributed analysis & grid portals)
• Simulation (T. Wenaus et al.) – generic framework, Geant4, FLUKA integration, physics validation, generator services
• Close relationship with ROOT (R. Brun) – ROOT I/O event store; analysis package
• A group is currently working on distributed analysis requirements

  5. LHC Data
• 40 million collisions per second
• After filtering, 100-200 collisions of interest per second
• 1-10 Megabytes of data digitised for each collision = recording rate of 0.1-1 Gigabytes/sec
• 10^10 collisions recorded each year = ~15 Petabytes/year of data, across CMS, LHCb, ATLAS and ALICE
For scale: 1 Megabyte (1 MB) – a digital photo; 1 Gigabyte (1 GB) = 1000 MB – a DVD movie; 1 Terabyte (1 TB) = 1000 GB – world annual book production; 1 Petabyte (1 PB) = 1000 TB – annual data production of one LHC experiment; 1 Exabyte (1 EB) = 1000 PB – world annual information production.
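As a sanity check of these figures, here is a back-of-the-envelope calculation (a sketch: the 1.5 MB average event size and the 200 Hz x 5 MB pairing are assumptions chosen inside the ranges quoted on the slide):

```python
# Back-of-the-envelope check of the data-volume figures quoted above.
# Assumption: an average digitised event size of 1.5 MB, picked from
# the slide's 1-10 MB range; the collision count is from the slide.

collisions_per_year = 1e10        # recorded collisions per year
avg_event_size_mb = 1.5           # assumed average event size

volume_pb = collisions_per_year * avg_event_size_mb / 1e9   # MB -> PB
print(f"~{volume_pb:.0f} PB/year")                           # ~15 PB/year

# Recording rate: 100-200 events/s at a few MB each (illustrative pairings)
for rate_hz, size_mb in [(100, 1), (200, 5)]:
    print(f"{rate_hz} Hz x {size_mb} MB = {rate_hz * size_mb / 1000:.1f} GB/s")
```

This reproduces both the ~15 PB/year total and the 0.1-1 GB/s recording rate on the slide.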

  6. LHC Data – Scale
~15 Petabytes per year – about 20 million CDs each year! A stack of CDs holding one year of LHC data would be ~20 km high – above Concorde's cruising altitude (15 km), far above Mont Blanc (4.8 km), and approaching the altitude of a high-flying balloon (30 km).
Its analysis will need the computing power of ~100,000 of today's fastest PC processors!
Where will the experiments store all of this data? And where will they find this computing power?
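The CD-stack comparison follows from simple assumptions (a sketch: the 750 MB capacity and ~1 mm per bare disc are my assumptions, not from the slide):

```python
# Rough check of the "20 million CDs, ~20 km stack" claim above.
# Assumptions: 750 MB per CD, ~1 mm per disc in a bare (case-less) stack.

DATA_PB = 15
CD_CAPACITY_MB = 750      # assumed CD-ROM capacity
CD_THICKNESS_MM = 1.0     # assumed thickness per disc in the stack

n_cds = DATA_PB * 1e9 / CD_CAPACITY_MB          # PB -> MB, then discs
stack_km = n_cds * CD_THICKNESS_MM / 1e6        # mm -> km
print(f"{n_cds/1e6:.0f} million CDs, stack ~{stack_km:.0f} km high")
# -> 20 million CDs, ~20 km
```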

  7. The CERN Computing Centre
• ~2,000 processors
• ~100 Terabytes of disk
• ~1 Petabyte of magnetic tape
Even with technology-driven improvements in performance and costs, CERN can provide nowhere near enough capacity for LHC!

  8. Computing for LHC
• Europe: 267 institutes, 4603 users
• Elsewhere: 208 institutes, 1632 users
Solution: computing centres, which were isolated in the past, will be connected into a computing grid – uniting the computing resources of particle physicists around the world!

  9. LCG Regional Centres – Pilot Production Service: 2003-2005
First-wave centres: CERN; Academia Sinica, Taiwan; Brookhaven National Lab; PIC Barcelona; CNAF Bologna; Fermilab; FZK Karlsruhe; IN2P3 Lyon; KFKI Budapest; Moscow State University; University of Prague; Rutherford Appleton Lab (UK); University of Tokyo.
Other centres: Caltech; GSI Darmstadt; Italian Tier 2s (Torino, Milano, Legnaro); JINR Dubna; Manno (Switzerland); NIKHEF Amsterdam; Ohio Supercomputing Centre; Sweden (NorduGrid); Tata Institute (India); TRIUMF (Canada); UCSD; UK Tier 2s; University of Florida, Gainesville; …

  10. Some of the Sources for Middleware & Tools used by LCG
• GriPhyN
• PPDG
• iVDGL
• The Virtual Data Toolkit (VDT)

  11. Goals for LCG-1 (2003-04) – the Pilot Grid Service for LHC Experiments
• Be the principal service for the Data Challenges in 2004 – initially focused on batch production work, and later also interactive analysis
• Gain experience in close collaboration between the Regional Centres – learn how to maintain and operate a global grid
• Focus on a production-quality service – robustness, fault-tolerance, predictability and supportability take precedence; additional functionality gets prioritised
• LCG should be integrated into the sites' mainline physics computing services – it should not be something apart
• This requires coordination between participating sites in: policies and collaborative agreements; resource planning and scheduling; operations and support

  12. Elements of a Production LCG Service
• Middleware: integration, testing and certification; packaging, configuration, distribution and site validation; support – problem determination and resolution, feedback to middleware developers
• Operations: grid infrastructure services; site fabrics run as production services; operations centres – trouble and performance monitoring, problem resolution, 24x7 globally (a minimal site-test sketch follows this list)
• Support: experiment integration – ensure optimal use of the system; user support – call centres/helpdesk with global coverage, documentation, training
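As one concrete illustration of the "site validation" and "trouble monitoring" items above, here is a minimal sketch of a periodic site check (hypothetical: the site list and check function are invented for illustration; only the Globus gatekeeper port 2119 is a real convention of that era):

```python
# Minimal sketch of a periodic site functional test: run a canned check
# against each site and flag failures for the operations centre.
# The host names here are invented examples.
import socket

SITES = ["ce.example-tier1.org", "ce.example-tier2.org"]   # hypothetical computing elements

def check_gatekeeper(host: str, port: int = 2119) -> bool:
    """Can we open a TCP connection to the Globus gatekeeper port?"""
    try:
        with socket.create_connection((host, port), timeout=5):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for site in SITES:
        ok = check_gatekeeper(site)
        print(f"{site}: {'OK' if ok else 'FAIL -> ticket to operations centre'}")
```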

  13. LCG Service Status
• Certification and distribution process established
• Middleware package – components from the European DataGrid (EDG) and from the US projects (Globus, Condor, PPDG, GriPhyN) via the Virtual Data Toolkit
• Agreement reached on principles for registration and security
• Rutherford Lab (UK) to provide the initial Grid Operations Centre
• FZK (Karlsruhe) to operate the Call Centre
• Pre-release middleware deployed in July to the initial 10 centres
• The "certified" release was made available to 13 centres on 1 September – Academia Sinica Taipei, BNL, CERN, CNAF, FNAL, FZK, IN2P3 Lyon, KFKI Budapest, Moscow State Univ., Prague, PIC Barcelona, RAL, Univ. Tokyo
• Next steps: get the experiments going; expand to other centres
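For concreteness, submitting work to this service with the EDG components of the middleware package looked roughly like the sketch below (the JDL attributes and the edg-job-submit command come from the EDG middleware of this era, but the exact invocation here is illustrative, not a verified recipe):

```python
# Sketch: submitting a trivial job to an EDG/LCG-1 style resource broker.
# The JDL (Job Description Language) attributes below are standard EDG
# attributes; the command invocation is illustrative.
import subprocess
import textwrap

jdl = textwrap.dedent("""\
    Executable    = "/bin/hostname";
    StdOutput     = "std.out";
    StdError      = "std.err";
    OutputSandbox = {"std.out", "std.err"};
""")

with open("hello.jdl", "w") as f:
    f.write(jdl)

# Hands the job to the resource broker, which matches it to a Computing Element.
subprocess.run(["edg-job-submit", "hello.jdl"], check=True)
```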

  14. Preliminary Full Simulation and Reconstruction Tests with ALICE
AliEn submitting work to the LCG service:
• AliRoot 3.09.06 – fully reconstructed events
• CPU-intensive, RAM-demanding (up to 600 MB), long-lasting jobs (average 14 hours)
• Outcome: >95% successful job submission, execution and output retrieval

  15. LCG-1 at the Time of First Release
• Impressive improvement in stability with respect to the old 1.x EDG releases and corresponding testbeds
• Lots of room for further improvement
• Additional features to be added before the end of the year, in preparation for the data challenges of 2004
• As more centres join, scalability will surely become a major issue

  16. Resources in Regional Centres – Committed for 1Q04
• Resources planned for the period of the data challenges in 2004
• CERN provides ~12% of the total capacity
• The numbers have to be refined – different standards are used by different countries
• Efficiency of use is a major question mark: reliability; efficient scheduling; sharing between Virtual Organisations (user groups) – see the fair-share sketch below
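To make the "sharing between Virtual Organisations" point concrete, here is a minimal sketch of fair-share scheduling, the kind of policy a site batch system would apply between VOs (entirely illustrative: the quotas are made up, and real sites used production batch systems rather than anything like this):

```python
# Toy fair-share scheduler: dispatch the next job from whichever VO is
# furthest below its agreed share of CPU time. Purely illustrative.
from collections import deque

shares = {"alice": 0.25, "atlas": 0.35, "cms": 0.30, "lhcb": 0.10}  # hypothetical quotas
used = {vo: 0.0 for vo in shares}            # CPU-hours consumed so far
queues = {vo: deque() for vo in shares}      # pending jobs: (name, cpu_hours)

queues["alice"].append(("reco-001", 14.0))   # e.g. a 14-hour reconstruction job
queues["atlas"].append(("sim-042", 8.0))

def next_vo():
    """Pick the VO with the largest deficit relative to its share."""
    total = sum(used.values()) or 1.0
    candidates = [vo for vo in shares if queues[vo]]
    return max(candidates, key=lambda vo: shares[vo] - used[vo] / total, default=None)

while (vo := next_vo()) is not None:
    name, hours = queues[vo].popleft()
    used[vo] += hours
    print(f"dispatch {name} for {vo}")
```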

  17. From LCG-1 to LHC Startup

  18. Where Are We Now with Grid Technology?
• For LHC, we now understand the basic requirements for batch processing
• And we have a prototype solution, developed by Globus and Condor in the US and by the DataGrid and related projects in Europe
• It is more difficult than was expected – reliability, scalability, monitoring, operation, .. – and we are not yet seeing useful industrial products
• But we are ready to start re-engineering the components – part of the large EGEE project proposal submitted to the EU; a re-write of Globus using a web-services architecture is now available
• Many more practical problems will be discovered now that we start running a grid as a sustained round-the-clock service and the LHC experiments begin to use it for doing real work

  19. Grid Middleware for LCG in the Longer Term
Requirements:
• A second round of specification of the basic grid requirements is being completed now – HEPCAL II
• A team has started to specify the higher-level requirements for distributed analysis – batch and interactive – and to define the HEP-specific tools that will be needed
For basic middleware the current strategy is to assume:
• that the US DoE/NSF will provide a well-supported Virtual Data Toolkit based on Globus Toolkit 3
• that the EGEE project, approved for EU 6th Framework funding, will develop the additional tools needed by LCG
And the LCG Applications Area will develop higher-level HEP-specific functionality.

  20. ARDA – An Architectural Roadmap Towards Distributed Analysis (work in progress!)

  21.-25. Timeline for the LCG Computing Service
[These five slides progressively build up a single timeline chart, with years 2003-2007 across the top and rows for the computing service and for physics. The consolidated content:]
• 2003 – agree the specification of the initial service (LCG-1); open LCG-1: a starter toolkit with components from VDT and EDG, used for simulated event productions
• 2004 – LCG-2: upgraded middleware and management software; stabilise and expand the service, develop the operations centre, etc.; principal service for the LHC data challenges – batch analysis and simulation. This is the full complement of the first-generation middleware from VDT/EDG, hardened to provide a stable, reliable service. At CERN the acquisition process for the Phase 2 service will have started already during 2004 with a market survey.
• 2005 – computing model TDRs*; LCG-3: full multi-tier prototype service, batch and interactive; testing and hardening of second-generation middleware; validation of the computing models; TDR for the Phase 2 grid. At this point we expect to start deploying second-generation middleware components, building this up during the year to attain the full base functionality required for LHC startup.
• 2006 – acquisition, installation and commissioning of the Phase 2 service (for LHC startup); experiment setup and preparation
• 2007 – Phase 2 service in production; first data
* TDR – technical design report

  26. Evolution of the Base Technology
• These are still very early days, with very few grids providing a reliable, round-the-clock "production" service – and few applications that are managing gigantic distributed databases
• Although the basic ideas and tools have been around for a long time, we are only now seeing them applied to large-scale services
• Developing the grid concept continues to attract substantial interest and public funding
• There are major changes taking place in architecture and frameworks – e.g. the Open Grid Services Architecture and Infrastructure (OGSA, OGSI) – and there will be more to come as experience grows
• There is a lot of commercial interest from potential software suppliers (IBM, HP, Microsoft, ..) – but no clear sight of useful products

  27. Adapting to the Changing Landscape
• In the short term there will be many grids and several middleware implementations – for LCG, inter-operability will be a major headache
• Will we all agree on a common set of tools? Unlikely! Will we have to operate a grid of grids – some sort of federation? Or will computing centres be part of several grids? (See the federation sketch below.)
• The Global Grid Forum promises to provide a mechanism for evolving the architecture and agreeing on standards – but this is a long-term process
• In the medium term, until there is substantial practical experience with different architectures and different implementations, de facto standards will emerge. How quickly will we recognise the winners? Will we have become too attached to our own developments to change?
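One way to picture the "grid of grids" option: a thin federation layer that hides several middleware stacks behind one interface (a hypothetical sketch; the class and method names are invented for illustration and do not correspond to any real middleware API):

```python
# Hypothetical sketch of a "grid of grids": one submission interface
# federating several middleware back-ends. All names are illustrative.
from abc import ABC, abstractmethod

class GridBackend(ABC):
    """Adapter around one middleware stack (e.g. EDG, VDT/Globus)."""
    @abstractmethod
    def free_cpus(self) -> int: ...
    @abstractmethod
    def submit(self, job: str) -> str: ...

class EDGBackend(GridBackend):
    def free_cpus(self) -> int:
        return 120                      # would query the EDG information system
    def submit(self, job: str) -> str:
        return f"edg:{job}"             # would call the EDG resource broker

class VDTBackend(GridBackend):
    def free_cpus(self) -> int:
        return 80                       # would query the Globus information service
    def submit(self, job: str) -> str:
        return f"vdt:{job}"

class Federation:
    """Route each job to the back-end with the most free capacity."""
    def __init__(self, backends: list[GridBackend]):
        self.backends = backends
    def submit(self, job: str) -> str:
        best = max(self.backends, key=lambda b: b.free_cpus())
        return best.submit(job)

grid = Federation([EDGBackend(), VDTBackend()])
print(grid.submit("alice-reco-001"))    # -> "edg:alice-reco-001"
```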

  28. Access Rights and Security
The grid idea assumes global authentication, and authorisation based on the user's role in their virtual organisation – one set of credentials for everything you do.
• The agreement for LHC is that all members of a physics collaboration will have access to all of its resources – the political implications of this have still to be tested! (A toy authorisation check is sketched below.)
• The grid could be an attractive target for hackers – this is probably the greatest risk that we take by adopting the grid model for LHC computing.
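To illustrate the model (a toy sketch only: real grid sites used X.509 certificates and virtual-organisation membership services, whereas the subject names, VOs and rights table here are invented examples):

```python
# Toy model of VO-based authorisation: one identity (the certificate
# subject), with rights derived from VO membership and role.
# All subject names, VOs and actions below are invented examples.

vo_members = {
    "/C=CH/O=CERN/CN=Jane Physicist": ("alice", "analyst"),
    "/C=UK/O=RAL/CN=Ops Team":        ("alice", "production-manager"),
}

vo_rights = {
    # Every collaboration member gets access to the VO's resources.
    ("alice", "analyst"):            {"read-data", "submit-jobs"},
    ("alice", "production-manager"): {"read-data", "submit-jobs", "write-data"},
}

def authorise(subject: str, action: str) -> bool:
    """Authenticate once (the subject), then authorise by VO role."""
    membership = vo_members.get(subject)
    if membership is None:
        return False                       # unknown credential
    return action in vo_rights.get(membership, set())

assert authorise("/C=CH/O=CERN/CN=Jane Physicist", "submit-jobs")
assert not authorise("/C=CH/O=CERN/CN=Jane Physicist", "write-data")
```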

  29. Key LCG Goals for the Next 12 Months
• Take-up by the experiments of the first versions of the common applications – starts NOW
• Evolve the LCG-1 service into a production facility for the LHC experiments – validated in the data challenges
• Establish the requirements and a credible implementation plan for baseline distributed grid analysis in 2007-08: the model; the HEP-specific tools; the base grid technology – middleware to support the computing models of the experiments (Technical Design Reports due end 2004)

  30. Summary
• The LCG Project has a clear goal: providing the environment and services for recording and analysing the LHC data when the accelerator starts operation in 2007
• The computational requirements of LHC dictate a geographically distributed solution, taking maximum advantage of the facilities available to LHC around the world – a computational GRID
• A pilot service – LCG-1 – has been opened, to learn how to use this technology to provide a reliable, efficient service encompassing many independent computing centres
• It is already clear that the current middleware will have to be re-engineered or replaced to achieve the goals of reliability and scalability
• In the medium term we expect to get this new middleware from EU- and US-funded projects
• But the technology is evolving rapidly, and LCG will have to adapt to a changing environment – while keeping a strong focus on providing a continuing service for the LHC Collaborations
