
Worldwide LHC Computing Grid & WLCG Deployment Schedule

This presentation covers the status of the WLCG service and plans for 2007, lessons learned from the WLCG Service Challenges, distributed database services, and the WLCG deployment schedule.




Presentation Transcript


  1. The Worldwide LHC Computing Grid Distributed Database Services & WLCG Deployment Schedule 3D Workshop, SARA, March 2007 Jim Gray – Rest in El Paso (NM) Jim Gray – Rest in La Paz (MX)

  2. Outline • Status of the WLCG Service and Outlook for 2007 • Plans of the LHC Experiments: Now to Startup – i) prepare for Dress Rehearsals; ii) execute DRs; iii) pilot run • Some lessons from the WLCG Service Challenges • Distributed Database Services • Conclusions • The odd postscript…

  3. The World in 2007 • As widely reported – even in the popular press – 2007 will see the startup of CERN’s (& Europe’s & the World’s) new flagship accelerator • The next one may well be just “the world’s” – the ILC • This is the result of many years of hard work • The earliest accepted reference to the LHC that I know of is from John Adams in ~1979 – originally foreseen to install the LHC on top of LEP – “L3 + 1” etc. • LHC Computing is somewhat younger than this… • For me, CHEP 1992 (Annecy) was the turning point • CAS 2007 will be an interesting review… • September 2007 – 15 years to the week since CHEP 1992!

  4. In Just A Few Months From Now… Hell’s Bells & Whistles… Minimum HWC

  5. Our future playground [diagram] – the LHC: 27 km around, 100 m underground. Experiments labelled on the diagram: ATLAS (general purpose, pp, heavy ions); CMS + TOTEM (general purpose, pp, heavy ions); ALICE (heavy ions, pp); LHCb (pp, B-physics, CP violation)

  6. The Worldwide LHC Computing Grid A Few Slides Regarding my Involvement with Databases 3D Workshop, SARA, March 2007

  7. Online Backup • My first job at CERN was to work on the VAX services, which supported Oracle • These later grew into ‘the largest VAXcluster in the West’, with an architecture largely similar to RAC • Dedicated interconnect for cluster communications, quorum disk for small clusters, voting, “failover” etc. • Our first ‘big’ VAX (an 8600, or Venus) was shipped via Zurich by a certain ‘Heinz Benz’, which caused some amusement… • VAXes were famed for their poor tape handling, and backup tapes often got overwritten – e.g. with critical Oracle backups vital for LEP construction… • Enter online (disk) backup – some 20 years ago

  8. Distributed Lock Manager • Shortly after the startup of LEP, I visited Oracle (in Belmont – not yet Redwood Shores) for the first time • We already had a file catalog and more (a standard interface to storage), a conditions database, and an “optimised” (z)FTP that understood HEP file formats • Updates to the catalog / conditions were kept first in local queues, then transferred to remote machines and applied; resync options etc. • Even a prototype File Transfer Service – see the roof of B513! • Oracle were looking for ideas for ‘new directions’, i.e. ways of making more money • I suggested – no doubt from then-current work and a VAX background – a ‘Distributed Lock Manager’ • “NA – doesn’t scale.” (And the rest is history…)

  9. Enter the Grid • For LHC startup, we had a bit more success… • Native numbers (IEEE), cross-platform transportable tablespaces, commodity clusters & Linux, ULDB support (no more 16-bit fields)… • Also instrumental in Oracle joining the CERN openlab, which funded people to work on issues including Data Guard, Advanced Replication and then Streams • Oracle 10g – the database for the Grid? 10⁹ = G

  10. IWLSC – CHEP 2006 The Evolution of Databases in HEP A Time-Traveller's Tale [ With an intermezzo covering LHC++ ] Jamie Shiers, CERN ~ ~ ~ …, DD-CO, DD-US, CN-AS, CN-ASD, IT-ASD, IT-DB, IT-GD, ¿¿-??, …

  11. The Worldwide LHC Computing Grid WLCG Service Status & Deployment Schedule 3D Workshop, SARA, March 2007

  12. LHCC Review Conclusions - Paul Dauncey • A lot of progress since the last Comprehensive Review • Jamie Shiers: “Despite the problems encountered – and those yet to be faced and resolved – I believe that it is correct to say we have a usable service (not a perfect one)” • Several critical components still to be deployed • Without disrupting services • With a somewhat uncertain schedule • Service problems seen are amorphous and not easy to categorise • Many one-offs, so progress will be slow in fixing them • The bottom line is that we do have a service – need to build on this and steadily improve it… Includes 3D Services!

  13. WLCG Commissioning Schedule – LCG 2007/2008 services • Introduce residual services: File Transfer Services for T1-T2 traffic; Distributed Database Synchronisation; Storage Resource Manager v2.2; VOMS roles in site scheduling • Continued testing of the experiments’ computing models and basic services • Testing the full data flow: DAQ → Tier-0 → Tier-1 → Tier-2 • Building up end-user analysis support • Dress Rehearsals – exercising the computing systems, ramping up job rates, data management performance, …. • Commissioning the service for the 2007 run – increase performance, reliability, capacity to target levels, monitoring tools, 24 x 7 operation, …. • Key dates: 1st April – 2007 services ready & in place for FDR testing; 1st July – full 2007 service commissioned (full 2007 capacity & performance) + FDRs start; 1st November – the LHC detectors go live, data taking commences & first LHC collisions! • Services must be production tested well in advance of this date! Experience (e.g. the SCs) shows that this is far from easy and takes time! • Where are we now? 1st April is the target to have the required services in place to prepare for the Dress Rehearsals – both ATLAS & CMS have already started preparations (26th & 12th Feb. respectively)

  14. LCG – The LCG Service Challenges: Rolling out the LCG Service. Jamie Shiers, CERN-IT-GD-SC, June 2005. http://agenda.cern.ch/fullAgenda.php?ida=a053365

  15. WLCG Service Challenges • Purpose • Understand what it takes to operate a real grid service – run for days/weeks at a time (outside of experiment Data Challenges) • Trigger/encourage the Tier1 & large Tier2 planning – move towards real resource planning – based on realistic usage patterns • Get the essential grid services ramped up to target levels of reliability, availability, scalability, end-to-end performance • Set out milestones needed to achieve goals during the service challenges • NB: This is focussed on Tier0 – Tier1/large Tier2 • Data management, batch production and analysis • Short term goal – by end 2004 – have in place a robust and reliable data management service and support infrastructure and robust batch job submission From early proposal, May 2004 Ian Bird – ian.bird@cern.ch

  16. WLCG Service Hierarchy • Tier0 – the accelerator centre: data acquisition & initial processing; long-term data curation; data distribution to Tier-1 centres • Tier1 – “online” to the data acquisition process → high availability: managed mass storage – grid-enabled data service; all re-processing passes; data-heavy analysis; national, regional support. Tier1 sites: Canada – TRIUMF (Vancouver); France – IN2P3 (Lyon); Germany – Karlsruhe; Italy – CNAF (Bologna); Netherlands – NIKHEF/SARA (Amsterdam); Nordic countries – distributed Tier-1; Spain – PIC (Barcelona); Taiwan – Academia Sinica (Taipei); UK – CLRC (Oxford); US – FermiLab (Illinois) & Brookhaven (NY) • Tier2 – ~100 centres in ~40 countries: simulation; end-user analysis – batch and interactive; services, including data archive and delivery, from Tier1s

  17. Data Handling and Computation for Physics Analysis [diagram, les.robertson@cern.ch] – detector → event filter (selection & reconstruction) → raw data (RAW) → reconstruction → event summary data (ESD) → batch physics analysis / event reprocessing → analysis objects (AOD, extracted by physics topic) → interactive physics analysis; event simulation feeds processed data into the same chain

  18. Data [diagram] – per-experiment volumes by format: RAW ~1 PB/yr/expt (≈1 PB/s prior to reduction!), ESD ~100 TB/yr, AOD ~10 TB/yr, TAG ~1 TB/yr; access moves from sequential (RAW, at Tier0/Tier1) to random (TAG), and the number of users grows as the data are reduced
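
A quick arithmetic sketch of the per-event sizes implied by these yearly volumes; the assumed event count of ~10⁹ events per year is an illustrative figure, not taken from the slide.

```python
# Minimal sketch: derive rough per-event sizes from the yearly volumes on the
# slide, assuming ~1e9 recorded events per experiment per year (an assumption,
# not a number from the slide).

YEARLY_VOLUME_TB = {"RAW": 1_000, "ESD": 100, "AOD": 10, "TAG": 1}
EVENTS_PER_YEAR = 1e9  # assumed

for fmt, volume_tb in YEARLY_VOLUME_TB.items():
    size_kb = volume_tb * 1e9 / EVENTS_PER_YEAR  # 1 TB = 1e9 kB
    print(f"{fmt}: ~{size_kb:,.0f} kB/event")
# -> RAW ~1,000 kB, ESD ~100 kB, AOD ~10 kB, TAG ~1 kB per event
```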

  19. T0–T1 Data Transfer Rates [table] – ~24h averages; peaks = 2 × average (ATLAS assume 50 ks of beam/day)
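
The rule of thumb on this slide is easy to turn into rates; the sketch below does so for a hypothetical daily export volume (60 TB/day), which is an illustrative assumption rather than a figure from the table.

```python
# Minimal sketch: convert a daily export volume into average and peak
# T0->T1 rates using the slide's rule "peaks = 2 x average", plus the rate
# while the beam is actually on (ATLAS assume ~50 ks of beam per day).
# The 60 TB/day volume below is an illustrative assumption, not a slide figure.

def export_rates(daily_volume_tb: float, beam_seconds_per_day: float = 50_000):
    """Return (24h-average, peak, while-beam-on) rates in MB/s."""
    avg = daily_volume_tb * 1e6 / 86_400             # spread over 24 h
    peak = 2 * avg                                    # slide: peaks = 2 x average
    on_beam = daily_volume_tb * 1e6 / beam_seconds_per_day
    return avg, peak, on_beam

avg, peak, on_beam = export_rates(daily_volume_tb=60)
print(f"average {avg:.0f} MB/s, peak {peak:.0f} MB/s, while beam on {on_beam:.0f} MB/s")
```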

  20. Summary of Tier0/1/2 Roles(The WLCG Computing Model1) • Tier0: safe keeping of RAW data (first copy); first pass reconstruction, distribution of RAW data and reconstruction output to Tier1; reprocessing of data during LHC down-times; • Tier1s: safe keeping of a proportional share of RAW and reconstructed data; large scale reprocessing and safe keeping of corresponding output; distribution of data products to Tier2s and safe keeping of a share of simulated data produced at these Tier2s; • Tier2s: Handling analysis requirements and proportional share of simulated event production and reconstruction. • O(1) Tier0, O(10) Tier1s, O(100) Tier2s • Sum of resources at each level ~equal (within a factor…) 1 LCG-TDR-001 – LHC Computing Grid Technical Design Report
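
As a rough illustration of the model summarised above, here is the same tier structure expressed as a small data structure; the role lists paraphrase the slide and the counts are the orders of magnitude quoted there, not an exact site census.

```python
# Minimal sketch of the WLCG computing model as a data structure:
# O(1) Tier0, O(10) Tier1s, O(100) Tier2s, each with its core roles.

from dataclasses import dataclass

@dataclass
class TierRole:
    count: int                  # order of magnitude of centres at this level
    responsibilities: tuple     # paraphrased from the slide

MODEL = {
    "Tier0": TierRole(1,   ("safe keeping of RAW data (first copy)",
                            "first-pass reconstruction",
                            "distribution of RAW + reconstruction output to Tier1s",
                            "reprocessing during LHC down-times")),
    "Tier1": TierRole(10,  ("custodial share of RAW and reconstructed data",
                            "large-scale reprocessing and safe keeping of its output",
                            "distribution of data products to Tier2s",
                            "safe keeping of a share of Tier2 simulation")),
    "Tier2": TierRole(100, ("end-user analysis",
                            "proportional share of simulated event production",
                            "simulation reconstruction")),
}

for tier, role in MODEL.items():
    print(f"{tier} (~{role.count} centres): {len(role.responsibilities)} core roles")
```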

  21. WLCG Tier1 Services1 • acceptance of an agreed share of raw data from the Tier0 Centre, keeping up with data acquisition; • acceptance of an agreed share of first-pass reconstructed data from the Tier0 Centre; • acceptance of processed and simulated data from other centres of the WLCG; • recording and archival storage of the accepted share of raw data (distributed back-up); • recording and maintenance of processed and simulated data on permanent mass storage; • provision of managed disk storage providing permanent and temporary data storage for files and databases; • provision of access to the stored data by other centres of the WLCG and by named AF’s as defined in paragraph X of this MoU; • operation of a data-intensive analysis facility; • provision of other services according to agreed Experiment requirements; • ensure high-capacity network bandwidth and services for data exchange with the Tier0 Centre, as part of an overall plan agreed amongst the Experiments, Tier1 and Tier0 Centres; • ensure network bandwidth and services for data exchange with Tier1 and Tier2 Centres, as part of an overall plan agreed amongst the Experiments, Tier1 and Tier2 Centres; • administration of databases required by Experiments at Tier1 Centres. • All storage and computational services shall be “grid enabled” according to standards agreed between the LHC Experiments and the regional centres. 1 WLCG Memorandum of Understanding (signed by each T0/T1/T2)

  22. Service Challenge Results • SC1 & SC2 were preliminary exercises focusing on TCP/IP tuning for high-bandwidth transfers – including over high-latency links – storage configuration and tuning, LCG OPN setup etc… THE GROUNDWORK • SC3 & SC4 added SRM and included extensive productions by the experiments – they raised the bar considerably and resulted in a usable – but not perfect – production service • CMS CSA06 in particular demonstrated a particularly effective and efficient strategy • Everything needs to be carefully planned and tested, separately and then together… • Don’t assume that anything will work first time… or keep working if you go away & leave it
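
The TCP tuning mentioned for SC1/SC2 largely comes down to bandwidth-delay-product arithmetic; the sketch below illustrates it with made-up link figures (10 Gb/s, 150 ms RTT), not parameters taken from the challenges.

```python
# Minimal sketch of the TCP tuning arithmetic behind SC1/SC2: on a
# high-bandwidth, high-latency link a single stream can only fill the pipe if
# the TCP window covers the bandwidth-delay product. Link figures below are
# illustrative, not from the slides.

def bandwidth_delay_product(bandwidth_gbps: float, rtt_ms: float) -> float:
    """Bytes in flight needed to keep the link full."""
    return bandwidth_gbps * 1e9 / 8 * rtt_ms / 1e3

bdp = bandwidth_delay_product(bandwidth_gbps=10, rtt_ms=150)  # e.g. a transatlantic Tier1 link
print(f"required TCP window ~ {bdp / 2**20:.0f} MiB")         # ~179 MiB
# With the default 64 KiB window the same stream would top out at roughly
# 64 KiB / 150 ms ~ 3.5 Mb/s - hence window scaling and multiple streams.
```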

  23. The Main Lesson from the Service Challenges • Setting up production services takes a L O N G time – even if you foresee and plan for this! • e.g. some 2+ years to get LFC / FTS services to a state considered acceptable by experiments! • Many reasons for this, including long hardware delivery times – but mainly the large and inevitable uncertainties around how these services will actually be used • The above services have been redeployed at CERN at least twice since prior to SC3!

  24. The 1st Law Of (Grid) Computing • Murphy's law (also known as Finagle's law or Sod's law) is a popular adage in Western culture, which broadly states that things will go wrong in any given situation. "If there's more than one way to do a job, and one of those ways will result in disaster, then somebody will do it that way." It is most commonly formulated as "Anything that can go wrong will go wrong." In American culture the law was named after Major Edward A. Murphy, Jr., a development engineer working for a brief time on rocket sled experiments done by the United States Air Force in 1949. • The law first received public attention during a press conference, when it was asked how nobody had been severely injured during the rocket sled tests of human tolerance for g-forces during rapid deceleration; Stapp replied that it was because they took Murphy's Law under consideration. • "Expect the unexpected" – Bandits (Bruce Willis)

  25. Handling WLCG Interventions… • Whatever can go wrong does – sooner or later… • Never say never… • Need to understand: • What the intervention is for; • When it will take place; • Who will do what; • What the precise order is; • What to do when things go wrong; • Intervention plan & coordinator • Interventions carried out in CERN Grid Control Room • Checklist & template announcements being set up

  26. WLCG Service Ramp-Up • As discussed at recent WLCG Collaboration Workshop, much work has already been done on service hardening • Reliable hardware, improved monitoring & logging, middleware enhancements • Much still remains to be done – this will be an on-going activity during the rest of 2007 and probably beyond • The need to provide as much robustness in the services themselves – as opposed to constant baby-sitting – is well understood • There are still new / updated services to deploy in full production (see next slide) • It is unrealistic to expect that all of these will be ready prior to the start of the Dress Rehearsals • Foresee a ‘staged approach’ – focussing on maintaining and improving both service stability and functionality (‘residual services’) • Must remain in close contact with both experiments and sites on schedule and service requirements – these will inevitably change with time • (Some slides on experiment activities during 2007 up to and including startup are included later if time permits…)

  27. WLCG Commissioning Schedule (2006–2008) • SC4 – becomes the initial service when reliability and performance goals are met • Introduce residual services: full FTS services; 3D; gLite 3.x; SRM v2.2; VOMS roles; SL(C)4 • Continued testing of computing models and basic services • Testing DAQ → Tier-0 (??) & integrating into the DAQ → Tier-0 → Tier-1 data flow • Building up end-user analysis support • Exercising the computing systems, ramping up job rates, data management performance, …. • Initial service commissioning – increase performance, reliability, capacity to target levels, experience in monitoring, 24 x 7 operation, …. • 01 Jul 07 – service commissioned – full 2007 capacity, performance • First collisions in the LHC • Full FTS services demonstrated at 2008 data rates for all required Tx-Ty channels, over extended periods, including recovery (T0-T1)

  28. April 1st WLCG Service Targets • LFC with bulk methods (production release 1.6.3) • FTS 2.0 (pilot established, also on (P)PS at outside sites) • Distributed DB services for ATLAS & LHCb • Procedures / testing (SAM) for VO services [incl. Frontier, Squid etc.] • Monitoring / logging / reporting / dashboards • VOMS roles in job priorities • SRM 2.2 available for experiment testing: • DPM – available in 1.6.3 production release (CERN test cluster and elsewhere) • dCache “April 1. dCache.org will have dCache-1.8-beta, including all discussed basic SRM 2.2 features, available on the web site. So sites are free to prepare for pre-production dCache hardware and install the new 1.8 software as soon as available.” • CASTOR SRM 2.2 TBD • WN & UI for SLC4 (32-bit mode) • Statement on gLite WMS required (now available – see March MB/GDB) Source: LCG ECM February 26th 2007

  29. Q1 2007 – Tier0 / Tier1s • Demonstrate Tier0-Tier1 data export at 65% of full nominal rates per site using experiment-driven transfers • Mixture of disk / tape endpoints as defined by experiment computing models, i.e. 40% tape for ATLAS; transfers driven by experiments • Period of at least one week; daily VO-averages may vary (~normal) • Demonstrate Tier0-Tier1 data export at 50% of full nominal rates (as above) in conjunction with T1-T1 / T1-T2 transfers • Inter-Tier transfer targets taken from ATLAS DDM tests / CSA06 targets • Demonstrate Tier0-Tier1 data export at 35% of full nominal rates (as above) in conjunction with T1-T1 / T1-T2 transfers and Grid production at Tier1s • Each file transferred is read at least once by a Grid job • Some explicit targets for WMS at each Tier1 need to be derived from above • Provide SRM v2.2 endpoint(s) that implement(s) all methods defined in SRM v2.2 MoU, all critical methods pass tests • See attached list; Levels of success: threshold, pass, success, (cum laude)
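
To make the staged percentages concrete, here is a small sketch applying them to per-site nominal rates; the site names and nominal figures are illustrative placeholders, not the official experiment or megatable numbers.

```python
# Minimal sketch of the Q1 2007 export milestones: apply the 65% / 50% / 35%
# fractions to a site's nominal Tier0->Tier1 rate. Nominal rates below are
# assumed placeholders for illustration only.

NOMINAL_MB_S = {"SiteA": 200, "SiteB": 150, "SiteC": 100}  # assumed, not official

MILESTONES = {
    "export only": 0.65,
    "export + T1-T1 / T1-T2 transfers": 0.50,
    "export + inter-tier transfers + Tier1 production": 0.35,
}

for milestone, fraction in MILESTONES.items():
    targets = {site: f"{rate * fraction:.0f} MB/s" for site, rate in NOMINAL_MB_S.items()}
    print(f"{milestone}: {targets}")
```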

  30. Q2 2007 – Tier0 / Tier1s • As Q1, but using SRM v2.2 services at Tier0 and Tier1, gLite 3.x-based services and SL(C)4 as appropriate • It now looks clear that the above will not be fully ready for Q2 – perhaps not even for the pilot run! • Provide services required for Q3 dress rehearsals • Basically, what we had at end of SC4 + Distributed Database Services; LFC bulk methods; FTS 2.0 • Work also ongoing on VO-specific services, SAM tests, experiment dashboards and Joint Operations issues

  31. WLCG DB Service Schedule Colours are wrt April 1st start date of FDR preparations (CMS started February 12th, ATLAS February 26th)

  32. WLCG DB Service Schedule Colours are wrt April 1st start date of FDR preparations (CMS started February 12th, ATLAS February 26th)

  33. DB Applications: Tunc & Nunquam • Main distributed ‘database’ applications for LEP: conditions & file catalogue – just as now?? • Definitive article on conditions written in 1987: • “Database Systems for HEP Experiments”; • Computer Physics Communications 45 (1987) 200 – 310 • Definitive article on file catalogs yet to be written… • But the reality is really very different… • Very small number of database applications at all during LEP – don’t even dream of mentioning SQL… • Very many now – conditions and catalogs are just the tip of the iceberg… and they’re not all Oracle either

  34. Summary • The basic programme for this year is: • Q1 / Q2: prepare for the experiments’ Dress Rehearsals • Q3: execute Dress Rehearsals (several iterations) • Q4: Pilot run of the LHC – and indeed WLCG! • 2008 will probably be: • Q1: analyse results of pilot run • Q2: further round of Dress Rehearsals • Q3: data taking • Q4: (re-)processing and analysis

  35. Conclusions • Distributed Database Services are likely to be one of the few residual services that really make it for the 2007 pilot run! • Running production services is not easy and typically requires a lot of live experience before the main problems are fully ironed out • We need to remain strongly focussed on the production schedule / deadlines, e.g. the FDR(s) • “It’s a team effort” • Tom Kyte, Effective Oracle by Design, Chapter 1

  36. The End

  37. The Worldwide LHC Computing Grid Experiment Plans for 2007 3D Workshop, SARA, March 2007

  38. Dimensions… [image with labels: ATLAS, Bld. 40, CMS]

  39. ATLAS Computing plans for 2007: D.Barberis 22 January 2007 (1/3)

  40. ATLAS Computing plans for 2007: D.Barberis 22 January 2007 (2/3)

  41. ATLAS Computing plans for 2007: D.Barberis 22 January 2007 (3/3) 30M evts/month 2Q/3Q to 60M+ evts/month 4Q First from end Feb+4 weeks: K.Bos – next in May ?

  42. CMS Commissioning Plan : S.Belforte Feb 1 2007

  43. CMS load generator proposal: D.Bonacorsi 1 Feb 2007 (1/2)

  44. CMS load generator proposal: D.Bonacorsi 1 Feb 2007 (2/2)

  45. Looking further ahead: ‘The Dress Rehearsal’ (A Midsummer Night’s Dream?) A complete exercise of the full chain from trigger to (distributed) analysis, to be performed in 2007, a few months before data taking starts. Some details for experts: • Generate O(10⁷) evts: a few days of data taking, ~1 pb⁻¹ at L = 10³¹ cm⁻² s⁻¹ • Filter events at MC generator level to get the physics spectrum expected at HLT output • Pass events through G4 simulation (realistic “as installed” detector geometry) • Mix events from various physics channels to reproduce the HLT physics output • Run LVL1 simulation (flag mode) • Produce byte streams → emulate the raw data • Send raw data to Point 1, pass through HLT nodes (flag mode) and SFO, write out events by streams, closing files at the boundary of luminosity blocks • Send events from Point 1 to Tier0 • Perform calibration & alignment at Tier0 (also outside?) → Run reconstruction at Tier0 (and maybe Tier1s?) → produce ESD, AOD, TAGs • Distribute ESD, AOD, TAGs to Tier1s and Tier2s • Perform distributed analysis (possibly at Tier2s) using TAGs • MC truth propagated down to ESD only (no truth in AOD or TAGs) Ambitious goals… need to plan it carefully (both in terms of effort needed and of technical issues and implications)
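
The Dress Rehearsal figures hang together with simple luminosity arithmetic; the sketch below checks this, with the ~100 Hz HLT output rate being an assumed, illustrative value rather than a number from the slide.

```python
# Minimal sketch checking the Dress Rehearsal numbers: ~1 pb^-1 at
# L = 1e31 cm^-2 s^-1 and O(10^7) events out of the HLT.
# The 100 Hz HLT output rate is an assumption for illustration.

LUMINOSITY = 1e31            # cm^-2 s^-1
INT_LUMI_PB = 1.0            # target integrated luminosity, pb^-1
PB_INV_TO_CM2_INV = 1e36     # 1 pb^-1 = 1e36 cm^-2
HLT_RATE_HZ = 100            # assumed HLT output rate

live_seconds = INT_LUMI_PB * PB_INV_TO_CM2_INV / LUMINOSITY
events = HLT_RATE_HZ * live_seconds

print(f"live time  ~ {live_seconds:.0e} s (~{live_seconds / 3600:.0f} h of beam)")
print(f"HLT events ~ {events:.1e}")   # ~1e7, i.e. O(10^7) as on the slide
```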

  46. WLCG Commissioning Schedule • Still an ambitious programme ahead • Timely testing of the full data chain from DAQ to Tier-2 was a major item from the last CR • DAQ → Tier-0 still largely untested

  47. The Worldwide LHC Computing Grid WLCG Service Interventions 3D Workshop, SARA, March 2007

  48. Handling WLCG Interventions… • Whatever can go wrong usually does – sooner or later… • Never say never… • Need to understand: • What the intervention is for; • When it will take place; • Who will do what; • What the precise order is; • What to do when things go wrong; • Intervention plan & coordinator • Interventions carried out in CERN Grid Control Room • Checklist & template announcements being set up

  49. Any Services & The Grid • Message 1 – “Think Grid” • Your services and their availability probably have a much wider visibility & impact than a single site… • Message 2 – “Think User” • Users typically shielded by many layers of experiment software and / or Grid middleware • Does an ATLAS DDM user / CMS PhEDEx user know and understand FNAL DBA standards ? • These are of course generic statements and valid for all services in the Grid…

  50. WLCG Interventions • Scheduled service interventions shall normally be performed outside of the announced period of operation of the LHC accelerator. • In the event of mandatory interventions during the operation period of the accelerator – such as a non-critical security patch[1] – an announcement will be made using the Communication Interface for Central (CIC) operations portal and the period of scheduled downtime entered in the Grid Operations Centre (GOC) database (GOCDB). • Such an announcement shall be made at least one working day in advance for interventions of up to 4 hours. N.B. this does NOT mean the afternoon of the day before! • Interventions resulting in significant service interruption or degradation longer than 4 hours and up to 12 hours shall be announced at the Weekly Operations meeting prior to the intervention, with a reminder sent via the CIC portal as above. • Interventions exceeding 12 hours must be announced at least one week in advance, following the procedure above. • A further announcement shall be made once normal service has been resumed. • [deleted] • Intervention planning should also anticipate any interruptions to jobs running in the site batch queues. If appropriate the queues should be drained and the queues closed for further job submission. CERN uses GMOD & SMOD to ensure announcements are made correctly. (CIC portal and CERN IT status board respectively.)
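
A minimal sketch of the announcement lead-time rules described above, expressed as a lookup by expected downtime; this is an illustrative restatement of the slide's policy, not an official WLCG tool.

```python
# Minimal sketch of the intervention announcement rules on the slide above,
# keyed on the expected service interruption. Illustrative only.

def required_announcement(downtime_hours: float) -> str:
    if downtime_hours <= 4:
        return "CIC portal broadcast + GOCDB entry, at least one full working day in advance"
    if downtime_hours <= 12:
        return "announce at the Weekly Operations meeting beforehand, plus a CIC portal reminder"
    return "announce at least one week in advance, following the same procedure"

for hours in (2, 8, 48):
    print(f"{hours:>2} h intervention -> {required_announcement(hours)}")
```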
