
LHCb Computing Model and Grid Status






Presentation Transcript


  1. LHCb Computing Model and Grid Status. Glenn Patrick, GRIDPP13, Durham – 5 July 2005

  2. LHCb – June 2005. Photograph of the detector in the cavern (03 June 2005), with the MF1–MF4 muon filters, the LHCb magnet, the HCAL and the ECAL labelled.

  3. Computing completes the TDRs: the series that began in Jan 2000 concluded with the Computing TDR in June 2005.

  4. Online system and trigger chain. The 40 MHz bunch-crossing rate is reduced by Level-0 (hardware) to 1 MHz, by Level-1 (software) to 40 kHz, and by the HLT (software) to 2 kHz. Raw data are written to Tier 0 at 2 kHz (50 MB/s) and distributed to the six Tier 1s.
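
A quick back-of-envelope check of the quoted rates, as a minimal Python sketch. The 25 kB raw-event size and the 10^7 s of data taking per year are assumptions (not stated on the slide), chosen to be consistent with the quoted 50 MB/s and the ~500 TB of RAW data that appears on a later slide:

    # Back-of-envelope check of the rates on slide 4.
    HLT_RATE_HZ = 2000          # events/s written to Tier 0 after the HLT
    RAW_EVENT_KB = 25           # assumed raw event size (not on the slide)
    SECONDS_PER_YEAR = 1.0e7    # assumed LHC live time per year

    rate_mb_s = HLT_RATE_HZ * RAW_EVENT_KB / 1000.0        # ~50 MB/s, as quoted
    raw_tb_year = rate_mb_s * SECONDS_PER_YEAR / 1.0e6     # ~500 TB/year of RAW data

    print(f"raw data rate : {rate_mb_s:.0f} MB/s")
    print(f"raw data/year : {raw_tb_year:.0f} TB")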

  5. HLT output. A 200 Hz "hot stream" will be fully reconstructed on the online farm in real time and written (RAW + rDST) to Tier 0; it is used to understand the bias on other B selections, its clean peak allows PID calibration, and it provides the calibration for the proper-time resolution. The full 2 kHz of RAW data is written to Tier 0 for reconstruction at CERN and the Tier 1s.

  6. Event model and applications. All applications are built on the Gaudi framework and share the physics event model, the detector description and the conditions database. Simulation (Gauss) produces MC truth, digitisation (Boole) produces raw data, reconstruction (Brunel) produces the DST, and analysis (DaVinci) produces stripped DSTs and analysis objects.
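
The processing chain can be summarised as a simple table of stages; the sketch below is illustrative only (the dataclass is not part of Gaudi):

    # Illustrative summary of the Gaudi-based processing chain on slide 6.
    from dataclasses import dataclass

    @dataclass
    class Stage:
        application: str      # Gaudi-based application
        step: str             # processing step it implements
        reads: str            # dominant input
        writes: str           # dominant output

    CHAIN = [
        Stage("Gauss",   "simulation",         "generator events", "MC truth + hits"),
        Stage("Boole",   "digitisation",       "MC truth + hits",  "raw data (simulated)"),
        Stage("Brunel",  "reconstruction",     "raw data",         "(r)DST"),
        Stage("DaVinci", "analysis/stripping", "(r)DST",           "stripped DST, analysis objects"),
    ]

    for s in CHAIN:
        print(f"{s.application:8s} {s.step:20s} {s.reads:18s} -> {s.writes}")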

  7. LHCb Computing Model. A CERN Tier 1 is essential for accessing the "hot stream" for the first alignment & calibration and the first high-level analysis. 14 candidates.

  8. Distributed data.
  • RAW DATA (500 TB): CERN holds the master copy; a second copy is distributed over the six Tier 1s.
  • RECONSTRUCTION (500 TB/pass): pass 1 during data taking at CERN and the Tier 1s (7 months); pass 2 during the winter shutdown at CERN, the Tier 1s and the online farm (2 months).
  • STRIPPING (140 TB/pass/copy): pass 1 during data taking at CERN and the Tier 1s (7 months); pass 2 after data taking at CERN and the Tier 1s (1 month); pass 3 during the shutdown at CERN, the Tier 1s and the online farm; pass 4 before the next year's data taking at CERN and the Tier 1s (1 month).
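
Rough arithmetic behind these figures, as a Python sketch. The even split of the second RAW copy across the six Tier 1s follows the slide; adding up all passes as if they were all kept at once is a simplifying assumption:

    # Rough arithmetic behind the distributed-data figures on slide 8.
    RAW_TB = 500              # master RAW copy at CERN
    RDST_TB_PER_PASS = 500    # reconstruction output per pass
    STRIP_TB_PER_PASS = 140   # stripped data per pass, per copy
    N_TIER1 = 6

    raw_share_per_tier1 = RAW_TB / N_TIER1          # second RAW copy spread over the Tier 1s
    print(f"RAW copy per Tier 1       : {raw_share_per_tier1:.0f} TB")
    print(f"Two reconstruction passes : {2 * RDST_TB_PER_PASS} TB")
    print(f"Four stripping passes     : {4 * STRIP_TB_PER_PASS} TB per copy")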

  9. Stripping job – 2005. Stripping runs on reduced DSTs (rDST): pre-selection algorithms (group1 … groupN) categorise events into streams, and events that pass are fully reconstructed and full DSTs written. The job reads its INPUTDATA and stages the files in one go using SRM, then checks the file status: unstaged or bad files are reported to the production DB, staged files have their integrity checked, and good files are processed by DaVinci stripping. The resulting DSTs and the event tag collection (ETC) are merged at the end. CERN, CNAF and PIC have been used so far – sites based on CASTOR.
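
The control flow of such a stripping job might be sketched as below; every function body is a placeholder, and only the ordering (bulk SRM staging, status check, integrity check, DaVinci pre-selection, report to the production DB, merge of DST and ETC) follows the slide:

    # Control-flow sketch of the 2005 stripping job (slide 9).
    # Every function body is a placeholder; only the ordering follows the slide.

    def stage_all(lfns):                 # one bulk SRM stage request ("in 1 go")
        return {lfn: "staged" for lfn in lfns}

    def integrity_ok(lfn):               # checksum/size check of a staged file
        return True

    def davinci_stripping(lfn):          # pre-selection; True if any event passes
        return True

    def run_stripping(lfns, report_to_prod_db):
        status = stage_all(lfns)
        good, bad = [], []
        for lfn, state in status.items():
            if state != "staged" or not integrity_ok(lfn):
                bad.append(lfn)          # bad/unstaged files are reported to the production DB
            elif davinci_stripping(lfn):
                good.append(lfn)         # passing events go on to full reconstruction
        report_to_prod_db(good=good, bad=bad)
        return good                      # the resulting DSTs and ETC are merged afterwards

    run_stripping(["/lhcb/rdst/file1", "/lhcb/rdst/file2"],     # hypothetical LFNs
                  lambda **info: print("production DB report:", info))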

  10. Resource Profile

  11. Comparisons – CPU. Integrated Tier 1 CPU requirements for 2008–2010, with the LHCb share indicated (plot by Nick Brook).

  12. Comparisons – disk. CERN, Tier-1 and Tier-2 requirements, with 54% pledged. From the LCG TDR – LHCC, 29.6.2005 (Jurgen Knobloch).

  13. Comparisons – tape. CERN and Tier-1 requirements, with 75% pledged. From the LCG TDR – LHCC, 29.6.2005 (Jurgen Knobloch).

  14. DIRAC architecture – a services-oriented architecture.
  • User interfaces: job monitor, production manager, GANGA UI, user CLI, BK query webpage, FileCatalog browser.
  • DIRAC services: DIRAC Job Management Service, JobMonitorSvc, InformationSvc, MonitoringSvc, JobAccountingSvc (with its AccountingDB), BookkeepingSvc, FileCatalogSvc.
  • DIRAC resources: DIRAC sites with DIRAC CEs, DIRAC storage (disk files accessed via gridftp, bbftp and rfio), and LCG through the Resource Broker (CE 1 – CE 3).
  • Agents sit between the resources and the central services.
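
A minimal sketch of the idea behind this layout: user interfaces and site agents all talk to a handful of central services through one thin client. The endpoints, class and method names below are invented for illustration and are not the real DIRAC protocol or API:

    # Illustrative sketch of the services-oriented layout on slide 14.
    # Endpoints, class and method names are invented, not the real DIRAC API.

    SERVICES = {
        "JobManagement": "https://dirac.example/JobManagement",
        "JobMonitor":    "https://dirac.example/JobMonitor",
        "Bookkeeping":   "https://dirac.example/Bookkeeping",
        "FileCatalog":   "https://dirac.example/FileCatalog",
    }

    class ServiceClient:
        """Thin stand-in for an RPC client to one central DIRAC service."""
        def __init__(self, name):
            self.name, self.url = name, SERVICES[name]
        def call(self, method, *args):
            print(f"[{self.name}] {method}{args} -> {self.url}")

    # A user interface submits a job; a site agent later asks the same services
    # for work; everything goes through the common service layer.
    ServiceClient("JobManagement").call("submitJob", "jobDescription.xml")
    ServiceClient("JobMonitor").call("getJobStatus", 1042)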

  15. Data Challenge 2004 – phase 1 completed, 187 M events produced.
  • 20 DIRAC sites + 43 LCG sites were used; data were written to the Tier 1s.
  • The UK was the second largest producer (25%) after CERN.
  • Overall, 50% of events were produced using LCG; by the end, 75% were produced by LCG.
  • The production rate rose from 1.8 million events/day with DIRAC alone to 3–5 million events/day with LCG in action (with LCG paused and restarted along the way).

  16. RTTC – 2005. Real Time Trigger Challenge, May/June 2005: 150 M minimum-bias events to feed the online farm and test the software trigger chain.
  • Completed in 20 days (169 M events) on 65 different sites.
  • 95% produced at LCG sites, 5% at "native" DIRAC sites.
  • Average of 10 M events/day and about 4,000 CPUs.

  17. Looking forward – milestones on the 2005–2008 timeline:
  • Sept. 2005: next challenge, SC3.
  • Nov. 2005: analysis at Tier 1s.
  • May 2006: start of the DC06 processing phase.
  • October 2006: alignment/calibration challenge.
  • April 2007: ready for data taking.
  • Service challenges SC3 and SC4 lead into LHC Service Operation; cosmics, first beams and first physics precede the full physics run in 2008.

  18. LHCb and SC3.
  Phase 1 (Sept. 2005):
  • Movement of 8 TB of digitised data from CERN/Tier 0 to the LHCb Tier 1 centres in parallel over a 2-week period (~10k files). Demonstrate automatic tools for data movement and bookkeeping.
  • Removal of replicas (via LFN) from all Tier 1 centres.
  • Redistribution of 4 TB of data from each Tier 1 centre to Tier 0 and the other Tier 1 centres over a 2-week period. Demonstrate that data can be redistributed in real time to meet stripping demands.
  • Moving of stripped DST data (~1 TB, 190k files) from CERN to all Tier 1 centres.
  Phase 2 (Oct. 2005):
  • MC production in Tier 2 centres with DST data collected in Tier 1 centres in real time, followed by stripping in Tier 1 centres (2 months). Data are stripped as they become available.
  • Analysis of stripped data in Tier 1 centres.
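
Back-of-envelope rates for the phase 1 transfers, assuming the 14-day window and ~10k files quoted above and decimal units throughout:

    # Back-of-envelope rates for SC3 phase 1 (slide 18), decimal units.
    VOLUME_TB = 8
    N_FILES = 10_000
    DAYS = 14

    seconds = DAYS * 24 * 3600
    rate_mb_s = VOLUME_TB * 1.0e6 / seconds        # ~6.6 MB/s sustained out of CERN
    mean_file_gb = VOLUME_TB * 1.0e3 / N_FILES     # ~0.8 GB per file

    print(f"sustained rate : {rate_mb_s:.1f} MB/s")
    print(f"mean file size : {mean_file_gb:.1f} GB")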

  19. SC3 requirements.
  Tier 1:
  • Read-only LCG file catalogue (LFC) for more than one Tier 1.
  • SRM version 1.1 interface to the MSS.
  • GRIDFTP server for the MSS.
  • File Transfer Service (FTS) and LFC client/tools.
  • gLite CE.
  • Hosting CE for agents (with a managed, backed-up file system), running the Job Agent, Monitor Agent and Transfer Agent against the Request DB, with a local SE, software repository and worker nodes for the jobs.
  Tier 2:
  • SRM interface to the SE.
  • GRIDFTP access.
  • FTS and LFC tools.

  20. SC3 resources.
  Phase 1:
  • Temporary MSS access to ~10 TB of data at each Tier 1 (with SRM).
  • Permanent access to 1.5 TB on disk at each Tier 1 with an SRM interface.
  Phase 2:
  • LCG version 2.5.0 in production for the whole of LCG.
  • CPU: MC production ~250 (2.4 GHz) WN over 2 months (non-Tier 1); stripping ~2 (2.4 GHz) WN per Tier 1 for the duration of this phase.
  • Storage (permanent): 10 TB across all Tier 1s and CERN for MC output; 350 GB of disk with an SRM interface at each Tier 1 and CERN for stripping output.

  21. Data production on the Grid. The production manager feeds the Production DB; the DIRAC Job Management and Job Monitoring Services dispatch work to DIRAC sites (DIRAC CEs) and, via Pilot Job Agents submitted through the LCG Resource Broker, to LCG CEs (CE 1 – CE 3). Jobs run on worker nodes, register their output in the LFC File Catalog, and store data on the local SE or a remote SE.
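
A sketch of the pilot-job mechanism in this picture: a generic pilot is scheduled through the LCG Resource Broker and, once running on a worker node, pulls the real workload from the central Job Management Service. All names below are illustrative stand-ins, not the real DIRAC code:

    # Sketch of the pilot-job flow on slide 21.

    class ProductionDB:
        def __init__(self, jobs):
            self.jobs = list(jobs)

    class JobManagementService:
        def __init__(self, db):
            self.db = db
        def pull_job(self, site):
            return self.db.jobs.pop(0) if self.db.jobs else None

    def pilot_job(site, jms):
        """Runs on the worker node once the LCG Resource Broker has scheduled it."""
        job = jms.pull_job(site)             # ask for real work only when a slot exists
        if job is None:
            return "no work available, pilot exits cleanly"
        return f"running {job} at {site}, output registered in the catalogue and stored on an SE"

    jms = JobManagementService(ProductionDB(["gauss_minbias_000987"]))   # hypothetical job name
    print(pilot_job("LCG.CERN.ch", jms))
    print(pilot_job("LCG.CERN.ch", jms))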

  22. UK: Workflow Control / Production Desktop – Gennady Kuznetsov (RAL). Production workflows are built from steps – Sim (Gauss), Digi (Boole) and Reco (Brunel) – and each step from modules: software installation, Gauss/Boole/Brunel execution for the signal (B) and minimum-bias (MB) spill-over events, check of the logfile, directory listing and the bookkeeping report.
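
The step/module composition might be captured like this; the classes are illustrative, not the actual Production Desktop code:

    # Illustrative step/module composition for the Production Desktop (slide 22).
    from dataclasses import dataclass, field

    @dataclass
    class Module:
        name: str

    @dataclass
    class Step:
        name: str
        modules: list = field(default_factory=list)

    workflow = [
        Step("Sim",  [Module("Software installation"), Module("Gauss B (primary event)"),
                      Module("Gauss MB (spill-over events)"), Module("Check logfile"),
                      Module("Dir listing"), Module("Bookkeeping report")]),
        Step("Digi", [Module("Boole B"), Module("Boole MB (spill-over events)")]),
        Step("Reco", [Module("Brunel B"), Module("Brunel MB")]),
    ]

    for step in workflow:
        print(step.name, "->", ", ".join(m.name for m in step.modules))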

  23. UK: LHCb Metadata and ARDA – Carmine Cioffi (Oxford). The bookkeeping database is exposed through an ARDA server; ARDA clients (the GANGA application through the API, and a web browser through a Tomcat servlet using the same API) talk to it over TCP/IP streaming.

  24. UK: Ganga – Karl Harrison (Cambridge), Alexander Soroko (Oxford), Alvin Tan (Birmingham), Ulrik Egede (Imperial), Andrew Maier (CERN), Kuba Moscicki (CERN). Ganga4 lets the user store and retrieve job definitions, prepare and configure applications (Gaudi, Athena, DIAL, AtlasPROD, plain scripts), and submit, kill, get output and update status on many backends: localhost, LSF, gLite, LCG2, DIRAC and DIAL. It also handles splitting, merging, monitoring and dataset selection. See next talk!
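
The central Ganga idea, one job definition with an interchangeable application and backend, can be sketched with stand-in classes (these are not the real Ganga GPI classes):

    # Stand-in classes for the Ganga idea on slide 24.
    from dataclasses import dataclass

    @dataclass
    class Application:
        name: str                    # e.g. Gaudi/DaVinci, Athena, a plain script

    @dataclass
    class Backend:
        name: str                    # e.g. localhost, LSF, gLite, LCG2, DIRAC, DIAL
        def submit(self, app):
            return f"{app.name} submitted to {self.name}"

    @dataclass
    class Job:
        application: Application
        backend: Backend
        def submit(self):
            # the real Ganga also stores the job, monitors it, splits and merges
            print(self.backend.submit(self.application))

    Job(Application("DaVinci"), Backend("DIRAC")).submit()
    Job(Application("DaVinci"), Backend("localhost")).submit()   # same job, different backend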

  25. UK: Conditions Database user interface – Nicolas Gilardi (Edinburgh). The LCG COOL project provides the underlying structure for the conditions database. Each data source (VELO alignment, HCAL calibration, RICH pressure, ECAL temperature) is versioned in time, so the production version at time T is, for example: VELO: v3 for T<t3, v2 for t3<T<t5, v3 for t5<T<t9, v1 for T>t9. HCAL: v1 for T<t2, v2 for t2<T<t8, v1 for T>t8. RICH: v1 everywhere. ECAL: v1 everywhere.
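
The interval-of-validity lookup behind this "production version" table can be sketched as follows; the numeric values of t2, t3, t5, t8 and t9 are placeholders:

    # Interval-of-validity lookup behind the table on slide 25.
    t2, t3, t5, t8, t9 = 2, 3, 5, 8, 9   # placeholder boundary values

    PRODUCTION_VERSION = {
        # list of (upper time bound, version); intervals are tried in order
        "VELO": [(t3, "v3"), (t5, "v2"), (t9, "v3"), (float("inf"), "v1")],
        "HCAL": [(t2, "v1"), (t8, "v2"), (float("inf"), "v1")],
        "RICH": [(float("inf"), "v1")],
        "ECAL": [(float("inf"), "v1")],
    }

    def condition_version(detector, T):
        for upper, version in PRODUCTION_VERSION[detector]:
            if T < upper:
                return version

    print(condition_version("VELO", 4))    # v2, since t3 < T < t5
    print(condition_version("HCAL", 10))   # v1, since T > t8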

  26. UK: Analysis with DIRAC – software installation + analysis via the DIRAC WMS, Stuart Patterson (Glasgow). Analysis jobs are submitted through the DIRAC API with their input data given as LFNs. The matcher checks all SEs that hold the data and selects the closest SE (if no data are specified the job can run anywhere), the agent installs the required software using the PACMAN-based DIRAC installation tools, and the job is taken from the task queue and executes on a worker node. An example job description:

    [
      Requirements = other.Site == "DVtest.in2p3.fr";
      Arguments = "jobDescription.xml";
      JobName = "DaVinci_1";
      OutputData = { "/lhcb/test/DaVinci_user/v1r0/LOG/DaVinci_v12r11.alog" };
      parameters = [ STEPS = "1"; STEP_1_NAME = "0_0_1" ];
      SoftwarePackages = { "DaVinci.v12r11" };
      JobType = "user";
      Executable = "$LHCBPRODROOT/DIRAC/scripts/jobexec";
      StdOutput = "std.out";
      Owner = "paterson";
      OutputSandbox = { "std.out", "std.err", "DVNtuples.root", "DaVinci_v12r11.alog", "DVHistos.root" };
      StdError = "std.err";
      ProductionId = "00000000";
      InputSandbox = { "lib.tar.gz", "jobDescription.xml", "jobOptions.opts" };
      JobId = ID
    ]
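
A sketch of what the DIRAC API has to produce from a few user-level settings: a JDL-like description of the kind shown above. The helper function and the LFN are illustrative, not the real DIRAC API:

    # Building a JDL-like job description from user-level settings (slide 26).
    def make_job_description(app, version, options, input_lfns, output_sandbox):
        return {
            "JobType": "user",
            "SoftwarePackages": [f"{app}.{version}"],
            "InputSandbox": ["lib.tar.gz", "jobDescription.xml", options],
            "OutputSandbox": ["std.out", "std.err"] + output_sandbox,
            "InputData": input_lfns,        # empty list means "run anywhere"
        }

    jdl = make_job_description(
        app="DaVinci", version="v12r11", options="jobOptions.opts",
        input_lfns=["LFN:/lhcb/test/DaVinci_user/example.dst"],   # hypothetical LFN
        output_sandbox=["DVNtuples.root", "DVHistos.root"],
    )
    for key, value in jdl.items():
        print(f"{key} = {value};")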

  27. Conclusion. Half way there! But the climb gets steeper: from DC03 and DC04, through Monte-Carlo production on the Grid in 2005, the path leads through data stripping, distributed reconstruction and distributed analysis to data taking in 2007.
