LHCb Computing Model


  1. LHCb Computing Model. Domenico Galli, Bologna. INFN CSN1 Roma, 31.1.2005

  2. Premise: The Event Rates • The current LHCb computing model and resource estimates are based on the event rates at HLT output following the “re-optimized trigger/DAQ/computing”: • Maximize physics output given available/expected computing resources. • They are summarized in the following table:

  3. The LHCb Dataflow • [Dataflow diagram: RAW data from the On-line Farm and RAWmc/MC data from the Tier-2s flow to CERN and the Tier-1s, where scheduled jobs run reconstruction (RAW + calibration data → rDST) and pre-selection analysis (→ selected DST+RAW, TAG); chaotic physics-analysis jobs at CERN and the Tier-1s produce user DST, n-tuples and user TAG, feeding local analysis at the Tier-3s on the way to a paper.]

  4. The LHCb Dataflow (II)

  5. Event Parameters • [Dataflow detail: RAW data + calibration data → reconstruction → rDST → pre-selection analysis → DST+RAW, TAG.] • rDST (reduced DST): only enough reconstructed data to allow the physics pre-selection algorithms to be run.

  6. The On-line Event Filter Farm • [Stream diagram: HLT output of 2 kHz in 2 streams to the CERN computing centre (~60 MB/s): RAW (25 kB/evt) for all streams, i.e. b-exclusive 200 Hz, di-muon 600 Hz, D* 300 Hz and b-inclusive 900 Hz, plus rDST (25 kB/evt) for the b-exclusive stream (200 Hz); with 1 a = 10^7 s over a 7-month period this gives 2×10^10 evt/a and 500 TB/a.] • 5.5 MSi2k; • 1800 CPUs (assuming the PASTA 2006-2007 forecast); • 40 TB disk.
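
The rate arithmetic on this slide can be checked directly; a minimal sketch, assuming only the stream rates and event size quoted above (1 TB taken as 10^12 bytes):

```python
# Check of the HLT-output arithmetic on slide 6 (figures from the slide).

SECONDS_PER_YEAR = 1e7        # 1 a = 10^7 s of data taking
RAW_EVENT_SIZE_KB = 25        # RAW event size, kB/evt

stream_rates_hz = {           # HLT output streams
    "b-exclusive": 200,
    "di-muon": 600,
    "D*": 300,
    "b-inclusive": 900,
}

total_rate_hz = sum(stream_rates_hz.values())          # 2000 Hz = 2 kHz
events_per_year = total_rate_hz * SECONDS_PER_YEAR     # 2e10 evt/a
raw_tb_per_year = events_per_year * RAW_EVENT_SIZE_KB * 1e3 / 1e12  # 500 TB/a

print(f"{total_rate_hz} Hz -> {events_per_year:.0e} evt/a, {raw_tb_per_year:.0f} TB/a RAW")
```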

  7. Reconstruction • [Dataflow detail: RAW data (25 kB/evt) + calibration data → reconstruction (24 kSi2k•s/evt) → rDST data (25 kB/evt).] • Evaluate: • Track position and momentum. • Energy of electromagnetic and hadronic showers. • Particle identification (e, γ, π0, π/K, μ). • Make use of: • Calibration and alignment constants (produced from online monitoring and/or offline from a pre-processing of data associated with the sub-detector). • Detector conditions (a subset of the Experimental Control System database).

  8. Reconstruction (II) • Required CPU for 1 pass of a 1-year data set: 1.5 MSi2k•a. • Performed twice per year.

  9. Reconstruction (III) • 500 TB/a input RAW. • Stored on MSS in 2 copies: one at CERN, the other divided among the Tier-1s: • 500 TB/a @ CERN; • 500/6 = 83 TB/a @ each Tier-1. • 500 TB/a output rDST per pass. • 1000 TB/a (2 passes) stored on MSS in 1 copy divided among CERN and the Tier-1s: • 1000/7 ≈ 143 TB/a @ CERN and @ each Tier-1.
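
A minimal sketch of this MSS bookkeeping, assuming the 6 Tier-1s listed on slide 19 plus CERN (all figures from the slide):

```python
# MSS bookkeeping for one year of RAW and rDST (slide 9 figures).

N_TIER1 = 6
raw_tb = 500              # RAW input per year, kept in 2 copies
rdst_tb_per_pass = 500    # rDST output per reconstruction pass, 1 copy
passes = 2

raw_at_cern = raw_tb                          # 500 TB/a (full copy)
raw_at_each_tier1 = raw_tb / N_TIER1          # ~83 TB/a (second copy, split)
rdst_tb = passes * rdst_tb_per_pass           # 1000 TB/a in total
rdst_at_each_site = rdst_tb / (N_TIER1 + 1)   # ~143 TB/a at CERN and each Tier-1
```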

  10. Pre-selection Analysis (aka Stripping) • [Stream diagram: rDST (25 kB/evt) + RAW (25 kB/evt) → pre-selection analysis (0.2 kSi2k•s/evt) → di-muon rDST+RAW (50 kB/evt), b-exclusive DST+RAW (100 kB/evt), b-inclusive DST+RAW (100 kB/evt), D* rDST+RAW (50 kB/evt), TAG.] • Evaluate: • 4-momentum of measured particle tracks; • Primary and secondary vertices; • Candidates for composite particles; • 4-momentum of composite particles. • Apply: • Cuts based on a specific pre-selection algorithm for each of the ~40 physics channels. • At least 4 output data streams foreseen during the first data taking (b-exclusive, b-inclusive, di-muon and D*).

  11. Pre-selection Analysis (II) • Pre-selection cuts are looser with respect to the final analysis and include sidebands to extract background properties. • The events that pass the selection criteria will be fully reconstructed (full DST, 75 kB/evt). • An Event Tag Collection is created for faster reference to selected events; it contains: • a brief summary of each event’s characteristics; • the results of the pre-selection algorithms; • a reference to the actual DST record.
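
As a picture of the data structure, one tag entry could be thought of as follows; this is a schematic illustration of the bullets above, not LHCb's actual persistent format:

```python
# Schematic Event Tag Collection entry, per the bullets on slide 11
# (field names are illustrative, not the real LHCb schema).
from dataclasses import dataclass

@dataclass
class EventTag:
    run: int
    event: int
    summary: dict         # brief summary of the event's characteristics
    preselection: dict    # {channel: passed} for the ~40 pre-selection algorithms
    dst_ref: str          # reference to the actual DST record (file + key)
```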

  12. Pre-selection Analysis (III) • Required CPU for 1 pass of a 1-year data set: 0.29 MSi2k•a. • Performed 4 times per year.

  13. Pre-selection Analysis (IV) • Input: 2 passes × 500 TB/a rDST. • Output: 4 passes × (119 + 20 = 139) TB/a DST+TAG. • Stored on MSS in 2 copies: one at CERN, the other divided among the Tier-1s: • 4×139 = 556 TB/a @ CERN; • 556/6 = 93 TB/a @ each Tier-1. • Stored on disk in 7 copies: one at CERN, one at each Tier-1; older versions removed (2 versions kept): • 2×139 TB/a @ CERN; • 2×139 TB/a @ each Tier-1.
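
The storage figures above follow directly from the per-pass output; a minimal sketch (slide figures only):

```python
# Stripping-output storage arithmetic (slide 13 figures).

N_TIER1 = 6
dst_tag_tb_per_pass = 119 + 20        # DST + TAG per stripping pass = 139 TB/a
passes = 4

mss_at_cern = passes * dst_tag_tb_per_pass      # 556 TB/a (full copy on MSS)
mss_at_each_tier1 = mss_at_cern / N_TIER1       # ~93 TB/a (second copy, split)
disk_at_each_site = 2 * dst_tag_tb_per_pass     # 278 TB/a on disk (2 newest
                                                # versions, one full copy at
                                                # CERN and at each Tier-1)
```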

  14. Simulation • Simulation studies are usually performed in order to: • measure the performance of the detector and of the event selection as a function of the regions of phase space; • estimate the efficiency of the full reconstruction and analysis of the B decay channel. • Due to the large background rejection, a full simulation of background events is unfeasible. Moreover, it is better to rely on real data (mass sidebands) than on MC samples. • Simulation strategy: concentrate the simulation on what we consider main-stream signals, in particular B decays and b-inclusive events. • Statistics must be sufficient that the total error is not dominated by the MC statistical error.

  15. Simulation (II) • 2×10^9 signal events; • 2×10^9 b-inclusive events; • 10% of these events will pass the trigger simulation and will be reconstructed and stored on MSS. • 6.5 MSi2k•a required (this dominates the CPU needs of LHCb). • MC DST size (including “truth” information and relationships) is ~400 kB/evt. TAG size is ~1 kB/evt. • MSS storage: 160 TB/a.
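
The 160 TB/a figure is consistent with the other numbers on the slide; a minimal check (1 TB taken as 10^12 bytes):

```python
# Simulation MSS arithmetic (slide 15 figures).

signal_events = 2e9
b_inclusive_events = 2e9
trigger_pass_fraction = 0.10     # fraction reconstructed and stored
mc_dst_kb = 400                  # MC DST size incl. truth info, kB/evt

stored_events = (signal_events + b_inclusive_events) * trigger_pass_fraction
mss_tb_per_year = stored_events * mc_dst_kb * 1e3 / 1e12   # = 160 TB/a
```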

  16. Analysis • [Dataflow detail: selected DST+RAW and TAG → physics analysis (0.3 kSi2k•s/evt) → user DST, n-tuple, user TAG → local analysis → paper.] • Analysis starts from the stripped DST. • The output of stripping is self-contained, i.e. there is no need to navigate between files. • It further reduces the sample (typically by a factor of 5) to focus on one particular analysis channel. • It produces an n-tuple object or a private stripped DST, used by a single physicist or a small group of collaborators. • Typical analysis jobs run on a ~10^6 event sample. • Some analysis jobs will run on a larger ~10^7 event sample.

  17. Analysis (II)

  18. Analysis (III) • CPU needed in 2008 (including 60% efficiency): 1.3 MSi2k•a. • Due to better access to the RAW data, past copies of the stripped DST and the availability of MC data, we foresee CERN servicing a larger fraction of the analysis: • CERN: 25%; • Tier-1s: 75% (12.5% each). • CPU power required in 2008 at CERN: 1.3 × 0.25 = 0.32 MSi2k•a. • CPU power required in 2008 at each Tier-1: 1.3 × 0.75/6 = 0.16 MSi2k•a. • The CPU needed for analysis will grow linearly with the available data in the early years of data taking (e.g. 3.9 MSi2k•a in 2010). • Disk storage needed in 2008: ~200 TB (growing linearly with the available data in the early years of the experiment, e.g. ~600 TB in 2010).
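
A minimal sketch of the CPU-sharing arithmetic (slide figures; the 6 Tier-1s are assumed as elsewhere in the talk):

```python
# Analysis CPU split in 2008 (slide 18 figures).

analysis_msi2k_a = 1.3      # total 2008 analysis need, incl. 60% efficiency
cern_share = 0.25
n_tier1 = 6

cern_msi2k_a = analysis_msi2k_a * cern_share                        # ~0.32
each_tier1_msi2k_a = analysis_msi2k_a * (1 - cern_share) / n_tier1  # ~0.16
```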

  19. Data Location (MSS) • Tier-1s: • INFN-CNAF (Bologna, Italy); • FZK (Karlsruhe, Germany); • IN2P3 (Lyon, France); • NIKHEF (Amsterdam, Netherlands); • PIC (Barcelona, Spain); • RAL (UK). • [Diagram: MSS holdings across CERN and the Tier-1s: RAW ×2, rDST, DST ×2, MC ×2.]

  20. 2008 • Assumed first year of full data taking: • 10^7 seconds @ 2×10^32 cm^-2 s^-1; • extended over 7 months (April-October). • These are “stable running conditions”. • Data sample: [table on the slide].
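
For reference, these running conditions fix the integrated luminosity; a minimal check (the conversion 1 fb^-1 = 10^39 cm^-2 is standard, not from the slide):

```python
# Integrated luminosity implied by the 2008 running conditions above.

luminosity_cm2_s = 2e32
live_seconds = 1e7

integrated_cm2 = luminosity_cm2_s * live_seconds   # 2e39 cm^-2
integrated_fb = integrated_cm2 / 1e39              # = 2 fb^-1
```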

  21. CPU Requirements in 2008 • On-line farm resources are not presented here. • CPU efficiencies: • Production: 85%; • Analysis: 60%.

  22. CPU Requirements in 2008 (II)

  23. CPU Requirements in 2008 (III)

  24. Permanent Storage (MSS) in 2008

  25. Fast Storage (Disk) in 2008

  26. Network Bandwidth • Peak bandwidth needs exceed the average by a factor of 2.

  27. Network Bandwidth (II)

  28. Network Bandwidth (III)

  29. CPU Growth

  30. Permanent Storage (MSS) Growth

  31. Fast Storage (Disk) Growth

  32. Re-optimization Cost Comparison • “Now” estimate based on the CERN financing model (no internal LAN estimates, though). • Delays in purchasing, PASTA III report, …

  33. Tier-2 in Italy • In the LHCb computing model Monte Carlo production is performed at the Tier-2s. • LHCb-Italy currently has no priority on Tier-2 resources. • We see the following options: • Reserve some Tier-1 resources to perform Monte Carlo production as well. • Build up LHCb Tier-2(s). • Add resources for LHCb to existing Italian Tier-2s.

  34. Tier-1 • In the LHCb computing model the Tier-1s are the primary user analysis facility. We need fast random disk access at the Tier-1s. • We are investigating parallel file systems together with SAN technology as a means to achieve the required I/O performance.

  35. Testbed for Parallel File Systems @ CNAF • [Testbed layout: a rack of 36 clients and 14 file servers (GPFS, PVFS, Lustre; 40 GB IDE disk each) connected through Gigabit switches with 4 Gb trunked uplinks; a network boot server serves the 14 file servers.]
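
A hypothetical probe of the kind such a comparison needs (not the actual benchmark used at CNAF): write a large file on the mounted file system, sync it, re-read it, and report MB/s. The path below is illustrative only.

```python
# Hypothetical sequential-throughput probe for a mounted parallel file
# system (GPFS/PVFS/Lustre).
import os
import time

def throughput_mb_s(path: str, size_mb: int = 1024, block_kb: int = 1024):
    block = b"\0" * (block_kb * 1024)
    n_blocks = size_mb * 1024 // block_kb

    t0 = time.time()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())          # include the flush to disk in the timing
    write_mb_s = size_mb / (time.time() - t0)

    t0 = time.time()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_mb_s = size_mb / (time.time() - t0)
    return write_mb_s, read_mb_s

print(throughput_mb_s("/gpfs/testbed/probe.bin"))  # hypothetical mount point
```

Note that a re-read straight after the write may be served from the client page cache, so a real measurement would drop caches or read back from a different node.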

  36. Parallel File Systems: Write Throughput Comparison

  37. Parallel File Systems: Read Throughput Comparison

  38. Back-up

  39. Trigger/DAQ/Computing Re-optimization • In the original model only b-exclusive decays were collected (200 Hz). • The idea was to understand the properties of the background through the simulation of large samples of background events. • In the meantime, also having in mind the Tevatron experience, we realized that, whenever and as much as possible, we need to extract information from the real data itself: • e.g. study the background from the sidebands of the mass spectrum; • collect unbiased samples of b events, e.g. by triggering on the semileptonic decay of the other B. • The net effect is a reduction of the CPU needed for simulation but an increase in the storage needed, with no overall increase in cost.

  40. Dimuon Events • Simple and robust trigger: • L1: ~1.8 kHz of high-mass dimuon triggers without IP cuts. • HLT (with offline tracking and muID): ~600 Hz of high-mass dimuon candidates (J/ψ or above; high-mass: mass within 500 MeV of the J/ψ or B mass, or above the B mass). • Clean and abundant signals: J/ψ, ϒ(1S), …, Z mass peaks. • Unique opportunity to understand the tracking (pin down systematics): • Mass and momentum (B-field calibration): use dimuons from resonances of known masses. • IP, decay length, proper-time resolution: use J/ψ dimuons, which have a common origin. • Check of trigger biases: flat acceptance (vs proper time) for all B→J/ψX channels; could be used as a handle to understand acceptance vs proper time for other channels where IP cuts are applied. • Huge statistics enables study as a function of many parameters: geometry, kinematics (phase space).

  41. J/ signal • Loose offline selection, after L0 and L1-dimuon without IP cut (no HLT yet) • ~130 Hz of signal J/, dominated by prompt production • O(109) signal J/ per year = O(103)  CDF’s statistics • Possible conceptual use for calibration of proper-time resolution (to be studied): • Bs Dsh CP/mixing fit sensitive to ~5% variations in the global scale factor on the proper-time resolution  would help to know it to ~1%  O(105) J/ needed for such precision. • To check event-by-event errors, extract scale factors in “phase space” cells  can envisage up to 104 cells (e.g. 10 bins in 4 variables). LHCb Computing Model. 41 Domenico Galli

  42. D* Events • A dedicated selection can collect an abundant and clean D*→D0(Kπ) peak without PID requirements. • Such events can be used for PID (K and π) calibration + as an additional constraint for the mass scale, etc. • Large statistics again allows study in bins of phase space.

  43. b events • Straightforward trigger at all levels: • Require muon with minimum pT and impact parameter significance (IPS) • Rely only on one track (robustness !) • No bias on other b-hadron • Handle to study and understand our other highly-biasing B selections • Example: • Set pT threshold at 3 GeV/c and IPS threshold at 3  900 Hz output rate, including 550 Hz of events containing true b decay LHCb Computing Model. 43 Domenico Galli

  44. Estimate of Reconstruction CPU & MSS

  45. Estimate of Reconstruction CPU & MSS (II) • Required CPU for 1 pass of a 1-year data set: 1.52 MSi2k•a. • Performed twice per year. • 1st pass (during data taking, over a 7-month period): • CPU power required (assuming 85% CPU usage efficiency): (1.52-0.15)×12/7×100/85 = 2.8 MSi2k. • CPU power required for each Tier-1/CERN: 2.8/7 = 0.39 MSi2k. • 2nd pass (re-processing, during the winter shut-down, over a 2-month period): • CPU power required (assuming 85% CPU usage efficiency): 1.52×12/2×100/85 = 10.7 MSi2k. • CPU power provided by the Event Filter Farm: 5.5 MSi2k. • CPU power to be shared between CERN and the Tier-1s: 5.2 MSi2k. • CPU power required for each Tier-1/CERN: 5.2/7 = 0.74 MSi2k.
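
All the pass estimates on this and the following slides use the same conversion from integrated work to installed power: a pass needing W MSi2k•a, run over m months at efficiency ε, needs W × 12/m / ε MSi2k. A minimal sketch with the slide's figures:

```python
# Work (MSi2k*a) -> installed power (MSi2k) conversion, slides 45-48.

def required_power(work_msi2k_a: float, months: float, efficiency: float = 0.85) -> float:
    return work_msi2k_a * 12 / months / efficiency

first_pass = required_power(1.52 - 0.15, months=7)    # ~2.8 MSi2k
second_pass = required_power(1.52, months=2)          # ~10.7 MSi2k
shared = second_pass - 5.5                            # EFF provides 5.5 MSi2k
per_site = shared / 7                                 # ~0.74 MSi2k each of 7 sites
```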

  46. Estimate of Pre-selection Analysis CPU & Storage

  47. Estimate of Pre-selection Analysis CPU & Storage (II) • Required CPU for 1 pass of a 1-year data set: 0.29 MSi2k•a. • Performed 4 times per year. • 1st pass (during data taking, over a 7-month period): • CPU power required (assuming 85% CPU usage efficiency): 0.29×12/7×100/85 = 0.58 MSi2k. • CPU power required for each Tier-1/CERN: 0.58/7 = 0.08 MSi2k. • 2nd pass (after data taking, over a 1-month period): • CPU power required (assuming 85% CPU usage efficiency): 0.29×12/1×100/85 = 4.1 MSi2k. • CPU power required for each Tier-1/CERN: 4.1/7 = 0.59 MSi2k.

  48. Estimate of Pre-selection Analysis CPU & Storage (III) • 3rd pass (after re-processing, during the winter shut-down, over a 2-month period): • CPU power required (assuming 85% CPU usage efficiency): 0.29×12/2×100/85 = 2.05 MSi2k. • CPU power provided by the Event Filter Farm: 42% = 0.86 MSi2k. • CPU power to be shared between CERN and the Tier-1s: 1.19 MSi2k. • CPU power required for each Tier-1/CERN: 1.19/7 = 0.17 MSi2k. • 4th pass (before the next year's data taking, over a 1-month period): • CPU power required (assuming 85% CPU usage efficiency): 0.29×12/1×100/85 = 4.1 MSi2k. • CPU power required for each Tier-1/CERN: 4.1/7 = 0.59 MSi2k.
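
Using the required_power sketch from slide 45 above, the four stripping passes reproduce these numbers:

```python
# Stripping passes (0.29 MSi2k*a per pass, 85% efficiency), slides 47-48.

pass1 = required_power(0.29, months=7)   # ~0.58 MSi2k -> 0.08 per site (of 7)
pass2 = required_power(0.29, months=1)   # ~4.1 MSi2k  -> 0.59 per site
pass3 = required_power(0.29, months=2)   # ~2.05 MSi2k; EFF covers 42% (0.86)
pass4 = required_power(0.29, months=1)   # ~4.1 MSi2k  -> 0.59 per site
```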

  49. Estimate of Simulation CPU & MSS

  50. Estimate of Simulation CPU & MSS (II)
