
DATA QUALITY AND STRATEGIES FOR EARLY CALIBRATION at LHC



  1. DATA QUALITY AND STRATEGIES FOR EARLY CALIBRATION at LHC
  • Strategy for commissioning with early data
  • Data quality assessment
  • Calibration and alignment
  • Expected performance in the early stage
  • Evaluation of detector performance
  • Mostly based on MC (+ test beam) studies
  Claude Guyot (Saclay), Susy 2010: Performances with early data
  Most examples are taken from ATLAS, but they also apply to CMS in most cases.

  2. Expected luminosities for the first year
  Proposed scenario for early physics:
  • First collisions at 14 TeV: July 2008? (after system and beam commissioning)
  • ~4 months of proton-proton physics running in 2008:
  - phase 1: 43 bunches, L ~ 5 x 10^30 cm^-2 s^-1
  - phase 2: 75 ns bunch spacing, L ~ 1 x 10^31 → 3 x 10^31 cm^-2 s^-1
  • End of 2008: pilot run at 25 ns, L ~ 1 x 10^32 cm^-2 s^-1
  • Integrated luminosity by the end of 2008: 100-300 pb^-1? (100 pb^-1 = ~120 effective days at 10^31 cm^-2 s^-1)
  • Restart in spring 2009 with L ~ a few 10^32 cm^-2 s^-1 => integrated luminosity by the end of 2009: 1-3 fb^-1?
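  The "~120 effective days" figure is a one-line unit conversion; a minimal Python sketch of the arithmetic, assuming 86,400-second days and counting only effective running time:

    # 1 pb^-1 = 1e36 cm^-2, so an instantaneous luminosity L (cm^-2 s^-1)
    # integrates to L * 86400 / 1e36 pb^-1 per effective day of running.

    def days_for_integrated_lumi(target_pb, inst_lumi_cm2s):
        """Effective days of running needed to collect target_pb (pb^-1)."""
        pb_per_day = inst_lumi_cm2s * 86400.0 / 1e36
        return target_pb / pb_per_day

    print(days_for_integrated_lumi(100.0, 1e31))    # ~116 effective days at 1e31
    print(days_for_integrated_lumi(1000.0, 1e32))   # ~116 effective days at 1e32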

  3. How many events per experiment for calibration at the beginning? (ℓ = e or μ)
  Assumed selection efficiencies:
  • W → ℓν, Z → ℓℓ: 20%
  • tt̄ → ℓ + X: 1.5% (no b-tag, inside mass bin)
  => similar statistics to CDF and D0 today, plus lots of minimum-bias and jet events (10^7 events in 2 weeks of data taking if 20% of the trigger bandwidth is allocated)
  • 100 pb^-1 = ~3 months at 10^31 or a few days at 10^32 (data-taking efficiency ε = 50%)
  • 1 fb^-1 = ~6 months at 10^32 (ε = 50%)
  F. Gianotti
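  For scale, the yields behind these numbers follow from N = σ x ε x ∫L dt. A rough sketch; the 14 TeV cross sections below are my own ballpark values, quoted only to illustrate the size of the calibration samples, not numbers from the slide:

    XSEC_PB = {                       # very approximate 14 TeV cross sections, in pb
        "W -> l nu (per flavour)": 20000.0,
        "Z -> ll (per flavour)":    2000.0,
        "ttbar":                     830.0,
    }
    EFF = {                           # selection efficiencies quoted on the slide
        "W -> l nu (per flavour)": 0.20,
        "Z -> ll (per flavour)":   0.20,
        "ttbar":                   0.015,   # ttbar -> l+X, no b-tag, inside mass bin
    }

    def expected_events(process, int_lumi_pb):
        """N = sigma * efficiency * integrated luminosity."""
        return XSEC_PB[process] * EFF[process] * int_lumi_pb

    for proc in XSEC_PB:
        print(proc, f"{expected_events(proc, 100.0):.0f} events in 100 pb^-1")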

  4. Data Quality Will Drive the Success of LHC Experiments
  Aim at a pragmatic approach to data-quality monitoring and assessment for the initial data taking; sophisticated automatic DQ assessment should follow!
  [Plot: DØ Run II experience, V. Shary, CALOR04]
  • Though the current effort concentrates on building the DQA infrastructure, its aim is of course to achieve the best possible quality of ATLAS data!
  • CDF and DØ found that the physics output of the first 3 years of Run II was limited (among others) by:
  - calorimeter calibration and noise
  - tracking and calorimeter alignment
  - luminosity
  - performance and speed of reconstruction
  • "Monitoring of data quality is the key":
  - must have the tools ready
  - dedicated (recognized!) manpower
  • Another problem identified: "too high standards, perfectionism"

  5. Data Quality assessment tools
  Main tools for spotting detector and reconstruction problems:
  • Online DQA:
  - DCS: HV, LV, gas, temperatures, optical alignments, ...
  - ROD-level histograms: hit rates, noisy/dead channels, pedestals, T0, drift-time behaviour, ...
  - monitoring farm: full reconstruction of sampled events (rather low statistics, a few Hz) => look at the combined reconstruction
  - trigger checks: rates, reconstruction quality in the high-level triggers (LVL2, EF), threshold curves, first estimate of efficiencies, ...
  • Offline DQA at Tier-0:
  - express stream (starts 1-2 h after data taking): selection of "interesting" events by triggers (events with very high-pT leptons, e.g. > 200 GeV, or missing ET, multijets, dileptons (J/ψ, Υ, Z), prescaled minimum-bias and low-pT triggers); first full reconstruction (calibration constants from the previous iteration); fast feedback to the shift crew through automated tools (a minimal sketch of such a check follows this slide)
  - bulk reconstruction (24-48 hours after data taking): full statistics, using updated calibrations
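  As a flavour of what an automated DQ check can look like, here is a minimal sketch that compares a monitoring histogram to a reference run with a per-bin χ² and returns a green/yellow/red flag. The function name, thresholds and toy inputs are invented for illustration; this is not the ATLAS DQ framework:

    import numpy as np

    def dq_flag(observed, reference, warn=1.5, error=3.0):
        """Compare a monitoring histogram to a reference and return a DQ flag."""
        observed, reference = np.asarray(observed, float), np.asarray(reference, float)
        # normalise the reference to the observed statistics
        reference = reference * observed.sum() / reference.sum()
        variance = np.maximum(observed + reference, 1.0)      # crude Poisson errors
        chi2_ndf = np.sum((observed - reference) ** 2 / variance) / len(observed)
        if chi2_ndf < warn:
            return "GREEN", chi2_ndf
        return ("YELLOW" if chi2_ndf < error else "RED"), chi2_ndf

    rng = np.random.default_rng(1)
    ref = rng.poisson(1000, size=50)
    obs = rng.poisson(1000, size=50)
    obs[25] += 400                     # simulate e.g. a hot channel
    print(dq_flag(obs, ref))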

  6. Data flow and Data Quality Assessment in ATLAS
  [Schematic of the online/offline chain: Detector Control System and front-end → Level-1 trigger → RODs (online DQA) → Level-2 trigger (~500 nodes) → Event Builder and monitoring farm → Event Filter (>1000 nodes) → SFOs, with configuration, calibration, DCS and DQ status recorded in the online database together with the shift log and archives.
  RAW data streams (200 Hz, 320 MB/s) and a dedicated calibration stream go to CERN Tier-0 storage; express reconstruction starts after ~2 h (offline DQA), calibration/alignment processing after ~8 h (status and calibration updates), prompt bulk reconstruction after ~24 h; ESD (100 MB/s) and AOD (20 MB/s) are transferred to the Tier-1s, with late reprocessing at the Tier-1s after ~3 months using the updated calibration/alignment and DQ status.]

  7. ATLAS & CMS: Performance Overview

  8. Calibration and alignment: what are the goals in terms of momentum/energy scale and resolution?
  Expected at Day 0 vs. goal for physics:
  • EM calorimeter uniformity: ~1% (ATLAS), ~4% (CMS) at Day 0; goal < 1% (e.g. H → γγ)
  • Lepton energy scale: 0.5-2% at Day 0; goal 0.1% (W mass)
  • Hadron calorimeter uniformity: 2-3% at Day 0; goal < 1% (ETmiss)
  • Jet energy scale: < 10% at Day 0; goal 1% (top mass)
  • Tracker alignment: 20-200 μm in Rφ at Day 0; goal O(5-10 μm) (b-tagging)
  • Muon spectrometer alignment: 100 μm (ATLAS) at Day 0; goal 30 μm (ATLAS) (Z' → μμ)

  9. EM calibration
  • Online (electronic) calibration:
  - used to monitor and correct for short-term (< 1 day) response variations (pedestals, gains, noisy/dead channels)
  - allows the connection with test-beam calibrations (ATLAS) => 1% EM calorimeter uniformity and < 2% on the electron energy scale at Day 0
  - so far in the ATLAS liquid-argon EM calorimeter, less than 0.1% of channels are dead => no significant impact on physics
  • In-situ calibration using physics data:
  - mapping the ID material in front of the EM calorimeter with photon conversions
  - understanding isolated-electron identification and reconstruction
  - intercalibration with Z → e+e-
  - E/p from W → e±ν and inclusive electrons

  10. EM calibration: ID material mapping
  Material affects electrons and photons (energy loss, conversions) => the material distribution must be known to control the electron/photon identification efficiency.
  • ATLAS study (in progress): use π0 → γγ from minimum-bias events and the ratios of double conversions to single conversions to unconverted photons to determine the amount of material in front of the calorimeter.
  • Provided the reconstruction efficiency of converted photons is known, the total X0 can be obtained with a 1% error in a few days (see the sketch below).
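  A toy illustration of extracting the material thickness (in radiation lengths) from the conversion fraction, using P_conv = 1 - exp(-7t/9) and assuming the conversion-reconstruction efficiency is known from simulation; all numbers below are hypothetical:

    import numpy as np

    def material_x0(n_converted, n_photons, eps_conv):
        """Thickness t in units of X0 from the observed conversion fraction."""
        f_true = (n_converted / eps_conv) / n_photons     # efficiency-corrected conversion fraction
        return -9.0 / 7.0 * np.log(1.0 - f_true)

    # e.g. 23000 reconstructed conversions out of 1e5 photons, 60% reconstruction efficiency
    t = material_x0(23_000, 100_000, eps_conv=0.60)
    # crude statistical error from the fluctuation of the conversion count
    dt = abs(material_x0(23_000 + np.sqrt(23_000), 100_000, 0.60) - t)
    print(f"t = {t:.3f} +- {dt:.3f} X0")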

  11. Events for EM calibration
  Expected statistics of events of interest for calibration and for assessing the EM calorimetry performance.
  [Table of calibration samples: low-energy calibration samples available only at very low luminosity; Z → ee used for uniformity studies.]
  • Use W samples for E/p (calorimeter/tracker) studies.
  • Challenge: how to extrapolate from medium Z → ee energies to very high energies (> 200 GeV) => rely on test-beam data and MC simulations.

  12. EM calibration from Z → ee
  First collisions: ~10^5 Z → ee events expected in 2008.
  Constant term in the calorimeter energy resolution: c_tot = c_L ⊕ c_LR
  • c_L ≈ 0.5% demonstrated at the test beam over units of Δη x Δφ = 0.2 x 0.4
  • c_LR ≡ long-range response non-uniformities from unit to unit (400 units in total): module-to-module variations, different upstream material, etc.
  Use the Z-mass constraint to correct the long-range non-uniformities (see the sketch below). From full simulation: ~250 e± per unit are needed to achieve c_LR ~ 0.4% => ~10^5 events => absolute energy scale known to < 0.1%.
  Main difficulty: disentangling the energy scale from material effects.
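  A minimal sketch of the Z-mass intercalibration idea: if the energy response of calorimeter unit r carries a residual factor (1 + α_r), then 2 ln(m_reco/m_Z) ≈ α_i + α_j event by event, and the long-range non-uniformities can be solved for by least squares. The toy below (invented granularity, resolution and statistics) only illustrates the principle, not the ATLAS procedure:

    import numpy as np

    MZ = 91.19
    rng = np.random.default_rng(0)
    n_regions, n_events = 40, 100_000

    alpha_true = rng.normal(0.0, 0.01, n_regions)            # ~1% region-to-region spread
    i, j = rng.integers(0, n_regions, (2, n_events))          # regions hit by the two electrons
    m_reco = MZ * np.sqrt((1 + alpha_true[i]) * (1 + alpha_true[j]))
    m_reco *= rng.normal(1.0, 0.02, n_events)                 # ~2% per-event mass resolution

    # Least-squares solution of  y = A alpha  with y = 2 ln(m_reco / m_Z)
    y = 2.0 * np.log(m_reco / MZ)
    A = np.zeros((n_events, n_regions))
    A[np.arange(n_events), i] += 1.0
    A[np.arange(n_events), j] += 1.0
    alpha_fit, *_ = np.linalg.lstsq(A, y, rcond=None)

    print("residual non-uniformity after correction: %.4f" % np.std(alpha_fit - alpha_true))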

  13. Electron reconstruction efficiency using Z → ee events
  Based on the tag-and-probe method:
  • a well-identified electron (based on ID + calorimeter information) on one side (the tag electron)
  • a simple object (e.g. an isolated ID track or calorimeter EM cluster) on the other side (the probe electron)
  • the efficiency is derived from the number of events in the Z mass window (see the sketch below)
  Tag-and-probe agrees with truth matching to ~0.1% on average. A similar tag-and-probe method can be used for the muon reconstruction efficiency (using the ID and the muon spectrometer).
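  A minimal tag-and-probe sketch: the efficiency is the fraction of probes passing the identification under study, for pairs whose invariant mass falls inside the Z window, with a binomial uncertainty. Background subtraction (e.g. from mass sidebands or same-sign pairs) is omitted and the inputs are toys:

    import numpy as np

    def tag_and_probe_eff(pair_mass, probe_passes, m_lo=81.0, m_hi=101.0):
        """Efficiency of the probe selection inside the Z mass window."""
        in_window = (pair_mass > m_lo) & (pair_mass < m_hi)
        n_probes = np.count_nonzero(in_window)
        n_pass = np.count_nonzero(in_window & probe_passes)
        eff = n_pass / n_probes
        err = np.sqrt(eff * (1.0 - eff) / n_probes)          # binomial uncertainty
        return eff, err

    rng = np.random.default_rng(2)
    mass = rng.normal(91.2, 3.0, 20_000)                      # toy Z->ee candidates
    passes = rng.random(20_000) < 0.92                        # toy 92% probe efficiency
    print("efficiency = %.3f +- %.3f" % tag_and_probe_eff(mass, passes))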

  14. Jet energy scale
  Validating the energy scale of jets is a BIG challenge.
  • Startup: uncertainty ~5-10%, from MC studies, test beam and cosmics.
  • First data: embark on a data-driven JES derivation, e.g. D0 with 5 years of Run II data, using γ+jet and dijet events.
  • CMS and ATLAS: ~2-3% above 20 GeV after 1-10 fb^-1, and 1% eventually? Ambitious!

  15. Jet energy scale calibration (1)
  Jet energy calibration can be divided into four steps:
  • calorimeter tower/cluster reconstruction
  • jet finding (cone 0.4/0.7, kT, ...)
  • jet calibration from the calorimeter scale to the particle scale
  • jet calibration from the particle scale to the parton scale
  Difficulties:
  • different response to EM and non-EM showers
  • correcting for escaping and invisible energy (K0, neutrons, dead material in the calorimeter, ...)

  16. Jet Calibration Approaches
  • Global jet calibration:
  - use towers or clusters at the EM scale as input to jets
  - match a truth particle jet with each reconstructed jet
  - fit a calibration function in η and E to all matched jet pairs
  • Local hadron calibration:
  - calibrate 3D clusters independently of any jet algorithm, making an assumption on their EM or non-EM nature
  - make jets out of the calibrated clusters
  • Hadronic scale:
  - tune the simulation to describe the reconstructed jet level and map to the corresponding truth particle jet
  - from single isolated hadrons in the test beam, minimum-bias events and τ decays (E/p ratio)
  • Non-uniformity in η: from dijet events
  • Final in-situ calibration:
  - with the W mass in tt̄ → WbWb → ℓνb jjb
  - with pT balance in Z/γ + jet events
  Monte Carlo level: minimize a χ² function to find the calibration constants (weights) matching E_reco = Σ_i w_i E_i to the MC truth (a toy version of this fit follows this slide). The hadronic scale and the in-situ calibrations rely on test-beam and physics data.
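  A toy version of the global weighting fit mentioned above: with the linear ansatz E_reco = Σ_k w_k E_k, minimizing the χ² against truth jet energies is an ordinary linear least-squares problem. The three "compartments" and their responses below are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(3)
    n_jets, n_comp = 50_000, 3
    E_truth = rng.uniform(30.0, 500.0, n_jets)

    # fake detector: each compartment sees a fraction of the jet with its own response
    fractions = rng.dirichlet([4.0, 3.0, 1.0], n_jets)        # EM-like, hadronic-like, rest
    response = np.array([1.00, 0.75, 0.55])                   # assumed under-response per compartment
    E_k = fractions * response * E_truth[:, None]
    E_k *= rng.normal(1.0, 0.05, E_k.shape)                   # 5% noise per compartment

    # chi^2 = sum_jets ( sum_k w_k * E_k - E_truth )^2  ->  linear least squares
    w, *_ = np.linalg.lstsq(E_k, E_truth, rcond=None)
    E_calib = E_k @ w
    print("fitted weights:", np.round(w, 3))
    print("mean response after calibration: %.3f" % np.mean(E_calib / E_truth))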

  17. Jet energy scale: contributions to the jet signal
  From the physics reaction of interest (parton level) to the calorimeter signal:
  • longitudinal energy leakage
  • detector signal inefficiencies (dead channels, HV, ...)
  • pile-up noise from (off-time) bunch crossings
  • electronic noise
  • calorimeter signal definition (clustering, noise suppression, ...)
  • dead-material losses (front, cracks, transitions, ...)
  • detector response characteristics (e/h ≠ 1)
  • jet reconstruction algorithm efficiency
  • added tracks from in-time (same trigger) pile-up events
  • added tracks from the underlying event
  • soft tracks lost due to the magnetic field

  18. Jet energy scale (ATLAS): MC level
  Compare reconstructed jets with the MC truth (parton level) for the following corrections (local hadron calibration algorithm):
  • EM scale (red)
  • weighted (hadronic/EM difference) (blue)
  • weighted + out-of-cone corrections (green)
  • weighted + OOC + dead-material losses (black)
  To correct for the missing 8%, further calibration is needed, due to:
  • misclassification of EM/hadron clusters
  • magnetic field (bending of tracks; charged particles that do not reach the calorimeter)
  • physics (fragmentation, pile-up, underlying event, ...)

  19. Jet energy scale: in-situ calibration
  Several in-situ calibrations are being actively worked on:
  • e/p: minimum bias, τ decays (isolated pions)
  • via energy flow using φ symmetry (calorimeter response uniformity)
  • Light-jet (< 200 GeV) JES from W → jj in tt̄ events (see the sketch below):
  - event selection: isolated lepton with pT > 20 GeV, ≥ 4 jets with pT > 40 GeV, ETmiss > 20 GeV; take the 3 jets with the largest Σ pT and the 2 jets with M(jj) ~ M(W)
  - ~5000 tt̄ → ℓνb Wb events reconstructed for 1 fb^-1
  - the JES is already limited by systematics, mainly biases due to the pT cuts; studies still in progress
  Note: colourless dijets from W are different from QCD dijets: the effect of the underlying event may lead to a different JES when referred to the parton scale.
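  A toy sketch of the W → jj scale check: for a common multiplicative JES factor s, the dijet mass scales as s·m(jj), so the position of the W peak measures s directly. A real analysis fits the peak above background; here a simple windowed mean is used on invented numbers:

    import numpy as np

    MW = 80.4
    rng = np.random.default_rng(4)
    true_jes = 0.95                                            # pretend jets are 5% low
    m_jj = true_jes * rng.normal(MW, 8.0, 5000)                # ~5000 W->jj candidates per fb^-1

    window = (m_jj > 60.0) & (m_jj < 100.0)                    # crude peak window
    jes_measured = np.mean(m_jj[window]) / MW
    print("measured JES factor: %.3f (true %.3f)" % (jes_measured, true_jes))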

  20. Jet energy scale: in-situ calibration (2)
  γ / (Z → ℓℓ) + jet energy balance (see the sketch below), applied to jets at the EM scale and at the particle scale.
  Potential biases:
  • sensitivity to ISR/FSR (more to ISR)
  • contributions from the underlying event
  • η-dependent corrections
  • jet clustering effects
  • pile-up effects (high luminosity)
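  A toy γ/Z + jet balance, measuring the jet response pT(jet)/pT(reference) in bins of the reference pT; the response curve and resolutions below are invented:

    import numpy as np

    rng = np.random.default_rng(5)
    n = 200_000
    pt_ref = rng.uniform(30.0, 300.0, n)                       # photon / Z pT (well measured)
    true_resp = 0.92 + 0.04 * np.log(pt_ref / 30.0) / np.log(10.0)  # invented pT-dependent response
    pt_jet = true_resp * pt_ref * rng.normal(1.0, 0.15, n)     # 15% jet resolution

    bins = np.linspace(30.0, 300.0, 10)
    which = np.digitize(pt_ref, bins) - 1
    for b in range(len(bins) - 1):
        r = pt_jet[which == b] / pt_ref[which == b]
        print("pT %3.0f-%3.0f GeV: response = %.3f +- %.3f"
              % (bins[b], bins[b + 1], r.mean(), r.std() / np.sqrt(r.size)))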

  21. Jet energy scale: in-situ calibration (3)
  • pT balance in QCD dijet events for the η/φ dependence
  • Bootstrapping to high energy scales (see the sketch below). Basic idea:
  - select events with at least 3 jets, one having significantly more pT than all the others
  - balance this jet against the vector sum of all the others
  Advantage: huge statistics available.
  Disadvantages: an indirect method; the JES is determined with respect to a lower-pT region that has to be sufficiently well known, and intercalibration in η and φ is required to use the full statistics.
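  A minimal sketch of the multijet balance: the high-pT jet is compared to the vector sum of the lower-pT (already calibrated) recoil jets in the transverse plane. The event-record format and numbers are invented:

    import numpy as np

    def recoil_balance(jets):
        """jets: array of (pT, phi) sorted by pT, leading jet first."""
        px = jets[:, 0] * np.cos(jets[:, 1])
        py = jets[:, 0] * np.sin(jets[:, 1])
        pt_recoil = np.hypot(px[1:].sum(), py[1:].sum())        # vector sum of sub-leading jets
        return jets[0, 0] / pt_recoil                           # response of the high-pT jet

    # one toy event: 500 GeV leading jet recoiling against three softer jets
    event = np.array([[500.0, 0.00],
                      [260.0, np.pi - 0.10],
                      [180.0, np.pi + 0.25],
                      [ 90.0, np.pi - 0.60]])
    print("leading-jet / recoil pT ratio: %.3f" % recoil_balance(event))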

  22. Jet energy scale: in-situ calibration (4), bootstrapping
  • 300k events: JES assumed to be known up to 350 GeV
  • 10 million events: JES assumed to be known up to 380 GeV

  23. Conclusion on Jet Energy Calibration

  24. Missing transverse energy: ETmiss
  Goal: look for escaping particles via the transverse-momentum imbalance.
  But:
  • detector effects (holes, noise, ...)
  • finite resolution
  • wrong assignment of clusters (EM vs hadronic)
  • fake and wrongly reconstructed muons
  • punch-through at very high ET
  • QCD jets can have real ETmiss
  Difficult! Day 1: poor resolution.
  The expected "nominal" resolution is deduced from a pT-balance analysis of dijet and minimum-bias events (a toy fit of the resolution parameterisation follows this slide).
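  The "nominal" resolution is commonly parameterised as σ(Ex^miss) = σ(Ey^miss) ≈ k·√(ΣET), with k of order 0.5 GeV^1/2 for ATLAS-like calorimetry. A toy fit of k from events with no true missing energy (dijet / minimum-bias-like), on invented data:

    import numpy as np

    rng = np.random.default_rng(7)
    sum_et = rng.uniform(50.0, 1500.0, 100_000)                # scalar sum ET per event (GeV)
    k_true = 0.50
    ex_miss = rng.normal(0.0, k_true * np.sqrt(sum_et))        # fake ETmiss x-component

    # bin in sum ET, measure the spread of Ex_miss, fit sigma = k * sqrt(sum ET)
    bins = np.linspace(50.0, 1500.0, 15)
    centers = 0.5 * (bins[:-1] + bins[1:])
    sigma = np.array([ex_miss[(sum_et >= lo) & (sum_et < hi)].std()
                      for lo, hi in zip(bins[:-1], bins[1:])])
    k_fit = np.sum(sigma * np.sqrt(centers)) / np.sum(centers)  # least squares through the origin
    print("fitted k = %.3f GeV^1/2 (true %.2f)" % (k_fit, k_true))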

  25. Missing-ET performance assessment
  In-situ ETmiss scale determination from the Z → ττ → lepton-hadron channel: what can we do with 100 pb^-1 of first data?
  • Use single-lepton-trigger events
  • Select Z → ττ → lepton-hadron candidates (opposite-sign lepton-hadron pairs)
  • Reconstruct the invariant Z mass (needs assumptions on the neutrino directions; see the sketch below)
  • Subtract the backgrounds using the same-sign lepton-hadron events
  • Use the reconstructed invariant mass to tune the ETmiss scale
  In 100 pb^-1, expect: ~150,000 Z → ττ; ~70,000 Z → ττ → lepton-hadron; ~7,000 with pT(e) or pT(μ) > 15 GeV.
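  One common way to supply the "assumptions on the neutrino directions" is the collinear approximation: each neutrino system is taken along its visible τ decay product, the visible momentum fractions x1, x2 follow from the measured ETmiss, and m(ττ) = m_vis/√(x1·x2). The sketch below is that generic method, not necessarily the one used in the ATLAS study; the event numbers are toys:

    import numpy as np

    def collinear_mass(p1, p2, met_x, met_y, m_vis):
        """p1, p2: (px, py) of the two visible tau decay products (GeV)."""
        # Solve  MET = p1*(1/x1 - 1) + p2*(1/x2 - 1)  for (1/x1, 1/x2)
        A = np.array([[p1[0], p2[0]],
                      [p1[1], p2[1]]])
        b = np.array([met_x + p1[0] + p2[0],
                      met_y + p1[1] + p2[1]])
        inv_x = np.linalg.solve(A, b)                 # (1/x1, 1/x2)
        x1, x2 = 1.0 / inv_x
        if not (0.0 < x1 <= 1.0 and 0.0 < x2 <= 1.0):
            return None                               # unphysical solution (e.g. back-to-back taus)
        return m_vis / np.sqrt(x1 * x2)

    # toy event: lepton and tau-jet transverse momenta plus missing ET, all in GeV
    print(collinear_mass(p1=(40.0, 10.0), p2=(-25.0, -30.0),
                         met_x=18.0, met_y=-2.0, m_vis=55.0))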

  26. In-situ ETmiss scale determination from the Z → ττ → lepton-hadron channel (2)
  Backgrounds:
  • inclusive W → eν (filter cuts: pT(e) > 10 GeV, |η| < 2.7)
  • inclusive W → μν (filter cuts: pT(μ) > 5 GeV, |η| < 2.8)
  • tt̄ decaying to at least 1 lepton
  • Z → ee (filter: m(ee) > 60 GeV, 1 e with |η| < 2.7, pT > 10 GeV)
  • jet and bb̄ samples not yet produced
  Starting from 70,000 Z → ττ → lepton-hadron events, only ~215 events are selected; ~5% on the ETmiss scale is achievable after ~3 months.
  The same event sample (with different cuts) can be used to assess the τ-jet scale from the reconstructed visible mass.

  27. Inner detector alignment
  At start-up: hardware-based alignment plus cosmics => 20-200 μm accuracy at startup.
  • ATLAS: frequency-scanning interferometry in the silicon strip detector; 842 grid-line lengths measured precisely => measures structure shapes, not individual sensors => monitors movements over ~hours.
  • CMS: laser alignment.
  Track-based alignment using minimum-bias, Z → ee, ... events: a few days of data taking give sufficient statistics (~10^5 clean tracks).
  Challenge: < 10 μm precision (5 μm for the pixel layers); 120,000 alignment parameters (CMS), 36,000 parameters (ATLAS).

  28. Inner detector alignment with tracks (ATLAS)
  Three procedures for SCT/pixel alignment:
  • Robust and Local χ² algorithms:
  - break up the 36k x 36k matrix into 6 x 6 matrices for each module
  - correlations are incorporated through iterations (a toy version of this iterative local approach follows this slide)
  - Robust: use only residuals from hits in adjacent overlapping modules (overlap residuals)
  • Global: invert a giant 36k x 36k matrix; limitation: ~9 GB of memory and speed
  [Plots: residuals for the ideal geometry and after alignment; M(Z → μμ) after alignment]
  • When using only pointing tracks, the fit converges nicely (residuals < 10 μm), but towards a geometry leading to momentum shifts (the presence of so-called weak modes) => non-pointing tracks (e.g. cosmics) need to be added.
  Work still in progress.
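  A toy version of the iterative "local" approach: straight tracks crossing a small telescope of planes, each with an unknown offset; every iteration refits the tracks with the current alignment and updates each plane by its mean residual, so correlations between planes are absorbed over the iterations. The global translation and shear of the telescope are not constrained by the tracks themselves (simple analogues of the weak modes mentioned above), so two reference planes are held fixed. Purely illustrative numbers, not the ATLAS implementation:

    import numpy as np

    rng = np.random.default_rng(8)
    z = np.linspace(0.0, 1.0, 8)                     # plane positions (arbitrary units)
    dx_true = rng.normal(0.0, 0.10, z.size)          # true misalignments
    dx_true[[0, -1]] = 0.0                           # reference planes defined as aligned
    sigma = 0.02                                     # hit resolution

    n_tracks = 5000
    a = rng.normal(0.0, 0.5, n_tracks)               # track intercepts
    b = rng.normal(0.0, 0.5, n_tracks)               # track slopes
    hits = a[:, None] + np.outer(b, z) + dx_true + rng.normal(0.0, sigma, (n_tracks, z.size))

    dx = np.zeros(z.size)                            # alignment corrections to determine
    for _ in range(20):
        x = hits - dx                                # hits corrected by the current alignment
        # straight-line fit per track (least squares in intercept and slope)
        bb = (np.mean(x * z, axis=1) - x.mean(1) * z.mean()) / (np.mean(z * z) - z.mean() ** 2)
        aa = x.mean(1) - bb * z.mean()
        residuals = x - (aa[:, None] + np.outer(bb, z))
        dx += residuals.mean(axis=0)                 # local update, plane by plane
        dx[[0, -1]] = 0.0                            # keep the reference planes fixed

    print("residual misalignment RMS: %.4f" % np.std(dx - dx_true))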

  29. ATLAS muon spectrometer calibration and alignment (1)
  The muon reconstruction efficiency and resolution depend on the knowledge of the following effects:
  • chamber positions (alignment)
  • chamber deformations (including temperature effects)
  • wire-sag control (MDT)
  • T0 and R-T relations (MDT)
  • B-field map determination (good progress, should be OK)
  • dead/noisy/anomalous channels (data quality)
  • n/γ cavern background
  • geometric material distribution
  • reconstruction algorithm optimization
  Use the first data to evaluate the cavern background level and the validity of the MC calculations.

  30. MDT tube calibration with first data (ATLAS)
  The single-tube resolution will not restrict the spectrometer performance at the start.

  31. Muon spectrometer alignment (ATLAS)
  Muon spectrometer alignment is primarily based on optical systems (~10,000 CCD/CMOS sensors) in the barrel and end-caps.
  • In absolute mode (based on the positioning and calibration precision of the sensors), a level of ~100-200 μm on the sagitta measurement should be reached.
  • But from X-ray tomography we know that a significant fraction of the sensors are badly positioned (> 500 μm, especially in the barrel).

  32. Muon spectrometer alignment (2)
  • Alignment with pointing straight tracks (a run with B = 0 in the toroids) is required. With ~1000 tracks per chamber tower (600 towers in ATLAS, i.e. a run of a few days at L = 10^31), a precision of ~100 μm on the sagitta measurement can be reached.
  • Use the optical system in relative mode (precision < 20 μm) to measure the movements when the field is switched on.
  • Full precision is obtained with ~10,000 muons per tower (2009).
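  The link between sagitta precision and momentum resolution can be made explicit with s ≈ 0.3·B·L²/(8·pT) (s in m, B in T, L in m, pT in GeV) and ΔpT/pT ≈ Δs/s when the alignment error dominates. The field and lever-arm values below are rough, ATLAS-muon-spectrometer-like numbers chosen only for illustration:

    def sagitta_m(pt_gev, b_tesla=0.5, lever_arm_m=5.0):
        """Sagitta of a track of transverse momentum pt_gev over lever_arm_m in b_tesla."""
        return 0.3 * b_tesla * lever_arm_m**2 / (8.0 * pt_gev)

    for pt in (100.0, 1000.0):
        s = sagitta_m(pt)
        for align_err_um in (200.0, 100.0, 30.0):
            dp_over_p = align_err_um * 1e-6 / s
            print(f"pT = {pt:6.0f} GeV: sagitta = {s*1e6:7.0f} um, "
                  f"{align_err_um:3.0f} um alignment -> dpT/pT ~ {dp_over_p:.1%}")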

  33. High-mass di-muon pairs
  High mass: sensitive to Z', graviton resonances, etc.; also to large extra dimensions (deviations from the SM spectrum).
  Impact of misalignments on the reconstruction of the signal and of the Drell-Yan background.

  34. Summary and comments (1)
  Not an exhaustive review! Not discussed:
  • τ calibration
  • b-tagging
  • all trigger-level calibrations
  • trigger efficiency assessment (using pass-through triggers)
  • B field
  • relative ID-muon spectrometer alignment with tracks (ATLAS)
  • very low-energy muon identification (calorimeter only, or calorimeter + first muon layer)
  • photon and jet pointing precision, ...
  • plus: test-beam results, simulation aspects (detector description and Geant4 tuning to reproduce the detector response at test beams), software infrastructure (online + offline), and many other topics related to Data Quality!
  Thanks to the numerous ATLAS colleagues who provided me with a lot of this material.

  35. Summary and comments (2)
  • Hope that we can start LHC operations with reasonably efficient and calibrated detectors:
  - e.g. < 2% on the e/γ and μ energy scales, < 5% on the jet scale and < 10% on ETmiss
  - this should be sufficient for hunting the possible non-standard phenomena which could show up with the first data (SUSY, black holes!, ...)
  • Data Quality Assessment will be of the utmost importance:
  - the DQA software infrastructure should be ready in time for fast feedback on the hardware side: dead/noisy channels, cabling maps, wrong calibration/alignment constants, ...
  - fast feedback on the trigger, from trigger performance assessment, is also mandatory
  - keep it simple at the start (a few basic histograms) but flexible (changing conditions)
  • Commissioning is very important: do as much as possible with cosmics and beam halo prior to collisions, e.g. alignment, dead/noisy channels, cabling maps, and exercising the DQA tools.
