This document summarizes a presentation given at the Les Houches Workshop in May 2003 by Fabiola Gianotti (CERN) and Giacomo Polesello (INFN Pavia). It covers the physics goals for the first 10 fb⁻¹ of ATLAS data, the use of the (initially staged) detector, and the data samples required for calibration and physics measurements. Emphasis is placed on understanding Standard Model processes and preparing for New Physics discoveries, including potential Higgs boson signals, based on the event rates expected from the ATLAS detector at the LHC.
The first 10 fb⁻¹ in ATLAS
Les Houches Workshop, 27/5/2003
Fabiola Gianotti (CERN), Giacomo Polesello (INFN Pavia)
• Which physics ?
• Which detector ?
• Which data samples for physics and calibration ?
Expected event rates at production in ATLAS at L = 10³³ cm⁻² s⁻¹ :

Process | Events/s | Events for 10 fb⁻¹ | Total statistics collected at previous machines by 2007
W → eν | 15 | 10⁸ | 10⁴ LEP / 10⁷ Tevatron
Z → ee | 1.5 | 10⁷ | 10⁷ LEP
tt | 1 | 10⁷ | 10⁴ Tevatron
bb | 10⁶ | 10¹²–10¹³ | 10⁹ Belle/BaBar ?
H (m = 130 GeV) | 0.02 | 10⁵ | ?
g̃g̃ (m = 1 TeV) | 0.001 | 10⁴ | ---
Black holes, m > 3 TeV (MD = 3 TeV, n = 4) | 0.0001 | 10³ | ---

• Already in first year, large statistics expected from:
  -- known SM processes → understand detector and physics at √s = 14 TeV
  -- several New Physics scenarios
• ~10⁷ events to tape every 3 days assuming 30% data-taking efficiency
  → statistical errors negligible after a few days
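To make the relation between the columns explicit, here is a minimal sketch (not from the talk; the W → eν cross-section value is an illustrative assumption) converting a cross-section into the production rate at L = 10³³ cm⁻² s⁻¹ and into the event yield for 10 fb⁻¹:

```python
# Minimal sketch (not from the talk): relation between cross-section,
# production rate at L = 1e33 cm^-2 s^-1 and event yield for 10 fb^-1.
# The W -> e nu cross-section below is an illustrative assumption.

L_INST = 1e33          # instantaneous luminosity, cm^-2 s^-1
FB_TO_CM2 = 1e-39      # 1 fb = 1e-39 cm^2

def rate_hz(sigma_fb):
    """Production rate in Hz for a cross-section given in fb."""
    return sigma_fb * FB_TO_CM2 * L_INST

def events_for(sigma_fb, lumi_fb):
    """Expected number of produced events for an integrated luminosity in fb^-1."""
    return sigma_fb * lumi_fb

sigma_w_enu_fb = 1.5e7   # assumed ~15 nb effective cross-section for W -> e nu
print(rate_hz(sigma_w_enu_fb))          # ~15 events/s, as in the table
print(events_for(sigma_w_enu_fb, 10.0)) # ~1.5e8 events for 10 fb^-1
```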
Which studies and physics the first year ?

• Understand and calibrate detector and trigger in situ using well-known physics samples:
  e.g. - Z → ee, Z → μμ : easy-to-trigger channels
       - tt → bℓν bjj
  Also useful to debug/optimize software (reconstruction, …)

• Understand basic SM physics at √s = 14 TeV → first checks of Monte Carlos
  e.g. - measure cross-sections for e.g. minimum bias, W, Z, tt, QCD jets (to ~10-20%),
         event features, particle multiplicities, pT and mass spectra, angular distributions, etc.
       - measure top mass (to 5-7 GeV) → give feedback on detector performance (jet scale …)

• Prepare the road to discovery:
  -- measure backgrounds to New Physics : e.g. tt and W/Z + jets (omnipresent …)
  -- look at specific “control samples” (background-enriched) for the individual channels:
     e.g. ttjj “gauges” the ttbb background to ttH

• Look for New Physics potentially accessible in first year :
  e.g. -- SM Higgs (over some mass ranges)
       -- SUSY (squarks, gluinos, Higgs bosons, ….)
       -- others …. including surprises ?

Note: if mH ≈ 120 GeV, fast Higgs discovery may be crucial in case of competition with Tevatron
→ may be the most difficult physics goal for the first year …
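As an illustration of the cross-section measurements mentioned above, here is a minimal counting-experiment sketch; all inputs (event counts, efficiency, luminosity and its uncertainty) are invented placeholders, not numbers from the talk:

```python
# Minimal counting-experiment sketch (all numbers are invented placeholders,
# not values from the talk): sigma = (N_obs - N_bkg) / (efficiency * L_int),
# with a rough statistical (+) luminosity uncertainty.

import math

def cross_section_fb(n_obs, n_bkg, efficiency, lumi_fb, lumi_rel_err=0.10):
    """Measured cross-section in fb and its approximate relative uncertainty."""
    sigma = (n_obs - n_bkg) / (efficiency * lumi_fb)
    stat_rel = math.sqrt(n_obs) / max(n_obs - n_bkg, 1)
    total_rel = math.sqrt(stat_rel**2 + lumi_rel_err**2)
    return sigma, total_rel

# Hypothetical early counting measurement with 0.1 fb^-1
sigma, rel_err = cross_section_fb(n_obs=120000, n_bkg=8000, efficiency=0.20, lumi_fb=0.1)
print(f"sigma ~ {sigma:.3g} fb, relative uncertainty ~ {rel_err:.0%}")
```

With these placeholder inputs the uncertainty is dominated by the assumed ~10% luminosity error, consistent with the ~10-20% precision quoted above.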
The first “physics data” to commission the detector : cosmics
• End 2006 - beginning 2007: 1-3 months of data taking with cosmics.
• Full simulations of cosmic muons in ATLAS started (include cavern overburden).
• Expected rates in the detector : few Hz
  → very useful events for debugging the detector, first tracker alignment, calorimeter energy reconstruction, etc.
A typical event ….
• One track reconstructed in Muon chambers
• Two tracks reconstructed in Inner Detector
April-May 2007 : only one beam in the machine (most of the time …)
→ here “physics data” are beam-halo muons and beam-gas interactions. Expected in ATLAS:
  -- ~10⁵ (10⁸) beam-halo muons in tracker (muon chambers)
  -- ~10⁶ tracks in tracker from beam-gas
First pp collisions : collect data for calibration and to understand “basic” physics

Well-known, clean processes from standard trigger menu: e.g.
• Z → ee : ECAL inter-calibration, absolute E-scale to ~0.1%, etc. (~6 × 10⁴ evts/day after cuts)
• Z → μμ : p-scale in tracker and Muon Spectrometer, etc.
• tt → bℓν bjj : absolute jet-scale from W → jj (~1%), study b-tag, reconstruction of complex final states (for ttH), etc. (~10⁴ evts/day after cuts, S/B ~ 65)

Additional lower-threshold samples (pre-scaled triggers) :
• Minimum-bias events : pp interaction properties, MC tuning, LVL1 efficiency, radiation background in Muon chambers, etc.
• QCD jets (20 ≲ ET ≲ 400 GeV) : QCD cross-sections and MC tuning, trigger efficiency, calorimeter inter-calibration, jet algorithms, background to Higgs, SUSY, etc.
• Inclusive e, pT > 10 GeV : trigger efficiency, ECAL calibration, ID alignment, E/p, e reconstruction at low pT, etc.
• Inclusive μ, pT > 6 GeV : trigger efficiency, μ reconstruction at low pT, E-loss in calorimeters, ID alignment, etc.

These are only a few examples … ~10⁷ events per sample needed
→ ≈10% of total trigger rate under normal operation (more at the beginning)
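A rough bookkeeping sketch of what "~10⁷ events per sample" and pre-scaled triggers imply; the allocated output rate and the raw trigger rate below are illustrative assumptions, while the 30% duty cycle is the data-taking efficiency quoted earlier:

```python
# Rough bookkeeping sketch (assumptions, not from the talk): time needed to
# collect ~1e7 events of a calibration sample at an assumed allocated output
# rate, and the prescale a high-rate selection needs to fit that allocation.

SECONDS_PER_DAY = 86400.0

def days_to_collect(n_events, allocated_rate_hz, duty_cycle=0.3):
    """Days of running needed to record n_events at the allocated rate."""
    return n_events / (allocated_rate_hz * duty_cycle * SECONDS_PER_DAY)

def prescale_factor(raw_rate_hz, allocated_rate_hz):
    """Prescale N such that only 1 in N accepted events is kept."""
    return max(1.0, raw_rate_hz / allocated_rate_hz)

# Assumed: ~20 Hz allocated to one pre-scaled sample, raw low-threshold
# jet-trigger rate of ~1e5 Hz (both illustrative numbers).
print(days_to_collect(1e7, allocated_rate_hz=20.0))              # ~19 days
print(prescale_factor(raw_rate_hz=1e5, allocated_rate_hz=20.0))  # prescale ~5000
```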
Then : the “discovery phase” can start ? SM Higgs

mH ~ 115 GeV, 10 fb⁻¹ (LEP limit: mH > 114.4 GeV; discovery “easy” with H → 4ℓ only at higher masses)

 | H → γγ | ttH (H → bb) | qqH → qqττ (ℓ + τ-had)
S | 130 | 15 | ~10
B | 4300 | 45 | ~10
S/√B | 2.0 | 2.2 | ~2.7
total S/√B ≈ 4
The 3 channels : H → γγ ; ttH → ttbb → bℓν bjj bb ; qqH → qqττ

Remarks:
• Each channel contributes ~2σ to the total significance → observation of all channels important to extract a convincing signal in first year(s)
• The 3 channels are complementary → robustness:
  -- different production and decay modes
  -- different backgrounds
  -- different detector/performance requirements:
     -- ECAL crucial for H → γγ (in particular response uniformity) : σ/m ~ 1% needed
     -- b-tagging crucial for ttH : 4 b-tagged jets needed to reduce combinatorics
     -- efficient jet reconstruction over |η| < 5 crucial for qqH → qqττ :
        forward jet tag and central jet veto needed against background
Note : all require “low” trigger thresholds. E.g. ttH analysis cuts : pT (ℓ) > 20 GeV, pT (jets) > 15-30 GeV
mH ~ 130 GeV, 10 fb⁻¹ (ℓ = e, μ)

 | H → γγ | qqH → qqττ (ℓ + τ-had) | H → 4ℓ | qqH → qqWW
S | 120 | ~8 | ~5 | 18
B | 3400 | ~6 | <1 | 15
S/√B | 2.0 | ~2.7 | 2.8 | 3.9
total S/√B ≈ 6

• 4 complementary channels for physics and for detector requirements
• S/√B ≲ 3 per channel (except the qqWW counting channel) → observation of all channels important in first year
• H → 4ℓ low rate but very clean: small background, narrow mass peak
• Detector requirements:
  -- ≳ 90% e, μ efficiency at low pT (analysis cuts : pT 1,2,3,4 > 20, 20, 7, 7 GeV),
     in particular low di-lepton LVL1 thresholds
  -- σ/m ~ 1%, tails < 10% → E, p measurement and resolution in ECAL and tracker at low pT
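The "total S/√B" rows above are consistent with adding the individual channels in quadrature; a minimal sketch of that combination (the combination rule itself, and the neglect of systematics, are assumptions about how the totals were obtained):

```python
# Minimal sketch (the quadrature combination rule is an assumption about how
# the "total S/sqrt(B)" rows were obtained): independent channels combined
# by adding S/sqrt(B) in quadrature, ignoring systematic uncertainties.

import math

def significance(s, b):
    """Simple counting significance S/sqrt(B)."""
    return s / math.sqrt(b)

def combine(zs):
    """Quadrature combination of independent channel significances."""
    return math.sqrt(sum(z * z for z in zs))

print(significance(120, 3400))        # H -> gamma gamma at mH ~ 130 GeV: ~2.0
print(combine([2.0, 2.2, 2.7]))       # mH ~ 115 GeV channels: ~4
print(combine([2.0, 2.7, 2.8, 3.9]))  # mH ~ 130 GeV channels: ~6
```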
MSSM Higgs bosons h, H, A, H±
[Plot: A/H → μμ signal peak, tanβ = 38, width ~ 11 GeV]

• mh < 135 GeV ; mA ≈ mH ≈ mH± at large mA
• 5σ discovery curves for A, H, H± ; cross-section ~ tan²β
• Best sensitivity from A/H → ττ, H± → τν
• bbA/H → μμ :
  -- covers a good part of the region not excluded by LEP
  -- experimentally easier than A/H → ττ
  -- crucial detector : Muon Spectrometer (high-pT muons from narrow resonance)
Here 5σ discovery of bbA/H → 4b possible at Tevatron with 15 fb⁻¹
SUPERSYMMETRY

• Large cross-section : ~100 events/day at 10³³ cm⁻² s⁻¹ for squark/gluino masses ~1 TeV
• Multijet + ETmiss is the most powerful and model-independent signature (if R-parity conserved)
• Signal could be confirmed by (more model-dependent) lepton signatures

ATLAS 5σ discovery curves :
  ~ “100 days” : up to ~2.3 TeV
  ~ “10 days” : up to ~2 TeV
  ~ “1 day” : up to ~1.5 TeV
Peak position correlated to MSUSY
[Plots: Meff distributions of signal and background, events for 10 fb⁻¹ (ATLFAST), with ET(j1) > 80 GeV and ETmiss > 80 GeV ; Tevatron reach indicated]

• From the Meff peak, first/fast measurement of the SUSY mass scale to ≈20% (10 fb⁻¹, mSUGRA)
• Detector/performance requirements:
  -- calorimeter coverage and hermeticity for |η| < 5
  -- calorimeter energy scale calibration to ~5%
  -- “low” Jet+ETmiss trigger thresholds for low masses at overlap with Tevatron region (~400 GeV)
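A minimal sketch of the effective-mass variable whose peak tracks the SUSY mass scale; the exact definition used in the analysis (number of jets summed, thresholds) is not spelled out here, so the choices below are illustrative assumptions:

```python
# Minimal sketch (illustrative assumptions: number of jets, thresholds and the
# event below are not from the talk) of the effective-mass variable whose
# peak tracks the SUSY mass scale.

def effective_mass(met, jet_pts, n_jets=4):
    """Meff = ETmiss + scalar sum of the pT of the leading n_jets jets (GeV)."""
    leading = sorted(jet_pts, reverse=True)[:n_jets]
    return met + sum(leading)

def passes_selection(met, jet_pts, met_cut=80.0, jet1_cut=80.0):
    """Selection in the spirit of the cuts quoted above:
    ETmiss > 80 GeV and leading-jet ET > 80 GeV."""
    return met > met_cut and max(jet_pts, default=0.0) > jet1_cut

jets = [310.0, 180.0, 95.0, 60.0, 25.0]   # hypothetical event, GeV
if passes_selection(250.0, jets):
    print(effective_mass(250.0, jets))    # ~895 GeV
```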
[Plot: reconstructed ETmiss in Z (→ ℓℓ) + jet events, full simulation ; events with ETmiss > 50 GeV arise if the leading jet is undetected ; the 2 events in the tail contain a high-pT neutrino]

• Cracks :
  -- can be monitored with Z (→ ℓℓ) + jets
  -- impact minimised by ETmiss isolation, removal of jets in cracks
• “Poor” initial calorimeter calibration may increase trigger rates → impact on low-mass SUSY
  (Very) pessimistic uncorrected non-compensation, simulated by a +20% enhancement of the EM scale
  → +50% rate for ETmiss > 80 GeV
Which detector the first year ?

Staged detector components:
-- 1 pixel layer
-- TRT outer end-cap
-- Gap scintillator
-- EEL/EES MDT and half CSC
-- Part of forward shielding
-- Part of LAr ROD
-- Large part of HLT/DAQ processors

Guiding physics principles:
-- all sub-detectors needed already in 1st year
-- physics potential decreases fast with decreasing η coverage (e.g. H significance decreases linearly)
-- full radial redundancy in tracking less crucial at ~10³³
Technical (e.g. installation) and schedule constraints
Summary of physics impact of staging initial detector

Staged item | Main impact during first run on | Effect
1 pixel layer | ttH → ttbb | ~8% loss in significance
Gap scintillator | H → 4e | ~8% loss in significance
MDT | A/H → 2μ | ~5% loss in significance for m ~ 300 GeV
Trigger processors | B-physics | program jeopardised
 | High-pT physics | no safety margin (e.g. for EM triggers)

→ Requires 10-15% more integrated luminosity to compensate.

Complete detector needed at high luminosity:
-- robust pattern recognition (efficiency, fake rate) in the presence of pile-up and radiation background
-- muon measurement
-- powerful b-tag
-- robustness against detector aging and L > 10³⁴
-- precise measurements (e.g. light Higgs) may require low trigger thresholds at (very) high pT
Conclusions
• ATLAS has potentially an impressive physics programme right from the beginning.
  Event statistics : 1 day at LHC ≈ 10 years at previous machines in some cases
• Construction quality checks and beam tests of series detector modules show that the detector “as built” should give a good starting-point performance.
• However, a lot of data (and time …) will be needed at the beginning to:
  -- commission (to < 1% in most cases) the detector and trigger in situ
  -- reach the performance needed to optimise the physics potential
  -- understand “basic” physics at √s = 14 TeV and normalise MC generators
  -- measure backgrounds to New Physics → extract a convincing “early” signal
• Redundancy from several control samples is mandatory, especially for difficult channels (e.g. light Higgs) and measurements (e.g. W mass)
• The initial “physics data” (cosmics, beam-halo, beam-gas, first collisions, etc.) are therefore crucial to reach the “discovery mode” quickly.
-- HLT/DAQ deferrals limit available networking and computing for HLT → limit LVL1 output rate
-- Large uncertainties on LVL1 affordable rate vs money (component cost, software performance, etc.)

An example for illustration … (real thresholds set for 95% efficiency at these ET ; the thresholds listed per selection correspond to the three scenarios below)

Selection (examples …) | LVL1 rate (kHz), L = 1 × 10³³, no deferrals | LVL1 rate (kHz), L = 2 × 10³³, no deferrals | LVL1 rate (kHz), L = 2 × 10³³, with deferrals
MU6, 8, 20 | 23 | 19 | 0.8
2MU6 | --- | 0.2 | 0.2
EM20i, 25, 25 | 11 | 12 | 12
2EM15i, 15, 15 | 2 | 4 | 4
J180, 200, 200 | 0.2 | 0.2 | 0.2
3J75, 90, 90 | 0.2 | 0.2 | 0.2
4J55, 65, 65 | 0.2 | 0.2 | 0.2
J50+xE50, 60, 60 | 0.4 | 0.4 | 0.4
TAU20, 25, 25 + xE30 | 2 | 2 | 2
MU10+EM15i | --- | 0.1 | 0.1
Others (pre-scaled, etc.) | 5 | 5 | 5
Total | ~44 | ~43 | ~25

LVL1 designed for 75 kHz → room for a factor ~2 safety without deferrals ; with deferrals, ~25 kHz is the likely maximum affordable rate, with no room for a safety factor.
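A small bookkeeping sketch for a menu like the "with deferrals" column above: summing the per-selection rates and comparing with the 75 kHz design rate and the assumed ~25 kHz affordable rate. The menu entries and the comparison are illustrative, not the actual trigger configuration:

```python
# Small bookkeeping sketch (illustrative, not the actual trigger configuration):
# sum the per-selection rates of the "with deferrals" column and compare with
# the 75 kHz LVL1 design rate and the assumed ~25 kHz affordable rate.

LVL1_DESIGN_KHZ = 75.0
AFFORDABLE_KHZ = 25.0   # assumed maximum affordable rate with deferrals

menu_khz = {
    "MU20": 0.8, "2MU6": 0.2, "EM25i": 12.0, "2EM15": 4.0,
    "J200": 0.2, "3J90": 0.2, "4J65": 0.2, "J60+xE60": 0.4,
    "TAU25+xE30": 2.0, "MU10+EM15i": 0.1, "others (pre-scaled)": 5.0,
}

total = sum(menu_khz.values())
print(f"total LVL1 rate ~ {total:.1f} kHz")                              # ~25 kHz
print(f"headroom vs affordable rate: {AFFORDABLE_KHZ - total:.1f} kHz")  # ~none
print(f"factor vs design rate: {LVL1_DESIGN_KHZ / total:.1f}")           # ~3, in hardware only
```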
Which data samples ? High-Level-Trigger output

Total trigger rate to storage at 2 × 10³³ reduced from ~540 Hz (HLT/DAQ TP, 2000) to ~200 Hz (now)

Selection (examples …) | Rate to storage at 2 × 10³³ (Hz) | Physics motivations (examples …)
e25i, 2e15i | ~40 (55% W/b/c → eX) | Low-mass Higgs (ttH, H → 4ℓ, qqH)
μ20i, 2μ10 | ~40 (85% W/b/c → μX) | W, Z, top, New Physics ?
γ60i, 2γ20i | ~40 (57% prompt γ) | H → γγ, New Physics (e.g. X → γγ, mX ~ 500 GeV) ?
j400, 3j165, 4j110 | ~25 | Overlap with Tevatron for new X → jj in danger …
j70 + xE70 | ~20 | SUSY : ~400 GeV squarks/gluinos
τ35 + xE45 | ~5 | MSSM Higgs, New Physics (3rd family !) ? More difficult at high L
2μ6 (+ mB) | ~10 | Rare decays B → μμX
Others (pre-scaled, exclusive, …) | ~20 | Only 10% of total !
Total | ~200 |

No safety factor included. “Signal” (W, γ, etc.) : ~100 Hz
Best use of spare capacity when L < 2 × 10³³ being investigated
Implications on Computing at CERN (Tier-0 functionality)

Assuming :
• 10⁹ events recorded in first year
• Raw data event size : 1.6 MB (was 2.2 MB ; zero-suppression in calorimeters not included yet)
• Reconstructed event size : 0.5 MB
• Time to reconstruct 1 event : 640 SI95-sec

→ Storage for raw data : 1.6 PB (back-up not included)
→ Storage for reconstructed data (ESD) : 1.0 PB (present + previous version)
→ CPU for reconstruction : 178 kSI95 (1st pass + calibration + 2 re-processings)
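The storage numbers follow directly from events × event size; a minimal sketch reproducing them, plus an illustrative CPU estimate (the 10⁷ s processing window per pass is an invented assumption; the 178 kSI95 figure above uses its own breakdown of passes):

```python
# Minimal sketch: storage follows directly from events x event size.
# The 1e7 s processing window per pass is an invented assumption; the
# 178 kSI95 figure quoted above uses its own detailed breakdown.

N_EVENTS = 1e9
RAW_MB, ESD_MB = 1.6, 0.5
MB_TO_PB = 1e-9

raw_pb = N_EVENTS * RAW_MB * MB_TO_PB        # raw data
esd_pb = 2 * N_EVENTS * ESD_MB * MB_TO_PB    # ESD, present + previous version

RECO_SI95_SEC = 640.0
PROCESSING_WINDOW_S = 1e7                    # assumed wall-clock time per pass
cpu_per_pass_ksi95 = N_EVENTS * RECO_SI95_SEC / PROCESSING_WINDOW_S / 1e3

print(raw_pb, esd_pb)        # 1.6 PB, 1.0 PB
print(cpu_per_pass_ksi95)    # ~64 kSI95 for a single reconstruction pass
```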
Expected “realistic” initial performance and calibration strategy

One example : EM calorimeter
H → γγ : constant term ctot ≲ 0.7% over |η| < 2.5 needed to observe the signal peak on top of a huge background

Strategy to achieve this goal:
• By construction (e.g. mechanical tolerances) : expect cL (“local” constant term) ≲ 0.5% over Δη × Δφ = 0.2 × 0.4
• There are ~400 such regions in |η| < 2.5 (448 channels in total per region)

Source | Expected contribution to cL (over Δη × Δφ = 0.2 × 0.4)
Geometry (e.g. residual Accordion modulation) | 0.25-0.35%
Mechanics (absorber and gap thickness) | < 0.25%
Calibration (amplitude uniformity, difference physics-calibration) | ~0.4%
Total | 0.5-0.6%
Beam tests of 4 (out of 32) barrel modules and 3 (out of 16) end-cap modules in 2001-2002: from first results this step is achieved

• 1 barrel module : Δη × Δφ = 1.4 × 0.4, ~3000 channels
• Scan of a barrel module with 245 GeV e⁻ over ~half module : r.m.s. 0.67% over ~500 spots
• “On-line” uniformity : ~1.3%
• Uniformity after corrections (e.g. optimal filtering) : ~0.67%
• Uniformity after corrections over Δη × Δφ = 0.2 × 0.4 : ~0.55%
In situ calibration with Z → ee events
• rate ~1 Hz, ~no background, allows standalone ECAL calibration
• ctot = cL ⊕ cLR, where cLR = long-range response non-uniformities of the ~400 regions (module-to-module variations, different upstream material, etc.)
• ~250 e per region needed to achieve cLR ≈ 0.4% → ctot = 0.5% ⊕ 0.4% ≈ 0.7%
  → ~10⁵ Z → ee events (few days of data taking at 2 × 10³³)
• Nevertheless, let’s consider the worst (unrealistic ?) scenario : no corrections applied
  -- cL = 1.3% : measured “on-line” non-uniformity of individual modules
  -- cLR = 1.5% : no calibration with Z → ee ; conservative : implies very poor knowledge of upstream material (to factor ~2)
  → ctot ≈ 2% → H → γγ significance at mH ~ 115 GeV degraded by ~25% (from full simulation)
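A minimal sketch of the quadrature combination ctot = cL ⊕ cLR used above, for the calibrated and the worst-case scenarios:

```python
# Minimal sketch of the quadrature combination ctot = cL (+) cLR used above,
# for the calibrated and the worst-case (no corrections) scenarios.

import math

def quad_sum(*terms):
    """Combine independent constant-term contributions (in %) in quadrature."""
    return math.sqrt(sum(t * t for t in terms))

print(quad_sum(0.5, 0.4))   # ~0.64%, consistent with the ~0.7% quoted above
print(quad_sum(1.3, 1.5))   # ~2.0% in the worst scenario
```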