
The Trigger System of the CMS experiment


Presentation Transcript


  1. The Trigger System of the CMS experiment. Trigger conditions determine the picture we see. 10th INTERNATIONAL CONFERENCE ON INSTRUMENTATION FOR COLLIDING BEAM PHYSICS, Budker Institute of Nuclear Physics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia, February 28 - March 5, 2008. Marta Felcini, University College Dublin, on behalf of the CMS Collaboration

  2. Marta Felcini, UCD CMS Collaboration. The CMS Trigger System - Outline:
  • The LHC design parameters and status
  • The CMS detector and status
  • The CMS trigger system
  • The Level-1 trigger system
  • The High Level Trigger system
  • Trigger tables for initial luminosities
  • Trigger performance
  • Summary and conclusion

  3. Marta Felcini, UCD CMS Collaboration. LHC Design Luminosity: L ≤ 10^34 cm^-2 s^-1. See V. Papadimitriou's talk for comparison with Tevatron luminosities and parameters.

  4. LHC Startup Luminosity (see also W. Witzeling's talk)
  • Approx. 30 days of beam time to establish first collisions
  • 1 to 43 to 156 bunches per beam; N bunches displaced in one beam for LHCb
  • Pushing gradually one or all of: bunches per beam, squeeze, bunch intensity
  • Interaction points P1 (ATLAS) & P5 (CMS)
  • Each step lasts ~1 week

  5. Marta Felcini, UCD CMS Collaboration Exploded View of CMS

  6. Marta Felcini, UCD CMS Collaboration. The CMS Detector. [Diagram labelling the ECAL, magnet, HCAL, tracker and muon chambers.]

  7. CMS Detector Status
  • Construction of the CMS experiment is almost completed and the installation is very advanced. See talks from H.-J. Simonis, M. Ryan, M. Sobron, A. Staiano.
  • Commissioning work already carried out gives confidence that CMS will operate with the expected performance.
  • Commissioning using cosmics, with more and more complete setups (complexity and functionality), is going apace.
  • Computing, Software & Analysis: 24/7 challenges at 50% of the 2008 scale have been conducted.
  • Preparations for the rapid extraction of physics are being made.
  • Around May '08 CMS will be in the closed configuration: field ON, taking cosmics, in anticipation of beam.

  8. Marta Felcini, UCD CMS Collaboration. CMS Taking Data (Cosmic Challenge). Run 2605 / Event 3981 / B = 3.8 T: a reconstructed global muon in the barrel muon chambers. Every aspect of final CMS, from detector to trigger to offline software, has to work to produce these plots.

  9. Marta Felcini, UCD CMS Collaboration. CMS Trigger and DAQ Architecture
  • The unique CMS trigger architecture employs only two trigger levels:
  • the Level-1 trigger, implemented using custom electronics, reducing the 40 MHz crossing rate to 100 kHz;
  • the High Level Trigger, implemented on a large cluster of commercial CPUs, reducing 100 kHz to 100 Hz.
  • Emphasis on modularity, high-speed networking and CPU computing power.
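
A quick arithmetic cross-check of the quoted rates: Level-1 reduces 40 MHz to 100 kHz, a rejection factor of 400, and the HLT reduces 100 kHz to 100 Hz, a further factor of 10^3, so overall only about 1 in 4×10^5 bunch crossings is kept:

  40 MHz / 100 kHz = 400,   100 kHz / 100 Hz = 10^3,   400 × 10^3 = 4×10^5.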

  10. CMS Online: Trigger and DAQ. [Diagram: the L1 electronics and readout sit in the Underground Control Room (USC); on an L1 accept, raw data travel over the Data to Surface (D2S) links at 100 GB/s to the Readout Builders and HLT Filter Farms in the Surface Control Room (SCX).]

  11. Marta Felcini, UCD CMS Collaboration. CMS Level-1 Trigger. The Level-1 trigger, implemented using custom electronics, inspects events at the full bunch-crossing rate, while selecting up to a hundred kHz for further processing.

  12. Example: ECAL L1 trigger electronics. Front/rear view with 20 supercrystals mounted; supercrystals: 25 PbWO4 crystals. L1 electron identification; L1 electron isolation card. Trigger primitives are computed on the detector. Modularity: trigger tower (25 channels in the barrel); 5 VFE boards (5 channels each) per 1 FE board. One fibre sends trigger primitives (every bunch crossing); one fibre sends data (on Level-1 accept). See also M. Ryan's talk.

  13. Marta Felcini, UCD CMS Collaboration. CMS High Level Trigger. Further online selection is performed in the high-level trigger (HLT), which reduces the hundred-kHz input stream to a hundred Hz of events written to permanent storage. The HLT system consists of a large cluster of commercial CPUs: the HLT Filter Farm.
  • L1 input rate: ~40 MHz; L1 latency: 3.2 μs; L1 output rate: 100 kHz (50 kHz at startup)
  • L2 and L3 merged into the High Level Trigger (HLT)
  • HLT at startup: 50 kHz input rate, ~2000 CPUs, ~40 ms average per event
  • The HLT accesses full event information (full granularity), seeded by L1 objects, using "off-line quality" algorithms.
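
The quoted farm size follows directly from the input rate and the per-event CPU budget; a back-of-the-envelope check: at 50 kHz of L1-accepted events and ~40 ms of processing per event, the farm must supply

  N_CPU ≈ 5×10^4 events/s × 0.04 s/event = 2000 CPU-seconds of work per second,

i.e. roughly 2000 cores at full occupancy (scheduling overheads, ignored here, would push the real number somewhat higher).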

  14. Marta Felcini, UCD CMS Collaboration. The Plan: from start-up to discovery. The trigger criteria at each luminosity must be adapted to the plan and goals of the experiment. At 10^32 cm^-2 s^-1, collect O(100 pb^-1)/month, O(1 fb^-1)/year (Tevatron now: ~4 fb^-1 per experiment). Define trigger criteria to optimize trigger performance in serving the above plan. In the next slides I will show examples for the 10^32 cm^-2 s^-1 luminosity (intermediate luminosity range).
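
The O(100 pb^-1)/month figure can be checked as follows (the operating-efficiency value is an assumption, not from the slide): 10^32 cm^-2 s^-1 = 10^-4 pb^-1/s, and one calendar month is ~2.6×10^6 s, so

  ∫L dt ≈ 10^-4 pb^-1/s × 2.6×10^6 s ≈ 260 pb^-1

for uninterrupted running; an assumed overall operating efficiency of ~40% brings this down to the quoted O(100 pb^-1).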

  15. Marta Felcini, UCD CMS Collaboration. Trigger Criteria and Performance
  • Trigger criteria (thresholds) must be chosen to optimize the trigger performance depending on luminosity, machine and detector conditions.
  • The trigger performance is measured by three quantities:
  • Background rejection - determines the total rate to storage, which must be kept low
  • Signal efficiency - determines the signal rate to storage, which must be kept high
  • CPU time consumption - must be kept low, to avoid dead-time and inefficiencies
  The actual experimental (machine and detector) conditions will only be known when collisions and data come; the actual trigger performance will be measured with real data. In preparation for data taking, we can use our present best knowledge of the detector response and of possible machine-condition scenarios to define trigger criteria before data taking, and adjust them when real collisions and data become available. The flexibility of the trigger system allows modifications to be introduced efficiently, for optimal performance adapted to the actual running conditions, while taking data. The robustness of the algorithms which compute the trigger primitives (the quantities on which the trigger decision is taken) also ensures that the system will not be too sensitive to deviations from the expected conditions.
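
Schematically (notation mine, not from the slide), the storage rate contributed by a process with cross section σ is

  R = σ × L × ε_trig,

so at a fixed total output bandwidth, the smaller the effective ε_trig for backgrounds (better rejection), the more the thresholds can be relaxed to keep ε_trig for signals high.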

  16. Marta Felcini, UCD CMS Collaboration. Design of Trigger Tables for Early Data Taking. The set of kinematic requirements (trigger criteria) on physics objects at each trigger level is called a trigger table. In preparation for data taking, to optimize trigger performance at a given luminosity, we prepare trigger tables depending on the expected luminosity and pile-up conditions, as well as on the expected detailed response of all sub-detectors. Procedure to design a trigger table (a minimal sketch of the rate bookkeeping follows after this list):
  • start with fully simulated events of all known physics processes (QCD, W/Z, top), including pile-up (overlapping) events depending on the luminosity;
  • reconstruct physics objects (electron, muon, jet, etc.) coarsely at Level-1 and precisely at the HLT;
  • calculate trigger rates for all (single-object, double-object, cross-object) triggers;
  • choose trigger criteria in order to minimize the background rate, maximize the signal rate, and keep the CPU time consumption within the capability of the system;
  • define a trigger table for the given luminosity and pile-up conditions.
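
A minimal Python sketch of that rate bookkeeping; the sample names, cross sections and pass counts are illustrative placeholders, not CMS numbers:

    # Minimal sketch: estimate trigger rates from simulated samples.
    # All numbers below are illustrative placeholders, not CMS values.
    L = 1e32  # instantaneous luminosity, cm^-2 s^-1

    # process -> (cross section in cm^2, events passing trigger, events generated)
    samples = {
        "QCD": (1e-27, 120, 1000000),
        "W/Z": (2e-31, 800, 100000),
        "top": (8e-34, 500, 50000),
    }

    def trigger_rate(sigma_cm2, n_pass, n_gen):
        """Rate (Hz) = sigma * L * (accepted fraction of simulated events)."""
        return sigma_cm2 * L * n_pass / n_gen

    total = sum(trigger_rate(*v) for v in samples.values())
    for name, v in samples.items():
        print("%4s: %10.4f Hz" % (name, trigger_rate(*v)))
    print("total: %9.4f Hz" % total)

The same bookkeeping, repeated over all single-, double- and cross-object triggers, is what the threshold choice is then optimized against, subject to the rate and CPU budgets.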

  17. Marta Felcini, UCD CMS Collaboration. Level-1 Trigger Rates and Tables. Design tables for a 17 kHz L1 output rate, about 1/3 of the actual initial capability of 50 kHz. How do we allocate bandwidth between the different triggers? [Plot: trigger rates for QCD events at L = 10^32 cm^-2 s^-1.] For single-object triggers: muon rates are low; electron rates are high at low Et; jet rates are high even at high Et. For double-object triggers (not shown here), rates are one or more orders of magnitude lower than for single-object triggers, allowing low thresholds to be kept at small bandwidth cost. General guidelines: keep muons down to low pT at modest bandwidth cost; give higher bandwidth to electrons/photons for further processing in the HLT; give even higher bandwidth to energy and jet triggers (energy calibration).

  18. HLT Rates and Tables. [Plots: single-muon and dimuon HLT rates at L = 10^32 cm^-2 s^-1; muon HLT signal efficiencies.] Design HLT tables for a 150 Hz HLT output rate, 50% of the actual initial capability of 300 Hz. Share bandwidth according to detector and physics priorities at the given luminosity. Examples of muon and electron (next page) trigger rates and tables at L = 10^32 cm^-2 s^-1: at this luminosity, set muon trigger thresholds as low as possible (detector and physics studies); allow 1/3 of the total bandwidth for muon triggers. Muon HLT table - total rate: 30 Hz.

  19. HLT Rates and Tables. [Plots: electron HLT rates at L = 10^32 cm^-2 s^-1; overview of bandwidth sharing among the different trigger classes at L = 10^32 cm^-2 s^-1.] Electrons/photons HLT table - total rate: 30 Hz. Given the relatively low trigger thresholds affordable at this luminosity, signal (W/Z, top, Higgs, etc.) efficiencies are high: 70 to 100% depending on the topology.

  20. Marta Felcini, UCD CMS Collaboration. CPU Time Performance. A key issue for the HLT selection is the CPU power required for the execution of the algorithms. Measured average processing times (Core 2 Xeon 5160, 3.0 GHz) for running the complete HLT table, including the data-unpacking time, on L1-accepted QCD (pT = 0 - 300 GeV/c), EWK (W/Z) and pp → μX samples. Weighted sum of contributions from all processes: 43 ± 6 ms. For the start-up scenario, with a DAQ processing capability of 50 kHz of L1-accepted events, the average of ~40 ms per event translates into ~2000 commercial CPUs for the HLT Filter Farm. This was the projected size of the farm from extrapolations back in 2002, at the time of the DAQ and HLT TDR.
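
Schematically (notation mine, not from the slide), the quoted figure is the mean processing time over L1-accepted events, weighted by each process's share of the L1-accept rate:

  ⟨t⟩ = Σ_i σ_i ε_i(L1) t_i / Σ_i σ_i ε_i(L1) ≈ 43 ms,

which, multiplied by the 50 kHz L1-accept rate, reproduces the ~2000-CPU farm size quoted above.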

  21. Marta Felcini, UCD CMS Collaboration. The CMS Trigger System: Summary. The CMS experiment will collect data from the proton-proton collisions delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy of 14 TeV, starting this year, Summer 2008.
  • The CMS trigger system is designed to cope with unprecedented luminosities and LHC bunch-crossing rates up to 40 MHz. The unique CMS trigger architecture employs only two trigger levels:
  • The Level-1 trigger, implemented using custom electronics, inspects events at the full bunch-crossing rate, while selecting up to a hundred kHz for further processing.
  • The high-level trigger (HLT) reduces the hundred-kHz input stream to a hundred Hz of events written to permanent storage. The HLT system consists of a large cluster of commercial CPUs (the "HLT Filter Farm"), running off-line-quality reconstruction algorithms on fully assembled event information.
  L1 and HLT tables have been developed for early luminosities. A total DAQ bandwidth of 50 kHz is assumed. Fast selection and high efficiency are obtained for the physics objects and processes of interest using inclusive selection criteria. The overall CPU requirement is within the system capabilities. Conclusion: CMS is ready to take data efficiently when collisions come…

  22. The Trigger System of the CMS experiment. Thank you to the Organizing Committee for your kind invitation, and to my CMS friends and colleagues for the opportunity to represent CMS at this prestigious and inspiring conference. 10th INTERNATIONAL CONFERENCE ON INSTRUMENTATION FOR COLLIDING BEAM PHYSICS, Budker Institute of Nuclear Physics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia, February 28 - March 5, 2008. Marta Felcini, University College Dublin, on behalf of the CMS Collaboration

  23. Marta Felcini, UCD CMS Collaboration Reserve

  24. CMS data flow and on(off)-line computing. [Diagram: raw data enter the DAQ at 2000 Gbit/s; the High Level Trigger (10 TeraFlops) sends event data at 10 Gbit/s to Tier 0 (50 TeraFlops), which forwards data to regional centres over 10 Gbit/s links; remote control rooms are connected by 1 Gbit/s control links.]

  25. Marta Felcini, UCD CMS Collaboration Data Acquisition

  26. Expectations of Luminosity Buildup

  27. Marta Felcini, UCD CMS Collaboration. Early Physics Programme
  • Prior to beam: early detector commissioning. Readout & trigger tests, runs with all detectors (cosmics, test beams).
  • Early beam, up to 10 pb^-1: detector synchronization, alignment with beam-halo events, minimum-bias events; earliest in-situ alignment and calibration. Commission the trigger, start "physics commissioning": physics objects; measure jet and lepton rates; observe W, Z, top. And, a first look at possible extraordinary signatures…
  • Physics collisions, 100 pb^-1: measure the Standard Model, start searches. 10^6 W → lν (l = e, μ); 2×10^5 Z → ll (l = e, μ); 10^4 ttbar+X. Improved understanding of physics objects; jet energy scale from W → jj'; extensive use (and understanding) of b-tagging. Measure/understand backgrounds to SUSY and Higgs searches. Initial MSSM (and some SM) Higgs sensitivity. Early look for excesses from SUSY & Z'/jj resonances. SUSY hints (?)
  • Physics collisions, 1000 pb^-1: entering the Higgs discovery era. Also: explore a large part of SUSY and resonances at ~few TeV.

  28. LHC-vs-Tevatron rates. Huge statistics for Standard Model signals. Rates at L = 10^33 cm^-2 s^-1: ~10^8 W events per fb^-1 (200 Hz); ~10^7 Z events per fb^-1 (50 Hz); ~10^6 tt events per fb^-1 (1 Hz). These will be used as control/calibration samples for searches beyond the Standard Model. They can also be used to scrutinize the Standard Model further, e.g. the top sample is excellent for understanding lepton id (incl. taus), jet corrections, jet energy scale, b-tagging, …
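
These counts are just N = σ × ∫L dt. For example, at L = 10^33 cm^-2 s^-1 = 1 nb^-1/s, a 200 Hz W rate corresponds to σ·BR ≈ 200 nb, so 1 fb^-1 = 10^6 nb^-1 yields ~2×10^8 W events, consistent at the order of magnitude with the quoted ~10^8.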

  29. Marta Felcini, UCD CMS Collaboration. Experimental Challenge. LHC detectors (especially ATLAS and CMS) are radically different from those of previous generations.
  • High interaction rate: pp interaction rate of 1 billion interactions/s; data can be recorded for only ~10^2 out of the 40 million crossings per second. The Level-1 trigger decision takes ~2-3 μs → electronics need to store data locally (pipelining).
  • Large particle multiplicity: ~⟨20⟩ superposed events in each crossing; ~1000 tracks stream into the detector every 25 ns → need highly granular detectors with good time resolution for low occupancy → a large number of channels (~100 M channels).
  • High radiation levels → radiation-hard (tolerant) detectors and electronics.
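
The pipeline depth follows from the Level-1 latency and the 25 ns bunch spacing: using the 3.2 μs latency figure from slide 13,

  3.2 μs / 25 ns = 128 bunch crossings

must be held in the front-end pipelines while the trigger decision is formed.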

  30. LHC Luminosity. The machine luminosity is given by the standard expression

  L = N_b^2 n_b f_rev γ_r / (4π ε_n β*) × F,   with   F = [1 + (θ_c σ_z / (2σ*))^2]^(-1/2),

  where F is the geometric factor by which the luminosity is reduced, θ_c the full crossing angle, and σ_z and σ* the longitudinal and transverse rms beam sizes at the IP.
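
As a cross-check (parameter values taken from the LHC design report, not from this slide): with N_b = 1.15×10^11 protons/bunch, n_b = 2808 bunches, f_rev = 11.245 kHz, γ_r = 7461, ε_n = 3.75 μm, β* = 0.55 m and F ≈ 0.84 (θ_c = 285 μrad, σ_z = 7.55 cm, σ* = 16.7 μm), the formula gives

  L ≈ (1.15×10^11)^2 × 2808 × 11245 × 7461 / (4π × 3.75×10^-6 m × 0.55 m) × 0.84 ≈ 1.0×10^34 cm^-2 s^-1,

the design luminosity quoted on slide 3.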

  31. Marta Felcini, UCD CMS Collaboration. The CMS Trigger System - Abstract. The CMS experiment will collect data from the proton-proton collisions delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy of 14 TeV. The CMS trigger system is designed to cope with unprecedented luminosities and LHC bunch-crossing rates up to 40 MHz. The unique CMS trigger architecture employs only two trigger levels. The Level-1 trigger is implemented using custom electronics and inspects events at the full bunch-crossing rate, while selecting up to a hundred kHz for further processing. The Level-1 electronics has many tunable parameters and look-up tables, whose configuration we have optimized for early data taking. Further online selection is performed in the high-level trigger (HLT), which reduces the hundred-kHz input stream to a hundred Hz of events written to permanent storage. The HLT system consists of a large cluster of commercial CPUs (the "Filter Farm"), running sophisticated reconstruction algorithms on fully assembled event information. The HLT software includes all major features of the offline reconstruction code. The flexibility provided by a fully programmable environment means that algorithms can easily be changed to improve the event selection in multiple physics channels, as well as to deal with diverse experimental conditions. The coherent tuning of the HLT algorithms to accommodate multiple physics channels is a key issue that defines the physics reach of the experiment. In this presentation we discuss the strategies and trigger configuration developed for the start-up physics programme of the CMS experiment. We also discuss the expected CPU performance of the HLT algorithms.
