
The ATLAS Trigger: High-Level Trigger Commissioning and Operation During Early Data Taking


Presentation Transcript


  1. The ATLAS Trigger: High-Level Trigger Commissioning and Operation During Early Data Taking Ricardo Gonçalo, Royal Holloway University of London On behalf of the ATLAS High-Level Trigger group EPS HEP2007 – Manchester, 18-25 July 2007

  2. Outline
• The ATLAS High-Level Trigger
   • Overall system design
   • Selection algorithms and steering
   • Selection configuration
• Trigger selection for initial running
   • Trigger groups (slices)
   • Selection optimisation
   • Calibration triggers, passthrough and prescales
   • Performance monitoring
   • Measuring trigger efficiency from data
• High-Level Trigger Commissioning
   • Technical runs
   • Cosmic-ray runs
   • Timeline
• Summary and outlook

  3. The ATLAS High-Level Trigger

  4. Three trigger levels:
Level 1:
• Hardware based (FPGA/ASIC)
• Coarse granularity detector data
• Calorimeter and muon systems only
• Latency 2.2 μs (buffer length 2.5 μs)
• Output rate up to ~75 kHz
Level 2: ~500 dual-core CPUs
• Software based
• Only detector sub-regions processed (Regions of Interest), seeded by Level 1
• Full detector granularity in RoIs
• Fast tracking and calorimetry
• Average execution time ~10 ms
• Output rate up to ~1 kHz
Event Builder: ~100 dual-core CPUs
Event Filter (EF): ~1600 dual-core CPUs
• Seeded by Level 2
• Full detector granularity
• Potential full event access
• Offline algorithms
• Average execution time ~1 s
• Output rate up to ~200 Hz
[Diagram: Trigger/DAQ dataflow. Detectors at 40 MHz (~1 PB/s) feed 2.5 μs pipelines; LVL1 (calorimeter and muon triggers, CTP) accepts 75 kHz and passes RoIs through the RoI Builder to the LVL2 supervisor and processors, which request RoI data (~3 GB/s) from the Read-Out System (~120 GB/s into the ROBs); LVL2 accepts ~2 kHz into the Event Builder; the Event Filter accepts ~200 Hz at an event size of ~1.5 MB (~300 MB/s to storage).]
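
The rejection power implied by these rates is worth a line of arithmetic. A minimal Python sketch, using only the rates quoted on this slide, prints the reduction factor achieved at each level:

    # Rate reduction through the three trigger levels, using the rates quoted
    # on this slide (40 MHz collisions, 75 kHz after LVL1, ~1 kHz after LVL2,
    # ~200 Hz after the Event Filter).
    rates_hz = {"collisions": 40e6, "LVL1": 75e3, "LVL2": 1e3, "EF": 200.0}

    stages = list(rates_hz.items())
    for (prev_name, prev_rate), (name, rate) in zip(stages, stages[1:]):
        print(f"{prev_name} -> {name}: rejection factor ~{prev_rate / rate:,.0f}")
    # Overall: 40 MHz / 200 Hz, a rejection factor of ~200,000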

  5. Selection method
• A Level 1 Region of Interest is found and its position in the EM calorimeter is passed to Level 2
• Level 2 is seeded by Level 1: fast reconstruction algorithms run within the RoI
• The Event Filter is seeded by Level 2: offline reconstruction algorithms with refined alignment and calibration
• Event rejection is possible at each step
[Diagram: step-wise selection of an electromagnetic cluster. EM RoI → L2 calorimetry (cluster?) → L2 tracking (track? match?) → EF calorimetry → EF tracking (track?) → e/γ reconstruction (e/γ OK?).]
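
To make the step-wise, RoI-seeded selection concrete, here is a minimal Python sketch of a chain that stops at the first failed step. The step functions, thresholds and the event/RoI data layout are hypothetical illustrations, not the ATLAS software interfaces:

    # Sketch of RoI-seeded, step-wise selection with early rejection.
    # Step functions and data structures are invented for illustration.

    def l2_calo(event, roi):
        # Fast calorimeter clustering inside the RoI: "cluster?"
        return event["em_cluster_et"].get(roi, 0.0) > 20.0  # example GeV threshold

    def l2_track(event, roi):
        # Fast tracking inside the RoI: "track? match?"
        return event["has_matched_track"].get(roi, False)

    def ef_calo(event, roi):
        # Offline-quality calorimetry with refined calibration
        return event["calibrated_et"].get(roi, 0.0) > 22.0

    def ef_track_and_id(event, roi):
        # Offline-quality tracking and e/gamma identification: "e/gamma OK?"
        return event["egamma_id_passed"].get(roi, False)

    STEPS = [l2_calo, l2_track, ef_calo, ef_track_and_id]

    def run_chain(event, roi):
        """Run each step in order; reject as soon as one step fails."""
        for step in STEPS:
            if not step(event, roi):
                return False  # early rejection: later steps never run
        return True

    event = {
        "em_cluster_et": {"roi1": 25.3},
        "has_matched_track": {"roi1": True},
        "calibrated_et": {"roi1": 24.8},
        "egamma_id_passed": {"roi1": True},
    }
    print(run_chain(event, "roi1"))  # True: all four steps pass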

  6. Steering and Configuration
• Algorithm execution managed by the Steering
• Based on static configuration
• Step-wise processing and early rejection
   • Chains stopped as soon as a step fails
   • A reconstruction step is done only if the earlier step succeeded
   • Any chain can pass an event
• Prescales applied at the end of each level
• Specialized algorithm classes for all situations
   • Multi-object: e.g. 4-jet trigger
   • Topological: e.g. two leptons with invariant mass m ≈ m_Z
   • …
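
The slide states that prescales are applied at the end of each level and that any chain can pass the event. A minimal Python sketch of how those two rules might combine; the counter-based prescale and the class names are assumptions made for illustration, not the actual HLT steering code:

    # Hypothetical illustration: a prescale factor N keeps every Nth event
    # in which the chain's selection fired; the event passes the level if
    # any chain passes (logical OR).

    class PrescaledChain:
        def __init__(self, name, prescale):
            self.name = name
            self.prescale = prescale
            self._fired_count = 0

        def decide(self, selection_passed):
            if not selection_passed:
                return False
            self._fired_count += 1
            # Counter-based prescale, assumed for this sketch
            return self._fired_count % self.prescale == 0

    def event_passes(chains, selection_results):
        # Any chain can pass the event
        return any(c.decide(selection_results[c.name]) for c in chains)

    chains = [PrescaledChain("L2_e60", prescale=1), PrescaledChain("L2_j20", prescale=2)]
    results = {"L2_e60": False, "L2_j20": True}
    print(event_passes(chains, results))  # False: j20 fired once, kept every 2nd firing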

  7. Steering and Configuration
• Trigger configuration comprises:
   • The active triggers
   • Their parameters
   • Prescale factors
   • Passthrough fractions
• Needed for:
   • Online running
   • Monte Carlo production
   • Offline analysis
• Relational database (TriggerDB) for online running
• User interface (TriggerTool)
   • Browse a trigger list (menu) through its configuration key
   • Read and write menus in XML format
   • Menu consistency checks
• After a run, the configuration becomes conditions data (Conditions Database)
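
Since the slide says menus can be written to and read back from XML, here is a minimal Python sketch of that round trip using the standard library. The element and attribute names are invented for illustration and do not reproduce the ATLAS TriggerDB schema; the chain names L2_e60/EF_e60 are taken from slide 21:

    import xml.etree.ElementTree as ET

    # Hypothetical, simplified trigger-menu record (schema invented for illustration).
    menu = ET.Element("TriggerMenu", name="initial_beam_menu", key="42")
    for chain, prescale, passthrough in [("L2_e60", 1, 0.0), ("EF_e60", 1, 0.001)]:
        ET.SubElement(menu, "Chain", name=chain,
                      prescale=str(prescale), passthrough=str(passthrough))

    # Write the menu out, read it back, and run a trivial consistency check.
    ET.ElementTree(menu).write("menu.xml")
    loaded = ET.parse("menu.xml").getroot()
    assert all(c.get("name") for c in loaded.iter("Chain"))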

  8. Trigger Selection for Initial Running

  9. Algorithm organisation
• High-Level Trigger code is organised in groups (“slices”):
   • Minimum bias, e/γ, μ, τ, jets, B physics, B tagging, ETmiss, cosmics, combined algorithms
• For initial running:
   • Crucial to have e/γ, μ, τ, jets, minimum bias
   • Cosmics already started!
   • ETmiss and B tagging will require significant understanding of the detector
• Will need to understand trigger efficiencies and rates:
   • Zero-bias triggers (passthrough)
   • Minimum bias: coincidence in scintillators placed in calorimeter cracks, or counting inner-detector hits
   • Prescaled loose triggers
   • Tag-and-probe method (see the sketch below)
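
As a concrete illustration of the tag-and-probe idea: a tightly selected "tag" object identifies the event independently of the trigger under study, and the efficiency is estimated from how often the "probe" object fired that trigger. A minimal Python sketch; the data layout is invented for illustration:

    # Minimal tag-and-probe sketch (data layout invented for illustration).
    # Efficiency = P(probe fired the trigger | event has a tight tag).

    def trigger_efficiency(events):
        n_probes = 0
        n_passed = 0
        for event in events:
            if not event["tag_is_tight"]:
                continue  # only use events with a well-identified tag
            n_probes += 1
            if event["probe_fired_trigger"]:
                n_passed += 1
        return n_passed / n_probes if n_probes else 0.0

    # Example: 3 usable probes, 2 of which fired -> efficiency ~0.67
    events = [
        {"tag_is_tight": True,  "probe_fired_trigger": True},
        {"tag_is_tight": True,  "probe_fired_trigger": False},
        {"tag_is_tight": False, "probe_fired_trigger": True},
        {"tag_is_tight": True,  "probe_fired_trigger": True},
    ]
    print(trigger_efficiency(events))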

  10. Slices…

  11. High-Level Trigger Commissioning

  12. Technical runs
• A subset of the High-Level Trigger CPU farms and the DAQ system was exercised in technical runs
• Simulated Monte Carlo events in bytestream format are preloaded into the DAQ readout buffers and distributed to the farm nodes
• The Level 1 trigger is simulated in a dedicated algorithm
• A realistic trigger list is used (e/γ, jets, μ, B physics, ETmiss, cosmics)
• Exercises the HLT algorithms, steering, monitoring infrastructure and configuration database
• Measure and test:
   • Event latencies
   • Algorithm execution time
   • Monitoring framework
   • Configuration database
   • Network configuration
   • Run control
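
A toy version of the latency/execution-time measurement described above can be written in a few lines of Python; the harness and the stand-in algorithm are invented for illustration, not the ATLAS monitoring framework:

    import statistics
    import time

    # Hypothetical harness: run an HLT-style algorithm over preloaded events
    # and record the per-event execution time in milliseconds.

    def time_algorithm(algorithm, events):
        latencies_ms = []
        for event in events:
            start = time.perf_counter()
            algorithm(event)
            latencies_ms.append((time.perf_counter() - start) * 1e3)
        return latencies_ms

    def dummy_algorithm(event):
        # Stand-in for an HLT selection algorithm (illustration only)
        sum(x * x for x in event)

    events = [list(range(1000))] * 100  # preloaded "events" (illustration)
    lat = time_algorithm(dummy_algorithm, events)
    print(f"mean {statistics.mean(lat):.3f} ms, max {max(lat):.3f} ms")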

  13. Cosmic-ray runs
• A section of the detector was used in cosmic-ray runs (see previous talk)
• Several sub-detectors were used:
   • Muon spectrometer
   • LAr calorimeter
   • Tile calorimeter
• The High-Level Trigger took part and selected real events for the first time!

  14. Conclusions and outlook

  15. Conclusions and outlook • The LHC will turn on in less than a year • Looking forward to triggering on real data at the LHC!

  16. What we’re up against… [Event display: a simulated H → 4μ event overlaid with ~30 minimum-bias interactions.]

  17. Backup Slides

  18. [Same Trigger/DAQ dataflow diagram as slide 4.]
LVL1: hardware trigger
• EM, TAU, JET calorimeter clusters
• μ trigger-chamber tracks
• Total and missing energy
• Central Trigger Processor
HLT: PC farms
• LVL2: special fast algorithms
   • Access data directly from the ROS system
   • Partial reconstruction seeded with L1 Regions of Interest (RoIs)
• EF: offline reconstruction algorithms
   • Access to the fully built event
   • Seeded with LVL2 objects (full event reconstruction possible)
   • Up-to-date calibrations

  19. Trigger/DAQ architecture
[Diagram: detector front-ends in UX15 feed ~1600 Read-Out Links from the Read-Out Drivers (RODs, ~150 VME PCs) to the Read-Out Subsystems (ROSs) in USA15; the first-level trigger, Timing Trigger Control (TTC) and RoI Builder pass Regions of Interest to the LVL2 supervisor. SDX1 houses the dual-CPU HLT farms: ~500 LVL2 nodes, ~100 Event Builder nodes (SubFarm Inputs, SFIs), ~1600 Event Filter (EF) nodes and 6 SubFarm Outputs (SFOs) writing to data storage at the CERN computer centre; a pROS stores the LVL2 output and a DataFlow Manager steers event building over Gigabit Ethernet. Event data are pushed at ≤ 100 kHz in 1600 fragments of ~1 kB each; event data are pulled as partial events at ≤ 100 kHz and as full events at ~3 kHz; the event rate to storage is ~200 Hz.]

  20. The ATLAS trigger: three trigger levels
• Level 1:
   • Hardware based (FPGA/ASIC)
   • Coarse granularity detector data
   • Calorimeter and muon spectrometer only
   • Latency 2.2 μs (buffer length 2.5 μs)
   • Output rate ~75 kHz
• Level 2:
   • Software based
   • Only detector sub-regions processed (Regions of Interest), seeded by Level 1
   • Full detector granularity in RoIs
   • Fast tracking and calorimetry
   • Average execution time ~10 ms
   • Output rate ~1 kHz
• Event Filter (EF):
   • Seeded by Level 2
   • Full detector granularity
   • Potential full event access
   • Offline algorithms
   • Average execution time ~1 s
   • Output rate ~200 Hz

  21. The HLT Steering Controller
• Algorithm execution is based on trigger chains (L2_e60, EF_e60, etc.)
• Chains are activated based on the result of the previous level
• Step-wise processing:
   • Each chain is divided into steps
   • Each step executes an algorithm sequence (one or more HLT algorithms)
   • A step that fails to produce the wanted result ends the chain
   • Any chain can pass the event
• Sequence and algorithm caching (see the sketch below):
   • Avoid running the same sequence/algorithm instance twice on identical input
• Several algorithm classes for all situations:
   • Inclusive, multi-object, topological, unseeded, …
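
The caching rule above can be illustrated with a few lines of Python. Keying cached results by (algorithm name, RoI id) is an assumption made for this sketch, not the actual HLT implementation:

    # Illustration of sequence/algorithm result caching: if the same algorithm
    # has already run on the same input (e.g. the same RoI), reuse its result.

    class AlgorithmCache:
        def __init__(self):
            self._results = {}

        def run(self, algo_name, roi_id, algorithm, roi):
            key = (algo_name, roi_id)  # cache key assumed for this sketch
            if key not in self._results:
                self._results[key] = algorithm(roi)  # first execution
            return self._results[key]  # served from cache afterwards

    cache = AlgorithmCache()
    fast_calo = lambda roi: {"et": roi["et"] * 1.02}  # stand-in algorithm
    roi = {"id": 7, "et": 58.0}

    r1 = cache.run("FastCalo", roi["id"], fast_calo, roi)  # runs the algorithm
    r2 = cache.run("FastCalo", roi["id"], fast_calo, roi)  # cached result
    assert r1 is r2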

  22. HLT Navigation and HLTResult
• HLT Navigation: a data tree keeps the history of algorithm execution and the reconstructed trigger objects
   • Implemented as a tool external to the HLT Steering Controller
   • The HLT Steering Controller need not run to access the data structure (useful for offline trigger analyses)
• HLTResult: what you get in the raw data
   • Header word (event number, pass/fail, error code, …)
   • Serialized trigger-chain info (chain ID, pass/fail, last successful step)
   • Serialized navigation structure
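
To make the serialized chain record concrete, here is a hypothetical Python sketch that packs and unpacks the three fields listed above (chain ID, pass/fail, last successful step). The field widths and byte layout are invented for illustration and do not reproduce the real ATLAS HLTResult encoding:

    import struct

    # Hypothetical packing of one serialized chain record: 16-bit chain ID,
    # 8-bit pass/fail flag, 8-bit last successful step (layout invented).

    def pack_chain(chain_id, passed, last_step):
        return struct.pack("<HBB", chain_id, int(passed), last_step)

    def unpack_chain(raw):
        chain_id, passed, last_step = struct.unpack("<HBB", raw)
        return {"chain_id": chain_id, "passed": bool(passed), "last_step": last_step}

    record = pack_chain(chain_id=60, passed=True, last_step=3)
    print(unpack_chain(record))  # {'chain_id': 60, 'passed': True, 'last_step': 3}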
