
ATLAS High Level Trigger/DAQ


Presentation Transcript


  1. ATLAS High Level Trigger/DAQ S.Falciano - INFN Roma1, Gruppo1 - Lecce, 24/09/2003

  2. Presentation outline • Project, participation, 2003 milestones • HLT/DAQ • TDR : status and content • Global view : system requirements and "Baseline architecture" • System components and functions • System performance • Organization and plan • TDR : Italian contributions • 2004 milestones • Test beam 2003/2004 • Conclusions

  3. Italian activities • Level-1 muon trigger (barrel) (Napoli, Roma1, Roma2) • Level-2 muon trigger (barrel) (Pisa, Roma1) • Level-2 pixel trigger (Genova) • Event Filter (Lecce, Pavia, Roma3) • DAQ (LNF, Pavia, Roma1) • Test-beam DAQ (TDAQ + detector groups) 9 INFN sections, plus Detector and Offline groups (32 physicists for HLT/DAQ)

  4. Roles in the TDAQ project • S.Veneziano (Roma1) -> Coordinator, LVL1 muon trigger barrel+endcap+MUCTPI • V.Vercesi (Pavia) -> PESA Coordinator (Physics and Event Selection Architecture) • A.Nisati (Roma1) -> IB Chairperson and coordinator of the muon algorithms in PESA • F.Parodi (Genova) -> Coordinator of the b-tagging algorithms in PESA • A.Negri (Pavia) -> Coordinator, Event Filter Data Flow software • S.Falciano (Roma1) -> Coordinator, Detector Readout in the DIG and Detector HLT slices ... and for the test beams • P.Morettini (Genova) -> Coordinator, H8 pixel test-beam DAQ • E.Pasqualucci (Roma1) -> Coordinator, H8 muon test-beam DAQ

  5. HLT/DAQ milestone status, 06/2003 • March • Writing of the new Event Filter Data Flow and Monitoring software. 100% The software, whose design started in 2002, has been fully developed and is now in use at the test beam. • Complete integration of the MDT-chamber calibration software into the test-beam EF/DAQ. 100% This integration, also started in 2002, has been completed and is operational at the current test beam. • April • Integration and test of a vertical LVL1/HLT/DAQ/DCS "slice" for one ATLAS detector (e.g. the muon detector) in the laboratory. 100% Integration carried out in the laboratory for LVL1/RPC/TGC and MDT chambers (readout electronics plus acquisition and trigger software). The testbed proved particularly useful for commissioning the software, and part of the hardware, for the 2003 test beam.

  6. Milestone status 06/2003 (cont.) • June • Submission to the LHCC of the HLT/DAQ/DCS Technical Design Report. 100% Milestone met on 30/6/2003. The TDR is now being reprinted and will be presented to the LHCC on 24/9/2003. The Italian contributions were substantial and covered the DAQ and the high-level triggers (LVL2 for Pixel and Muons, adaptation of the offline reconstruction programs to the Event Filter, the software framework, and responsibility for some important chapters, such as the PESA one).

  7. TDR : status and content • Submitted to the LHC Committee on 30/6/2003 • ATLAS High-Level Trigger, Data Acquisition and Controls Technical Design Report, CERN/LHCC/2003-022 • Very positive feedback on HLT/DAQ from the LHCC comprehensive review of 2 July: "The architectural open issues have essentially been resolved, an offline-online collaboration is building up, beam- and laboratory tests have been performed, the extensive DCS-implementation and the management structure." • ATLAS presentation at the LHCC open session of 24/9/2003 at CERN

  8. Part 1 - Global View: 1. Overview • 2. Parameters • 3. System Operations • 4. Physics selection strategy • 5. Architecture • 6. Fault tolerance and error handling • 7. Monitoring
  Part 2 - System Components: 8. Data-flow • 9. High-level trigger • 10. Online Software • 11. DCS • 12. Experiment Control
  Part 3 - System Performance: 13. Physics selection and HLT performance • 14. Overall system performance and validation
  Part 4 - Organisation and Plan: 15. Quality assurance and development process • 16. Costing • 17. Organisation and resources • 18. Workplan and schedule

  9. TDR Part 1 : Global view The choice of the architecture was based on the following criteria: • Coverage of the physics programme foreseen by ATLAS • The existence of working prototypes • Performance measurements that either meet the final ATLAS specifications or can safely be extrapolated to the required performance on the actual timescale (PC CPU speed, ...) • Clarity on how to evolve from the initial reduced set-up, such as the one used at the test beam, to the complete high-luminosity system • A cost scenario that runs from the "staged detector" up to the completion of the system • The possibility of profiting from the evolution of technology while the experiment is running The proposed architecture could be built today with current technologies and would reach the required performance. Since significant advances are expected in networking and computing, these will help to further simplify some complex aspects of the system.

  10. Architecture [Diagram: baseline trigger/DAQ dataflow] • Detectors (Calo, Mu/TrCh, other detectors) sampled at 40 MHz; LVL1 with front-end pipelines of 2.5 μs; LVL1 accept = 75 kHz • Read-Out Drivers (RODs) feed the Read-Out Buffers (ROBs) of the Read-Out Sub-systems (ROS) over Read-Out Links at 120 GB/s • LVL2 (~10 ms average): the RoI Builder (ROIB), LVL2 Supervisor (L2SV), LVL2 network (L2N) and LVL2 Processing Units (L2P) make RoI requests to the ROS; RoI data = 1-2% of each event (~2 GB/s of RoI traffic); LVL2 accept = ~2 kHz • Event building: Dataflow Manager (DFM), Event Building network (EBN) and Sub-Farm Inputs (SFIs) at ~4 GB/s • Event Filter (~sec per event): EF network (EFN) and EF Processors (EFPs); EF accept = ~0.2 kHz • Sub-Farm Outputs (SFOs) record ~200 Hz at ~300 MB/s
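The rates and bandwidths quoted on this slide are mutually consistent. As a quick check, here is a back-of-envelope sketch in C++ (our own arithmetic, assuming an average event size of ~1.6 MB, which is what 120 GB/s at a 75 kHz LVL1 rate implies):

```cpp
// Back-of-envelope check of the trigger cascade using the figures quoted
// on the architecture slide (all values approximate).
#include <cstdio>

int main() {
    const double lvl1_rate_hz = 75e3;   // LVL1 accept rate
    const double event_size_b = 1.6e6;  // ~120 GB/s / 75 kHz => ~1.6 MB/event (derived)
    const double roi_fraction = 0.015;  // LVL2 reads only 1-2% of each event
    const double lvl2_rate_hz = 2e3;    // LVL2 accept rate
    const double ef_rate_hz   = 200;    // Event Filter accept rate

    // LVL2 network traffic: only the RoI data move before event building.
    std::printf("LVL2 RoI traffic : %.1f GB/s\n",
                lvl1_rate_hz * event_size_b * roi_fraction / 1e9);
    // Event-building traffic: full events, but only at the LVL2 accept rate.
    std::printf("EB traffic       : %.1f GB/s\n",
                lvl2_rate_hz * event_size_b / 1e9);
    // Mass-storage throughput after the Event Filter.
    std::printf("Storage          : %.0f MB/s\n",
                ef_rate_hz * event_size_b / 1e6);
}
```

This prints roughly 1.8 GB/s of RoI traffic, 3.2 GB/s of event-building traffic and ~320 MB/s to storage, matching the ~2 GB/s, ~4 GB/s and ~300 MB/s labels on the diagram.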

  11. RoI mechanism - Implementation • There is a simple correspondence between an η-φ region and ROB number(s) (for each detector) -> for each RoI, the LVL2 processors can quickly identify the list of ROBs holding the corresponding data of each detector • This mechanism provides a powerful and cheap way to obtain a large rejection factor before full Event Building ==> the ATLAS RoI-based Level-2 trigger ... a ReadOut network smaller by an order of magnitude ... ... at the cost of more control traffic ... [Figure: 4 RoI η-φ addresses]
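A minimal sketch of the η-φ -> ROB lookup that makes this mechanism cheap; the types, names and binning below are hypothetical (the real tables are built per detector from the cabling):

```cpp
// Hypothetical RoI -> ROB lookup for one detector: an eta-phi region maps
// to a small, precomputed list of ROB identifiers.
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

using RobId = std::uint32_t;

struct RoI { float eta; float phi; };

class RobMap {
public:
    // Filled at configuration time from the detector cabling.
    void add(int etaBin, int phiBin, RobId rob) {
        table_[{etaBin, phiBin}].push_back(rob);
    }
    // Called per RoI by a LVL2 processor to learn which ROBs to request.
    std::vector<RobId> robsFor(const RoI& roi) const {
        auto it = table_.find({bin(roi.eta), bin(roi.phi)});
        return it != table_.end() ? it->second : std::vector<RobId>{};
    }
private:
    static int bin(float x) { return static_cast<int>(x * 10.0f); } // coarse, illustrative
    std::map<std::pair<int, int>, std::vector<RobId>> table_;
};
```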

  12. Level-2 Trigger Three parameters characterise the RoI-based Level-2 trigger: • the amount of data required: 1-2% of the total • the overall CPU time: 10 ms average • the rejection factor: ×30
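These three numbers also fix the scale of the LVL2 system. A small sketch of the implied arithmetic (ours, not from the TDR): by Little's law, input rate times average latency gives the number of LVL2 decisions in flight at any moment:

```cpp
// What the three LVL2 parameters imply (our arithmetic, not the TDR's).
#include <cstdio>

int main() {
    const double input_hz  = 75e3;   // LVL1 accept rate feeding LVL2
    const double latency_s = 10e-3;  // average LVL2 decision time
    const double rejection = 30.0;   // LVL2 rejection factor

    // Little's law: decisions simultaneously in flight = rate x latency.
    std::printf("concurrent LVL2 decisions : %.0f\n", input_hz * latency_s);   // ~750
    std::printf("LVL2 output rate          : %.1f kHz\n", input_hz / rejection / 1e3); // ~2.5
}
```

The ~2.5 kHz output is consistent with the ~2 kHz LVL2 accept rate quoted on the architecture slide.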

  13. TDR Part 2 : System components • Data Flow (DAQ) • High-Level Triggers • LVL2, EF, Event Selection Software (ESS) • Online Software (DAQ) • DCS • Experiment Control

  14. RCD and ROS [Diagram: F.E. electronics -> ROD crates -> ROS PCs] • ROD crates (VME bus, each with a crate workstation) receive event fragments from the front-end electronics (detector specific); configuration & control and event sampling & calibration data travel over the LAN (GbEth.) • Read-Out Links (ROLs) carry the ROD fragments to ROBIN cards on the PCI bus of the ROS PCs; the ROS assembles ROB fragments into ROS fragments and serves them over GbEth. to the L2 and Event Builder networks • Total number of ROD crates: 90 • Total number of ROS PCs: 144 • Total number of racks: ~15 ==> all in USA15 (underground)
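A hypothetical sketch of the fragment hierarchy this slide describes (illustrative structures, not the real dataflow library): ROD fragments arrive over the ROLs, are buffered as ROB fragments on the ROBINs, and the ROS concatenates the requested ones into a ROS fragment:

```cpp
// Illustrative fragment hierarchy: ROD -> ROB -> ROS fragments.
#include <cstdint>
#include <vector>

struct RodFragment { std::uint32_t sourceId; std::vector<std::uint32_t> data; };
struct RobFragment { RodFragment rod; };               // one ROL, buffered on a ROBIN
struct RosFragment { std::vector<RobFragment> robs; }; // assembled by the ROS PC

// Serve a ROS fragment for the subset of ROBs requested by LVL2 or the EB.
RosFragment buildRosFragment(const std::vector<RobFragment>& buffered,
                             const std::vector<std::uint32_t>& requestedIds) {
    RosFragment out;
    for (const auto& rob : buffered)
        for (auto id : requestedIds)
            if (rob.rod.sourceId == id) out.robs.push_back(rob);
    return out;
}
```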

  15. The ROBin Prototype

  16. [Figure-only slide]

  17. [Figure-only slide]

  18. HLT Event Selection Software

  19. Realistic Data Access • Realistic bytestream-format data generated using simulated events from DataChallenge-1, used to measure data-access and preparation times • Different implementations for LVL2, EF and Offline • Bytestream converters produce the objects required by the algorithms and handle ROB mapping, calibration, etc.; this requires a detailed understanding of the detector and read-out • Use of offline services: the TES uses StoreGate
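A hedged sketch of the bytestream-converter idea, with invented types (RobFragment, CaloCell) standing in for the real event model; the point is a single interface with separate LVL2, EF and offline implementations behind it:

```cpp
// Illustrative converter interface: raw ROB fragments in, algorithm-ready
// objects out, with ROB mapping and calibration hidden behind the interface.
#include <cstdint>
#include <vector>

struct RobFragment { std::uint32_t robId; std::vector<std::uint32_t> words; };
struct CaloCell    { int id; double energy; };

class IByteStreamConverter {
public:
    virtual ~IByteStreamConverter() = default;
    virtual std::vector<CaloCell>
    convert(const std::vector<RobFragment>& frags) const = 0;
};

// Illustrative LAr converter: decodes one cell per 32-bit word and applies a
// placeholder calibration (the real decoding is far more involved).
class LArByteStreamConverter : public IByteStreamConverter {
public:
    std::vector<CaloCell>
    convert(const std::vector<RobFragment>& frags) const override {
        std::vector<CaloCell> cells;
        for (const auto& f : frags)
            for (auto w : f.words)
                cells.push_back({static_cast<int>(w >> 16),  // channel id
                                 0.05 * (w & 0xFFFF)});      // ADC -> energy, fake gain
        return cells;
    }
};
```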

  20. TDR Part 3 : System Performance • DataFlow for LVL2 (RoI collection) and EF tested and required performance demonstrated • LVL2 processing with algorithms and simulated data in realistic format tested in a full trigger environment • Required performance demonstrated with the LAr and Muon detectors using dedicated data-preparation code • Further optimisation needed for data-preparation code taken from offline (especially true for the calorimetry code) • Functional test made of the HLT vertical slice • Results validate the RoI mechanism: only ~2% of the data after LVL1 needs to be moved over the networks • Further work needed to validate the use of offline services in LVL2, but the outlook is promising

  21. LVL2 track reconstruction for the b-tagging selection [Plots: impact-parameter resolution vs pT; u-jet rejection vs b-tagging efficiency]

  22. μFast physics performance [Plots comparing the HLT TP (layout M) with the HLT TDR (layout P)] Perfect match of the two resolutions at pT = 20 GeV

  23. MOORE - Event Filter muon reconstruction [Plots: momentum resolution; efficiency]

  24. LVL2 Performance Test [Diagram: LVL2 processing task - an input handler receives the LVL1 Result, queues events, and event-selection threads produce the LVL2 Result under the control of the LVL2 Supervisor] • The LVL2 performance has been measured on a cluster using a 2.2 GHz dual Xeon for the L2P, fetching data via Gigabit Ethernet • Simulated LVL1-selected di-jet events loaded into the ROS(E) • LAr LVL2 selection algorithms run on the LAr data • The LVL2 selection uses specially written algorithms in multi-threaded tasks • Highly optimized (decision time ~10 ms) • Thread-safe
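A self-contained sketch (not the actual TDAQ code) of the queue-plus-selection-threads structure shown in the diagram: an input handler queues LVL1 results and a pool of worker threads pops and processes them:

```cpp
// Illustrative multi-threaded LVL2 processing task: event queue plus
// event-selection worker threads.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Lvl1Result { int eventId; };

int main() {
    std::queue<Lvl1Result> events;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    auto worker = [&] {
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return done || !events.empty(); });
            if (events.empty()) return;          // done and queue drained
            Lvl1Result r = events.front(); events.pop();
            lk.unlock();
            // ... run the selection algorithms on the RoI data here ...
            std::printf("event %d: LVL2 decision made\n", r.eventId);
        }
    };
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i) pool.emplace_back(worker);

    {   // input handler: queue a few LVL1 results
        std::lock_guard<std::mutex> lk(m);
        for (int id = 0; id < 8; ++id) events.push({id});
    }
    cv.notify_all();
    { std::lock_guard<std::mutex> lk(m); done = true; }
    cv.notify_all();
    for (auto& t : pool) t.join();
}
```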

  25. Integral plots of each part of the LVL2 Calo processing time - after optimization • The largest contribution is from Data Preparation • The algorithm is the smallest contribution to the processing time

  26. [Figure-only slide]

  27. EF Data Flow [Diagram: SFI -> EFD (input, sorting, counting, pre-processing, external-PT, post-processing, histogramming and output tasks, with PTs attached) -> SFO] • In each EF Processor (EFP) the EFD application handles the flow of events • The Event Selection Software runs in separate PT applications • The complete event is in shared memory and the PT is passed a pointer (avoids copying) • The PT writes the EF Result into the shared memory • For accepted events the Output task combines the EF Result into the event
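A single-process stand-in for the zero-copy handoff described here. In the real system the event buffer is operating-system shared memory between the EFD and PT processes; this hypothetical sketch only illustrates the pointer-passing idea:

```cpp
// Illustrative zero-copy handoff: the event sits in one buffer, the
// processing task receives a pointer and appends its EF Result in place.
#include <cstdio>
#include <vector>

struct EventBuffer {
    std::vector<unsigned char> data;     // full event, written once by the input task
    std::vector<unsigned char> efResult; // filled in place by the processing task
};

// The processing task gets a pointer, never a copy of the event.
void processingTask(EventBuffer* ev) {
    // ... run the Event Selection Software on ev->data ...
    ev->efResult = {0x01};               // e.g. "accept" decision + summary
}

int main() {
    EventBuffer ev;
    ev.data.assign(1500000, 0);          // ~1.5 MB event, allocated once
    processingTask(&ev);                 // pointer handoff: no event copy
    std::printf("EF result size: %zu byte(s)\n", ev.efResult.size());
}
```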

  28. [Figure-only slide]

  29. TDR Part 4 : Organization and Plan (3) Workplan and schedule

  30. Italian contributions to the TDR • Study and validation of the DAQ components • Data Flow (Data Collection and Event Filter) • Online software (configuration, monitoring, run control) • Detector software (ROD Crate DAQ, data format, monitoring) • Study and validation of the HLT • Software framework (Athena) • Level-2 algorithms (Pixel and Muons) • Filter and calibration algorithms derived from the offline programs (e.g. MOORE for reconstruction and CALIB for muon calibration) • Online use of the scheme for accessing the detector data and geometry according to the Event Data Model (definition and software development of the ByteStream formats of the detector data used and of their object definition, Raw Data Objects, useful for the trigger algorithms) • Contribution to the writing and coordination of important TDR chapters (see PESA) More than 20 ATLAS notes were produced by the Italian groups, alone or in collaboration with other groups, as supporting documents for the TDR.

  31. Milestones 2004 • March • Integration of Level-2/Event Filter/DAQ on a testbed at CERN, with Level-1 in emulation. • October • Integration of Detector/Level-1/Level-2/Event Filter/DAQ/DCS at the Combined Test Beam (Pixel-LAr-Tile-MDT-RPC).

  32. The HLT/DAQ at the test beam - 2003 [Photos: trigger scintillator and ROD] • In ATLAS, we value the strategy of real-life use (test beam, test sites) of "final system" software releases, used for performance measurements on test beds • The same complete DAQ (and HLT framework) software release is used on test beds and at the test beam • ATLAS combined run at H8, Sep 2003: Pixel - SCT - "Phantom EM" - TileCal - MDT - RPC - TGC

  33. H8 test beam setup - 2003 [Diagram: H8 TDAQ layout]

  34. [Figure-only slide]

  35. Dataflow performance at H8 [Plots: event rate and data throughput] • Dataflow measured on a test-beam-like implementation of the standard release, without and with the Event Filter framework ==> performance beyond the detector data-taking capability

  36. TileCal Monitoring at H8 • Event Display and Monitoring task based on Online Software tools

  37. [Figure-only slide]

  38. In 2004 in H8… • A Calorimeter Combined Test Beam • LAr & Tilecal on the same rotating table • An Inner Detector Combined Test Beam • Pixels and SCT (in the same box?) • TRT with a barrel slice • Muon chambers: MDT, RPC, TGC Why not have an ATLAS Barrel slice?

  39. Set-up 2004

  40. What type of measurements? • This is a unique occasion to intercalibrate the Barrel e.m. and hadronic calorimeters • Energy sharing • Shower containment • Weighting-technique studies • Linearity, resolution, e/h, etc. • Alignment and tracking of the Inner Detector components • All interesting combinations

  41. HLT/DAQ at the 2004 test beam • Common trigger • Local network • Integration of the HLT (LVL2 & EF) • Latest version of the DAQ software (Data Flow & Online) • Migration from DAQ-1 to DAQ-0 (pre-series prototype) • Integration of the DCS (common infrastructure & sub-detector layers) • Other common items that are important for the DAQ as well

  42. Many levels at which to combine… • Electronics and trigger • Master trigger & common busy in "normal" operations; combination of TTC partitions; timing of each sub-detector (long baseline) • Readout: event-by-event; will all sub-detectors be read out in fully pipelined mode? • Detectors & LVL1 • Making sure that the LVL1 sources are there, as well as the destinations • Tilecal & LAr tower signals • RPC (& TGC)

  43. Many levels at which to combine… (2) • Detectors (via DAQ) & LVL2 • All sub-detectors "contribute" to LVL2 • Detectors & DAQ • Review and install: • Read-Out Links (input from sub-detectors) • DAQ ROS machines (Read-Out Systems) • DAQ SFI machines (keeping today's situation of 1 PC per detector?) • Use of the ATLAS TDAQ Data Format • Detectors & DCS

  44. Many levels at which to combine… (4) • Event Filter • One of the most interesting places in which to combine the sub-detectors • It forces the offline programs to be fast and ready well before the offline data analysis starts • Last year we had Pixel, Tilecal and Muon analysis programs working together in the EF, but never combining the data of a given sub-detector with the others: this is what we must do, even if it has already been done this year with Muon & Tile

  45. Many levels at which to combine… (5) • Offline • As soon as the geometry of the setup is defined, the simulation of all detector components can start • A way to use the "combined reconstruction" • If everything works in ATHENA (the ATLAS software framework), the EF can also benefit, and vice versa • The analysis programs of the sub-detectors have to converge to a unique output

  46. Tentatively in 2004… (schedule dated 3/9/2003) [Chart: fully combined / partially combined / stand-alone runs] • SPS proton run: 23 weeks + 2 "25 ns" weeks • 4-6 weeks at the beginning, 4-6 weeks at the end

  47. Beam availability • Pions, electrons • 1-9 GeV, 10-20 GeV, 30-300 GeV • Intensity up to 10^8/spill in specially shielded zones (4.8 s spill); typically 10^6/spill • Muons • 20-300 GeV • Intensity up to 10^6-10^7/spill (limited by radiation-protection issues because the zone is not completely shielded) • Photons • Production of electrons/photons by a secondary beam at 180 GeV maximum

  48. Combined run 2004 - Global layout [Diagram: per-detector ROD crates (each with an SBC and RODs) for Pixel, SCT, TRT, LAr, Tilecal, MDT and RPC, plus beam crates, the LVL1 Calo and the CTP & CTPI, feeding the ID, LAr, Tilecal, Muon (ROS1 and ROS2), LVL1 Calo and CTP ROSs, which pass the data on to the EB/EF]

  49. Additional h/w for HLT Vertical Slice • Additional hardware needed: • RoIB (Mk II ?) • 1 LVL2 Supervisor • 1 LVL2 Processor • 1 pROS • DAQ + EF hardware already in use at the Test Beam: • 1 DFM • 1 SFI • 1 EF Processor • 1 SFO

  50. DCS Back-End Architecture [Diagram: a Global Control Station (GCS) with operator interface, data viewer and alarm interface, linked to the DSS, the DAQ (IS, MRS, Run Control via DDC), the LHC and magnet services (DIP, CERN DCS_IS), and the configuration and conditions databases; Subdetector Control Stations (SCS) for Pixel, SCT, TRT, LAr, Tile, MDT, TGC, RPC and (CSC); Local Control Stations (LCS) controlling HV, LV, cooling, temperatures, purity, FE crates and other hardware via OPC, CAN and PVSS]
