
LHCb front-end electronics and its interface to the DAQ

  1. LHCb front-end electronics and its interface to the DAQ

  2. Quick LHCb front-end overview
  • ~1 million detector channels.
  • 10 different sub-detector front-end implementations.
  • 40 MHz bunch crossing rate; ~1/3 of crossings have an interaction.
  • Two trigger levels in the front-end:
    • L0: 4.0 µs constant latency (pipeline buffer), max 1.11 MHz accept rate.
    • L1: variable latency, max 1900 events (event FIFO); trigger decisions distributed in chronological order; 40 (100) kHz accept rate.
  • Front-end architecture:
    • Simple front-end architecture where possible.
    • Central prevention of buffer overflows.
    • Architecture extensively simulated in VHDL to ensure correct function under all conditions.
  J.Christiansen/CERN
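
The fixed L0 latency directly sets the depth of the pipeline buffer every front-end must implement. As a back-of-the-envelope check (our arithmetic on the numbers quoted above, not a figure from the slides):

```python
# Pipeline depth implied by a 4.0 us constant L0 latency at the
# 40 MHz bunch-crossing rate: every crossing must be stored until
# its L0 decision arrives.
BUNCH_CROSSING_RATE_HZ = 40e6  # 40 MHz
L0_LATENCY_S = 4.0e-6          # 4.0 us constant latency

pipeline_depth = round(BUNCH_CROSSING_RATE_HZ * L0_LATENCY_S)
print(pipeline_depth)  # 160 bunch crossings held in the L0 pipeline
```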

  3. General architecture
  [Block diagram: the Front-End system alongside the Trigger & TFC system. The L0 buffer and L0 derandomizer feed the L1 buffer and L1 derandomizer; the L0 and L1 triggers, together with the Readout Supervisor, drive the front-end via the TTC system, with L0 and L1 throttle signals returning; zero suppression & multiplexing and an output buffer feed the DAQ.]

  4. L0 front-end
  • Constant latency: 4.0 µs.
  • Maximum 1.11 MHz trigger rate.
  • 16-event-deep L0 derandomizer.
  • Events from the L0 derandomizer defined to be max 36 words @ 40 MHz.
  • Derandomizer overflows prevented by a central emulator in the Readout Supervisor, based on a set of strictly defined front-end parameters.
  [Diagram: raw data @ 40 MHz flows through the 4 µs L0 pipeline buffer into the 16-event L0 derandomizer (36 words = 32 ch + 4 tags per event) and out as L0 data @ 1.111 MHz towards the L1 buffer; the Readout Supervisor runs the L0 derandomizer emulator and monitors the L0 throttle.]
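
The central emulation scheme above can be sketched in a few lines. This is an assumption-laden illustration (the real emulator is hardware in the Readout Supervisor, not this code): the emulator tracks the derandomizer occupancy from the defined parameters — 16 events deep, one event draining in 36 clock cycles at 40 MHz — and converts an accept into a reject whenever it would overflow the buffer.

```python
# Sketch of the L0 derandomizer emulator, clocked at the 40 MHz
# bunch-crossing rate. Parameters are the strictly defined ones
# from the slide; the code structure is ours.
DEPTH = 16            # L0 derandomizer depth in events
WORDS_PER_EVENT = 36  # 32 channels + 4 tags, read out at one word per cycle

def emulate_l0_derandomizer(accept_requests):
    """accept_requests: iterable of bools, one per 40 MHz clock cycle.
    Returns the list of accepts actually forwarded to the front-end."""
    occupancy = 0           # events in the emulated derandomizer
    readout_cycles_left = 0  # cycles remaining to read out the current event
    forwarded = []
    for wanted in accept_requests:
        # Drain side: reading one event out takes WORDS_PER_EVENT cycles.
        if readout_cycles_left == 0 and occupancy > 0:
            readout_cycles_left = WORDS_PER_EVENT  # start reading next event
        if readout_cycles_left > 0:
            readout_cycles_left -= 1
            if readout_cycles_left == 0:
                occupancy -= 1  # event fully read out
        # Fill side: forward the accept only if it cannot overflow.
        if wanted and occupancy < DEPTH:
            occupancy += 1
            forwarded.append(True)
        else:
            forwarded.append(False)
    return forwarded
```

With a burst of consecutive trigger requests, the first 16 fill the buffer and later requests are rejected until an event has drained — exactly the behaviour that central prevention of buffer overflows requires.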

  5. L1 front-end (data flow)
  [Diagram: L0 data @ 1.111 MHz enters the L1 buffer (FIFO, 1927 events, 34 words/event @ 40 MHz). L1 trigger decisions and commands arrive from the Readout Supervisor via the L1 decision spacer at max 40 (100) kHz. Accepted events pass through a reorganizer to the 15-event L1 derandomizer, then to zero suppression & multiplexing (CPUs) and the board output buffer; "nearly full" conditions raise the hardwired L1 throttle (2 µs) back to the Readout Supervisor, and the output goes to the DAQ system.]

  6. L1 front-end
  • Variable latency.
  • L1 trigger decisions distributed to the front-end in chronological order via TTC broadcast messages (for both accepts and rejects).
  • L1 buffers in front-ends implemented as simple FIFOs.
  • L1 buffer occupancy monitored centrally by the Readout Supervisor, which throttles L0 triggers when there is a risk of overflow.
  • L1 trigger decisions sent to the front-end at a rate that can be handled by all front-ends (no local buffering of trigger decisions needed).
  • 15-event-deep L1 derandomizer:
    • 3 events to cover the L1 throttle delay (2 µs).
    • 12 events for derandomization.
  • L1 derandomizer and following data buffers protected against overflow by a hardwired L1 throttle signal.
  • Zero-suppression (sparsification) and event data formatting.
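
The 3 + 12 split of the derandomizer depth translates into a simple "nearly full" rule, sketched below under our reading of the slide: the throttle must be raised while there is still room for the triggers that can arrive during the ~2 µs the hardwired signal needs to take effect.

```python
# Hedged sketch of the L1 derandomizer "nearly full" throttle logic.
# The depth and margin are the slide's numbers; the threshold rule
# is our interpretation of them.
L1_DERANDOMIZER_DEPTH = 15
THROTTLE_MARGIN = 3  # events that may still arrive during the 2 us delay
NEARLY_FULL = L1_DERANDOMIZER_DEPTH - THROTTLE_MARGIN  # assert at 12 events

def throttle_asserted(occupancy):
    """True when the hardwired L1 throttle line should be raised."""
    return occupancy >= NEARLY_FULL
```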

  7. Centralized front-end control: Readout Supervisor
  • Receives L0 and L1 trigger decisions from the trigger systems.
  • Distributes only those trigger accepts to the front-end that will not generate buffer overflows:
    • L0 derandomizer overflows prevented by the L0 derandomizer emulator.
    • L1 buffer overflows prevented by the L1 buffer emulator.
  • L1 trigger decisions spaced to match the processing speed of the front-end.
  • Buffer overflows in the L1 derandomizer and following buffers prevented by the hardwired L1 throttle network.
  • Resets, calibration signals, testing and debugging functions.
  • High level of programmability to allow system-level optimizations.
  [Diagram: the L0 and L1 trigger decisions feed the L0 derandomizer emulator and L1 buffer emulator; together with the L0 throttle, L1 throttle and L1 decision spacer they gate the triggers sent to the TTC encoder for TTC distribution.]
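
The decision-spacer function above can be illustrated as follows (function and parameter names are ours, not from the slides): decisions are forwarded in chronological order, but never closer together than the minimum interval every front-end can process, which is what removes the need for local buffering of trigger decisions.

```python
# Minimal sketch of the L1 decision spacer in the Readout Supervisor.
def space_decisions(ready_times, min_interval):
    """ready_times: sorted times (s) at which decisions become available.
    Returns the times at which each decision is actually broadcast,
    preserving chronological order and enforcing min_interval spacing."""
    sent = []
    earliest = 0.0
    for t in ready_times:
        t_send = max(t, earliest)  # wait if the previous one was too recent
        sent.append(t_send)
        earliest = t_send + min_interval
    return sent
```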

  8. Front-end control and monitoring
  • Clock-synchronous control of the front-end handled by the Readout Supervisor via the TTC system.
  • Local monitoring in front-ends of buffer overflows and event consistency, based on event tags (Bunch ID, L0 event ID, L1 event ID).
  • Error conditions set error flags in event fragments and status bits towards the Experiment Control System (ECS).
  • Front-end parameters downloaded via the ECS system (with enforced read-back capability).
  • Standardized ECS interfaces for the front-end:
    • Credit-card PC
    • SPECS (simple serial protocol)
    • CAN ELMB (from ATLAS)
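
The tag-based consistency check above amounts to comparing each fragment's event tags against the expected ones and flagging mismatches in the fragment itself. A sketch, with field names invented for illustration:

```python
# Hedged sketch of local event-consistency monitoring: a fragment whose
# tags disagree with the expected tags gets its error flag set, so the
# problem travels with the data towards the DAQ and ECS.
def check_fragment(fragment, expected):
    """fragment/expected: dicts with 'bunch_id', 'l0_event_id',
    'l1_event_id' (names are ours). Returns True if consistent."""
    tags = ("bunch_id", "l0_event_id", "l1_event_id")
    fragment["error_flag"] = any(fragment[t] != expected[t] for t in tags)
    return not fragment["error_flag"]
```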

  9. Detailed front-end architecture
  [Diagram of the detailed front-end architecture.]

  10. Interface to DAQ
  • Interface between the front-end and the DAQ system is handled by Readout Units.
  • Standardized event formatting on the optical links from the front-ends.
  • Data bandwidth per front-end branch limited to ~25 Mbytes/s under nominal conditions, to leave headroom for unexpectedly high channel occupancies and to allow an upgrade from 40 to 100 kHz trigger rate.
  [Diagram: per sub-detector, L1 front-end electronics (L1FE) feed optional front-end multiplexers (FEM), which feed Readout Units (RU) and the event-building network towards the L2/L3 CPU farm. Event format: transport header, event-building header, then for each fragment (0..N) an event data header, event data and event data trailer, followed by the event-building trailer and transport trailer.]
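
The nested event format sketched on this slide can be shown programmatically. The concrete field contents are not specified in the transcript, so plain marker strings stand in for the real headers and trailers:

```python
# Illustrative builder for the standardized event format: a transport
# envelope around an event-building envelope around the fragments.
def build_event(fragments):
    """fragments: list of per-fragment event data payloads."""
    out = ["transport_header", "event_building_header"]
    for i, data in enumerate(fragments):
        out += [f"event_data_header_{i}", data, f"event_data_trailer_{i}"]
    out += ["event_building_trailer", "transport_trailer"]
    return out
```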

  11. Data link from front-end to DAQ
  • Standardized unidirectional optical link handling distances of up to 100 m.
  • No Xon/Xoff backpressure foreseen.
  • In some sub-detectors the link transmitters are located in the cavern, with limited levels of radiation (a few krad).
  • Required bandwidth: 10–50 Mbytes/s.
  • Use of S-link enables:
    • Flexibility in the choice of link technology.
    • Use of standardized link interface cards.
  • Standardization on Gigabit Ethernet:
    • De facto standard in the computer industry.
    • Event building in the DAQ will be based on Gigabit Ethernet.
    • Many relatively cheap components available.
    • Gigabit Ethernet S-link transmitter under development at Argonne.
  • Question of framing overhead:
    • Event data is not heavily concentrated in LHCb.
    • Reduced Ethernet framing can be used on data to the Readout Units.
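
To relate the link bandwidth to the trigger rates quoted earlier, a quick arithmetic check (ours, not from the slide): the ~25 Mbytes/s nominal budget per front-end branch at the nominal 40 kHz L1 accept rate corresponds to an average fragment size per branch of:

```python
# Average bytes per accepted event per front-end branch implied by the
# ~25 Mbytes/s nominal budget and the 40 kHz nominal L1 accept rate.
BRANCH_BANDWIDTH_BYTES_S = 25e6  # ~25 Mbytes/s nominal budget
L1_ACCEPT_RATE_HZ = 40e3         # nominal 40 kHz L1 accept rate

bytes_per_event = BRANCH_BANDWIDTH_BYTES_S / L1_ACCEPT_RATE_HZ
print(bytes_per_event)  # 625.0 bytes per event per branch on average
```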

  12. LHCb front-end in numbers
  [Summary table of front-end parameters.]
