
  1. ATLAS TDAQ upgrade proposal
     TDAQ week at CERN
     Michael Huffer, mehsys@slac.stanford.edu
     November 19, 2008

  2. Outline
     • DAQ support for next generation HEP experiments…
       • "survey the requirements and capture their commonality"
       • One size does not fit all… generic building blocks
     • the (Reconfigurable) Cluster Element (RCE)
     • the Cluster Interconnect (CI)
     • industry standard packaging: ATCA
     • Packaged solutions
       • the RCE board
       • the CI board
     • Applicability to ATLAS (the proposal):
       • motivation, scope & details
       • ROM (Read-Out-Module)
       • CIM (Cluster-Interconnect-Module)
       • ROC (Read-Out-Crate)
       • physical footprint, scaling & performance
     • Summary

  3. Three building block concepts
     • Computational elements
       • must be low-cost ($$$, footprint, power)
       • must support a variety of computational models
       • must have both flexible & performant I/O
     • Mechanism to connect these elements together
       • must be low-cost
       • must provide low-latency/high-bandwidth I/O
       • must be based on a commodity (industry) protocol
       • must support a variety of interconnect topologies: hierarchical, peer-to-peer, fan-in & fan-out
     • Packaging solution for both element & interconnect
       • must provide High Availability
       • must allow scaling
       • must support different physical I/O interfaces
       • preferably based on a commercial standard
     The corresponding building blocks:
     • the (Reconfigurable) Cluster Element (RCE): employs System-On-Chip (SOC) technology
     • the Cluster Interconnect (CI): based on 10-GE Ethernet switching
     • ATCA (Advanced Telecommunications Computing Architecture): crate-based, serial backplane

  4. The (Reconfigurable) Cluster Element (RCE)
     [Block diagram: 450 MHz PPC-405 processor (data & instruction paths) with its core, memory subsystem (512 MByte RLD-II, 128 MByte flash), configuration & boot options, reset & bootstrap options, the Data Exchange Interface (DEI), & resources: combinatorial logic, MGTs & DSP tiles]
     • Bundled software (development host):
       • GNU cross-development environment (C & C++)
       • remote (network) GDB debugger
       • network console
     • Bundled software (on the RCE):
       • bootstrap loader
       • Open Source kernel (RTEMS)
       • POSIX-compliant interfaces
       • standard IP network stack (see the sketch below)
       • exception handling support
     • Class libraries (C++) provide:
       • DEI support
       • configuration interface
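     Since the bundled RTEMS kernel exposes POSIX-compliant interfaces & a standard IP network stack, ordinary BSD-socket code runs on an RCE much as it would on a Linux development host. A minimal sketch of that idea; the peer address & port below are placeholders, not values from the talk:

```cpp
// Minimal sketch: the kind of portable BSD-socket code the RCE's
// POSIX-compliant interfaces & standard IP network stack support.
// The peer address & port are placeholders, not values from the talk.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);        // ordinary TCP socket
    if (fd < 0) { perror("socket"); return 1; }

    sockaddr_in peer{};
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(9000);                   // placeholder port
    inet_pton(AF_INET, "192.168.1.10", &peer.sin_addr); // placeholder host

    if (connect(fd, reinterpret_cast<sockaddr*>(&peer), sizeof peer) < 0) {
        perror("connect"); close(fd); return 1;
    }
    const char msg[] = "hello from an RCE\n";
    write(fd, msg, sizeof msg - 1);                  // same call on RTEMS or Linux
    close(fd);
    return 0;
}
```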

  5. Resources
     • Multi-Gigabit Transceivers (MGTs), up to 24 channels of:
       • SER/DES
       • input/output buffering
       • clock recovery
       • 8b/10b encoder/decoder
       • 64b/66b encoder/decoder
       • each channel can operate at up to 6.5 Gb/s
       • channels may be bonded together for greater aggregate speed (see the worked numbers below)
     • Combinatorial logic
       • gates
       • flip-flops (block RAM)
       • I/O pins
     • DSP support
       • contains up to 192 Multiply-Accumulate-Add (MAC) units
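     The encoding overhead matters when sizing these links: 8b/10b delivers 8 payload bits per 10 line bits, 64b/66b delivers 64 per 66. A small worked check; the 6.5 Gb/s line rate is from the slide, while the bonding factor of 4 is only an illustrative choice:

```cpp
#include <cstdio>

int main() {
    const double line_rate = 6.5;   // Gb/s per MGT channel (from the slide)

    // Payload rates after encoding overhead
    double payload_8b10b  = line_rate * 8.0 / 10.0;   // = 5.20 Gb/s
    double payload_64b66b = line_rate * 64.0 / 66.0;  // ~ 6.30 Gb/s

    // Bonding N channels scales payload linearly; 4 is illustrative only
    const int bonded = 4;
    printf("8b/10b : %.2f Gb/s/channel, %.1f Gb/s over %d bonded channels\n",
           payload_8b10b, payload_8b10b * bonded, bonded);
    printf("64b/66b: %.2f Gb/s/channel, %.1f Gb/s over %d bonded channels\n",
           payload_64b66b, payload_64b66b * bonded, bonded);
    return 0;
}
```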

  6. Derived configuration: the Cluster Element (CE)
     [Block diagram: the core flanked by combinatorial logic & MGTs on each side; eight PGP links at 3.125 Gb/s on one side, two Ethernet MACs (E0 & E1) at 1.0/2.5/10.0 Gb/s on the other]

  7. The Cluster Interconnect (CI)
     [Block diagram: two 10-GE L2 switches spanning quadrants Q0–Q3, with a management bus & an RCE managing the switches]
     • Based on two Fulcrum FM224s
       • 24-port 10-GE switch
       • is an ASIC (packaged in a 1433-ball BGA)
       • XAUI interface (supports multiple speeds, including 100Base-T, 1-GE & 2.5 Gb/s)
       • less than 24 watts at full capacity
       • cut-through architecture (packet ingress/egress < 200 ns)
       • full Layer-2 functionality (VLAN, multiple spanning trees, etc.); a toy model follows below
       • configuration can be managed or unmanaged
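     For readers less familiar with what "full Layer-2 functionality" implies: the heart of any L2 switch is a learned MAC-to-port table consulted on every frame. A deliberately toy C++ model of that learn-and-forward logic; it illustrates the concept only & models nothing FM224-specific:

```cpp
#include <cstdint>
#include <cstdio>
#include <unordered_map>

// Toy model of Layer-2 forwarding: learn the source MAC's ingress port,
// then forward on the destination MAC's learned port, else flood.
struct L2Switch {
    std::unordered_map<uint64_t, int> table;  // MAC -> egress port

    // Returns the egress port, or -1 meaning "flood to all ports"
    int handleFrame(uint64_t srcMac, uint64_t dstMac, int ingressPort) {
        table[srcMac] = ingressPort;          // learn where src lives
        auto it = table.find(dstMac);
        return it != table.end() ? it->second : -1;
    }
};

int main() {
    L2Switch sw;
    sw.handleFrame(0xAAAA, 0xBBBB, 3);        // dst unknown: would flood
    int port = sw.handleFrame(0xBBBB, 0xAAAA, 7);
    printf("frame to 0xAAAA forwarded on port %d\n", port);  // prints 3
    return 0;
}
```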

  8. Cluster Interconnect
     [Diagram: a cluster of 12 elements; traffic enters from the Front-End systems, passes through two switching fabrics & exits to the back-end systems]

  9. Why ATCA as a packaging standard?
     • An emerging telecom standard…
     • Its attractive features:
       • backplane & packaging available as a commercial solution
       • generous form factor (8U x 1.2" pitch)
       • hot-swap capability
       • well-defined environmental monitoring & control
       • emphasis on High Availability
       • external power input is low-voltage DC
         • allows for rack aggregation of power
     • Its very attractive features:
       • the concept of a Rear Transition Module (RTM)
         • allows all cabling to be on the rear (module removal without interruption of the cable plant)
         • allows separation of the data interface from the mechanism used to process that data
       • high-speed serial backplane
         • protocol agnostic
         • provision for different interconnect topologies

  10. RCE board + RTM (block diagram)
      [Block diagram: payload board with two RCEs (each with flash memory & an MFD); eight media slices (slice0–slice7) sit behind fiber-optic transceivers on the RTM, reached through P3; the RCEs' Ethernet channels E0 & E1 route to the base & fabric interfaces through P2]

  11. RCE board + RTM
      [Photograph: the RCE board & its RTM, with Zone 1 (power), Zone 2 & Zone 3, the transceivers, an RCE, the media slice controller & a media carrier with flash labeled]

  12. Cluster Interconnect board + RTM (block diagram)
      [Block diagram: CI on the payload board with switch quadrants Q0–Q3 & an MFD; 10-GE XFP ports on the RTM (reached through P3) serve the fabric & base interfaces, alongside 1-GE ports; P2 carries the base & fabric interfaces to the backplane]

  13. Cluster Interconnect board + RTM
      [Photograph: the CI board & its RTM, with the 10-GE switch, 1-G Ethernet, XFP cages, the CI's RCE & the Zone 1 & Zone 3 areas labeled]

  14. Typical (5-slot) ATCA crate
      [Photographs, front & back: fans, shelf manager & power supplies; a CI board & RCE board in front; the corresponding RCE RTM & CI RTM in back]

  15. Motivation
      • Start with the premise that ROD replacement is inevitable…
        • detector volume will scale upwards with luminosity
        • modularity of the Front-End Electronics will change
      • Replacement is an opportunity to address additional concerns…
        • longevity of existing RODs
          • long-term maintenance & support
        • many different ROD flavors
          • difficult to capture commonality & reduce duplicated effort
        • ROS ingress/egress imbalance (see the worked numbers below)
          • capable of almost 2 Gbytes/sec input
          • capable of less than 400/800 Mbytes/sec output
        • scalability
          • each added ROD requires (on average) adding one ROS/PC
          • one ROD (on average) drives two ROLs
          • one ROS/PC can process (roughly) 2 ROLs worth of data
        • physical separation adds mechanical & operational constraints
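      A quick check of the imbalance & scaling quoted above. The rates & ratios are copied from the slide; the number of added RODs is an illustrative parameter, not a number from the talk:

```cpp
#include <cstdio>

int main() {
    // Figures quoted on the slide
    const double rosInputGBs  = 2.0;  // ~2 Gbytes/sec into a ROS/PC
    const double rosOutputGBs = 0.4;  // <400 (or 800) Mbytes/sec out
    const double rolsPerRod   = 2.0;  // one ROD drives ~2 ROLs
    const double rolsPerRos   = 2.0;  // one ROS/PC handles ~2 ROLs

    printf("ingress/egress imbalance: %.0fx\n", rosInputGBs / rosOutputGBs);

    // Scaling: every added ROD needs (on average) one more ROS/PC
    const int addedRods = 10;         // hypothetical detector growth
    printf("%d new RODs -> ~%.0f new ROS/PCs\n",
           addedRods, addedRods * rolsPerRod / rolsPerRos);
    return 0;
}
```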

  16. Scope & the Integrated Read-Out System (IROS)
      • The IROS comprises:
        • ROD crates
          • RODs
          • crate controller
          • L1 distribution & control
          • "back-plane" boards
        • ROS/PC racks
          • ROS/PCs
          • ROBins
        • ROLs (between ROS & ROD)
        • "wires" connecting these components
      • Proposal calls for the replacement of the IROS…
        • intrinsic modularity of the scheme allows replacing a subset
      [Diagram: the IROS sits between the detector Front-End Electronics & the Event Builder (EB) / Level-2 (L2) trigger]
      • Upstream & downstream systems would remain the same…
      • Proposal is constructed out of three elements:
        • ROM (Read-Out-Module): combines the functionality of ROD + ROS/PC
        • CIM (Cluster-Interconnect-Module)
        • ROC (Read-Out-Crate)

  17. Read-Out-Module (ROM)
      [Block diagram: data from the detector FEE enters over the Rear Transition Module (P3) on 3.2 Gb/s links; four Cluster Elements (X4) process it & connect at 10 Gb/s to the CIM's two 10-GE switches; L1 fanout & switch management use (X2) 2.5 Gb/s links; each ROM drives (X2) 10 Gb/s links onto the ROC backplane through P2]

  18. Read-Out-Crate (ROC)
      [Block diagram: two CIMs, each carrying L1 fanout, switch management & two 10-GE switches; ROMs attach over the backplane with (X2) 10 Gb/s data links & (X2) 2.5 Gb/s management links each; (X12) 10 Gb/s links exit to L2 & Event Building; shelf management connects to monitoring & control; L1 input fans out through both CIMs]

  19. [System diagram: detector front-end electronics in UX15 (Calorimeter: LAr & TileCal; Inner Tracker: Pixel, SCT & TRT; Muon: MDT & CSC; L1 trigger: Cal, RPC, TGC, TPI & CTP) feed the IROS in USA15 over (X384) 3.2 Gb/s links; the IROS drives the switching fabric in SDX1 over (X24) 10 Gb/s links to the Event Builder & L2 farms]

  20. IROS plant footprint
      [Figure: floor-plan comparison of the current & proposed IROS footprints]

  21. Scaling & performance
      • Performance requirements as a function of luminosity-upgrade phase
        • numbers derived from Andy's upgrade talk (TDAQ week, May 2008)
        • both ROI size & number change as a function of luminosity
      • Proposal scales linearly with the number of ROMs (see the sketch below)…
        • 2 Gbytes/sec/ROM for the L2 network
        • 0.5 Gbytes/sec/ROM for the Event Building network
      • For the plug-replacement example this implies an output capacity of…
        • 270 Gbytes/sec for L2
        • 118 Gbytes/sec for Event Building
      • As a comparison, the current system has a total output capacity of…
        • 116 Gbytes/sec (8 NIC channels)
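      A small sketch of the linear-scaling claim. The per-ROM rates are from the slide; the ROM counts are hypothetical deployment sizes, not the counts behind the plug-replacement totals quoted above:

```cpp
#include <cstdio>

// Linear-scaling model from the slide: total output capacity is
// simply the per-ROM rate times the number of ROMs deployed.
int main() {
    const double l2PerRom = 2.0;  // Gbytes/sec/ROM, L2 network (slide)
    const double ebPerRom = 0.5;  // Gbytes/sec/ROM, Event Building (slide)

    const int counts[] = {50, 100, 150};  // hypothetical deployment sizes
    for (int roms : counts) {
        printf("%3d ROMs -> L2: %5.0f Gbytes/sec, EB: %5.1f Gbytes/sec\n",
               roms, roms * l2PerRom, roms * ebPerRom);
    }
    return 0;
}
```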

  22. Summary
      • SLAC is positioning itself for a new generation of DAQ…
        • strategy is based on the idea of modular building blocks:
          • inexpensive computational elements (the RCE)
          • interconnect mechanism (the CI)
          • industry standard packaging (ATCA)
        • architecture is now relatively mature
          • both demo boards (& corresponding RTMs) are functional
          • RTEMS ported & operating
          • network stack fully tested & functional
          • performance & scaling meet expectations
        • costs have been established (engineering scales):
          • ~$1K/RCE (goal is less than $750)
          • ~$1K/CI (goal is less than $750)
      • This is an outside view looking in (presumptuous + sometimes useful)
        • initiate discussion on the bigger picture
        • separate the proposal's abstraction from its implementation:
          • common substrate ROD
          • integration of ROD + ROL + ROBin functionality
      • Inherent modularity of this scheme allows piece-meal (adiabatic) replacement
        • can co-exist with the current system
      • Leverage recent industry innovation:
        • System-On-Chip (SOC)
        • high-speed serial transmission
        • low-cost, small-footprint, high-speed switching (10-GE)
        • packaging standardization (serial backplanes & RTMs)
