
JEM FDR: Design and Implementation




Presentation Transcript


  1. JEM FDR: Design and Implementation • JEP system requirements • Architecture • Modularity • Data formats • Data flow • Challenges: • Latency • Connectivity, high-speed data paths • JEM revisions • JEM 1.1 – implementation details • Daughter modules • Energy sum algorithms • FPGA resource use • Performance • Production tests

  2. JEP system requirements • Process the –4.9 < η < 4.9 region • ~32×32×2 = 2k trigger towers of Δη×Δφ = 0.2×0.2 • 9-bit input data (0–511 GeV) • 32×32 10-bit “jet elements” after em/had pre-sum • 2 multiplications per jet element: ET → (EX, EY) • 3 adder trees spanning the JEP (JEMs, CMMs) • Sliding-window jet algorithm, variable window size within a 3×3 environment • Output data to CTP: • Thresholded ET and missing ET • Jet hit count • Output data to RODs: • Intermediate results, mainly captured from module boundaries • RoI data for RoIB
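The sliding-window idea above can be sketched in a few lines of Python. This is illustrative only: the window sizes, thresholds, and local-maximum handling of the real firmware differ, and `window_sum` / `find_jets` are hypothetical names.

```python
# Sketch of a sliding-window jet search over a 2-D grid of jet-element
# ET values (hypothetical helper names; not the firmware algorithm).
def window_sum(grid, eta, phi, size):
    """Sum ET over a size x size window with top-left corner (eta, phi)."""
    return sum(grid[e][p]
               for e in range(eta, eta + size)
               for p in range(phi, phi + size))

def find_jets(grid, threshold, size=2):
    """Return (eta, phi, ET) for every window whose summed ET passes
    the threshold; a real trigger would also require a local maximum."""
    n_eta, n_phi = len(grid), len(grid[0])
    hits = []
    for eta in range(n_eta - size + 1):
        for phi in range(n_phi - size + 1):
            et = window_sum(grid, eta, phi, size)
            if et > threshold:
                hits.append((eta, phi, et))
    return hits
```

Note that neighbouring windows overlap, which is why each module needs the ‘environment’ jet elements of its neighbours (slide 5).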

  3. JEP system design considerations • Moderate data processing power • Tough latency requirements • Large number of signals to be processed → partition into parallel operating modules • Algorithm requires an environment around each jet element → high-bandwidth inter-module lanes • Data concentrator functionality, many → few • Severely pin-bound design, dominated by input connectivity • Modules • Processors (FPGAs) • Benefit from similarities to the cluster processor: • Common infrastructure (backplane) • Common serial link technology

  4. System modularity • Two crates, each processing two quadrants in φ → 32 × 8 bins (jet elements) per quadrant • η range split over 8 JEMs → 4 × 8 jet elements per JEM • Four input processors per JEM • Single jet processor per JEM • Single sum processor per JEM

  5. Replication of environment elements – system and crate level • JEM has 32 core algorithm cells • 4 × 8 jet elements • Directly mapped: 4 PPMs (em, had) → 1 JEM • JEM operates on a total of 77 jet elements including ‘environment’: 7 × 11 • Replication in φ via multiple copies of PPM output data • Replication in η via backplane fan-out

  6. JEM data formats – real-time data • JEM inputs from PPM: • Physical layer: LVDS, 10 bits, 12-bit encoded with start/stop bits • D0 odd parity bit • D(9:1) 9-bit data, D1 = LSB = 1 GeV • Jet elements to jet processor: • No parity bit • D(9:0) 10-bit data, D0 = LSB = 1 GeV • 10 data bits muxed onto 5 lines, least significant first • Energy sums to sum processor: • No parity bit • ET(11:0) 12-bit data, D0 = LSB = 1 GeV • EX(13:0) 14-bit data, D0 = LSB = 0.25 GeV • EY(13:0) 14-bit data, D0 = LSB = 0.25 GeV • JEM output to CMM: • J(23:0) 8 × 3-bit saturating jet hits sent on bottom port • J24 odd parity bit • S(23:0) 3 × 8-bit quad-linear encoded energy sums on top port • 6-bit energy • 2-bit range • Resolution 1 GeV, 4 GeV, 16 GeV, 64 GeV • S24 odd parity bit
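The PPM input word (odd parity in D0, 9-bit ET in D(9:1)) can be illustrated with a short sketch. The helper names are hypothetical, and the serial framing (start/stop bits of the 12-bit encoding) is omitted:

```python
# Sketch of the 10-bit PPM real-time data word: D(9:1) = 9-bit ET,
# D0 = odd parity over the whole word (hypothetical helper names).
def odd_parity_bit(data_bits):
    """Parity bit value that makes the total 1-count (data + parity) odd."""
    ones = bin(data_bits).count("1")
    return 0 if ones % 2 == 1 else 1

def pack_ppm_word(et_9bit):
    """Pack 9-bit ET into D(9:1) and the odd parity bit into D0."""
    word = (et_9bit & 0x1FF) << 1
    word |= odd_parity_bit(word >> 1)
    return word

def check_ppm_word(word):
    """True if the 10-bit word carries odd parity overall."""
    return bin(word & 0x3FF).count("1") % 2 == 1
```

A single flipped bit makes the overall 1-count even, so `check_ppm_word` fails, which is what lets the input processor zero corrupted channels (slide 25).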

  7. JEM data formats – readout • Physical layer: 16 bits, 20-bit encoded (CIMT, alternating flag bit, fill-frames 1A/1B, HDMP 1022 format) • Event separator: minimum of 1 fill-frame sent after each event's worth of data • All data streams odd-parity protected (serial parity) • DAQ readout: 67-word stream per L1A / slice being read out • Input data on D(14:0): 11 bits per channel – 9-bit data, 1-bit parity error, 1-bit link error • 12-bit Bcnum & 25-bit sum & 25-bit jet hits on D15 • RoI readout: 45-word stream per L1A • D(1:0): total of 8 RoIs • 2 bits location & saturation flag & 8 bits threshold passed • D2: 12-bit Bcnum • D(4:3): used on FCAL JEMs only (forward jets) • D(15:5): always zero

  8. JEM data flow • 400 Mbit/s serial input data (480 Mbit/s with protocol); multiple protocols, data speeds and signalling levels used throughout the board • Multiplexing up and down takes a considerable fraction of the latency budget • Re-synchronisation of data generally required at each chip and board boundary: • FIFO buffers • Phase adjustment with firmware-based detection • Delay scans • [Diagram: LVDS deserialiser → 40 MHz parallel → input processor → 80 Mb/s / 40 Mb/s → jet processor + readout controller, sum processor + readout controller → 40 Mb/s parallel to CMM; link PHYs carry 640 Mbit/s serial readout data (800 Mbit/s with protocol), not synchronous to the bunch clock]

  9. Challenges: latency & connectivity • Latency budget for the energy sum processor: 18.5 ticks (TDR) • Input cables: ~2 ticks • CMM: ~5 ticks • Transmission to CTP: <2 ticks • ~9.5 ticks available on the JEM from cable connector to backplane outputs to CMM • Module dimensions imposed by use of the common backplane • Large module: 9U × 40 cm • Full height of backplane used for data transmission due to high signal count → long high-speed tracks unavoidable → need terminated lines throughout → need to properly adjust timing • High input count: 88 differential cables

  10. Connectivity: high-density input cabling • 24 4-pair cable assemblies arranged in 6 blocks of 4 (2 φ bins × em, had) • Same coordinate system now on cables and crate: φ upwards, η left to right (as seen from the front) • V cable rotated • Different cabling for FCAL JEMs → re-map FCAL channels in jet FPGA firmware

  11. Connectivity: details of differential data paths • Differential 100 Ω termination at sink • 400 (480) Mbit/s input data • Use de-serialisers compatible with DS92LV1021 (LVDS signal level, not DC-balanced) • 88 signals per JEM arriving on shielded parallel pairs • Run via long cables (<15 m) and short tracks (few cm) • Require pre-compensation at the transmitting end • 640 (800) Mbit/s readout data • PECL level → electro-optical translator • HDMP1022 protocol, 16-bit mode • Use compatible low-power PHY

  12. Connectivity: details of single-ended data paths • CMOS signals • Point-to-point • 60 Ω DCI source termination throughout on all FPGAs • 40 Mb/s (25 ns) • At 1.5 V, no phase control • Energy sum path into sum processor: 40 lines per input processor • General control paths • At 2.5 V: CMM merger signals via backplane (phase adjustment at the receiving end) • 80 Mb/s (12.5 ns) at 1.5 V: jet elements • 7×11×5 bit = 385 lines into jet processor • 2×3×11×5 bit = 330 lines on backplane from/to adjacent modules • Global phase adjustment via TTCrx • All signals latched into the jet processor on the same clock edge

  13. JEM history • JEM0.0 built from Dec. 2000 • LVDS de-serialiser DS92LV1224 • 11 input processors covering one φ bin each, Spartan-2 • Main processor performing jet and energy algorithms, Virtex-E • Control FPGA, ROC, HDMP1022 PHY, coaxial output • Complete failure due to assembly company • JEM 0.x built from Dec. 2003 • Minor design corrections wrt JEM0.0 • New manufacturer (PCB / assembly) • Fully functional prototype except CAN slow control and FPGA flash configuration • TTC interface not to specs due to lack of the final TTCrx chip • Successfully tested all available functionality

  14. JEM 0 • [Board photo: 11 input processors, VME interface, 2 × HDMP1022, backplane connector, main processor, ROC, TTCrx, CAN, 88 × DS92LV1224]

  15. JEM history (2) • JEM1.0 built in 2003 • All processors Virtex-2 • Input processors on daughter modules (R, S, T, U) • LVDS de-serialiser SCAN921260 (6-channel) • 4 input processors covering three φ bins each • 1 jet processor on main board • 1 sum processor on main board • 1 board control CPLD (CC) • Readout links (PHY & opto) on daughter module (RM) • Flash configurator: SystemACE • Slow control / CAN: Fujitsu microcontroller • Successfully tested algorithms and all interfaces • Some tuning required on the SystemACE clock • CAN not to new specs (L1Calo common design)

  16. History: JEM 1.0 • [Board photo: VME, CC, RM, input modules R/S/T/U, Sum, Jet, TTC, CAN, ACE, flash, power] • JEM1.0 successfully tested: • Algorithms • All interfaces: • LVDS in • FIO inter-module links • Merger out • Optical readout • VME • CAN slow control • Tested at Mainz, the RAL slice test, and the CERN test beam

  17. JEM 1.1 • JEM1.1 in production now • Identical to JEM 1.0 except for an additional daughter module: Control Module (CM) • CAN • VME control • Fan-out of configuration lines • Expected back from assembly soon

  18. JEM details – main board • 9U × 40 cm × 2 mm, bracing bars, ESD strips, shielded backplane connector • 4 signal layers incl. top and bottom, 2 × Vcc, 4 × GND → total of 10 layers • Micro-vias on top and bottom, buried vias • All tracks controlled impedance: controlled / measured by manufacturer • Single-ended 60 Ω • Differential 100 Ω • Point-to-point links only • All hand-routed • 60 Ω DCI source termination on processors (CMOS levels) • Power distribution: • All circuitry supplied by local step-down regulators, fused 10 A (estimated maximum consumption < 5 A on any supply, 50 W total) • 10 A capacity, separate 1.5 V regulator for daughter modules • Defined ramp-up time (Virtex-2 requirement) • Staged bypass capacitors, low ESR • VME buffers scannable, 3.3 V (DTACK: open drain 3 × 24 mA), short stubs on signal lines, 20–75 mm • Vccaux for FPGAs: dedicated quiet 3.3 V • Merger signals (directly driven by processors) on 2.5 V banks • FPGA core and inter-processor and inter-module links at 1.5 V

  19. JEM details – main board (2) • Timing: • TTC signals terminated and buffered (LVPECL, DC) near backplane • TTCdec module with PLL and automatic crystal-clock backup • DESKEW1 bunch clock used as a general-purpose clock • Low-skew buffers (within TTCdec PLL loop) with series terminators • DESKEW2 clock used for phase-controlled sampling of 80 Mb/s jet element data (local & FIO) on the jet processor only • VME: • Synchronised to bunch clock • Sum processor acts as VME controller • Basic pre-configure VME access through CM • Readout located on RM (ROCs on sum and jet processors) • DCS/CAN located on CM (except PHY – near backplane) • Configuration via SystemACE / CF • P2P links to keep ringing at bay • Multiple configurations, slot-dependent choice

  20. JEM details – main board (3) • JTAG available on most active components, in separate chains: • FPGAs (through SystemACE) • Non-programmable devices on input daughters • TTCdec and Readout Module • Buffers • Control Module • JTAG used for: • Connectivity tests at the manufacturer & in Mainz • CPLD configuration • FPGA configuration (ACE)

  21. Input modules • 24 LVDS data channels per module • 12-layer PCB with micro-vias • Impedance-controlled tracks • 60 Ω single-ended • 100 Ω differential • LVDS signals entering via 100 Ω differential connector on short tracks (<1 cm) • Differential termination close to the de-serialiser • 4 × SCAN921260 6-channel de-serialisers • PLL and analogue supply voltage only (3.3 V) supplied from backplane • Digital supply from step-down regulator on main board • Reference clock supplied via FPGA • XC2V1500 input processor • 1.5 V CMOS 60 Ω DCI signals to sum and jet processors • SMBus device for Vcc and temperature monitoring (new)

  22. Readout Module RM • 2 channels, 640 Mb/s • 16-bit → 20-bit CIMT coded, fill-frame FF1, alternating flag bit, as defined in the HDMP1022 specs • 2 × PHY, 2 × SFP opto-transceivers, so far 2-layer boards • High-speed tracks <1 cm • PHYs tested: • HDMP1022 serialiser, 2.4 W/chip (reference, tested in 16-bit and 20-bit mode) • HDMP1032A serialiser, 660 mW/chip, €27.86 @ 80 pc (16-bit) • TLK1201A serdes, 250 mW/chip, < €5.00 @ 80 pc, uncoded, requires data-formatter firmware in ROC (16-bit, 20-bit) • Successfully run off the bunch clock • Converted to crystal clock due to unknown jitter situation on the ATLAS TTC clock • Problems with crystal-clock distribution to the RoI PHY (RAL, MZ) • RM seems to work with the clock linked from DAQ PHY to RoI PHY • Want a local crystal oscillator on the RM • Need new iteration of the RM (HDMP1032A, TLK1201A)

  23. Control Module CM • Combines CAN/DCS, VME pre-configure access and JTAG fan-out • CAN: • Controller to L1Calo specs now (common design for all processors, see CMM/CPM) • Link to main board via SMBus only (Vcc, temperatures) • VME CPLD (pinout error corrected): • Generating DTACK for all accesses within the module sub-address range to avoid bus timeout • Providing basic access for: • FPGA configuration via VME • Configuration reset • ACE configuration selection, slot dependent • ACE configuration selection via VME • Buffers for SystemACE-generated JTAG signals to FPGAs • TTCdec parallel initialisation (ID from geographical address)

  24. JEM cost @ 40 pcs • Main board, 10 layers: • PCB 21,393 € • Assembly 7,657 € • Components 49,725 € • Sub-total 78,775 € • Input modules, 12 layers: • PCB 11,625 € • Assembly 6,500 € • Components 43,000 € • Sub-total 61,152 € • Total 139,927 € + control + readout + SFP

  25. Energy sum algorithm • In all stages, saturate outputs if an input is saturated or arithmetic overflow occurs • Operate on 40 Mb/s data from LVDS de-serialisers: 88 channels per JEM, 9-bit ET data, parity, link error • Latch incoming data on the bunch clock, 2 samples per tick • Select the stable sample under VME control • Automatic phase detection in firmware (remove that feature?) • Delay scan (VME) • Correct for upstream latency differences, up to 3 ticks (shift register, VME controlled) • Send data to readout and spy circuitry • Zero data on parity error • Apply channel mask • Sum electromagnetic and corresponding hadronic channels into a 10-bit jet element • Multiplex jet elements to 80 Mb/s and send to jet processor and backplane
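The saturating em/had pre-sum can be sketched as below. The convention that all-ones (0x1FF for 9-bit inputs, 0x3FF for the 10-bit jet element) marks saturation is an assumption for illustration; the slide only states that saturation propagates.

```python
# Sketch of the saturating em + had pre-sum to a 10-bit jet element.
# Saturation markers are assumed to be the all-ones codes.
SAT_9 = 0x1FF    # assumed 9-bit input saturation code (511)
SAT_10 = 0x3FF   # assumed 10-bit jet-element saturation code (1023)

def presum(em, had):
    """Return the 10-bit jet element for one em/had tower pair.
    A saturated input, or an arithmetic overflow, saturates the output."""
    if em == SAT_9 or had == SAT_9:
        return SAT_10
    s = em + had
    return SAT_10 if s > SAT_10 else s
```

Since two 9-bit values sum to at most 1022, the overflow clamp here only matters as a guard; it becomes essential in the later, wider adder-tree stages.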

  26. Energy sum algorithm (2) • Threshold jet elements and sum to ET (12 bits, 1 GeV resolution) • Threshold jet elements and multiply by (cos φ, sin φ), 0.25 GeV resolution • Sum to 2 × 14-bit (EX, EY) missing-energy vector • Transmit (EX, EY, ET) to the sum processor • Calculate board-level total vector sum • Quad-linear encoding to 8 bits each: • 6-bit value and 2-bit range indicator • Resolution 1, 4, 16, 64 GeV, full scale 4 TeV • Send 25 bits of data incl. odd parity bit D(24) to the backplane

  27. FPGA resources used • Fully synchronous designs, I/O flip-flops used on all data lines • Input FPGAs XC2V1500-4FF896C: • Slice flip-flops: 27% • LUTs: 59% total • IOBs: 90% • Block RAMs: 68% • Multipliers: 50% • GCLKs: 12% • DCMs: 12% • 40.6 MHz • Sum FPGA XC2V2000-4BF957C: • Slice flip-flops: 7% • LUTs: 11% total • IOBs: 83% • Block RAMs: 12% • GCLKs: 25% • DCMs: 12% • 42.8 MHz

  28. Performance • All interfaces and the algorithms have been tested on JEM1.0 in Mainz, at the RAL slice test and in the CERN test beam. Problems revealed: • SystemACE configuration fails if the incoming clock or TCK signal is of insufficient quality: signal distortions confirmed → re-layout of crystal-clock and TCK distribution on JEM1.1 • At CERN 2 out of 4 PPR channels could not be received error-free: signal distortions confirmed → modifications required on the PPR LCD module • Errors observed on RoI readout only recently: problems with on-JEM crystal-clock distribution confirmed → re-layout of readout module, use local clock • Apart from the above problems, all interfaces and the algorithms have been shown to work error-free in all tests

  29. Test setup • Up to 3 JEMs in a 9U crate, allowing FIO tests in either direction, along with VMM, TCM, CMM (and CPMs!) • Control: Concurrent CPU on VMM or via flat cable • External data sources for: • TTC: TTCvx, TTCvi, TTCex (CERN/RAL) via TCM • LVDS: • 1 DSS, 16-channel (MZ) • Several DSS (RAL) • LSM (RAL) • PPR (CERN) (4 channels) • External data sinks for: • Merger signals: 2 CMMs (RAL) • Readout path: • Complete ROS (RAL) • G-link tester with firmware pattern comparison (MZ)

  30. Test strategies • Test the full system including all interfaces and algorithms at moderate statistics, generally using physics-like test vectors • Requires operation of a ROS and data comparison on a computer; therefore, even in relatively long test runs, very low bit error rates would go undetected • Test interfaces with firmware-based test adapters and on-JEM diagnostic firmware allowing real-time detection of pattern errors • These tests will reveal even low-level errors quickly • Choice of test patterns – consider the possible failure mechanisms: • FIO data and merger data on backplane: source-terminated lines at moderate speed, no signal dispersion expected nor observed • 800 Mb/s readout data: due to optical transmission, no dispersion expected nor observed • LVDS links: the pre-compensation circuitry is required to compensate at a single time constant only, well below a single bit period; at the receiving end a slight overshoot should be observed • No inter-symbol interference expected on any of the transmission lines; main source of errors is system noise, so any non-constant data pattern should do • Use a binary counter pattern, useful on serial links: • Has long stretches of many ones / many zeroes • Has all-ones to all-zeros transitions • Easy to detect errors
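The binary counter pattern and a simple comparison check can be sketched as follows (hypothetical helper names; the real comparison runs in firmware, not software):

```python
# Sketch of a binary counter test pattern: wrapping all-ones -> all-zeros
# gives long runs of ones/zeros plus a full transition, stressing serial
# links, while errors are trivial to localise.
def counter_pattern(width, n):
    """First n words of a wrapping binary counter of the given bit width."""
    mask = (1 << width) - 1
    return [i & mask for i in range(n)]

def first_error(sent, received):
    """Index of the first mismatching word, or -1 if the streams agree."""
    for i, (a, b) in enumerate(zip(sent, received)):
        if a != b:
            return i
    return -1
```

Because the expected next word is always predictable from the previous one, a firmware checker needs only a register and a comparator per link.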

  31. System test at RAL (slice test) • Setup with 2-stage merging in a single crate: DSS → JEM → crate CMM → system CMM, read out via ROD → ROS • Comparing readout data against simulation • ROD type: 6U modules • Data format: old format (6U-module specific) • Results (June 2004): data taken with up to 5 slices of JEM DAQ data; trigger rate up to 60 kHz, 4×10⁶ events analysed, no errors observed on JEM readout

  32. Interface tests • At RAL: playback from JEM (ramps) into CMM (parity detection); merger signals crossing 2/3 of the backplane length: no error in 10¹³ bits • In Mainz: • FIO tests with 3 JEMs (ramps, pattern comparison on the central JEM): no error in 10¹³ bits • LVDS input tests – source: DSS, 16 inputs exercised at a time, pattern comparison (ramp) in the input module: no error in 10¹⁵ bits • Readout link tests: G-link tester with pattern comparison (ramp): no error in 10¹⁴ bits (problems with crystal clock from jet processor)

  33. FIO tests: delay scan • All data latched into the jet processor on a common clock edge • Sweep TTCrx delay setting in 104 ps steps • Measure data errors on each channel: 10 bits, 5 signal lines • Single channel: 8 ns error-free • All channels: 6.5 ns error-free
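Extracting the error-free window from such a delay scan can be sketched in software (the 104 ps step comes from the slide; per-step error counts and the helper name are illustrative assumptions):

```python
# Sketch of delay-scan analysis: sweep the sampling delay in fixed steps,
# count pattern errors at each setting, and report the widest error-free
# window in picoseconds. The chosen sampling point sits in its middle.
def error_free_window(errors, step_ps=104):
    """Longest run of consecutive zero-error delay settings, in ps.
    `errors` is the per-step error count from the scan."""
    best = run = 0
    for e in errors:
        run = run + 1 if e == 0 else 0
        best = max(best, run)
    return best * step_ps
```

On the real module the scan reported 8 ns error-free per single channel and 6.5 ns with all channels active, comfortable margins within the 12.5 ns bit period of the 80 Mb/s jet-element data.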

  34. Latency • Energy path: 183 ns • Jet path: 234 ns • < 9.5 BC

  35. CERN test beam • Within a wider test setup, the following chain was available to generate / analyse JEM ‘test vectors’ based on true calorimeter signals: PPR → JEM → CMM → CTP → ROD → ROS • ROD type: 6U modules • Data received from PPR error-free on 2 channels • Readout from PPR not possible → could not verify input signal integrity except by parity error check • Energy sum signal processing verified internally

  36. Test beam results • Sum algorithm error-free (effects of quad-linear encoding visible) • [Plot: energy sum to CMM vs. input data Eem+Ehad]

  37. Production tests • Boundary scan at manufacturer: high coverage due to large fraction of scannable components → verify connectivity (static test) • Standalone tester for input module LVDS inputs, pattern comparison in firmware (high statistics) • Standalone tester for readout module, pattern comparison in firmware (high statistics) • DCI operation verified with oscilloscope (drive unterminated 50 Ω cable into scope, record pulse shape) → dynamic test • System-level tests in Mainz: 1 crate, 1 JEM supplied with LVDS data at a time, playback and spy facilities used to generate / capture data at board boundaries • FIO delay scan • High-statistics FIO BER tests, pattern detection in firmware, test a full crate at a time with maximum activity on LVDS, VME, readout • System-level tests at CERN
