
Level-1 Calorimeter (L1Calo) upgrade



  1. Level-1 Calorimeter (L1Calo) upgrade • Upgrade overview • Introduction to the ATLAS trigger • Upgrade phase 1 • Backplane test module • Upgrade phase 2 • High-speed optical links • Jitter cleaner • Summary

  2. Physics goals of the upgraded ATLAS • LHC = discovery machine: Higgs, SUSY, ... • sLHC = understanding what has been discovered and extending the region of discovery to higher masses • e.g. Higgs couplings to WW, ZZ, HH: check whether it is a SM Higgs • WW, ZZ scattering at high centre of mass (~1 TeV): understand EWSB • W', Z' • SUSY: if sparticles are found at the LHC, sparticle spectroscopy at the sLHC; if not, increase the search region • ...and more: results from the LHC are needed to understand which will be most important • Mass reach of Z': a factor 10 in luminosity extends the reach for Z' by 1-1.5 TeV.

  3. Peak and integrated luminosity (plots of the luminosity evolution with the phase-1 IR upgrade and the phase-2 IR upgrade plus new injectors, annotated with the Piwinski angle Φ)

  4. Motivation and upgrade plans

  5. Trigger challenges: current system • Interaction rate ~1 GHz, data recording rate ~200 Hz → overall reduction of ~5 × 10^6 • Level-1 (2.5 µs): hardware processing, only calorimeter and muon trigger chamber data with coarse granularity • Level-2 (~10 ms): software processing, RoIs (~2% of full data), full granularity, all detectors • Level-3/Event Filter (~s): software processing, full data, event reconstruction
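As a quick cross-check of the rates quoted above, the overall reduction factor follows directly from the two numbers on the slide (a minimal Python sketch):

```python
# Overall trigger reduction implied by the quoted rates.
interaction_rate = 1e9    # Hz, ~1 GHz interaction rate
recording_rate   = 200.0  # Hz, data recording rate
print(f"reduction: {interaction_rate / recording_rate:.0e}")  # -> 5e+06
```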

  6. Overview of the current level-1 calorimeter trigger system • 4 crates of 14 CPMs + 2 CMMs each: carry out the e/γ and τ algorithms (Δφ × Δη = 0.1 × 0.1); 16 thresholds (8 for e/γ and 8 for e/γ/τ) and 16 RoIs per module; 25 links to each CMM @ 40 MHz • 2 crates of 16 JEMs + 2 CMMs each: carry out the jet and sum algorithms (Δφ × Δη = 0.2 × 0.2); 8 thresholds and 8 RoIs per module; 25 links to each CMM @ 40 MHz • At that point we only have multiplicity information and no topological information.

  7. Crate configuration and algorithms • e/γ and τ/hadronic clusters • Jets and energy sums

  8. Upgrade phase one: what will happen in phase one? • LHC/beam: the interaction rate, and therefore the number of events per BC, will approximately double or triple (2-3×) • Trigger: the Level-1 trigger rate can't increase (output rate still < 100 kHz) → have to be more selective, while keeping the acceptance as high as possible • Simply raising the thresholds is no solution! (EW scale, pile-up effects?) • We have to improve the trigger algorithms, but how and where?

  9. Phase 1 (block diagram: analogue calorimeter signals (~7200) → Receivers → PreProcessor (124 modules) → digitized ET over the backplane to the Cluster Processor (56 modules, 0.1 × 0.1) and the Jet/Energy processor (32 modules, 0.2 × 0.2) → merging (8 / 4 modules) → new Topology Processor, also fed by L1 Muon → CTP; Readout Drivers (14 + 6 modules) for region-of-interest and readout data; latency) • No fundamental changes to L1Calo (only 6-8 months shutdown) • EM/τ cluster and jet identification unchanged; perhaps more thresholds? • Keep existing L1Calo trigger items: cluster/jet multiplicities; ET, missing ET, jet ET • Add topological trigger algorithms: a new subsystem using RoIs at level-1 • Example: detect overlapping features (e.g. 1 electron plus 2 jets that are NOT the electron) • Use data from the muon level-1 trigger

  10. What will the data volume be? • If we'd use the RoIs: 8 RoIs per JEM, 2 location bits and 8 threshold bits per RoI → 8 × (2 + 8) bits = 80 bits for the RoIs per JEM • Maybe send ET, Ex and Ey with full 12-bit precision (and not the scaled 6-bit): 3 × 12 bits = 36 bits for the energy sums per JEM • Total data per JEM: 80 + 36 = 116 bits to be sent every 25 ns • With 50 links on the backplane to the CMMs, the data rate needed is 4 × 40 Mbps = 160 Mbps per link • Maybe more thresholds? More multiplicities? Maybe only a subset of all RoIs? 12-bit to 8-bit scaling? → Need to understand which data is useful (Monte Carlo simulations?). A sketch of the arithmetic follows below.
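As referenced above, a minimal sketch of the arithmetic behind the 160 Mbps figure. The assumption (not stated on the slide) is that the 80 RoI bits must reach one CMM over its 25 links, which is what drives the per-link rate:

```python
import math

# Per-JEM data volume from the slide, and the per-link rate it implies.
rois_per_jem  = 8
bits_per_roi  = 2 + 8                    # 2 location bits + 8 threshold bits
roi_bits      = rois_per_jem * bits_per_roi        # 80 bits per BC
energy_bits   = 3 * 12                   # ET, Ex, Ey at full 12-bit precision
total_bits    = roi_bits + energy_bits             # 116 bits every 25 ns

links_per_cmm = 25                       # links from one JEM to one CMM
bits_per_link = math.ceil(roi_bits / links_per_cmm)  # 4 bits per link per BC
print(total_bits, "bits/BC;", bits_per_link * 40, "Mbps per link")
# -> 116 bits/BC; 160 Mbps per link
```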

  11. And what is the bandwidth on the backplane? Each CPM or JEM sources a total of 50 lines into two mergers; the total capacity per module into both mergers is:
  • 50 bits @ 40 Mb/s (25 bits of jet data)
  • 100 bits @ 80 Mb/s (75 bits of jet data) *
  • 200 bits @ 160 Mb/s (175 bits of jet data) *
  • 400 bits @ 320 Mb/s (375 bits of jet data) *
  * On the JEMs, if we do not need to increase the energy-sum data volume, any increase on both backplane links can be used for jet data, if the increase in latency on the jet-to-sum-processor path is acceptable (requires re-work of the current jet processor DAQ interface).
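The capacity figures in the list above all follow from the same relation: 50 lines, each carrying (line rate / 40 MHz) bits per bunch crossing. A short check:

```python
# Per-module backplane capacity at each candidate line rate.
lines = 50
for rate_mbps in (40, 80, 160, 320):
    bits_per_bc = lines * rate_mbps // 40   # bits per module per 25 ns
    print(f"{rate_mbps:3d} Mb/s -> {bits_per_bc} bits per module per BC")
```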

  12. Results: backplane signals (scope shots of JEM signals over the longest backplane tracks, with noise on the CMM and VME lines, comparing series termination and parallel termination at 160 Mbit/s and 320 Mbit/s, CMOS 2.5 V) • For rates ≥ 160 Mbit/s we need termination at the sink.

  13. Plan for rate tests • The rate limit on the backplane is about 160-320 Mb/s on the JEMs • We need to understand what the required bandwidth will be on CP and JEP → understand which data we want to use • We will need to do bit error rate tests on backplane data with a module able to: terminate the signals for data rates ≥ 160 Mb/s; deskew each signal line individually • Not possible with the current CMMs → build a backplane tester based on a recent FPGA family providing both termination and deskew

  14. General considerations for the backplane test module (built around a Xilinx Virtex-5 FX70T FPGA) • Data from the JEMs/CPMs: 400 lines (at the moment 2.5 V CMOS, source terminated) • Parallel termination at the sink is required: external to ½ VCCO, or internal by DCI • 1.5 V CMOS should be possible as well (variable VCCO) • Regional clocks for source-synchronous transmission (clock forwarding) • Fine-tunable per-pin deskew • VME bus (3.3 V, about 49 lines) • TTC clock jitter cleaner • The PCB is comparatively expensive → use it also to test some high-speed optical links: high-speed links up to 6.25 Gbps; SNAP12 up to 75 Gbps each • Route the remaining parallel links to a header for general usage

  15. Schematic overview (block diagram of the backplane test module: 400 backplane lines from the CPMs/JEMs into the FPGA through a ring of discrete terminators; reduced VME via CPLD and VME buffer, full VME on the backplane; JTAG buffer, header and chain; configuration via System ACE from a CF card and SPI flash memory; TTC signal → clock mux and fanout → jitter cleaner, plus a 160 MHz crystal clock, providing recovered and cleaned clocks; 16 pairs of high-speed links with additional (opto) circuitry; remaining parallel links on a header; power converters)

  16. Power dissipation • External termination at the sink: P = VCCO² / (4 · Z0) ≈ 26 mW per line → Ptotal ≈ 10.4 W for 400 lines: a lot of power dissipation (challenging for cooling) • Changing the signal standard to 1.5 V CMOS, i.e. VCCO = 1.5 V, gives Ptotal ≈ 3.8 W; but this is only possible for the JEMs, not for the CPMs • Alternative: DCI/internal termination at the sink. A sketch of the numbers follows below.
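The slide's numbers can be reproduced with a short sketch. Z0 = 60 Ω is inferred here from the quoted 26 mW at VCCO = 2.5 V; it is not stated on the slide:

```python
# Termination power for a line parallel-terminated to VCCO/2:
# P = VCCO^2 / (4 * Z0) per line, times 400 backplane lines.
def termination_power(vcco, z0=60.0, n_lines=400):
    p_line = vcco ** 2 / (4 * z0)   # W per line
    return p_line, n_lines * p_line

for vcco in (2.5, 1.5):
    p_line, p_total = termination_power(vcco)
    print(f"VCCO = {vcco} V: {p_line * 1e3:.1f} mW/line, {p_total:.1f} W total")
# -> VCCO = 2.5 V: 26.0 mW/line, 10.4 W total
# -> VCCO = 1.5 V:  9.4 mW/line,  3.8 W total
```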

  17. Overview of the BERT setup • Each module drives 1 forwarded clock and 24 data lines (1.5 V-2.5 V CMOS, 40-320 Mbit/s) over the backplane • JEMs and CPMs can be mixed (but one slot in between has to be left empty) • The JEMs/CPMs simply generate a test pattern • The data (test pattern) is sent over the backplane; 1 link out of 25 is used for clock forwarding • The backplane test module receives the data and compares it to the expected pattern • The bit errors are counted and the error counter can be read out via VME • Phase correction of the clock and data lines is also done via VME • (Crate layout: two backplane testers, a TCM and a CPU serving the JEM/CPM slots) • The test board should be ready in about 1-2 months. A software model of the comparison is sketched below.
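As mentioned above, a minimal software model of the comparison the backplane tester performs. The concrete pattern (a free-running 24-bit counter, one bit per data line) is an illustrative assumption; the slide only says the modules generate a test pattern:

```python
# The sender emits a deterministic pattern; the tester regenerates it
# locally and counts mismatching bits (the error counter read via VME).
def expected_pattern(bc):
    return bc & 0xFFFFFF              # 24 data lines -> 24-bit word per BC

def count_bit_errors(received_words):
    errors = 0
    for bc, word in enumerate(received_words):
        errors += bin(word ^ expected_pattern(bc)).count("1")
    return errors

# Example: a clean run with one corrupted bit injected in the third word.
rx = [expected_pattern(bc) for bc in range(1000)]
rx[2] ^= 0x000100                     # flip one bit
print(count_bit_errors(rx))           # -> 1
```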

  18. Data merging in phase 1 • At 160 MHz (4×): 62 Gbps per JEP quadrant, 124 Gbps per CP quadrant • Per quadrant ~250 Gbps → need for optical high-speed links!

  19. Upgrade phase 2 • Changes in the full Phase II sLHC: up to 400 pile-up events / BC; high radiation levels; new readout and electronics at the LAr and Tile calorimeters; new inner detector → finer granularity at level-1 → replace L1Calo • The sLHC will bring a huge increase in data volume, so we have to look for higher data rates → need for optical multi-gigabit links

  20. High-speed optical links: we'd like to test some optical links • Snap12 optical link module: up to 120 Gbps (12 × 10 Gbps); up to 300 m over OM3 ribbon fiber; board-edge or mid-board mounted; low power consumption (~2.5 W TX, ~1 W RX per module); 850 nm VCSEL laser array, MTP/MPO optical connector; electrical interface: 12 × 6.25 Gbps CML (compatible with Xilinx GTX transceivers) • Rocket I/O GTX transceiver: cheaply available in many Xilinx Virtex-5 FPGAs; up to 6.5 Gbps per link; up to 48 transceivers per FPGA (16 in the FX70T); requires a low-jitter clock (at 6.5 Gbps: jitter < ~20 ps) • For comparison, the current system: high-speed LVDS from the preprocessor to the JEMs: 480 Mbps; JEM readout over G-Link (optical): 800 Mbps

  21. Serializer/deserializer (diagram: on the transmit side, parallel data is loaded into a chain of flip-flops and shifted out as serial data with a multiplied bit clock; on the receive side, the serial data is shifted back into flip-flops and reframed into parallel data with the divided clock (clk div); jitter on the serial data limits the link)
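A toy model of the shift-register structure in the diagram, purely illustrative (an 8-bit word width is assumed; real GTX transceivers add encoding, clock recovery and comma alignment on top of this):

```python
WIDTH = 8  # parallel word width (assumption)

def serialize(words):
    """Parallel-load shift register: shift each word out MSB first."""
    bits = []
    for w in words:                           # parallel-clock domain
        for i in range(WIDTH):                # bit clock = WIDTH x word clock
            bits.append((w >> (WIDTH - 1 - i)) & 1)
    return bits

def deserialize(bits):
    """Serial-in shift register: reframe the stream with the divided clock."""
    words = []
    for k in range(0, len(bits), WIDTH):
        w = 0
        for b in bits[k:k + WIDTH]:
            w = (w << 1) | b
        words.append(w)
    return words

data = [0xA5, 0x3C, 0xFF]
assert deserialize(serialize(data)) == data   # round trip is lossless
```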

  22. High-speed data path and clock path (block diagram: the TTC signal goes through clock recovery and the jitter cleaner to provide a low-jitter clock for the GTX transceivers; FPGA logic / a pattern generator feeds 12 GTXs at 6.25 Gbps each; 12 × 6.25 Gbps electrical into a Snap12 (TX) module → 12 × 6.25 Gbps optical)

  23. Quality of the TTC signal (same data path as before, with a Snap12 (RX) module; scope shot of the recovered TTC clock: jitter ~800 ps @ 50%)

  24. Clock recovery • Goal: serial link at ~6.5 Gbps • The Xilinx reference clock requires jitter better than ~20 ps. The sketch below shows why this is tight.
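To put the requirement in perspective, compare the jitter budget with the unit interval at this line rate:

```python
# One unit interval (UI) at 6.5 Gbps, and the fraction a 20 ps jitter
# budget takes out of it.
line_rate = 6.5e9                # bits per second
ui = 1.0 / line_rate             # unit interval in seconds
jitter = 20e-12                  # required reference-clock jitter bound
print(f"UI = {ui * 1e12:.0f} ps, jitter budget = {jitter / ui:.1%} of UI")
# -> UI = 154 ps, jitter budget = 13.0% of UI
```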

  25. High-speed clock and data path (scope shot of the recovered clock: peak-peak jitter 840 ps, std. dev. jitter 243 ps, 2 edges!)

  26. Signal cleaned by the FPGA PLL • Input: peak-peak jitter 840 ps, std. dev. jitter 243 ps, 2 edges! • After the FPGA-internal PLL: peak-peak jitter 1240 ps, std. dev. jitter 290 ps, 1 (not so clean) edge! • So the FPGA PLL yields one rising edge, but the jitter is by far too high → need an external jitter cleaner

  27. The jitter cleaner board • Si5326 jitter attenuator (empty) • LMK03000C clock conditioner • Differential line driver (conversion LVDS to LVCMOS) • 160 MHz reference oscillator (divided by 4) • CPLD • µTCA backplane connector (interface to the board) • SMA connector (distribution of the low-jitter clock over coax) • Supply voltage and linear regulator • JTAG connector (for programming of the CPLD)

  28. The jitter cleaner board (board photo, highlighting the supply voltage and linear regulator and the LMK03000C clock conditioner)

  29. First tests with the recovered TTC clock • Before: peak-peak jitter 840 ps, std. dev. jitter 240 ps, 2 (not so clean) edges! • After: peak-peak jitter 7 ps, std. dev. jitter 890 fs, 1 very clean edge! • Improvement of a factor 120 in peak-to-peak and a factor 250 in std. dev. jitter • Phase drift from 50°C to 80°C: < 50 ps • Should be sufficient for 6.5 Gbps

  30. First tests on multi-gigabit data transmission • Use the recovered and cleaned clock to drive multi-gigabit links • Line rate: 3.2 Gb/s • Transmission medium: standard single-mode fiber with LC connector • Send known data (a ramp/counter), check it at the receiving end and count the errors • Result: no errors occurred in a measuring time of several hours → bit error ratio < 10^-13 • More tests are planned with higher data rates. The bound is checked in the sketch below.
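The quoted limit can be checked with the standard zero-error confidence bound BER < -ln(1 - CL) / N. A run length of 3 hours stands in here for "several hours" (an assumption):

```python
import math

# Upper limit on the bit error ratio after an error-free run.
line_rate = 3.2e9                 # bits per second
duration  = 3 * 3600              # seconds (assumed run length)
n_bits    = line_rate * duration  # bits transmitted without error

cl = 0.95                         # confidence level
ber_limit = -math.log(1 - cl) / n_bits
print(f"N = {n_bits:.2e} bits, BER < {ber_limit:.1e} at {cl:.0%} CL")
# -> N = 3.46e+13 bits, BER < 8.7e-14, consistent with the quoted < 1e-13
```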

  31. Summary • First studies on the SLHC and the ATLAS level-1 trigger upgrade • Bandwidth and BERT tests on the backplane are needed for phase 1 • The backplane test module is under development and will be assembled soon • Monte Carlo studies are needed to understand which data can significantly improve the trigger decision at L1Calo • For phase 2 many things are possible and still open: depends on experience with the current system; the granularity will influence the new design; the calorimeter electronics will change → L1Calo as well • Both phase 1 and phase 2 need high-speed optical links • First tests on high-speed links are very promising, but newer and faster solutions have to be tested • Measurements on the jitter cleaner to drive these links are looking good

  32. Backup

  33. Why upgrade the LHC? • Radiation damage limit • Hardware ageing • Foreseeable luminosity evolution ⇒ need for a major luminosity upgrade in ~2017 (SLHC) • (Plot: error halving time; © J. Strait)

  34. Schemes comparison (© F. Zimmermann)

  35. Latency, and Level-1 Track Trigger • If there is an L1 track trigger, hits are needed in USA15 by about t0 + 3.7 µs (my guess), to take part in the matching process before the SCTP and L1C distribution • Could be seeded by "L0A" (Calo, Muon) features at ~1.5-2 µs (my guess)
