Upgrade of Trigger and Data Acquisition Systems for the LHC Experiments

Presentation Transcript


  1. Upgrade of Trigger and Data Acquisition Systems for the LHC Experiments. Nicoletta Garelli, CERN. XXIII International Symposium on Nuclear Electronics and Computing, 12-19 September 2011, Varna, Bulgaria

  2. Acknowledgment & Disclaimer • I would like to thank David Francis, Benedetto Gorini, Reiner Hauser, Frans Meijers, Andrea Negri, Niko Neufeld, Stefano Mersi, Stefan Stancu and all other colleagues for answering my questions and sharing ideas. • My apologies for any mistakes, misinterpretations and misunderstandings. • This presentation is far from being a complete review of all the trigger and data acquisition activities foreseen by the LHC experiments from 2013 to 2022. • I will focus on the upgrade plans of ATLAS, CMS and LHCb only.

  3. Outline • Large Hadron Collider (LHC) • today, design, beyond design • LHC experiments • design • trigger & data acquisition systems • upgrade challenges • Upgrade plans • ATLAS • CMS • LHCb

  4. LHC: a Discovery Machine • Goal: explore the TeV energy scale to find the Higgs boson & New Physics beyond the Standard Model • How: the Large Hadron Collider (LHC) at CERN, with the possibility of a steady increase of luminosity → large discovery range • [Diagram: CERN accelerator complex — PS, SPS, LHC — with the four detectors ALICE, ATLAS, CMS, LHCb] • The LHC project in brief • LEP tunnel: 27 km circumference, ~100 m underground • pp collisions, centre-of-mass energy = 14 TeV • 4 interaction points → 4 big detectors • Particles travel in bunches at ~c • Bunches of O(10¹¹) protons each • Bunch-crossing frequency: 40 MHz • Superconducting magnets cooled to 1.9 K with 140 tons of liquid He (magnetic field strength ~8.4 T) • Energy of one beam = 362 MJ (300× Tevatron)

  5. LHC: Today, Design, Beyond Design • Interventions needed to reach design conditions • β* = beam envelope at the Interaction Point (IP), determined by the magnet arrangement & powering. Smaller β* = higher luminosity • LHC can go further → higher luminosity

  6. LHC Schedule Model • Yearly schedule • operating at unexplored conditions → long way to reach design performance → need for commissioning & testing periods • one 2-month Technical Stop (TS); best period for power saving: Dec-Jan • every ~2 months of physics, a shorter TS followed by a Machine Development (MD) period is necessary • 1 month of heavy-ion running (different physics program) • Every 3 years a shutdown of at least 1 year is needed for major component upgrades • … and the experiments? • profit from LHC TS & shutdown periods for improvements & replacements • LHC drives the schedule → the experiments' schedule has to be flexible

  7. LHC: Towards Design Conditions • Don't forget that life is not always easy • Single Event Effects due to radiation • Unidentified Falling Objects (UFOs), fast beam losses • What the LHC can do as it is today: • with 50 ns spacing: nb = 1380, bunch intensity = 1.7×10¹¹, β* = 1.0 m → L = 5×10³³ cm⁻²s⁻¹ at 3.5 TeV • with 25 ns spacing: nb = 2808, bunch intensity = 1.2×10¹¹, β* = 1.0 m → L = 4×10³³ cm⁻²s⁻¹ at 3.5 TeV • Not possible to reach design performance today: • Beam energy: the joints between s/c magnets limit E to 3.5 TeV/beam • Beam intensity: collimation limits luminosity to ~5×10³³ cm⁻²s⁻¹ at E = 3.5 TeV/beam
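
As a rough cross-check of the 50 ns numbers above: with the standard luminosity formula L = f_rev·nb·N²/(4πσ*²) and σ* = √(β*·εn/γ), the quoted parameters indeed give L ≈ 5×10³³ cm⁻²s⁻¹. A minimal sketch in Python; the normalized emittance εn ≈ 2.5 μm, round beams and head-on collisions (no crossing-angle factor) are assumptions, not slide values:

    # Rough luminosity estimate for the quoted 50 ns configuration.
    # Assumed (not from the slide): eps_n ~ 2.5e-6 m, round beams, head-on.
    import math

    f_rev  = 11245.0            # LHC revolution frequency [Hz]
    n_b    = 1380               # colliding bunches (50 ns spacing)
    N      = 1.7e11             # protons per bunch
    beta_s = 1.0                # beta* at the IP [m]
    eps_n  = 2.5e-6             # normalized emittance [m rad] (assumed)
    gamma  = 3500.0 / 0.938272  # Lorentz factor at 3.5 TeV

    sigma = math.sqrt(beta_s * eps_n / gamma)          # IP beam size [m]
    L = f_rev * n_b * N**2 / (4 * math.pi * sigma**2)  # [m^-2 s^-1]
    print(f"L ~ {L * 1e-4:.1e} cm^-2 s^-1")            # ~5e33, as quoted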

  8. LHC Draft Schedule – Consolidation • 2013 long shutdown, CONSOLIDATION: fully repair the joints between s/c magnets; install magnet clamps → E = 6.5-7 TeV, L = 10³⁴ cm⁻²s⁻¹ • An electrical fault in the bus between superconducting magnets caused the 19.9.2008 accident → E limited to 3.5 TeV • After the joints are repaired, 7 TeV will be reached after dipole training: O(100) quenches/sector → O(month) of hardware commissioning

  9. LHC Upgrade Draft Schedule – Phase 1 & 2 • 2013 CONSOLIDATION: fully repair the joints between s/c magnets; install magnet clamps → E = 6.5-7 TeV, L = 10³⁴ cm⁻²s⁻¹ • A new collimation system is necessary for protection from the high losses at higher luminosity • 2017 PHASE 1: collimation upgrade; injector upgrade (Linac4) → E = 7 TeV, L = 2×10³⁴ cm⁻²s⁻¹ • 2021 PHASE 2: new bigger quadrupoles → smaller β*; new RF crab cavities → E = 7 TeV, L = 5×10³⁴ cm⁻²s⁻¹

  10. LHC Upgrade Draft Schedule • 2013 CONSOLIDATION: fully repair the joints between s/c magnets; install magnet clamps → E = 6.5-7 TeV, L = 10³⁴ cm⁻²s⁻¹ • 2017 PHASE 1: collimation upgrade; injector upgrade (Linac4) → E = 7 TeV, L = 2×10³⁴ cm⁻²s⁻¹ • 2021 PHASE 2 — the Super-LHC: new bigger quadrupoles → smaller β*; new RF crab cavities → E = 7 TeV, L = 5×10³⁴ cm⁻²s⁻¹ • 3000 fb⁻¹ by the end of 2030: ×10³ w.r.t. today

  11. LHC Experiments Design • LHC environment (design) • σ(pp, inelastic) ≈ 70 mb → event rate = 7×10⁸ Hz • Bunch Crossing (BC) every 25 ns (40 MHz) → ~22 interactions every “active” BC • 1 interesting collision is rare & always hidden within ~22 minimum-bias collisions = pile-up • Stringent requirements • fast electronics response, to resolve individual bunch crossings • high granularity (= many electronics channels), to avoid that a pile-up event(1) goes into the same detector element as the interesting event(1) • radiation resistance • (1) Event = snapshot of the values of all front-end electronics elements containing particle signals from a single BC
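
The rate and pile-up figures follow from each other; a quick back-of-envelope check (the 2808 filled bunch slots and the 11245 Hz revolution frequency are standard LHC design values, not stated on this slide):

    # Back-of-envelope check of the slide's event-rate and pile-up figures.
    sigma_inel = 70e-27           # inelastic pp cross-section [cm^2] (70 mb)
    L_design   = 1e34             # design luminosity [cm^-2 s^-1]
    rate = sigma_inel * L_design  # interaction rate [Hz]
    print(f"interaction rate = {rate:.0e} Hz")           # 7e+08 Hz, as quoted

    f_bc = 2808 * 11245.0         # rate of "active" bunch crossings [Hz]
    print(f"pile-up ~ {rate / f_bc:.0f} per active BC")  # ~22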

  12. LHC Upgrade: Effects on Experiments • Higher peak luminosity → higher pile-up • more complex trigger selection • higher detector granularity • radiation-hard electronics • Higher accumulated luminosity → radiation damage: need to replace components • sensors: the Inner Tracker in particular (~200 MCHF/experiment) • electronics? not guaranteed after 10 years of use • Challenge for the experiments: LHC luminosity ×10 higher than today after the second long shutdown (Phase 1)

  13. Interesting Physics at LHC • [Plots: total (elastic, diffractive, inelastic) cross-section of proton-proton collisions; cross-section of SM Higgs boson production. Fluegge, G. 1994, Future Research in High Energy Physics, Tech. rep.] • Find a needle (e.g. H→4μ)… in the haystack! • DESIGN: ~22 min-bias events • BEYOND DESIGN → 5× bigger haystack: ~100 min-bias events

  14. Trigger & Data Acquisition (DAQ) Systems • @ LHC nominal conditions → O(10) TB/s of data produced • mostly useless data (min-bias events) • impossible to store it all • Trigger & DAQ: select & store interesting data for analysis at O(100) MB/s • TRIGGER: select interesting events (the Higgs boson in the haystack) • DAQ: convey data to local mass storage • Network: the backbone; large Ethernet networks with O(10³) Gbit & 10-Gbit ports, O(10²) switches • Until now: high efficiency (>90%) • [Data flow: 40 MHz, O(10) TB/s → Trigger & DAQ → local storage, O(100) MB/s → CERN data storage]
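
To make the orders of magnitude concrete, a small sketch using the ~1.5 MB ATLAS event size quoted two slides below (event sizes differ between experiments, so these numbers are illustrative):

    # Illustrative numbers behind the O(10) TB/s -> O(100) MB/s reduction.
    event_size  = 1.5e6   # bytes per event (ATLAS figure, see slide 16)
    input_rate  = 40e6    # bunch-crossing rate [Hz]
    output_rate = 200.0   # events written to storage [Hz]

    print(f"raw:    {event_size * input_rate / 1e12:.0f} TB/s")      # ~60 TB/s
    print(f"stored: {event_size * output_rate / 1e6:.0f} MB/s")      # ~300 MB/s
    print(f"overall rejection: 1 in {input_rate / output_rate:.0f}") # 1 in 200000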

  15. Comparing LHC Experiments Today • ATLAS, CMS and LHCb use similar read-out links • ATLAS: partial & on-demand read-out @ Level-2 • CMS & LHCb: read out everything @ Level-1

  16. ATLAS Trigger & DAQ (today) • ATLAS event: 1.5 MB every 25 ns • Level-1 (custom hardware on calorimeter & muon detectors): decision in <2.5 μs at the 40 MHz BC rate; L1 Accept at 75 (100) kHz; defines Regions of Interest (RoIs) • Detector read-out: front-ends (FE) → Read-Out Drivers (ROD) → ReadOut System (ROS), ~112 GB/s • Level-2 (High Level Trigger): ~40 ms per event; requests only RoI data (~2% of the event) from the ROS; L2 Accept at ~3 kHz • Event Builder: SubFarm Inputs (SFI) assemble full events over the Data Collection network at ~4.5 GB/s • Event Filter: ~4 s per event on the full event; EF Accept at ~200 Hz • SubFarm Outputs (SFO) → ~300 MB/s to CERN data storage
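
The quoted bandwidths are consistent with the event size and accept rates; a minimal check (the ~2% RoI fraction is from the slide, everything else follows from it):

    # Bandwidth at each stage of the ATLAS dataflow, from the slide's figures.
    event_size = 1.5e6    # bytes
    l1_rate, l2_rate, ef_rate = 75e3, 3e3, 200.0
    roi_frac = 0.02       # Level-2 pulls only ~2% of each event (RoI data)

    print(f"ROS read-out:   {l1_rate * event_size / 1e9:6.1f} GB/s")             # ~112 GB/s
    print(f"L2 RoI traffic: {l1_rate * event_size * roi_frac / 1e9:6.1f} GB/s")  # ~2 GB/s
    print(f"event building: {l2_rate * event_size / 1e9:6.1f} GB/s")             # ~4.5 GB/s
    print(f"to storage:     {ef_rate * event_size / 1e6:6.0f} MB/s")             # ~300 MB/s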

  17. CMS Trigger & DAQ (today) • LV1 trigger HW: • custom electronics • rate from 40 MHz to 100 kHz • Event building • 1st stage based on Myrinet technology: FED-builder • 2nd stage based on TCP/IP over GbE: RU-builder • 8 independent identical DAQ slices • 100 GB/s throughput • HLT: PC farm • event-driven • rate from 100 kHz to O(100) Hz • [Data flow: detectors (40 MHz) → front-end pipelines → Level-1 trigger → read-out buffers (100 kHz) → switching networks → HLT processor farms → mass storage (100 Hz)]

  18. Experiments Challenges Beyond Design • Beyond design → new working point to be established • Higher pile-up → increased pattern-recognition problems • Impossible to change the calorimeter detectors (budget, time, manpower) • Necessary to change the inner tracker • current one damaged by radiation • need for more granularity • Level-1 @ higher pile-up → still select all interesting physics • a simple increase of pT thresholds is not possible: a lot of physics would be lost • more sophisticated decision criteria needed • move software algorithms into electronics • muon chambers → better resolution required for the trigger • add inner-tracker information to Level-1 • Longer Level-1 decision time → longer latency • More complex reconstruction in the HLT • more computing power required

  19. DAQ Challenges • Problem: • which read-out? • at which bandwidth? • which electronics? • Higher detector granularity → higher number of read-out channels → increased event size • Longer latency for Level-1 decisions → possible changes in all sub-detector read-out systems • Larger amount of data to be treated by network & DAQ • higher data rate → network upgrade to accommodate higher bandwidth needs • need for increased local data storage • Possibly higher HLT output rate, if increased global data storage (Grid) allows

  20. As of Today: Difficult Planning • Hard to plan • while maintaining running experiments • with an uncertain schedule • Upgrade plans driven by • Trigger: guarantee good & flexible selection • DAQ: guarantee high data-taking efficiency • New technologies might be needed • Trigger: new L1 trigger & more powerful HLT • DAQ: read-out links, electronics & network • To be considered • replacing some components may damage others • the new architecture must be compatible with existing components in case of a partial upgrade

  21. ATLAS: A Toroidal LHC ApparatuS

  22. ATLAS Draft Schedule – Consolidation • 2013 CONSOLIDATION (E = 6.5-7 TeV, L = 10³⁴ cm⁻²s⁻¹), TDAQ-related activities: • TDAQ farms & networks consolidation • sub-detector read-out upgrades to enable a Level-1 output of 100 kHz • The current innermost pixel layer • will have significant radiation damage & largely reduced detector efficiency • replacement needed by 2015 • Insertable B-Layer (IBL): built around a new beam-pipe & slipped inside the current detector

  23. Evolution of the TDAQ Farm • Today: architecture with many farms & network domains: • balancing CPU & network resources across 3 different farms (L2, EB, EF) requires expertise • 2 trigger-steering instances (L2, EF) • 2 separate networks (DC & EF) • huge configuration • Proposal (to be approved): merge L2, EB, EF into a single homogeneous system • each node can perform all the HLT selection steps: • L2 processing & data collection based on RoIs • event building • event-filter processing on the full event • automatic system balancing • a single HLT instance

  24. TDAQ Network Proposal • [Diagram: ROS nodes feeding the Data Collection (DC) and Event Filter (EF) networks, with SFI/SFO nodes and XPU/PU processing units] • Current network architecture: • system working well • EF core router: single point of failure • new technologies available • 2013: replacement of the cores mandatory (lifetime exceeded) • Proposal: merge the DC & EF networks → OK with new chassis • some cost reduction • perfect for the TDAQ farm evolution • mixes functionalities • reduces the scaling potential with the current TDAQ farm configuration

  25. ATLAS Upgrade Draft Schedule – Phase 1 • 2013 CONSOLIDATION (E = 6.5-7 TeV, L = 10³⁴ cm⁻²s⁻¹): TDAQ farm & network consolidation; L1 @ 100 kHz; IBL • 2017 PHASE 1 (E = 7 TeV, L = 2×10³⁴ cm⁻²s⁻¹): Level-1 upgrade to cope with the pile-up after Phase 1 • new muon detector Small Wheel (SW) • increased calorimeter granularity • Level-1 topological trigger • Fast Track Processor (FTK)

  26. New Muon Small Wheel (SW) • Muon precision chambers (CSC & MDT): performance deteriorated • need to replace them with a better detector • Exploit the new SW to also provide trigger information • today: 3 trigger stations in the barrel (RPC) & end-caps (TGC) → the new SW = 4th trigger station • reduce fakes • improve pT resolution • Level-1 track segments with 1 mrad resolution • Micromegas detectors: a new technology which could be used

  27. L1 Topological Trigger • Proposal (under discussion): additional electronics for a Level-1 trigger based on topology criteria, to keep it efficient at high luminosities: Δφ, Δη, angular distance, back-to-back, not back-to-back, mass • di-electron → low lepton pT in Z, ZZ/ZW, WW, H→WW/ZZ/ττ and multi-lepton SUSY modes • jet topology, muon isolation, … • New topological trigger processor with input from the calorimeter & muon detectors, connected to a new Central Trigger Processor • Consequence: longer latency; common tools must be developed for reconstructing topology in both muon & calorimeter detectors

  28. Fast Track Processor (FTK) • Introduce a highly parallel processor: • for the full Si-tracker • provides tracking for all L1-accepted events within O(25 μs) • reconstructs tracks >1 GeV with 90% efficiency compared to offline • track isolation for lepton selection • fast identification of b & τ jets • primary-vertex identification • Track reconstruction has 2 time-consuming stages: • pattern recognition → associative memory: recorded hit patterns are matched against pre-stored patterns, the rest are discarded (see the sketch below) • track fitting → FPGA • Runs after L1, before L2 • HLT selection software interfaces to the FTK output (tracks available earlier)
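
A toy software model of the associative-memory stage may help: each pre-stored pattern holds one coarse detector element ("superstrip") per silicon layer, and a pattern matches when every layer fired its superstrip. The pattern bank and hit sets below are purely illustrative; the real FTK evaluates all patterns in parallel in custom hardware:

    # Toy model of associative-memory pattern matching (sequential Python;
    # FTK does this in parallel in custom hardware).

    # Illustrative pre-stored pattern bank: one superstrip ID per layer.
    pattern_bank = [
        (4, 7, 2),
        (4, 8, 3),
        (5, 8, 3),
    ]

    def matched_patterns(hits_per_layer):
        """hits_per_layer: one set of fired superstrip IDs per layer."""
        return [p for p in pattern_bank
                if all(ss in hits for ss, hits in zip(p, hits_per_layer))]

    # Superstrips hit in one L1-accepted event (illustrative).
    event_hits = [{4, 9}, {7, 8}, {2, 6}]
    print(matched_patterns(event_hits))  # [(4, 7, 2)] -> seed for track fitting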

  29. ATLAS Upgrade Draft Schedule – Phase 2 • 2013 PHASE 0 (E = 6.5-7 TeV, L = 10³⁴ cm⁻²s⁻¹): reduce heterogeneity in TDAQ farms & networks • 2017 PHASE 1 (E = 7 TeV, L = 2×10³⁴ cm⁻²s⁻¹): FTK; L1 topological trigger • 2021 PHASE 2 (E = 7 TeV, L = 5×10³⁴ cm⁻²s⁻¹): • 1. Full digital read-out of the calorimeter (data & trigger) • faster data transmission • trigger access to the full calorimeter resolution (finer clusters and better electron identification) → proposed solution: fast rad-tolerant 10 Gb/s links • 2. Precision muon chambers used in the trigger logic → dismount as little as possible • 3. L1 Track Trigger

  30. Improve the L1 Muon Trigger – Phase 2 • Current muon trigger: • trigger logic assumes tracks come from the interaction point (IP) • pT resolution limited by IP smearing (Phase 2: 50 mm → ~150 mm) • MDT resolution 100 times better than the trigger chambers (RPC) → Proposal: use the precision chambers (MDT) in the trigger logic • reduce rates in the barrel • no need for a vertex assumption • improve selectivity for high-pT muons • Current limitation: MDT read-out is serial & asynchronous → Phase 2: improve MDT electronics performance (solve the latency problem) • Fast MDT read-out options: • seeded/tagged method: use information from the trigger chambers to define an RoI & only consider the small number of MDT tubes falling into the RoI; longer latency • unseeded/untagged method: stand-alone track finding in the MDT chambers; larger bandwidth required to transfer the MDT hit pattern

  31. Track Trigger – Phase 2 • Possible to introduce an L1 track trigger → keep the L1 rate @ 100 kHz • combine with the calorimeter to improve electron selection • correlate muons with tracks in the ID & reduce fake tracks • possible L1 b-tagging • L1 track trigger, self-seeded • use high-pT tracks as seeds • need fast communication to form coincidences between layers • latency of ~3 μs • L1 track trigger, RoI-seeded • need to introduce an L0 trigger to select RoIs at L1 • long ~10 μs L1 latency • New Inner Detector • silicon sensors only • better resolution, reduced occupancy • more pixel layers for b-tagging

  32. CMS: The Compact Muon Solenoid • [Event display: multi-jet event at 7 TeV]

  33. CMS Consolidation Phase • 2013 CONSOLIDATION (E = 6.5-7 TeV, L = 10³⁴ cm⁻²s⁻¹), TDAQ-related activities: • Trigger & DAQ consolidation • ×3 increase of the HLT farm processing power • replace the HW for the online DB • Muons • CMS design: space for a 4th layer of forward muon chambers (CSC & RPC) • better trigger robustness in 1.2 < |η| < 1.8 • preserve low pT thresholds

  34. CMS Upgrade Draft Schedule – Phase 1 • 2013 CONSOLIDATION (E = 6.5-7 TeV, L = 10³⁴ cm⁻²s⁻¹): Trigger & DAQ consolidation; 4th layer of muon detectors • 2017 PHASE 1 (E = 7 TeV, L = 2×10³⁴ cm⁻²s⁻¹): • new pixel detector • upgrade of the hadron calorimeter (HCAL) → silicon photomultipliers; finer segmentation of the read-out in depth • new trigger system • Event Builder & HLT farm upgrade • Phase-1 requirements & plans similar to ATLAS: • radiation damage → change the innermost silicon tracker • maintain Level-1 < 100 kHz, low latency, good selection → tracking info @ L1 + more granularity in the calorimeters • → DAQ evolution to cope with the new design

  35. CMS New Pixel Detector – Phase 1 • New pixel detector (4 barrel layers, 3 end-cap disks) • Need for replacement • radiation damage (the innermost layer might be replaced earlier) • read-out chips just adequate for L = 10³⁴ cm⁻²s⁻¹, with 4% dynamic data loss due to read-out latency & buffers → to be improved • Goals • better tracking performance • improved b-tagging capabilities • reduced material, using a new cooling system: CO2 instead of C6F14

  36. CMS New Trigger System – Phase 1 • Introduce a regional calorimeter trigger • to use the full granularity for internal processing • more sophisticated clustering & isolation algorithms to handle higher rates and complex events • New infrastructure based on μTCA for increased bandwidth, maintainability, flexibility • Muon trigger upgrade to handle additional channels, with faster FPGAs: moving from custom ASICs to powerful modern FPGAs with huge processing & I/O capability to implement more sophisticated algorithms • Advanced Telecommunications Computing Architecture (ATCA): dramatic increase in computing power & I/O

  37. CMS Upgrade Draft Schedule – Phase 2 • 2013 CONSOLIDATION (E = 6.5-7 TeV, L = 10³⁴ cm⁻²s⁻¹): Trigger & DAQ consolidation; 4th layer of muon detectors • 2017 PHASE 1 (E = 7 TeV, L = 2×10³⁴ cm⁻²s⁻¹): new pixel detector; HCAL upgrade → silicon photomultipliers; new trigger system; Event Builder & HLT farm upgrade • 2021 PHASE 2 (E = 7 TeV, L = 5×10³⁴ cm⁻²s⁻¹): • install a new tracking system → track trigger • major consolidation of the electronics systems • calorimeter end-caps • DAQ system upgrade

  38. New Tracker • R&D projects for new sensors, new front-ends, high-speed links (customized version of the GBT), tracker geometry arrangement • >200M pixels, >100M strips • Level-1 @ high luminosity → need for L1 tracking • Delivering information to Level-1 • impossible to use all channels for individual triggers • idea: exploit the strong 3.8 T magnetic field and design modules able to reject signals from low-pT particles • Different discrimination proposals to reject hits from low-pT tracks → data transmission at 40 MHz feasible: • within a single sensor, based on cluster width • correlating signals from stacked sensor pairs (~1 mm apart, ~100 μm pitch): high-pT hits pass the coincidence, low-pT hits fail (see the sketch below)
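
A rough sketch of the stacked-pair idea: in the 3.8 T field a track of transverse momentum pT bends with radius R = pT/(0.3·B), so the hit offset between the two sensors grows as pT falls, and a fixed acceptance window passes only stiff tracks. The module radius and window size below are assumed, illustrative values:

    # Rough model of the pT filter from stacked sensor pairs.
    # Assumed geometry (not from the slide): module at radius r = 25 cm,
    # sensor separation d = 1 mm, window = one ~100 um pitch unit.
    B = 3.8          # CMS solenoid field [T]
    r = 0.25         # module radius [m] (assumed)
    d = 1.0e-3       # separation of the stacked sensors [m]
    window = 100e-6  # max allowed hit offset [m] (assumed)

    def hit_offset(pt_gev):
        """Transverse offset between the pair's hits for a track of pT."""
        bend_radius = pt_gev / (0.3 * B)    # R = pT / (0.3 B), in metres
        return d * r / (2.0 * bend_radius)  # small-angle approximation

    for pt in (0.5, 1.0, 2.0, 5.0):
        off = hit_offset(pt)
        print(f"pT = {pt:3.1f} GeV: offset = {off * 1e6:5.1f} um ->",
              "pass" if off < window else "fail")
    # Only tracks above ~1.4 GeV pass with these numbers, cutting the
    # hit rate enough to make 40 MHz data transmission feasible.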

  39. LHCb: The Large Hadron Collider beauty experiment • [Event display: B0s meson → μ+μ-]

  40. LHCb Trigger & DAQ Today • Single-arm forward spectrometer (~300 mrad acceptance) for precision measurements of CP violation & rare B-meson decays • Designed to run with an average number of collisions per BC ~0.5 & nb ~ 2600 → L ~ 2×10³² cm⁻²s⁻¹; actually running with L = 3.3×10³² cm⁻²s⁻¹ • Reads out 10 times more often than ATLAS/CMS to reconstruct secondary decay vertices → very high rate of small events (~55 kB today) • L0 trigger: high efficiency on dimuon events, but removes half of the hadronic signals • All trigger candidates are stored in the raw data & compared with the offline candidates • HLT1: tight CPU constraint (12 ms); reconstructs particles in the VELO, determines the positions of the vertices • HLT2: global track reconstruction, searches for secondary vertices • [Trigger flow: 40 MHz → L0 e/γ, L0 hadron, L0 μ (hardware) → <1 MHz → HLT1 (software): high-pT tracks with IP ≠ 0 → 30 kHz → HLT2: global reconstruction, inclusive & exclusive selections → 3 kHz, event size ~35 kB]

  41. LHCb Upgrade – Phase 1 • 2011: L ~ O(150%) of design with O(35%) of the bunches • after 2017: higher rate → higher ET thresholds → even fewer hadronic signals • Interesting physics with ~50 fb⁻¹ (design: 5 fb⁻¹): • precision measurements (charm CPV, …) • searches (~1 GeV Majorana neutrinos, …) • UPGRADE NEEDED • increase the read-out to 40 MHz & eliminate the trigger limitations • the LLT will not simply reduce the rate as L0 does, but will enrich the selected sample • new VELO detector • no major changes for the muon & calorimeter systems • upgrade electronics & DAQ • data links from the detector: components from the GBT; read-out network sized for ~24 Tb/s • common back-end read-out board: TELL40, parallel optical I/Os (12 × >4.8 Gb/s), GBT-compatible • [Upgraded trigger flow: 40 MHz → LLT on calo & muon (custom electronics): pT of hadrons, μ, e/γ → 1-40 MHz → HLT (CPU farm, all sub-detectors): tracking, vertexing, inclusive/exclusive selections → 20 kHz]
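
A quick sizing check of the ~24 Tb/s figure, assuming events stay near today's ~55 kB (upgrade-era events may well be larger, eating into the margin):

    # Sizing check for the 40 MHz read-out network (~24 Tb/s quoted).
    event_size_bits = 55e3 * 8  # ~55 kB per event, in bits (today's size)
    readout_rate    = 40e6      # read out every bunch crossing [Hz]
    needed = event_size_bits * readout_rate / 1e12
    print(f"~{needed:.0f} Tb/s needed of the ~24 Tb/s network")  # ~18 Tb/s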

  42. Need for Bandwidth – Phase 2 • Read-out from the cavern to the counting room • New front-end GigaBit Transceiver (GBT) chipset • point-to-point high-speed bi-directional link to send data to/from the counting room at ~5 Gb/s • simultaneous transmission of data for the DAQ, Slow Control, and Timing, Trigger & Control (TTC) systems • robust error-correction scheme to correct errors caused by SEUs • Advanced Telecommunications Computing Architecture (ATCA) • point-to-point connections between crate modules • higher output bandwidth • Which electronics in 20 years? Will VME still be OK? Do we need ATCA functionality? • [Bandwidth comparison: front-end → read-out system via S-link at ~200 Mb/s today vs GBT at ~5 Gb/s; board-to-board over VME at ~40 Mb/s vs ATCA at ~40 Gb/s; PC links over Ethernet at 1 Gb/s vs ~40 Gb/s]

  43. Conclusion • Trigger & DAQ systems have worked extremely well until now • After the long LHC shutdown of 2017: beyond design • increased luminosity • increased pile-up • The experiments need to upgrade to work beyond design • new Inner Tracker: radiation damage & more pile-up • Level-1 trigger: more complex hardware selection & longer latency to deal with • new read-out links: higher bandwidth • scale the DAQ and network • Difficult to define an upgrade strategy as of today • unstable schedule • maintaining the current experiments • One thing is sure: the LHC experiments' upgrade will be exciting
