
High Speed Links


Presentation Transcript


  1. High Speed Links Francois Vasey, CERN PH-ESE • High speed links in LHC and commercial applications • On-going common projects for LHC Phase I upgrades • Towards HL-LHC • Conclusions francois.vasey@cern.ch

  2. 1. High Speed Links in LHC [Architecture diagram: front-end modules 1…N, each with a DAQ interface, connected through a switching network to CPUs; separate paths carry trigger, timing/triggers/sync and control/monitoring signals]

  3. 1.1 For instance: Link diversity in ATLAS [Table: ATLAS link types; ~15,000 links in total]

  4. 1.2 For instance: Link diversity in CMS [Table: CMS link types; ~60,000 links in total]

  5. 1.3 High Speed Optical Links in LHC • Large quantity o(100k), large diversity • Majority running @ o(1 Gbps) • From full custom to qualified COTS • Tuned to specific application and environment • Developed by independent teams • Successful adoption of technology in HEP
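The scale of the installed system can be put in perspective with a back-of-the-envelope calculation based on the order-of-magnitude figures quoted above (illustrative numbers, not exact installation counts):

```python
# Back-of-the-envelope aggregate bandwidth of the installed LHC link system.
# The counts are the order-of-magnitude figures quoted on this slide,
# not exact installation numbers.
n_links = 100_000            # o(100k) installed optical links
rate_gbps = 1.0              # o(1 Gbps) per link
aggregate_tbps = n_links * rate_gbps / 1000.0
print(f"aggregate raw bandwidth ~ {aggregate_tbps:.0f} Tbps")  # ~100 Tbps
```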

  6. 1.4 High Speed Optical Links in LHC: Lessons Learned • Increase link bandwidth • amortize system cost better • Share R&D effort • use limited resources optimally • Strengthen quality assurance programs • identify problems early • test at system level Joint ATLAS/CMS NOTE ATL-COM-ELEC-2007-001 CMS-IN-2007/066 https://edms.cern.ch/document/882775/3.8

  7. 1.5 High Speed Optical Links outside LHC (short distance) • Rapid progress driven by: • Successful standardization effort • 100 GbE standard ratified in 2010 by IEEE • Availability of hardware cores embedded in FPGAs • 50+ x 10G transceivers in modern FPGAs • Commercial availability of Multi-Source-Agreement (MSA) based hardware • Commodity 10G and 40G devices • Emerging 100G and 400G parallel optics engines • Current LAN rates @ o(10 Gbps), ramping up to 40 Gbps • Widening performance gap compared to HEP • But consider: • Specific environmental constraints and long qualification time • Long detector development time: vintage-2000 hardware in LHC • R&D is necessary to keep up with, use and develop the technology

  8. 2. On-going Development Projects for LHC Phase I Upgrades Initiatives initially aiming at a single target: SLHC • Launched in 2008, timely for phase I upgrades • Working Groups • Microelectronics User Group (MUG) • Optoelectronics Working Group (Opto WG) • Topical Workshop on Electronics for Particle Physics • Common Projects • Rad Hard Optical Link • GigaBit Transceiver (GBT) project (chip-set) • & GBT-FPGA project • Versatile Link (VL) project (opto) • & Gigabit Link Interface Board (GLIB) • Many others …

  9. 2.1 Rad Hard Optical Link Common Project • Requirements: • General • Bi-directional • High data rate: 4.8 Gbit/s • Low and constant latency (for TTC and trigger data paths) • Error detection and correction • Environment • Radiation-hard ASICs (130 nm) and radiation-qualified opto-electronics at the Front-End • Magnetic-field-tolerant devices at the Front-End • Flexible chip interface (e-links) to adapt to various architectures • Compatibility with legacy fibre plants • Back-End COTS • High-end FPGAs with embedded transceivers • GBT-FPGA firmware • Parallel optics • Commitment to deliver to the LHC experiments in 2014-2015
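The 4.8 Gbit/s figure follows directly from the GBT frame arithmetic: 120 bits are transmitted every LHC bunch-crossing clock cycle. The field widths below are the publicly documented GBT frame fields and should be checked against the current GBTX manual:

```python
# GBT frame bandwidth sketch (field widths as publicly documented for the
# GBT frame format; check the GBTX manual for the authoritative definition).
f_clk = 40e6                                  # LHC bunch-crossing clock, ~40 MHz
header, slow_ctrl, data, fec = 4, 4, 80, 32   # bits per 25 ns frame
frame_bits = header + slow_ctrl + data + fec  # 120 bits per frame
line_rate = frame_bits * f_clk                # 4.8e9 -> 4.8 Gbit/s line rate
user_rate = data * f_clk                      # 3.2e9 -> 3.2 Gbit/s user payload
print(line_rate, user_rate)
```

The gap between line rate and user rate is the price of the header, slow-control bits and forward error correction required by the radiation environment.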

  10. 2.2 Impact on System Architecture [Architecture diagram: rad-hard optical links connect the front-end interfaces through a switching network to CPUs, with trigger, timing/triggers/sync, control/monitoring and DCS network paths] • Custom development for difficult Front-End • Firmware only for FPGA-based Back-End • Evaluation platform for system tests

  11. 2.3 Optical Link Project Status: GBT ~30 Man-Years • Project started in 2008 • GBT (Serializer/Deserializer) • GBT-Serdes prototype in 2009 • GBTx in 2012 • Packaging in 2013 • 2nd iteration and production in 2014 • GBLD (Laser Driver) • Final iteration (V4.1/V5) in 2013 • GBTIA (PIN-diode Receiver) • Final iteration (V3) in 2012 • GBTSCA (Slow Control ASIC) • Final version expected in 2014 • GBT-FPGA firmware • Tracking evolution of major FPGA families • Available • Project delivers • Chipset for Front-End • GBT-FPGA Back-End firmware

  12. 2.4 Optical Link Project Status: VL ~40 Man-Years • Kick-off: April 2008 • Proof of concept: September 2009 • Feasibility demo: September 2011 • Production readiness: April 2014 • Project delivers • Custom-built rad-hard VTRx • Early delivery of rad-soft VTTx to CMS-Cal-Trig: December 2013 • Recommendations for • Fibre and connectors • Backend optics • Evaluation interface boards (GLIB) • Experiments • Design their own system • Select passive and backend components based on VL recommendations and on their own constraints

  13. 2.5 Packaging and Interconnects Status • GBT • 20x20 BGA with on-package crystal and decoupling capacitors • Supply chain: CERN <> distributor <> company <> company • 5 iterations to freeze the design • 1-4 weeks per iteration • 6 months to first prototype • Mask error, re-spin, +2 months • VL • High-speed PCB simulation and design • Injection-moulded ULTEM 2xLC connector latch and PCB support • Prototyping started in 2009, moulded parts delivered in 2013

  14. 2.6 Rad Hard Optical Link Project Effort • 6 years of development • Launched in 2008 • Delivery in 2014-15 • 6 institutes involved • CERN, FNAL, IN2P3, INFN, Oxford, SMU • Estimated 80 Man-Years + 2-3 MCHF material • One of the largest common efforts in the community

  15. 3. Towards HL-LHC • Higher data-rate • Lower power • Smaller footprint • Enhanced radiation resistance • Not to be forgotten: • Fast electrical links • Radiation-soft links • Not all features will be combined in the same link

  16. 3.1 Higher Data-Rate and Lower Power • ASICs: migrate to a more advanced technology node: ≤65 nm • Qualify the technology for the environment • Establish a stable support framework and design tools for the full duration of development • Design new ASICs taking advantage of the technology advances • Either high speed (multiply by two) • Or low power (divide by four) • Opto: qualify new components and emerging technologies • VL optics are already 10 Gbps capable • Electrical interconnects and packaging become performance limiters • Build up expertise • Train with relevant simulation and design tools • Establish relationships with selected suppliers
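The "multiply speed by two or divide power by four" alternative follows from the first-order CMOS dynamic-power relation P ∝ C·V²·f: a smaller node lowers both switched capacitance and supply voltage, and the saving can be spent on clock frequency or banked as power. A minimal sketch, using illustrative (assumed) scale factors for a 130 nm to 65 nm migration:

```python
# First-order CMOS dynamic power model, P ~ C * V^2 * f, in normalised units.
# The 130 nm -> 65 nm scale factors below are illustrative assumptions,
# not measured values for any specific process.
c_scale = 0.5                           # switched capacitance roughly halves
v_scale = (1.0 / 1.2) ** 2              # supply 1.2 V -> 1.0 V, squared
tech_factor = c_scale * v_scale         # ~0.35: power at unchanged clock frequency
double_speed_power = 2.0 * tech_factor  # double f: still below the original power
```

With these particular factors the saving at constant speed is closer to 3x than 4x; the exact figure depends on the process, the design and how much of the circuit is dominated by dynamic power.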

  17. 3.2 Smaller Footprint [Photos: VTRx and SF-VTRx modules] • GBT package size can be shrunk by limiting the number of IO pads and going to fine-pitch BGA • Will affect host board design • VTRx concept has been pushed to its size limit: SF-VTRx • Not sufficient for some tracker layouts • Tracker front-ends will need custom packaging • Industry to be approached

  18. 3.3 Enhanced Radiation Resistance [Plots: Tx and Rx radiation test results, with HL-LHC tracker fluence levels indicated] • ASICs likely to be OK • Active opto devices OK except for pixels • Tight margins • Are there alternatives for fluences beyond 10^16 cm^-2? • Reconsider passives? • Modulators

  19. 3.4 Si-Photonics, a paradigm-changing technology? • Si • is an excellent optical material with high refractive index (but indirect bandgap) • is widely available in high quality grade • can be processed with extreme precision using deep-submicron CMOS processing techniques • So, why not build a photonic circuit in a CMOS Si wafer?

  20. 3.5 Si-Photonics, status in the community [Photos: Luxtera QSFP+ module and Si-photonics chip] • Commercial devices tested • Excellent functional performance • Moderate radiation resistance, limited by controller-ASIC failure • On-going collaborations with academic and industrial partners • Simulation tools in hand • Selected building blocks under test • No usable conclusion so far, much more work needed • Packaging is challenging • Assess radiation hardness first!

  21. 3.6 Not to be forgotten • High-speed electrical data links are not obsolete! • Short-distance, on-board serial links • Aggregation to high-speed opto-hubs • Low mass, highly radiation resistant (HL-LHC pixels) • Develop expertise and tools • Detectors with modest radiation levels may not need custom front-ends • Qualify COTS and/or radiation-soft components • Shortlist recommended parts • Continuously track market evolution

  22. 4. Conclusions (1/2) • High speed links are the umbilical cords of the experiments • Meeting the HL-LHC challenge will require: • Qualifying new, emerging technologies and components • Designing electronics, interconnects, packages and perhaps even optoelectronics • Maintaining expertise, tools and facilities • Investing heavily with a few selected industrial partners • The community is healthy, but small and fragmented • Existing working groups and common projects are effective and should be continued for phase II upgrades • Additional projects and working groups could be created • WG on fast electrical links & signal integrity • WG on radiation-soft links & qualification • Exploratory project on Si-photonics for HEP applications • Manpower is the real bottleneck • Close to or below critical mass in several institutes [Sidebar keywords: Development, Design, Service, Liaison with Industry, Common projects, Working groups, People]

  23. 4. Conclusions (2/2) • Development time remains very long in comparison to industry • HL-LHC environment is unique and requires specific R&D and qualification procedures • Common building blocks are desirable, but… • … take time to be specified • … must be made available early to detector development teams • Limited manpower results in longer development time • Master schedule and requirements are evolving • 6 years, 6 institutes, 80 MY were required to reach production readiness for phase I • 2014 + 6 = 2020 • Common optical link project for HL-LHC must be started now! • Evolving from the phase I “Rad-Hard Optical Link” technological solution • Reusing and possibly expanding the existing collaboration framework • Strengthening teams and avoiding parallel efforts wherever possible • Leaving the door open to selected exploratory R&D, as long as the schedule is still fluid [Sidebar keyword: Time]

  24. Backups

  25. 1.1 Many different Link types • Readout - DAQ: • Unidirectional • Event frames • High rate • Point-to-point • Trigger data: • Unidirectional • High constant data rate • Short and constant latency • Point-to-point • Detector Control System: • Bidirectional • Low/moderate rate (“slow control”) • Bus/network or point-to-point • Timing: clock, triggers, resets • Precise timing (low jitter and constant latency) • Low latency • Fan-out network (with partitioning) • The different link types remain physically separate, each with its own specific implementation

  26. 3.5 High Speed Electrical Links
