
2011 run debriefing workshop


Presentation Transcript


  1. 2011 run debriefing workshop — The Muon Group, November 8, 2011

  2. Summary
  • What Went Well
  • What Went not so Well, and planned improvements
  • Shutdown plans

  3. Muon System in 2011: What Went Well (WWW)
  • DAQ system essentially stable and running smoothly
  • Continuously and efficiently monitored, both through the ECS and the online monitoring system
  • Reaction to problems is usually prompt, thanks to the chain shift leader / data manager -> piquet -> experts
  • HV system very stable: trips (not too many, indeed!) handled smoothly

  4. WWW: dead chambers
  • Chamber mortality
  • two MWPC chambers to be replaced (0.15%)
  • two GEM chambers broken, to be replaced
  Reminder: # MWPC = 1368

  5. WWW: HV, dead electronics channels
  • HV system failures
  • CAEN system: in 2011 the situation was much more stable. We had only one very unfortunate failure, which affected the last two fills of the 2011 run.
  • UF/PNPI system: only a few failures in 2011, amounting to a couple of faulty channels.
  • Dead electronics channels
  • 2010: 46 dead channels (0.18%), 42 of them in M1
  • 2011: 31 dead channels (0.12%), 27 of them in M1
  • mainly concentrated in a few faulty FEBs
  • DAQ modules (ODE, SYNC, TELL1, etc.)
  • 1 TELL1 replaced in 2011
  • a few SYNC chips replaced in 2010-2011 (out of 3648)
  • 2 ODE boards (out of 152) replaced in 2010-2011 (GOL chip failure; both boards repaired)
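As a quick sanity check of the dead-channel fractions quoted above, the percentages imply a total of roughly 26,000 readout channels; that total is inferred from the slide's own numbers, not stated on the slide, so treat it as an assumption.

```python
# Sanity-check the dead-channel fractions quoted on the slide.
# TOTAL_CHANNELS is inferred from the quoted percentages
# (31 / 0.0012 ~ 25,800); it is NOT stated on the slide.
TOTAL_CHANNELS = 25_800

def dead_fraction(n_dead, total=TOTAL_CHANNELS):
    """Fraction of dead channels, expressed as a percentage."""
    return 100.0 * n_dead / total

print(f"2010: {dead_fraction(46):.2f}%")  # ~0.18%, matching the slide
print(f"2011: {dead_fraction(31):.2f}%")  # ~0.12%, matching the slide
```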

  6. WWW: aging
  • Aging should not be a concern yet!
  • It is not trivial to monitor:
  • Efficiency is not a good indicator (4-fold redundancy; in addition, it needs a lot of statistics to be measured)
  • Analysis in progress to measure possible effects, using two parameters:
  • Noise: perform periodic noise scans. These should in principle spot anomalous dark currents due, for example, to the Malter effect or to sparks from impurities on the wires
  • Average time shift: the average time response is roughly proportional to the gain. This method should be able to resolve relative gain variations of ΔG/G > 3-5%
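The time-shift monitor described in the last bullet can be sketched as follows. Under a linearized model (an assumption here, not spelled out on the slide), a shift of the mean hit time away from a reference value is proportional to the relative gain change; the calibration constant `K_NS_PER_UNIT_GAIN` below is a hypothetical placeholder value.

```python
import statistics

# Sketch of the time-shift gain monitor: under a linearized model
# (an assumption), the mean-time shift dt = <t> - <t_ref> maps to a
# relative gain change as  dG/G ~ -dt / k,  with k a calibration
# constant. The value of k below is hypothetical.
K_NS_PER_UNIT_GAIN = 2.0  # ns of time shift per 100% gain change (assumed)

def relative_gain_change(times_ns, ref_mean_ns, k=K_NS_PER_UNIT_GAIN):
    """Estimate dG/G from the shift of the mean hit time vs. a reference."""
    dt = statistics.mean(times_ns) - ref_mean_ns
    return -dt / k

# Example: a mean time 0.1 ns later than the reference -> ~5% gain loss
sample = [12.1, 12.0, 12.2, 12.1]
print(relative_gain_change(sample, ref_mean_ns=12.0))
```

The quoted ΔG/G sensitivity of 3-5% then translates directly into how precisely the mean time must be measured, which is why the slide notes that a lot of statistics is needed.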

  7. WWW: hardware still very stable
  • Component failure rate seems to be well under control at the moment.

  8. WWW: spare availability
  • boards already equipped:
  • CARDIAC: 12% (another 20% possible with the available components)
  • SB: 5%
  • PDM: 20%
  • ODE: 7% (another 19% possible with the available components)
  • SYNC boards: >10% (another 15% possible with the available components)
  • chips (CARIOCA/DIALOG/SYNC/GOL/TTCrx/QPLL/ELMB): ~20%
  • The HV system will be completed in 2012, and ~10% spare components will be available.
  • 1 spare TELL1 ready, with the muon firmware installed

  9. What Went not so Well (WWnsW): ODE de-synchronization
  Two kinds of errors affecting data quality are observed:
  • SYNCs losing BXid synchronization: these generate ODE errors seen in the MUON monitoring (a fast run change, automatically triggered, solves the problem).
  • ☞ Identify and replace the faulty SYNC chips
  • Full ODE de-synchronization: no error bits, nor any large anomaly in the MUON monitoring. It can be detected by the L0muon monitoring (a fast run change, automatically triggered, solves the problem).
  • ☞ Improve diagnostics (error bits in the TELL1s: source code desperately needed!); replace the most faulty ODEs; try to debug clocks, timing, etc. in the ODE.
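The first error class above amounts to a consistency check: every fragment carries a bunch-crossing ID (BXid) that must agree with the event's BXid, and a persistent mismatch flags a desynchronized SYNC. A minimal sketch of such a check, with hypothetical fragment names and a plain-dict data layout (the real monitoring runs in the online framework, not like this):

```python
# Illustrative BXid-consistency check for the SYNC desync error class.
# Fragment identifiers and the dict layout are hypothetical; the real
# check lives in the online monitoring, not in standalone Python.

def find_desynced_syncs(event_bxid, fragments):
    """Return the IDs of SYNCs whose reported BXid disagrees
    with the event-level BXid."""
    return [frag_id for frag_id, bxid in fragments.items()
            if bxid != event_bxid]

fragments = {"sync_03": 1205, "sync_04": 1205, "sync_05": 1204}
print(find_desynced_syncs(1205, fragments))  # -> ['sync_05']
```

The second error class is harder precisely because no such per-fragment error bit is raised, which is why the slide asks for better diagnostics in the TELL1 firmware.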

  10. What Went not so Well (WWnsW): dead time
  • Sizable dead time observed in M1, M2 R1/R2/R3 and M5R4 when running at high luminosity.
  • A non-negligible contribution to the dead time (especially in M2R3 and M5R4) comes from logical-channel crossing.
  • ☞ This can be reduced by shortening the FE output signal time width. Tests were performed on M5R4, reducing the width from 28 ns down to 15 ns; the results are very encouraging. A test on the whole system was performed at the end of the 2011 run; analysis is in progress.
  • ☞ Additional shielding will be put in place behind M5 to further reduce the dead-time effect in M5R4 (mainly due to backscattering from behind M5)
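Why shortening the output width helps can be seen with a back-of-envelope model (an assumption, not from the slide): for a channel firing at rate r with output width w, the probability that a new hit overlaps an already-busy window scales roughly as r × w in the Poisson regime r·w ≪ 1, so cutting the width from 28 ns to 15 ns cuts that contribution almost in half. The 1 MHz rate below is a hypothetical illustration, not a measured M5R4 rate.

```python
# Back-of-envelope dead-time model: the chance that a hit lands inside
# a busy window of width w on a channel firing at rate r is ~ r * w
# (valid for r * w << 1). The rate used here is hypothetical.

def overlap_probability(rate_hz, width_ns):
    """Approximate probability that a hit overlaps a busy window."""
    return rate_hz * width_ns * 1e-9

r = 1.0e6  # 1 MHz single-channel rate (illustrative assumption)
for w in (28, 15):
    print(f"width {w} ns -> overlap probability {overlap_probability(r, w):.2%}")
```

The ratio 15/28 ≈ 0.54 is independent of the assumed rate, which is what makes the width reduction attractive regardless of the exact occupancy.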

  11. What Went not so Well (WWnsW): the throttling crisis
  • The M1 TELL1s start throttling, producing dead time, when running at high luminosity
  • ☞ Load balancing is not enough to suppress the effect
  • ☞ A firmware modification is needed. A new firmware release is in preparation by Guido, to be tested in the coming weeks
  • ☞ As a VERY LAST resort, consider prescaling of the M1 data
  • ☞ In any case, ensure that the Muon System can run without throttling at 4×10³²

  12. Additional plans for the shutdown
  • Prepare cables and electronics for the new UF/PNPI HV modules
  • Prepare and test the software/hardware for the new USB interface to the PNPI system, made by Moscow State Univ.
  • Some minor fixes and modifications to the PVSS projects
