
Experience with the H2 Testbeam DAQ. Sandro Ventura, INFN Padova


Presentation Transcript


  1. Experience with the H2 Testbeam DAQ. Sandro Ventura, INFN Padova

  2. ORCA: a small DAQ system based on the DAQ column
  Components of the generic DAQ column: Detector Frontend, Level-1 Trigger, Readout Units, Event Manager, Event Builder, Controls, Filter Units, Computing Services; Run Control based on the Web.
  [Diagram: hardware flavours of the column nodes.
  VME Readout Unit (OS: vxWorks, Linux): PPC CPU with PMC boards, VME ADCs/TDCs, CAMAC input via MXI/PCI, Lv-1 on TTL/NIM, Lv-2 and events over Ethernet.
  PC Readout Unit (OS: vxWorks, Linux): PII CPU, MXI input, Lv-1 on TTL/NIM, Lv-2 and events over Ethernet, IDE/SCSI output.
  PC/WKS Filter Unit (OS: Solaris, Linux): events in over Ethernet, output to Ethernet, SCSI and OODBMS.]

  3. H2 July '99 testbeam setup
  Beam instrumentation: silicon telescope, to be read through dual-port RAM snooping (old H2 DAQ system); muon chamber with 64 TDC channels; 1 PU for BTI output recording.
  Rates: ≈ 800 trig/spill (400 Hz), or 8 ktrig/spill (4 kHz) with no silicon.
  Typical sizes: ≈ 200 kB/spill, or 600 kB/spill with no silicon.
  Setup: a "parallel" DAQ system based on DAQ column components.
  Goals: provide real data to analysis through ORCA; verify the resources needed to customize the generic DAQ column; validate the building protocols; verify portability and code reusability.

  4. System components: hardware setup
  RU1: PPC 2300 with a generic PMC board and a KLOE TDC.
  RU2: PPC 2300 with a generic PMC board and an MXI connection.
  FU: Sun Ultra 5; GUI: PC; interconnect: Fast Ethernet; storage: OODBMS.
  Generic PMC board: 80 TTL I/O lines, ALTERA logic and a PLX 9080 bridge, handling the trigger, busy, veto and spill on/off signals.
  [Diagram: silicon data, VME TDCs and BTI read by RU1/RU2 over PMC and MXI; trigger, busy, veto and spill on/off lines exchanged with the H2 event builder (EB); built events sent over Ethernet to the ORCA Filter Unit.]

  5. DAQ software architecture 1999
  Run Control backbone: Java/CORBA.
  RU Manager toolbox (RUI, RUM, RUO): Lv-1 trigger over PCI, VME or Ethernet; Lv-2 over Ethernet; transports: DLPI, TCP/IP, UDP/IP, SENS (FastEth, GigaEth, MAZE/Myrinet), flat file; DMA over VME and PCI; VME, MXI and PCI interfaces; spy data flow.
  Generic DAQ loop:
    for (;;) {
      try {
        // Wait for a trigger
        *ruiTrgStream >> setl(sizeof(trigger)) >> (char *)&TBtrg;
        // Read the event
        *ruiInputStream >> setl(1) >> (char *)evt_data;
        // Write to RUM memory
        rumStream_->open(&event, vxios::write);
        *rumStream_ << setl(evt_data[0] * sizeof(int)) << (char *)evt_data;
        rumStream_->close();
      } catch (...) {
        // handle readout errors
      }
    }
  RU measured rates: RUI ≈ 100 kHz; RUI+RUO ≈ 9 kHz (256 B/ev), ≈ 6 kHz (4 kB/ev).
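The snippet above relies on the RU toolbox stream classes (ruiTrgStream, ruiInputStream, rumStream_, setl, vxios), which are not shown on the slide. As a rough, self-contained illustration of the same trigger/read/store structure, the sketch below uses hypothetical readTrigger, readEvent and writeToRUM stand-ins; it is not the actual RU code.

    #include <cstdint>
    #include <exception>
    #include <iostream>
    #include <vector>

    struct Trigger { std::uint32_t id; };   // hypothetical Lv-1 trigger record

    // Hypothetical front-end stubs; the real RU reads these over PCI, VME or Ethernet.
    Trigger readTrigger() { static std::uint32_t n = 0; return Trigger{n++}; }
    std::vector<std::uint32_t> readEvent() { return {4, 0xCAFE, 0xBEEF, 0xF00D}; } // word 0 = length
    void writeToRUM(const Trigger& trg, const std::vector<std::uint32_t>& data) {
        // In the real RU this copies the event into RU Manager memory for building.
        std::cout << "trigger " << trg.id << ": stored " << data[0] << " words\n";
    }

    int main() {
        for (int i = 0; i < 3; ++i) {          // the real loop runs forever: for (;;)
            try {
                Trigger trg = readTrigger();                    // 1. wait for a trigger
                std::vector<std::uint32_t> data = readEvent();  // 2. read the event
                writeToRUM(trg, data);                          // 3. write to RUM memory
            } catch (const std::exception& e) {
                std::cerr << "readout error: " << e.what() << '\n';
            }
        }
        return 0;
    }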

  6. DAQ software architecture 1999 (Filter Unit)
  Run Control backbone: Java/CORBA.
  FU Manager toolbox (FUI, FUM, FUO): event input over VME-MXI, UDP or TCP; output to a flat file or to the OODB; spy data flow.
  FU measured rates: RU+CFU ≈ 1 kHz (256 B/ev), ≈ 700 Hz (4 kB/ev).
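To show the output side in the same spirit, the sketch below illustrates a Filter Unit output stage choosing between a flat file and the OODB; the Output, FlatFileOutput and OodbOutput classes are hypothetical stand-ins for the FUO components, not the real toolbox API.

    #include <cstdint>
    #include <fstream>
    #include <iostream>
    #include <memory>
    #include <vector>

    struct Event { std::uint32_t number; std::vector<std::uint32_t> words; };

    struct Output {                                   // common interface for FUO back ends
        virtual void write(const Event& ev) = 0;
        virtual ~Output() = default;
    };

    struct FlatFileOutput : Output {
        std::ofstream file{"events.dat", std::ios::binary};
        void write(const Event& ev) override {
            file.write(reinterpret_cast<const char*>(ev.words.data()),
                       ev.words.size() * sizeof(std::uint32_t));
        }
    };

    struct OodbOutput : Output {                      // placeholder for the OODB path
        void write(const Event& ev) override {
            std::cout << "would store event " << ev.number << " as a DB object\n";
        }
    };

    int main(int argc, char**) {
        // Pick the back end, as the FU chose between flat-file and OODB storage.
        std::unique_ptr<Output> out =
            (argc > 1) ? std::unique_ptr<Output>(new OodbOutput)
                       : std::unique_ptr<Output>(new FlatFileOutput);
        for (std::uint32_t n = 0; n < 3; ++n)
            out->write(Event{n, {n, n + 1, n + 2}});
        return 0;
    }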

  7. System components: EVM
  No software component; hardcoded logic for the synchronization (BUSYs and VETOs); sequential super-event numbering drives the requests.
  Due to the silicon data snooping, data were collected as super events (one per spill): LVL-1 triggers were appended up to the end of the spill.
  [Diagram: full building protocol vs. the testbeam simplified protocol; FU-EVM exchanges: Spill On, Open Ev, Readout, Alloc/Clear, Spill Off, Close Ev, Confirm, Send i / Clear, Cache, Send i+1 / Clear, Cache.]
  Effective super-event rate: 1 every 14.2 s.
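The simplified per-spill building can be pictured with a short sketch: one super event is opened at Spill On, LVL-1 fragments are appended until Spill Off, and the super event is then closed and sent. The SuperEvent type and the spillIsOn/readFragment helpers below are hypothetical illustrations, not the EVM implementation.

    #include <cstdint>
    #include <iostream>
    #include <vector>

    using Fragment = std::vector<std::uint32_t>;

    struct SuperEvent {
        std::uint32_t number = 0;            // sequential super-event number drives requests
        std::vector<Fragment> fragments;     // LVL-1 triggers appended until end of spill
    };

    // Hypothetical helpers standing in for the hardware spill gate and the readout.
    bool spillIsOn(int tick)   { return tick < 5; }   // pretend each spill lasts 5 triggers
    Fragment readFragment()    { return Fragment{1, 2, 3}; }

    int main() {
        std::uint32_t superEventNumber = 0;
        for (int spill = 0; spill < 2; ++spill) {
            SuperEvent se;                       // Spill On: open the super event
            se.number = ++superEventNumber;
            for (int tick = 0; spillIsOn(tick); ++tick)
                se.fragments.push_back(readFragment());   // append each LVL-1 trigger
            // Spill Off: close, confirm and send the super event, then clear/cache
            std::cout << "super event " << se.number << " closed with "
                      << se.fragments.size() << " triggers\n";
        }
        return 0;
    }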

  8. Multi front-end integration
  The lack of a pipeline in the present testbeam front end required a revision of the EVM-RU-BU protocol to ensure proper trigger synchronization in a multi-RU setup (e.g. the integration of silicon data required spill synchronization). Options considered:
  HW-oriented sync: each RU contributes its busy to a global busy and the trigger count is incremented in hardware; a RU can lose triggers due to time-alignment problems.
  EVM trig-ID broadcast with Acknowledged Readout Invocation: every readout (or broadcast) needs to be acknowledged; dead time sums up; bandwidth-limited when the trigger rate increases because of the n acknowledgements.
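A minimal sketch of the acknowledged readout invocation idea, assuming the EVM keeps a global busy asserted until every RU has acknowledged the readout of the broadcast trigger ID; the RU class and method names are hypothetical, not the toolbox API.

    #include <cstdint>
    #include <iostream>
    #include <vector>

    struct RU {
        bool readout(std::uint32_t trigId) {   // returns the acknowledgement
            std::cout << "RU read trigger " << trigId << '\n';
            return true;
        }
    };

    int main() {
        std::vector<RU> rus(2);
        bool globalBusy = false;
        std::uint32_t trigCount = 0;

        for (std::uint32_t trig = 0; trig < 3; ++trig) {
            globalBusy = true;                  // busy asserted while the readout is in flight
            ++trigCount;                        // trig count ++
            bool allAcked = true;
            for (auto& ru : rus)                // every readout must be acknowledged,
                allAcked = ru.readout(trig) && allAcked;   // so dead time sums up per RU
            if (allAcked)
                globalBusy = false;             // clear busy only once all acks arrived
        }
        std::cout << "busy=" << globalBusy << ", triggers handled: " << trigCount << '\n';
        return 0;
    }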

  9. Multi front-end integration (continued)
  EVM trig-ID broadcast with timed-out busy: only unaccepted triggers are signalled to the EVM; if none arrives before the timeout, the busy is cleared; the trigger rate is limited.
  Independent RUs: every trigger ID is broadcast and each RU can accept or reject the trigger (empty trigger entries might be pushed onto the DPM for proper merging).
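For the independent-RU option, a toy sketch: every trigger ID is broadcast and each RU stores either a real fragment or an empty placeholder, so merging by trigger number stays aligned. The IndependentRU class and its dpm member are hypothetical stand-ins for the dual-port memory handling.

    #include <cstdint>
    #include <iostream>
    #include <vector>

    struct Fragment { std::uint32_t trigId; std::vector<std::uint32_t> data; };

    struct IndependentRU {
        std::vector<Fragment> dpm;                    // stands in for the dual-port memory
        void onTrigger(std::uint32_t trigId, bool ready) {
            if (ready)
                dpm.push_back({trigId, {0xBEEF}});    // accepted trigger: real data
            else
                dpm.push_back({trigId, {}});          // rejected trigger: empty entry
        }
    };

    int main() {
        IndependentRU ru1, ru2;
        for (std::uint32_t trig = 0; trig < 4; ++trig) {
            ru1.onTrigger(trig, true);                // ru1 always ready in this toy run
            ru2.onTrigger(trig, trig % 2 == 0);       // ru2 busy on odd triggers
        }
        // Merging stays trivial because both DPMs hold one entry per trigger ID.
        for (std::size_t i = 0; i < ru1.dpm.size(); ++i)
            std::cout << "trig " << i << ": ru1 words=" << ru1.dpm[i].data.size()
                      << ", ru2 words=" << ru2.dpm[i].data.size() << '\n';
        return 0;
    }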

  10. System components: Run Control
  Experiment Manager, GUI, logger, RU Manager and FU Manager connected through a Java/CORBA backbone carrying command, status and config messages.
  Working as a spy DAQ, the RCS did not actually provide any front-end configuration, nor any run-condition logging.

  11. First data analysis through the OODB, by Annalina Vitelli and Claudio Grandi
  Cell occupancy; drift-time boxes; chamber resolution 200 µm; efficiency 90%.

  12. System evaluation
  Performance: total throughput was not a big issue (10-100 kB/s) thanks to the spill cycle; Level-1 trigger handling was within requirements (> 500 Hz); uptime was 60% of the two-week run (mostly in single-RU configuration); half of the runs used flat-file storage only.
  Required manpower (this setup / for a new front end): customization 3 man-months / ≈ 10 days; integration 2 man-months / -; final setup debugging 3 man-weeks / probably the same.
  Major inconveniences. Bugs: quite a few were found during integration and running, both in the inherited toolbox and in our custom code; a systematic deadlock on the RUI/RUO sync hung the RU; memory leaks on the FU side; problems with software exception handling (compiler?).

  13. System evaluation
  Major inconveniences (continued)
  Inadequate RU model: the RU classes had to be modified to allow the use of specialized RUIs with different trigger handling.
  Online event display: we did not manage to have online OODB tools to spy the data flow, so the DB was filled without any check. Although raw-data spies were added, at least a rough event display (whether OO or not) to qualify the data will be a concern during future runs.
  Database population: as a consequence of the previous lack, problems with encoding raw data into DB objects gave much more trouble than they should have.
  Run Control: the Run Control System was unable to handle asynchronous error conditions; the GUI had several misbehaviours (and was too slow), so we switched to an alphanumeric user interface; ORB interoperability problems forced the RU manager to be moved away from the RU CPU.

  14. System evaluation
  Near-future steps
  RU/BU API: a major revision of the whole toolbox went through, resulting in a new software model based on remote method invocation and aimed at higher flexibility. Testing is now under way to achieve a new integration covering the needs of next July's H2 DAQ.
  Run Control: the tracked bugs have been worked around. While the architecture did not have to be modified, the new release of the RCS provides a cleaner interface between components.
  Event display: while no display can be generic enough to cover every setup, some basic general-purpose tool (e.g. a histogram server) could be embedded in the builder.

  15. System evaluation
  Future steps (continued)
  Database support: a local, lightweight database will be integrated into the system to address all the issues related to system partitioning, run configuration and bookkeeping. Among the products being evaluated are miniSQL (public domain), MySQL (Linux 6.1 distribution) and JDataStore (Borland), the last two being JDBC compliant.
