ABCD DAQ


Presentation Transcript


  1. ABCD DAQ Timothy Phung

  2. Topics • Introduction • Status of Software • Overall plan of the software • Hardware Setup • Threshold Scan • Details of how Threshold Scan works • Analysis of Data • Known problems or issues • Comparison of old and new system • How we should split up the software development • Further things to look into • Lessons Learned so far

  3. Introduction • We are working on a DAQ to read out several modules on a stave • The DAQ uses a digital I/O board from National Instruments (the PXI 6561), which sits in a PXI chassis • We use LabVIEW to program the board to run these tests • LabVIEW is also used for processing and analysis of the data

  4. PXI 6561 • Triggers are carried on the RSTI lines at the back of the board • The interface to the computer is through the PXI bus over an MXI-4 connection • Clock and commands are sent through the DDC connector; data is also acquired at the DDC connector • Generation and acquisition memory are independent

  5. PXI 6561 cont. • There are 16 channels of LVDS input/output • We will use 1 channel for output (commands) • The other 15 channels will later be used for acquiring data from modules • Clocks are also sent from the 6561 to the chips using the onboard clock

  6. Hardware Setup • PXI chassis with MXI-4 and the 6561 board • Power from the SC2001 support card and the SCT LV supply • Breakout panel from NI

  7. Panel Connections • Generation channel and acquisition channels • Clock goes to the chip

  8. Modules on a Stave Readout Case • Several channels of data coming out • One channel of generated data (commands) distributed to all modules • One clock signal going in to all modules [Diagram labels: data; clock and commands]

  9. ABCD chip • We read out and decode the bit streams of the ABCD chip to find the occupancy of the 128 channels; based on this we can make measurements of gain, noise, etc. • The ABCD chip has an amplifier, shaper, and comparator at the front end • We simulate particle tracks by using the calibration circuit

  10. Threshold Scan • If V_in >= V_threshold, the comparator gives a true value • If V_in < V_threshold, the comparator gives a false value • As V_threshold increases while V_in stays constant, fewer channels will respond true • An important point is the 50% point, VT_50 [Figure: comparator with inputs V_in and V_threshold; occupancy vs. V_threshold with VT_50 marked; see also the fit sketch below]
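The occupancy-versus-threshold transition is commonly modelled as a complementary error function, with VT_50 at the midpoint and the width giving the noise. A minimal Python sketch of such a fit (the numbers and the scipy-based approach are illustrative only; the actual code here is LabVIEW):

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    def s_curve(vthr, vt50, sigma):
        # Occupancy is ~1 well below VT_50, ~0 well above it, and 0.5 at VT_50;
        # sigma is the noise width that smears the transition.
        return 0.5 * erfc((vthr - vt50) / (np.sqrt(2.0) * sigma))

    vthr = np.linspace(50.0, 400.0, 15)                   # threshold scan points (mV), example values
    occ = s_curve(vthr, 170.0, 20.0) + np.random.normal(0.0, 0.01, vthr.size)

    (vt50, sigma), _ = curve_fit(s_curve, vthr, occ, p0=[150.0, 25.0])
    print(vt50, sigma)                                    # fitted VT_50 (mV) and noise width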

  11. Threshold Scan cont. • As the injected charge increases, the VT_50 value increases • Gain measurement: we fit a 2nd-order polynomial to the VT_50 values as a function of injected charge • The derivative of the curve at each charge point gives us the gain at that point (see the sketch below) [Figures: S-curves (occupancy vs. V_threshold) for increasing charge; VT_50 vs. injected charge]
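As an illustration of the gain measurement just described, a short Python sketch with invented charge and VT_50 values: fit a 2nd-order polynomial to VT_50 versus injected charge and evaluate its derivative at a charge point.

    import numpy as np

    charge = np.array([1.5, 2.0, 3.0, 4.0])         # injected charge (fC), example values
    vt50 = np.array([85.0, 110.0, 158.0, 202.0])    # measured VT_50 (mV), example values

    poly = np.poly1d(np.polyfit(charge, vt50, 2))   # 2nd-order fit of VT_50 vs. charge
    gain = np.polyder(poly)                         # derivative of the fit = gain (mV/fC)
    print(gain(2.0))                                # gain at the 2 fC point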

  12. Status of Software • Currently working on noise occupancy

  13. Overall Plan of Software • Each test follows the same basic structure: • Configure the PXI 6561 hardware (clocks, channels, resource names, etc.) • Configure the generation session; write the waveform and scripts • Acquire data in a loop • In parallel, parse the raw data (bit streams from the ABCD chip) • Write a file (.xls or .bin) of raw data and of occupancy as a function of scan parameter and channel number

  14. Overall Plan of Software cont. • Raw data files in .xls or .bin format are read in and analysis is performed • We plan to use the reporting feature in LabVIEW [Diagram: Raw Data Files -> Analysis -> Reports, plus Plots/Histograms on the Front Panel]

  15. Example Program: Threshold Scan • Block diagram stages: hardware configuration, writing waveforms and scripts, data acquisition, and data processing • A queue structure allows execution on multiple threads, so acquisition and processing can occur in parallel

  16. Configure Hardware • Configure the resource and channels • The stream delay is for now left at a default value of 0.5 * 25 ns = 12.5 ns • The clock is at 40 MHz • The onboard clock is exported to the chip through the DDC connector

  17. Configuration of Triggers • Triggers are sent along the RSTI lines at the back of the 6561; there are 4 trigger lines • The triggers are set up to do multirecord acquisition • The events and triggers used are: Ready to Start Event, Ready to Advance Event, Start Trigger, and Advance Trigger

  18. Scripts • Part of the threshold scan script: • Wait for the ready-to-start event • Generate configuration "X""Y", where X = cal line and Y = threshold value rounded to a decimal integer • Generate trigger: marker0(160) is the command sequence with the level 1 trigger; the marker event starts the multirecord acquisition • Repeat for (N-1), where N = number of (triggers/4): wait for ScriptTrigger1 to get a ready-to-advance event, then generate trigger marker1(160) to advance to the next record

    script myScript1
      Wait until ScriptTrigger0
      Generate config050
      Generate trigger marker0(160)
      Repeat 999
        Wait until ScriptTrigger1
        Generate trigger marker1(160)
      end repeat
      Generate config150
      Generate trigger marker0(160)
      Repeat 1000
        Wait until ScriptTrigger1
        Generate trigger marker1(160)
      end repeat
      Generate config250
      Generate trigger marker0(160)
      Repeat 1000
        Wait until ScriptTrigger1
        Generate trigger marker1(160)
      end repeat
      Generate config350
      Generate trigger marker0(160)
      Repeat 1000
      ...

  19. Multirecord Acquisition • For a threshold scan, we acquire 4000 records with 3500 samples in each record, to roughly account for the maximum data packet length that comes at each threshold step • There is one trigger from a level 1 command to start the multirecord acquisition, and subsequent level 1 commands are used to advance to the next record • These are exported as marker events in the script

  20. Multirecord Acquisition cont. • We set up multirecord acquisition, then fetch records inside a loop and place them into a queue • Another loop pulls the data out of the queue and parses it (the pattern is sketched below)
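The fetch/parse structure is a standard producer/consumer pattern. A minimal sketch of the same idea in Python (the real code is LabVIEW; fetch_next_record and decode_record are hypothetical stand-ins for the hardware fetch and the bit-stream parser):

    import queue
    import threading

    N_RECORDS = 4000                 # one record per trigger for a threshold scan
    record_queue = queue.Queue()     # FIFO between the fetch loop and the parse loop

    def fetch_next_record():
        # Stand-in for the multirecord fetch from the 6561 (3500 samples per record).
        return bytes(3500)

    def decode_record(record):
        # Stand-in for the bit-stream parser that fills the occupancy histograms.
        pass

    def fetch_loop():
        # Producer: fetch one record per trigger and hand it to the parser.
        for _ in range(N_RECORDS):
            record_queue.put(fetch_next_record())
        record_queue.put(None)       # sentinel: acquisition finished

    def parse_loop():
        # Consumer: pull records off the queue and decode them as they arrive.
        while True:
            record = record_queue.get()
            if record is None:
                break
            decode_record(record)

    threading.Thread(target=fetch_loop).start()
    parse_loop()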

  21. Processing of Data • Processing of data occurs by pulling data out of the queue and into a subroutine that does the parsing • The parsing is done using bitwise operations in LabVIEW (one such operation is sketched below) • For now the parsing is able to keep up with the fetching of data, as the queue size stays at zero throughout the entire scan • For the case of many modules, the parsing may take longer than the fetching of data and the queue may build up • We should look into faster and more efficient bit-stream parsing algorithms
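As an example of such a bitwise operation, a hedged Python sketch that locates the 15-bit trailer pattern <1000,0000,0000,000> (see slide 29) in a record so the parser can stop early; the 0/1-per-sample representation is an assumption, and this is not the full ABCD packet decoding:

    TRAILER_LEN = 15
    TRAILER = 1 << (TRAILER_LEN - 1)     # binary 1000 0000 0000 000
    MASK = (1 << TRAILER_LEN) - 1

    def find_trailer(bits):
        # bits: a sequence of 0/1 samples from one acquisition record.
        # Slide a 15-bit window along the stream with shifts and a mask and
        # return the index where the trailer starts, or -1 if it is absent.
        window = 0
        for i, b in enumerate(bits):
            window = ((window << 1) | (b & 1)) & MASK
            if i >= TRAILER_LEN - 1 and window == TRAILER:
                return i - TRAILER_LEN + 1
        return -1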

  22. Results from this system • Threshold scan from 50 to 400 mV for 1.5 fC injected charge • The plots show the occupancy as a function of channel number and threshold, the occupancy at 50 mV, and the S-curve for one channel

  23. Results continued • A strobe delay plot • Occupancy as a function of the channel number and strobe delay setting

  24. Analysis of Data: Finding the VT_50 Point • We have two cases, rising edges and falling edges • Currently, I am using the "sparsification threshold measurement" approach to find the VT_50 values (an illustrative crossing finder is sketched below) • The VT_50 values are given below • The plots to the right are for the threshold scan and strobe delay
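The "sparsification threshold measurement" routine itself is a LabVIEW VI; purely to illustrate the rising- and falling-edge cases, here is a simple Python crossing finder that interpolates the 50% point between neighbouring scan points (a sketch, not the actual routine):

    def find_crossing(x, occ, level=0.5, falling=True):
        # Return the x value (threshold or strobe delay) where the occupancy
        # crosses `level`, by linear interpolation between the two bracketing
        # scan points. falling=True handles the usual threshold-scan edge;
        # falling=False handles a rising edge, e.g. in a strobe-delay scan.
        for i in range(len(x) - 1):
            lo, hi = occ[i], occ[i + 1]
            bracketed = (lo >= level >= hi) if falling else (lo <= level <= hi)
            if bracketed and lo != hi:
                return x[i] + (level - lo) * (x[i + 1] - x[i]) / (hi - lo)
        return None                      # no crossing found in this scan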

  25. Analysis cont. • Other analysis routines are just statistical calculations or fits to curves • I use the built-in LabVIEW routines to perform these

  26. Speed, Optimization • The threshold scan for the case of 50 mV to 450 mV with 1.5 fC of injected charge is much slower than on the old system • On a fast dual-processor Xeon computer, it is slower by a factor of 2 • The old system takes just 1 minute to do a threshold scan; the new system takes 2 minutes on a fast computer

  27. Speed Bottlenecks • Using the Profile VIs feature, it has been identified that the transfer of data from the PXI 6561 to the host PC and the parsing of data are the most time-consuming parts of the program

  28. Differences between old and new system • The old system parses data in hardware; we parse in software • Also, we wait a fixed amount of time between triggers, while the old system decreases the time between triggers as the occupancy decreases [Photo: the MUSTARD VME readout card of the old system]

  29. Relative Rates of Fetching Data and Parsing Data • The fetching rate stays the same throughout, since we always fetch the same amount of data • The data parsing rate decreases, because the parsing occurs in a while loop that stops when the trailer bit stream <1000,0000,0000,000> is found; as we move the threshold out, this is found sooner since there is less occupancy [Plot: rate vs. time (increasing threshold as the test progresses)]

  30. Time Between Triggers Compared to the Old System • New system: the time between triggers is the same throughout; it has to wait 3500 clock cycles plus a ready-to-advance trigger before starting the next trigger • Old SCT test DAQ: the time decreases, since the data is parsed in hardware; when it finds the end of the bit stream for one event, it instructs (through VME) that the next trigger be sent [Plot: time between triggers vs. time (increasing threshold as the test progresses)]

  31. Suggested Solution • Make the fetching rate decrease by taking the data packet size from the previous scan point and using it as the record size of the next scan point [Plot: fetching rate and data parsing rate vs. time for the current system and the suggested solution]

  32. Suggested Solution cont. • The data packet length found at the previous scan point is fed back as the record size of the next scan point (sketched below) • A Boolean (T/F) flag indicates whether the trailer bit stream was found [Block diagram: configuration of record sizes and data parsing]
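A one-function Python sketch of that feedback; the margin and minimum record size are invented safety values, not measured numbers:

    def next_record_size(prev_packet_lengths, margin=64, minimum=256):
        # Use the longest data packet seen at the previous scan point, plus a
        # safety margin, as the acquisition record size for the next scan point.
        return max(minimum, max(prev_packet_lengths) + margin)

    # Example: packets at the previous threshold step were at most 2200 samples,
    # so the next step acquires 2264-sample records instead of the full 3500.
    print(next_record_size([1800, 2050, 2200]))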

  33. Are results consistent between the two systems? • The values for the gain, input noise, VT_50, and the extrapolated offset are consistent

  34. Comparison continued

  35. Style of VIs • It would aid the software development and debugging if we had a consistent style for the VIs throughout • For example, we should have a color code for the VI icons • Also, consistent names for all the VIs • This will help when we have many people working on the software

  36. A Proposed Color Code • Blue = Threshold Scan • Yellow = 3-point gain • Purple = Strobe Delay • Etc. • All related VIs (analysis, data processing) should have the same background color as well

  37. Documentation • Right now most VIs are documented inside themselves • We should also have some external documentation • The VI to the right is for the 3-point gain test

  38. User Interface • It has not been decided, and should be discussed, what the user interface of the control program, reports, plots, and histograms should look like [Plot: a full response curve]

  39. Version Control Software • NI now recommends using Perforce or Microsoft Visual SourceSafe • I have been using the built-in source control, but it has limitations: you cannot rearrange VIs into files after you place them under source code control • NI has discontinued its built-in source code control in newer versions of LabVIEW • Version control software will definitely help if more people are going to work on this project

  40. How We Should Split Up the Software • Unfinished tests and reports • A mother VI to control everything

  41. There are lots of additional things to work on • Integration with the software to control power (code has already been written by Evan Wulf to control the SCT LV and HV modules) • A configuration file format • A fast parsing algorithm for the case of many modules in parallel • Planning of clusters to pass data around, instead of passing 20 controls or indicators to subVIs • We also still need to clean up the code, document it, and optimize it

  42. Things to Look Into Further • An embedded controller on the PXI chassis to do the parsing • An FPGA module from NI to do the parsing • Streaming the data from an entire test to disk and parsing the streamed data • The speed of the system for the case of many modules on a stave • Whether building the program into an executable and turning off all debugging increases speed a lot • What the maximum data packet size for the ABCD chip is; I assumed a value of 3500 samples for 4 chips using a rough estimate

  43. Brief Overview of Streaming vs. Multirecord • Multirecord acquisition allows complex trigger sequences to be set up on the PXI 6561; in streaming you just have one trigger to start and another to end • The idea behind streaming is to read data from the 6561 into the host PC at a faster rate than it comes into the 6561 from the device under test • For us this means we need to read in data at faster than 80 MB/s (2 bytes * 40 MHz) • Streaming is highly dependent on the computer system • I have not yet looked into using streaming in detail, because we are more interested in getting a working system for now

  44. Streaming cont. • Streaming data is very memory intensive on the computer: if we run a test for 1 min we will have 80 MB/s * 60 s = 4800 MB (4.8 GB) of data on the computer • Also, we have to write out the entire script before beginning, and make sure that we wait sufficient time between all trigger sequences before sending the next trigger sequence • There is a lot of data to parse: we have to sift through a lot of data after it is streamed in, and a lot of it is useless data • Streaming requires less overhead than multirecord • It will be hard to implement tests such as noise occupancy with streaming, where we have to decode some data before we send the next commands; for tests where the decoding of data and the next set of commands are not dependent on each other, this may be a possible alternative • One should also use the benchmark VI before using streaming: faster PCs are usually able to stream better, and it also depends on what else is on the PCI bus, since devices on the PCI bus all share the same resources

  45. Water Analogy of Multirecord and Streaming • Data from the ABCD chip is like a bucket of dirty water • With multirecord acquisition, it is as if you turn the faucet of dirty water coming into the 6561 on and off (based on a trigger), with a filter in front: refined data comes into the 6561 and host PC, so there is less parsing to do in the PC • With streaming, you turn the faucet on once and leave it on, with no filter in front: all the dirty water comes in and must be filtered in the PC; this is much faster, but the parsing at the end may be slower, and you also need a large hard drive or else the host PC will overflow [Diagram: 6561 storage container and host PC for each case, with filters; green is the dirty water, blue is cleaner, light blue is the cleanest]

  46. Lessons Learned So Far • Placing data into queues and processing it in a producer/consumer loop increases performance slightly by putting the two processes on separate threads • There is some overhead in calling subVIs; for example, if a subVI resides in a loop, it is better to place the loop inside the subVI; where that is not possible, set the execution priority of the subVI to subroutine to minimize the calling time • Making the VIs reentrant also speeds up execution a little, because multiple data sets can enter the same subVI and execute along different threads • The stream delay setting on the PXI-6561 does not allow the stream delay to be set to 0.2 to 0.3 * 25 ns or 0.7 to 0.8 * 25 ns • See NI Application Note 168, LabVIEW Performance and Memory Management, for more details on optimization

  47. Other Useful References from NI • Application Note 199: LabVIEW and Hyper-Threading • Application Note 114: Using LabVIEW to Create Multithreaded VIs for Maximum Performance and Reliability • The LabVIEW Development Guidelines describe the application of software engineering principles to the development of large LabVIEW applications • There are lots of references on the NI website as well

  48. An Iterative Software Development Lifecycle • Developing the software in iterations looks like a good idea at this point; we are currently on iteration 1 • Version 0.1 (iteration 1): get a complete working system going without worrying about all the small details, and optimize later; the solution is not the most elegant at this point, and we are more or less learning how building this application in LabVIEW works, how to communicate with the ABCD chips to perform these tests, and how to analyze the data • Iteration 2: we will start to optimize the application and make the solution cleaner and more modular

  49. Working Session Results • Suggestions after meeting with an NI engineer and our own discussion • How the software will be split up • Things we can do to make the development uniform

  50. Suggestions • We spoke to an engineer from NI (Matt Thompson), and we also came up with a list of ideas, talking amongst ourselves, about how to increase performance • There are two main categories of suggestions: immediate suggestions and extra hardware suggestions
