
Year 2001 Data and Analysis





1. Year 2001 Data and Analysis
Thomas Ullrich, Nov 30, 2001
• RHIC performance
• STAR Trigger
• Recorded Event Statistics
• Offline Production
• pp Running
• Summary/Outlook

2. RHIC – Performance 2001 compared to 2000
Run 2000: √s = 130 GeV, 6 → 56 bunches, β* ~ 5 m, L ≈ 2·10²⁵ cm⁻²s⁻¹ (10% of design)
Run 2001: √s = 200 GeV, 56 bunches, β* ~ 5 → 3 m, L ≈ 2·10²⁶ cm⁻²s⁻¹
∫L dt ≈ 19 × year 2000 (STAR in 2000: ~8.5·10⁶ sec of running)
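As a sanity check on these figures, the peak-luminosity gain and the corresponding hadronic interaction rate can be worked out in a few lines. A minimal sketch (the ~7.2 b hadronic Au+Au cross-section is the value quoted later in this talk; everything else is from this slide):

```python
# Back-of-the-envelope check of the RHIC luminosity figures above.
BARN_CM2 = 1e-24                 # 1 barn = 1e-24 cm^2

L_2000 = 2e25                    # cm^-2 s^-1, Run 2000 peak (10% of design)
L_2001 = 2e26                    # cm^-2 s^-1, Run 2001 peak

gain = L_2001 / L_2000           # peak-luminosity gain: 10x

# Instantaneous hadronic Au+Au interaction rate at the 2001 luminosity,
# using the ~7.2 b hadronic cross-section quoted on slide 8:
sigma_hadronic_b = 7.2
rate_hz = sigma_hadronic_b * BARN_CM2 * L_2001   # ~1.4 kHz

print(f"peak gain: {gain:.0f}x, hadronic rate: {rate_hz:.0f} Hz")
```

Note that the quoted ∫L dt gain (19×) exceeds the 10× peak gain because of longer stores and running time.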

3. Event Selection – The STAR Trigger in 2001
• Considerably more complex than in 2000:
  • New detectors and components
    • TDC in ZDC → estimate of z-vertex [cut |z| < (25) 35 cm]
    • SVT noise → ~2 MB for empty events → low rates at low Nch
    • L3 trigger (rare probes)
    • ToF, FTPC, EMC have no impact on event selection (almost none for the EMC)
• Optimization of event selection:
  • Cope with 3½ orders of magnitude in multiplicity: UPC (~1) → central (~5000)
  • Maximize "useful" events (in terms of physics analysis)
  • Trigger setup depends on rates (e.g. L3 only useful at high intensities)
  • Minimize bias: ZDC z-vertex cut (L0), L3, etc.
• Rate studies and tests before implementation of triggers
• Trigger Board (chair: Tonko) addresses the technical questions (see http://www.star.bnl.gov/protected/trgboard/index.html)

4. STAR Trigger (continued)
• The setup of the trigger was a learning process! The following trigger sets (a.k.a. trigger groups or trigger configurations) were used in Au+Au.
• A trigger set contains several triggers (a trigger mix).

5. STAR Trigger: two examples

Trigger Set: productionMinBias
  hi-mult           1101
  UPC minbias       1001
  Hadronic minbias  1000
  pulserSVT         F101
  Laser             F200

Trigger Set: productionCentral
  hi-mult & ZDC     1102
  hi-mult           1101
  Hadronic central  1100
  TOPO              3001
  TOPO & ZDC        3002
  TOPO Efficiency   3011
  pulserSVT         F101
  Laser             F200

In RED (on the original slide): counted for "physics" stats
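In an analysis loop the physics/calibration split above could be expressed as a simple lookup. A minimal sketch, assuming the hi-mult, minbias, and hadronic central words count as physics (the transcript does not preserve which entries were red, so the physics sets below are illustrative, and `is_physics` is a hypothetical helper, not STAR code):

```python
# Trigger words per trigger set, from the slide. Which of them were
# flagged (red) as "physics" is not visible in the transcript, so the
# physics sets below are an illustrative assumption.
PHYSICS_TRIGGERS = {
    "productionMinBias": {0x1101, 0x1001, 0x1000},   # hi-mult, UPC minbias, hadronic minbias
    "productionCentral": {0x1102, 0x1101, 0x1100},   # hi-mult & ZDC, hi-mult, hadronic central
}

def is_physics(trigger_set: str, trigger_word: int) -> bool:
    """True if this event counts toward the physics statistics."""
    return trigger_word in PHYSICS_TRIGGERS.get(trigger_set, set())

# Calibration triggers (pulserSVT 0xF101, Laser 0xF200) are excluded:
print(is_physics("productionMinBias", 0x1001),   # True
      is_physics("productionCentral", 0xF200))   # False
```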

6. STAR Trigger: Usage in Analysis
• Check and understand the trigger definitions of the type of events you want to study
  • Run log browser: http://online.star.bnl.gov/RunLog2001/
  • You might need more than one trigger word to assemble your dataset
• In a few cases the trigger word changed meaning
  • Do not assume the same trigger word describes the same thing in all trigger sets
• In a few cases the vertex cut and other parameters changed within the same trigger set (same configuration name) – this cannot be detected via the offline trigger info
• Central triggers with L3 are biased
  • The L3 group has lots of info on this on their web pages, plus methods in StEvent
• Dedicated web page: http://www.star.bnl.gov/STAR/html/all_l/trigger2001/
  • Contains description and details on the 2001 trigger
  • FAQ (describing pitfalls), useful links and more
  • Please help add useful info
• And for the really critical stuff (cross-sections, multiplicity distributions etc.) talk to the experts: Hank, Jeff, Tonko, Zhangbu, Falk
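Since the same trigger word can mean different things in different trigger sets, a safe pattern is to key any event classification on the (trigger set, trigger word) pair rather than on the word alone. A hedged sketch (the table entries are illustrative, not the complete 2001 definitions):

```python
# Keying on (trigger set, trigger word) avoids the pitfall that the same
# word can describe different selections in different trigger sets.
# The entries below are illustrative, not the full 2001 trigger tables.
EVENT_CLASS = {
    ("productionMinBias", 0x1000): "hadronic minbias",
    ("productionMinBias", 0x1001): "UPC minbias",
    ("productionCentral", 0x1100): "hadronic central",
}

def classify(trigger_set: str, trigger_word: int) -> str:
    """Event class for this (set, word) pair, or 'unknown'."""
    return EVENT_CLASS.get((trigger_set, trigger_word), "unknown")

print(classify("productionMinBias", 0x1000))   # hadronic minbias
print(classify("someOtherSet", 0x1000))        # unknown -> check the run log
```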

7. Run 2001: "hadronic central" events
Total sum: 3,608,625 events (72% of the initially planned 5·10⁶)
Used ZDC cut at 10% centrality (720 mbarn) → ≈5 μb⁻¹ sampled
RHIC: ∫L dt (hadronic) ≈ 34 μb⁻¹ max → 24·10⁶ central events delivered
We recorded 15% of all (10% most central) collisions delivered by RHIC (consider that we ran central only half the time and with |z| < 35 cm)
Nov 14 – Nov 24: 1.3·10⁶ events
Record day 11/23/2001: 211k hadronic central events
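The percentages on this slide all follow from N = σ · ∫L dt. A short check with the slide's own inputs:

```python
# Arithmetic behind the "we recorded 15%" statement.
sigma_central_b = 0.720        # 720 mbarn: 10% most central Au+Au
recorded = 3_608_625           # hadronic central events on tape
lumi_int_inv_b = 34e6          # 34 ub^-1 delivered hadronic luminosity, in b^-1

delivered = sigma_central_b * lumi_int_inv_b   # ~24e6 central collisions
sampled_inv_b = recorded / sigma_central_b     # ~5e6 b^-1 = ~5 ub^-1 sampled
fraction = recorded / delivered                # ~0.15

print(f"delivered: {delivered:.2e}, sampled: {sampled_inv_b / 1e6:.1f} ub^-1, "
      f"recorded fraction: {fraction:.0%}")
```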

8. Run 2001: "hadronic minimum bias" events
Total sum: 4,717,379 events (94% of the initially planned 5·10⁶)
Note: these are useful events, i.e. |z| < 35 cm and the vertex can be found
Assume σ_observ ≈ 0.94 × 7.2 b = 6.7 b → ≈0.7 μb⁻¹ sampled
RHIC: ∫L dt (hadronic) ≈ 34 μb⁻¹ max → 227·10⁶ observable events delivered
We recorded 2% of all "observable" hadronic interactions delivered by RHIC (observable means: the vertex can be found offline, but no |z| range is imposed)
… plus 0.5·10⁶ min bias events with zero field, and ?? usable events from the 19.6 GeV run
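The same N = σ · ∫L dt bookkeeping, now with the observable cross-section from this slide:

```python
# Arithmetic behind the "we recorded 2%" statement.
sigma_hadronic_b = 7.2                  # total hadronic Au+Au cross-section
sigma_obs_b = 0.94 * sigma_hadronic_b   # ~6.7 b with a findable vertex
recorded = 4_717_379                    # useful minimum-bias events on tape
lumi_int_inv_b = 34e6                   # 34 ub^-1 delivered, in b^-1

delivered = sigma_obs_b * lumi_int_inv_b   # ~230e6 observable interactions
fraction = recorded / delivered            # ~0.02
sampled_inv_b = recorded / sigma_obs_b     # ~0.7e6 b^-1 = ~0.7 ub^-1

print(f"delivered: {delivered:.2e}, recorded fraction: {fraction:.1%}")
```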

9. Run 2001: Things to keep in mind
• Lots of important info is not in the run log (sparse comments)
  • Volunteers needed: log book → run log
• Detectors in data stream:
  • TPC – always
  • RICH – always
  • FTPC – mostly
  • pToF – mostly
  • SVT – with interruptions
  • EMC – only towards the end
  • SMD – no
  • FPD – rarely
• Detector hiccups:
  • TPC: # of dead channels varies in time (RDO boards)
  • FTPC: sometimes only FTPC-West; sometimes with a missing sector in FTPC-East
  • SVT: see Rene's talk earlier
  • EMC: see Alex's talk earlier

10. Example: Bad Sectors Run 2266012 (Art Poskanzer)
230 < day < 253: RDO 21-3 bad
254 < day < 266: RDO 9-3 and 21-3 bad
day > 266: RDO 9-3 bad
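A table like this is convenient to encode as a lookup so every analysis applies the same bad-sector mask. A minimal sketch, with the day boundaries exactly as on the slide (days 253–254 and 266 are left unassigned there, and the function name is ours, not STAR code):

```python
# Bad TPC RDO boards vs. day-of-year 2001, from the table above.
# Boundary days (253, 254, 266) are ambiguous on the slide and fall
# through to the empty set here.
def bad_rdos(day: int) -> set:
    """Set of TPC RDO boards known bad on a given day of 2001."""
    if 230 < day < 253:
        return {"RDO 21-3"}
    if 254 < day < 266:
        return {"RDO 9-3", "RDO 21-3"}
    if day > 266:
        return {"RDO 9-3"}
    return set()

print(bad_rdos(240), bad_rdos(260), bad_rdos(300))
```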

11. Fast Offline
• Purpose: complete reconstruction of a part of the recorded events, processed in a timely fashion (hours) for QA
• Set up, maintained, petted, and cursed by Jerome
• Uses 7 nodes (14 CPUs) out of 124 CRS nodes
• Uses up to 60% of the reserved 1 TB disk (purged frequently)
• Of 12·10⁶ recorded events, 11% processed over the entire time
  • Not always timely (days), depending on RCF
  • Long term: use the online cluster sitting in the DAQ room
• Fast Offline runs with the "dev" version
  • Latest status, but also with the most recent bugs
Please note: results from Fast Offline are NOT publishable (no stable chain, bugs in chain, calibration pass not reliable)

12. Offline Test Runs
• Short (test) productions as requested by detector subgroups via period coordinators or the SAC (e.g. for ToF, L3, EMC, FTPC, calibration studies, the 19.6 GeV run)
• Managed by Lidia and Jerome
• Competes with "standard" production on 117 (of 124) CRS nodes
• So far ~2M events processed
• Sometimes run in "dev", sometimes in "pro", depending on needs
[Parts of Jerome's TODO list shown on slide]
Please note: results from offline test runs are NOT publishable (no stable chain, bugs in chain, calibration pass not reliable)

13. Year 2001 Au+Au Reconstruction
• Current 'official' production version is P01gk
• Original plan: produce only parts and have people check data quality
  • Reconstruction of TPC, RICH, L3 only (ToF raw data on DST)
  • So far: ~10⁶ minimum bias events
  • Up to now no requests from PWGs for more …
• Next round:
  • might be the one for QM2001 in Nantes
  • aim for complete production (i.e. all y2001 data)
  • include FTPC (at least let's try hard)
  • require RICH to provide PID that can be used directly by any user (StRichPidTraits)
  • possibly new corrections for TPC (see Jamie's talk)
  • might be the last big production before ITTF comes in (see Mike's talk)
• We need to fix TRS ("loving owner" issue) a.s.a.p.
• Still problems with disks at RCF, need a solution soon
• ITTF evaluation will need some resources
• Near future: need coordination of all slow simulators (simulation leader)

14. Preparation for pp Analysis
• So far: reconstruction optimized for heavy ions, making use of:
  • very precise vertex
  • lots of tracks/event for calibration
• pp spin program:
  • focus on the lower end in Nch
  • worse vertex resolution
  • many small events (I/O vs CPU)
  • need to keep track of polarization
  • handling pile-up events
  • new trigger configurations
• pp software workshop on 11/19:
  • Event vertex resolution: ppLMV has a sigma of about 2 cm for x, y, z. Need to do much better (new code, EVR?). How stable is the beam spot in x, y?
  • pp-production chain needs to be finalized: currently big memory leaks that kill the jobs after ~200 events
  • Trigger info: trigger scaler info in Db, work in progress
  • Drift velocity calibration: the present algorithm is not useful for pp; rely more on laser runs
• All minutes and some (important) slides posted on the reco pages: http://www.star.bnl.gov/STARAFS/comp/reco/
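One of the open questions above, beam-spot stability in x, y, boils down to tracking the run-averaged vertex position over time. A minimal sketch, assuming reconstructed vertices are available as (x, y, z) tuples in cm (the `beam_spot` helper and the sample data are hypothetical; ppLMV itself is not reimplemented here):

```python
import statistics

def beam_spot(vertices):
    """Run-averaged vertex (x, y) and per-event spread, in cm.

    Even with the ~2 cm per-event ppLMV resolution quoted above, the
    mean over N events pins the beam spot down to ~2/sqrt(N) cm.
    """
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    return (statistics.mean(xs), statistics.mean(ys),
            statistics.stdev(xs), statistics.stdev(ys))

# Hypothetical example: four reconstructed vertices from one run.
run_vertices = [(0.1, -0.2, 5.0), (0.3, 0.0, -12.0),
                (-0.1, -0.1, 30.0), (0.2, 0.1, -8.0)]
x_mean, y_mean, x_sig, y_sig = beam_spot(run_vertices)
print(f"beam spot: x = {x_mean:.3f} cm, y = {y_mean:.3f} cm")
```

Comparing these run-by-run means across a fill would answer the stability question directly.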

15. Summary and Outlook
• We have plenty of data on tape:
  • min bias: y2001 ≈ 10 × y2000
  • central: y2001 ≈ 7 × y2000
• Reach new physics:
  • High-pt: h spectra and anisotropy out to 12 GeV/c, … (?), ratios at high-pt (RICH)
  • Strangeness: … (width, mass-shifts), higher reach in pT for (multi-)strange baryons, Λ(1520), K*(1430), Σ(1385)
  • HBT: 3D π-π, K-K w/ greater resolution, K-p, π-p, K0s-K0s, pt dependence of R
  • Spectra: …, He4, particle ratios
  • EbyE: …, K* flow, K/π
  • … and much more …
• Lots of work to do to get the analysis chain fully functional. Need more help on:
  • TRS
  • Embedding production
  • Code development and maintenance
  • pp chain
• First steps for analysis: normalized multiplicity distribution, define common multiplicity classes
• Future:
  • ITTF
  • Reshape chain (tables → StEvent)
  • DAQ100
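The "define common multiplicity classes" step above is typically done by cutting the normalized multiplicity distribution at fixed fractions of the minimum-bias sample. A minimal sketch of such percentile-based cuts (the function and the fraction values are illustrative, not the official STAR centrality classes):

```python
# Derive Nch thresholds so each centrality class contains a fixed
# fraction of the minimum-bias events. Fractions are illustrative.
def centrality_cuts(multiplicities, fractions=(0.05, 0.10, 0.20, 0.30)):
    """Nch thresholds for the top 5%, 10%, 20%, 30% of events."""
    mults = sorted(multiplicities, reverse=True)
    n = len(mults)
    return [mults[max(0, int(f * n) - 1)] for f in fractions]

# Toy distribution: uniform Nch from 0 to 999.
cuts = centrality_cuts(list(range(1000)))
print(cuts)   # [950, 900, 800, 700]
```

Agreeing on one such table across physics working groups is exactly what makes the classes "common".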
