The PHENIX collaboration focuses on discovering and understanding the properties of QCD under extreme conditions using RHIC's unique capabilities. With collision systems ranging from p+p to Au+Au at √s_NN up to 500 GeV, we investigate gluon saturation, confinement, and the spin structure of the nucleon. Our data management spans twelve detector subsystems and roughly 100 TB of data on tape. We emphasize dynamic integration of file catalogs and collaboration across continents, aiming for effective data analysis and innovative research developments in particle physics.
PHENIX: discovery and precision
• Exploit the flexibility and uniqueness of RHIC physics
  • p+p to Au+Au at √s_NN = 500 × Z/A GeV
  • asymmetric beams and polarized protons too
• Properties of QCD under extreme conditions
  • confinement, phase transitions, gluon saturation, parton propagation, hadron modification, etc.
• Spin structure of the nucleon
  • gluon and anti-quark spin structure; transversity
PHENIX by the numbers
• 12 detector subsystems (drift/pixel/strip chambers, RICH, EMCal, TOF, dE/dx, silicon)
• 300,000 channels (25,000 of them in the EMCal)
• 350 kB/event, 20-60 MB/sec recording rate (see the quick check below)
• 100 TB of data on tape: raw, reconstructed, and skimmed
  • little volume reduction during reconstruction
• 70,000 files, with 300,000+ more expected in 2003
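As a sanity check on the quoted figures, the event size and recording rate imply an event rate of roughly 60-175 events/sec, and the tape volume spread over the file count implies files of order 1 GB. A minimal sketch of the arithmetic, using only numbers from the slide above:

```python
# Back-of-the-envelope check of the quoted PHENIX numbers.
# All inputs come from the slide above; the rest is arithmetic.

event_size_kb = 350            # kB per event
rates_mb_s = (20, 60)          # recording rate range, MB/sec

# Implied event rates at the low and high ends of the recording rate.
for rate in rates_mb_s:
    events_per_sec = rate * 1024 / event_size_kb
    print(f"{rate} MB/sec -> ~{events_per_sec:.0f} events/sec")

# 100 TB spread over 70,000 files implies the typical file size.
total_tb, n_files = 100, 70_000
print(f"average file size ~ {total_tb * 1e6 / n_files:.0f} MB")
```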
[Figure: the PHENIX detector seen from above, with the beam axis indicated]
PHENIX analysis
• (semi-)inclusive cross-sections and correlations
• mixed-event background subtraction (see the sketch below)
• large-volume data scans
• evolving analysis topics and techniques
• single physics data stream through reconstruction
• file-based approach
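To make the mixed-event technique concrete: pairs formed within one event contain signal plus combinatorial background, while pairs formed across different events contain only the combinatorics, so the normalized mixed distribution can be subtracted. The following is an illustrative sketch, not PHENIX code; the 4-vector layout and the normalization choice are assumptions.

```python
import numpy as np

def pair_masses(a, b):
    """Invariant masses of all cross pairs between 4-vector arrays (E, px, py, pz)."""
    s = a[:, None, :] + b[None, :, :]                      # summed 4-vectors
    m2 = s[..., 0]**2 - np.sum(s[..., 1:]**2, axis=-1)     # E^2 - |p|^2
    return np.sqrt(np.clip(m2, 0.0, None)).ravel()

def same_event_masses(ev):
    """Invariant masses of all distinct pairs (i < j) within one event."""
    i, j = np.triu_indices(len(ev), k=1)
    s = ev[i] + ev[j]
    m2 = s[:, 0]**2 - np.sum(s[:, 1:]**2, axis=1)
    return np.sqrt(np.clip(m2, 0.0, None))

def mixed_event_subtraction(events, bins):
    """Same-event pair spectrum minus the normalized mixed-event background."""
    same = np.concatenate([same_event_masses(ev) for ev in events])
    # Pairs built across neighboring events are uncorrelated: background only.
    mixed = np.concatenate([pair_masses(events[k], events[k + 1])
                            for k in range(len(events) - 1)])
    h_same, _ = np.histogram(same, bins=bins)
    h_mix, _ = np.histogram(mixed, bins=bins)
    # Normalize mixed to the same-event statistics; in practice one would
    # normalize in a signal-free sideband rather than over the whole range.
    scale = h_same.sum() / max(h_mix.sum(), 1)
    return h_same - scale * h_mix
```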
Computing components
• Variety of frameworks, but a single, common interface to the data
• ROOT for I/O of non-raw data
• Objectivity OODB for calibrations and meta-data
• file catalog, data "carousel", query utilities
• integrating file-catalog queries into the standard frameworks, so files are located before they are opened (see the sketch below)
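The last point, resolving a logical file name through the catalog before any open call, might look like the sketch below. The FileCatalog class, its replica layout, and catalog_open are hypothetical stand-ins for the PHENIX catalog utilities, not an actual API.

```python
# Hypothetical sketch of a catalog-aware open: the framework asks the file
# catalog for the best physical replica of a logical file name before opening.

class FileCatalog:
    """Stand-in for the PHENIX file catalog (illustrative only)."""
    def __init__(self, replicas):
        # logical name -> list of physical copies, e.g. local disk or tape
        self.replicas = replicas

    def locate(self, logical_name):
        copies = self.replicas.get(logical_name, [])
        if not copies:
            raise FileNotFoundError(f"{logical_name} not in catalog")
        # Prefer disk-resident copies over ones that must be staged from tape.
        return sorted(copies, key=lambda c: c["on_tape"])[0]["path"]

def catalog_open(catalog, logical_name, mode="rb"):
    """Resolve the logical name through the catalog, then open the physical file."""
    return open(catalog.locate(logical_name), mode)
```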
Development plans
• Focus on data management
  • intra- and inter-site; integration of the file catalog into daily life; dynamic population and balancing of disk-resident data (see the sketch below)
• GRID middleware for authentication and authorization
  • start small and realistic, and work up from there
  • a few hundred analyzers on multiple continents
• Replication and remote jobs have been tried, but both are tough
  • reconstruction of the 200 GeV p+p data was done at CC-J
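Dynamic population and balancing of disk-resident data can be as simple as greedy placement of each file onto the server with the most free space. A minimal sketch under that assumption; the inputs are invented for illustration:

```python
import heapq

def balance(files, servers):
    """Greedy placement: each file goes to the server with the most free space.

    `files` is a list of (name, size_gb); `servers` maps server name to free
    space in GB. Both are illustrative inputs, not a real PHENIX configuration.
    """
    # Max-heap on free space via negated keys (heapq is a min-heap).
    heap = [(-free_gb, name) for name, free_gb in servers.items()]
    heapq.heapify(heap)
    placement = {}
    for fname, size in sorted(files, key=lambda f: -f[1]):   # biggest files first
        neg_free, server = heapq.heappop(heap)
        placement[fname] = server
        heapq.heappush(heap, (neg_free + size, server))      # less free space now
    return placement

# Example: two servers, three files.
print(balance([("a.root", 2.0), ("b.root", 1.5), ("c.root", 0.5)],
              {"disk01": 3.0, "disk02": 3.0}))
```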
Major PHENIX Computing Sites
• BNL, LLNL, Lund, SUNYSB, CC-J, CC-F, UNM, VU
Goals and needs
• Projects
  • replicated relational-database images as file catalogs (see the sketch below)
  • file-server load-balancing, inter-site data replication, and intra-site data migration
• Support
  • ITR support from the NSF would leverage existing funds that can provide some support for ~3 years
  • a DOE component to BNL would permit recruiting an IT professional to sit at BNL
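Replicated catalog images buy both availability and read load-balancing: a client can query any healthy image and fail over to another on error. A minimal sketch, with invented hostnames and a stubbed query helper standing in for a real database call:

```python
import random

# Invented hostnames; a real deployment would list the actual replica databases.
CATALOG_IMAGES = ["catalog1.example.org",
                  "catalog2.example.org",
                  "catalog3.example.org"]

def run_query(host, sql):
    """Hypothetical stand-in for a read-only query against one catalog image."""
    return [("logical_name", f"resolved via {host}")]   # dummy rows for illustration

def query_catalog(sql, images=CATALOG_IMAGES):
    """Try the replica images in random order, failing over until one answers."""
    for host in random.sample(images, len(images)):     # randomize to spread read load
        try:
            return run_query(host, sql)
        except ConnectionError:
            continue                                    # fall over to the next image
    raise RuntimeError("no catalog image reachable")
```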