
Object Reconstruction for CMS Analysis


Presentation Transcript


  1. Object Reconstruction for CMS Analysis: Higher Level Trigger. Report to FOCUS, June 8, 2000. David Stickland, Princeton University

  2. ORCA/HLT Challenge
  • HLT validation to be performed using full simulation and reconstruction of high-luminosity LHC events:
  • 17.3 minimum bias events per crossing
  • 9 crossings contribute to calorimetry digitization
  • Event sample sizes of ~1 million background and signal events
  • Only calorimetry and muon are used in Level 2.0, so tracker digitization is delayed to a later phase
  • A massive data-moving exercise: ~150 minimum bias events, randomly chosen from a sample of 100k and totalling ~35MB, for every signal event (~1MB)
  • 1 million events: 35TB to move about
  • One CMS event requires of order 100-400 times more computing to digitize than a Run II event
  • 1 million events = 1 million CPU minutes (~10^9 SI95·sec); see the cross-check below
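A quick cross-check of these figures, as a sketch in Python (the per-CPU SI95 rating is an assumption chosen to match the quoted total; it is not on the slide):

    # Back-of-the-envelope check of the slide's estimates.
    pileup_per_signal = 9 * 17.3            # ~156: the "~150 minimum bias events" per signal event
    data_tb = 1_000_000 * 35 / 1e6          # 1M events x ~35 MB -> ~35 TB, as quoted

    cpu_seconds = 1_000_000 * 60            # ~1 CPU minute per event
    si95_per_cpu = 17                       # ASSUMPTION: ~17 SI95 per year-2000 farm CPU
    si95_sec = cpu_seconds * si95_per_cpu   # ~1.0e9 SI95.sec, as quoted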

  3. ORCA Production 2000
  [Data-flow diagram: CMSIM MC production writes signal Zebra files with HITS, plus HEPEVT ntuples; the ORCA ooHit formatter imports them, via the production catalog, into an Objectivity database; ORCA digitization merges signal with minimum bias (MB) events, imported via the MB catalog, into a further Objectivity database; HLT algorithms write new reconstructed objects to the HLT group databases, which are mirrored off-site (US, Russia, Italy, ...). The pileup selection is sketched below.]
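A minimal sketch of the pileup selection implied by the numbers on slide 2 (the helper names are illustrative; this is not ORCA's actual digitization code, which runs inside the Objectivity-based production chain):

    import math
    import random

    N_CROSSINGS = 9           # crossings contributing to calorimetry digitization
    MEAN_MB_PER_XING = 17.3   # mean minimum bias events per crossing
    MB_SAMPLE_SIZE = 100_000  # size of the pre-generated minimum bias sample

    def poisson(mean: float, rng: random.Random) -> int:
        # Knuth's algorithm; adequate for small means such as 17.3.
        limit = math.exp(-mean)
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p < limit:
                return k
            k += 1

    def pick_pileup(rng: random.Random) -> list[int]:
        # Indices of the minimum bias events to overlay on one signal event:
        # a Poisson-distributed number per crossing, each drawn uniformly
        # from the 100k sample. Averages 9 x 17.3 ~ 156 events (~35 MB).
        picks = []
        for _ in range(N_CROSSINGS):
            for _ in range(poisson(MEAN_MB_PER_XING, rng)):
                picks.append(rng.randrange(MB_SAMPLE_SIZE))
        return picks

    print(len(pick_pileup(random.Random(42))))  # typically ~150-160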

  4. Production Farm
  [Data-flow diagram: HPSS feeds the Shift system, 20 digitization DBs, and the pileup "Events"; a lockserver handles Objectivity communication. On loan from the EFF: 70 CPUs used for jetmet production and 70 CPUs for muon production; eff031 holds the jetmet FDDB and eff032 its journal, eff103 the muon FDDB and eff104 its journal; 24 nodes serve the pileup "Hits" (eff001-10, 33-39, 76-78, 105-108); 6 nodes serve all other DB files from HPSS (eff073-75, 79-81).]

  5. CMS Objectivity AMS Service for HLT/2000
  [Diagram: IO activity on the Sun datastore; 24 Linux servers for pileup; 140 batch Linux CPUs; 4 Linux servers for federations and journals; 6 Linux servers for 2TB of ooHits.]

  6. Performance
  • More like an analysis facility than a DAQ facility
  • 140 jobs reading asynchronously and chaotically from 30 AMS servers, writing to a high-speed Sun server
  • Non-disk-resident data staged from tape
  • 70 jetmet jobs at ~60 seconds/event and 35MB/event
  • 70 muon jobs at ~90 seconds/event and 20MB/event
  • Best reading rate out of Objectivity: ~70MB/sec
  • Continuous 50MB/sec reading rate (see the consistency check below)
  • 1 million jetmet events: ~10 days
  • 1 million muon events: ~15 days, running in parallel
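The quoted rates and turnaround times are mutually consistent; a quick check using only the numbers on this slide:

    # Aggregate read rate implied by the two job streams.
    jetmet = 70 * 35 / 60     # 70 jobs x 35 MB/event / 60 s/event ~ 40.8 MB/s
    muon   = 70 * 20 / 90     # 70 jobs x 20 MB/event / 90 s/event ~ 15.6 MB/s
    total  = jetmet + muon    # ~56 MB/s, matching the sustained ~50 MB/s observed

    # Wall-clock time for 1 million events per stream.
    days_jetmet = 1e6 * 60 / 70 / 86_400   # ~9.9 days  (~10 days quoted)
    days_muon   = 1e6 * 90 / 70 / 86_400   # ~14.9 days (~15 days quoted)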

  7. IT Services
  • Excellent response from the shift team, with many reconfigurations of Shift and EFF nodes while optimizing the system
  • Rely on CDR services to copy completed DB files to HPSS and to update an "export federation"
  • Rely on the AMS/HPSS service on Sun (6 and 7) and Linux
  • All data (simulation 2TB, ooHits 2TB, digis 1TB) in HPSS
  • But we now depend completely on all of these services being of production quality: AFS, AMS, RFIOD, HPSS, CDR, LSF
  • A breakdown in any part can cause across-the-board failures. Please: all changes in services and key link personnel should be discussed with the experiment beforehand

  8. Limitations
  • ooHitFormatting was completely IO-bound into and out of the MSS, but relatively fast
  • ooDigitReconstruction was IO-bound reading pileup events from the dedicated servers
  • Current analysis is again IO-bound, due to limited disk space (tape/disk = 6) and continuous staging from HPSS
  • Farm nodes are best used as pure CPU; we had to misuse them this time to serve data
  • We will ask COCOTIME for dedicated data servers
  • We currently estimate one high-performance data server for every 40 CPUs in production, with gigabit connectivity and SCSI RAID disk (see the sketch after this slide)
  • Monitoring is crucial
  • Control/signaling was not present; it will be implemented next time
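A worked example of that data-server rule of thumb (a sketch; the gigabit wire-speed figure is an assumption from general hardware knowledge, not from the slide):

    import math

    batch_cpus = 140
    servers = math.ceil(batch_cpus / 40)   # 1 data server per 40 CPUs -> 4 servers
    per_server = 50 / servers              # sustained ~50 MB/s spread over 4 servers:
                                           # ~12.5 MB/s each, well under the
                                           # ~100 MB/s a gigabit link can carry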

  9. MONARC Simulation: Network Traffic
  [Plot: simulated vs. measured network traffic; mean measured value ~48MB/s.]

  10. MONARC Simulation: CPU Utilization
  [Plot: mean CPU utilization of ~0.90 for the muon jobs and ~0.52 for the jet jobs.]

  11. MONARC Simulation: Total Time for Muon Production
  [Plot: simulated total time for the muon production.]

  12. Near Future
  • A series of milestones with ever-increasing functionality
  • Necessary to test the many new features in the computing and software
  • Necessary to answer the collaboration's fundamental questions on trigger and physics performance
  • Next exercise, September 2000: a further factor of 10 in HLT rejection; aim to do some of the Objectivity production off-site in prototype Tier-1 regional centers
  • Spring 2001: the final factor of 10, for the Trigger/DAQ TDR
  • 2002: Computing and Software TDR
  • 2003: Physics TDR
  • 2004: 20% data challenge
